Whitfield SubCommittee: Edward Wegman Testimony

Wegman goes through his conclusions in the report.

Highlights:

“The decentering error preferentially produces Hockey Sticks”

“Decentering has a big effect on reconstruction”

“With proper centering, the Hockey Stick shape disappears”

“Dr Mann has unusually large reach with the other 43 authors”

“Peer review is not as independent as would normally be the case”

178 Comments

  1. Steve Sadlov
    Posted Jul 19, 2006 at 9:56 AM | Permalink

    And now, thusly, enters into the Library of Congress as an official report of record given to the People of the United States of America.

  2. Posted Jul 19, 2006 at 12:30 PM | Permalink

    I’m eagerly following where this is heading… thanks for keeping me updated on what’s going on, this site continues to be exceptionally helpful.

  3. kim
    Posted Jul 19, 2006 at 1:24 PM | Permalink

    Are the witnesses listed in order of appearance? Is that lone mister the closing argument?
    =====================================================

  4. Jim Barrett
    Posted Jul 19, 2006 at 5:23 PM | Permalink

    I find this thread quite incredible. NONE of the above quotes appears in either the Wegman report or in the accompanying fact sheet – or perhaps John A does not understand the proper use of quotation marks.

    Shame on climateaudit for this overt display of misinformation!

  5. Lee
    Posted Jul 19, 2006 at 5:43 PM | Permalink

    Wegman’s social network analysis showed convincingly that every one of Mann’s coauthors was at one time a coauthor with Mann. I am shocked, SHOCKED!!! that this is allowed to continue in climate science.

    Not.

    I have to say it – that portion of the Wegman report was a ludicrous hit piece. Showing that someone who has published a lot of papers with coauthors has a lot of coauthors from his field? How utterly surprising – not. I didn’t see that Wegman, a prominent statistician, even bothered to do a rudimentary statistical analysis to see if the pattern of coauthorship differed from other prominent authors in other fields. And he has NO data regarding peer reviewers, so he has NO data backing his claim of potential problems with peer review. Coauthorship is NOT peer review.

  6. Harold Kumar
    Posted Jul 19, 2006 at 6:14 PM | Permalink

    Lee, that portion struck a nerve eh?

    How important is the HS to AGW? If the climate community had “moved on”, why are Lee and others defending research that has been shown to be flawed?

  7. Lee
    Posted Jul 19, 2006 at 6:22 PM | Permalink

    Harold, you have no clue (obviously) about what I am defending and not defending. Please stop attributing to me motives I don’t have and arguments I haven’t made. That post didn’t defend a damn bit of research, flawed or otherwise – it pointed out that a major part of the Wegman report was an absurd hit piece rendered even more ludicrous by the failure of a prominent statistician to bother to see if what he observed was even statistically unusual.

    It “hit a nerve” in the same way that many ludicrous arguments do.

    BTW, I have in the last week made several detailed posts about what you claim I am defending. Before you make this kind of assertion about what I am ‘defending’, you might bother to see what I am actually saying.

  8. Paul Penrose
    Posted Jul 19, 2006 at 6:25 PM | Permalink

    Strangely, I didn’t find the social networking thing all that compelling either. So let’s just agree now to ignore that part of the paper from this point on, which will allow us to discuss the statistics without any further distractions.

  9. miss J
    Posted Jul 19, 2006 at 6:40 PM | Permalink

    Lee (at the moment #7): I’d agree. The whole section of the “report” regarding “cliques” was noticeably devoid of references, quantitative analyses, and comparisons. It was a directed attack against a single man.

    Speaking as someone who has spent too much of her life selecting reviewers, co-authors are not chosen to review; conflicts-of-interest are verboten.

    The “report” did not demonstrate that they had ANY data that could inform a conclusion regarding the peer review process that took place nearly a decade ago.

    All the best,
    Jasmine (who has reared her head again on this site only because this issue was emailed to her from a friend).

    P.S. Contrary to the fantasy that “Pat Frank” refers to, Jasmine’s long hair is brown and not blonde.

  10. Harold Kumar
    Posted Jul 19, 2006 at 6:45 PM | Permalink

    Lee, I’ve read your posts in the active threads of the last week or so. You brought it up, but agreed to drop the social network subject. But my question still stands, why are so many married to the MBH research? And if we were to throw out Mann and derivative works what exactly is left?

  11. Lee
    Posted Jul 19, 2006 at 6:47 PM | Permalink

    Paul, when a piece by a prominent statistician, commissioned by a congressional committee, contains a major section that is hard to interpret as anything other than an unsupported hit piece, it has to cast doubt on (at least) the motivations for the piece.

    The Wegman report also, it seems, stopped right at the critical point: do those errors he identified make a significant difference to the overall reconstruction? If I recall correctly, JohnA’s second non-quote above misrepresents what Wegman et al. did – they didn’t look at the overall reconstruction, they looked at the PCA.

  12. Paul Linsay
    Posted Jul 19, 2006 at 6:51 PM | Permalink

    Though Lee and I are on opposite sides of “A”GW, I have to agree with him that the social network part of the Wegman report is weak if not outright irrelevant. If you look into the development of quantum mechanics or molecular biology, they were both dominated by a very small number of men. In early quantum mechanics, all roads lead to Niels Bohr. If Mann were a good scientist then it would be to his credit that he’s at the center of so much activity in climate science. He’s not though.

    What is damning, as the report points out, is the sharing and reuse of the same small set of data among the “Mannians” while simultaneously claiming independent verification of an unprecedented rise in global temperatures. And of course refusing to let outsiders even peek at the raw data except under duress.

    The statistical discussion was a very clear explanation of the issues, especially the example in Figure 4.6 and Appendix A, which shows exactly how the de-centered Mann algorithm produces hockey sticks.
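
    A minimal sketch of the short-centering effect being described (synthetic red noise, illustrative sizes, not the actual MBH98 proxy network or Wegman’s code): subtracting only the calibration-period mean before the PCA pushes hockey-stick shapes into the leading PC even when the input is pure noise.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_series, calib = 581, 70, 79   # e.g. 1400-1980 with a 79-year calibration window

    def red_noise(n, rho=0.7):
        # simple AR(1) "proxy" with no climate signal at all
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + rng.normal()
        return x

    def leading_pc(data, short=False):
        # "Short" centering removes only the calibration-period mean;
        # conventional PCA removes the full-series mean.
        mean = data[-calib:].mean(axis=0) if short else data.mean(axis=0)
        U, s, _ = np.linalg.svd(data - mean, full_matrices=False)
        return U[:, 0] * s[0]

    def hs_index(series):
        # hockey-stick index: distance of the calibration-period mean from
        # the full-series mean, in standard deviations
        return abs(series[-calib:].mean() - series.mean()) / series.std()

    full, short = [], []
    for _ in range(10):                      # a few Monte Carlo draws; results vary with the seed
        X = np.column_stack([red_noise(n_years) for _ in range(n_series)])
        full.append(hs_index(leading_pc(X, short=False)))
        short.append(hs_index(leading_pc(X, short=True)))

    print("mean |HS index|, full centering :", round(np.mean(full), 2))
    print("mean |HS index|, short centering:", round(np.mean(short), 2))
    ```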

  13. Posted Jul 19, 2006 at 7:01 PM | Permalink

    RC’s response to the debate. Note that the schtick has now shrunk back to the original 1400–2000 year span, avoiding the expression (or lack thereof) of the MWP.

  14. Lee
    Posted Jul 19, 2006 at 7:04 PM | Permalink

    Harold, where did I ever “agree to drop the social network subject”? I WAS severely sleep deprived the other day, but I can’t imagine I said that.

    The NAS report spends a great deal of time explaining what is left. Several of my posts in the last week, most notably on Monday when I returned from being banned by JohnA for the weekend, have outlined what they said. I’m tired of writing the lengthy repeated posts, and the spam filter seems to be getting peeved at me as well, so I would prefer to avoid another long post – I refer you to my previous posts and to the NAS report.

  15. welikerocks
    Posted Jul 19, 2006 at 7:10 PM | Permalink

    #4 “I find this thread quite incredible. NONE of the above quotes appears in either the Wegman report or in the accompanying fact sheet”

    LOL that’s because we were watching the LIVE hearing, not reading either of those reports.

  16. John Creighton
    Posted Jul 19, 2006 at 7:10 PM | Permalink

    #8 If you interpret the social network concept as meaning that because someone is affiliated with Michael Mann they must be corrupt, you miss the point. This is no better than the eco-fundamentalists discounting someone simply because they can show a connection to the oil industry. Wegman was making two points. The first is the lack of independent review of the climate community. The second more important point is the lack of interaction with other social networks. This latter point is the most important, since one of the main conclusions of the report is the lack of interaction with the statistical community.

    Thus if you really want to dispute the social network part of the paper find the bridges (if they exist) between the climate community and the statistical community. Then show the significance of these bridges and how they influence the work of climate scientists.

  17. Lee
    Posted Jul 19, 2006 at 7:16 PM | Permalink

    John, he claims problems with review, which is not supported by an (irrelevant to the issue) network of coauthors. And for the lack of outside statistical support, the network is neither necessary nor sufficient for that point – what is necessary is to simply point out that nne of hsi coauthors are statisticians. Who his non-statistician coauthors are is irrelevant.

  18. John Creighton
    Posted Jul 19, 2006 at 7:23 PM | Permalink

    #17 I wish people wouldn’t talk in acronyms. Some people are actually new to these blogs. What is nne and hsi?

  19. miss J
    Posted Jul 19, 2006 at 7:24 PM | Permalink

    “The first is the lack of independent review of the climate community. The second more important point is the lack of interaction with other social networks.”

    You have missed two criticisms of these points.

    1. There has been no assessment of the review process.

    2. Defining the “climate community” as only a few dozen people (75 was it?) who have published on “temperature reconstruction” (perhaps that only referred to figure one… I can’t remember how Wegman defined the group of 75?) completely misses out on the rest of their professional lives and the “networks” to which they belong.

    It was an unfair and unbalanced (!) and, frankly, amateurish way to address the issue of peer review.

    –Jasmine

  20. Lee
    Posted Jul 19, 2006 at 7:28 PM | Permalink

    John, I’m a terrible typist; you are a (quite funny in this case) smart*ss. ‘None’ and ‘his’.

  21. Harold Kumar
    Posted Jul 19, 2006 at 7:48 PM | Permalink

    Lee:

    My apologies, I mistyped. What I meant to say was I agree with *Paul* to drop the social network stuff. I didn’t mean to attribute it to you and couldn’t correct it until just now as I was in transit.

    Anyway, my question was not specifically addressed to you but rather to the blog audience.

    I’ll look at the NAS report in more detail.

  22. Jim Edwards
    Posted Jul 19, 2006 at 7:51 PM | Permalink

    #18 –> I’m sure ‘nne’ = none and ‘hsi’ = his.

    #19 –> Even Wegman told the committee that the social network portion of the report “should be taken with a grain of salt” and that this work, to his knowledge, had not been done to assess other groups of academics – although it would probably be worthwhile to do so. VS also expressed some support for the general idea. This is more sociology than hard evidence. It can’t be used to prove anybody’s bad faith. It is illustrative of a point, nonetheless.

    Everybody seems to be focusing on the diagram of Mann and his 43 coauthors [which should have been described as a control].

    The much more interesting graphs were the diagram with Mann removed, and the analogous set for the 75 most prolific authors in paleoclimatology with and w/o Mann. These demonstrate how socially connected Mann is relative to his peers – who generally are much more separated from the group. I conjecture that’s partly because many of those persons are actually producing proxy data, rather than massaging somebody else’s data. Wegman is implicitly suggesting that Mann’s significantly higher connectedness equates to popularity among his peers. I suppose the opposite conclusion could be drawn [i.e. – he’s so unpopular he has to keep looking for coauthors].

    It is interesting that Dr North also described Mann as a popular fellow – suggesting that he might see it as a plus for awarding tenure.

    These ‘social network’ graphs have been used to explain the game Seven Degrees of Separation. Most people are connected to a small localized group [say, 20]. There are a few popular social butterflies who have a large number of widely dispersed connections [say, 300]. These few, very popular people explain why I’m linked to most world leaders in two or three steps.
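
    A toy sketch of the kind of co-authorship “degree” comparison being discussed (the author labels and paper lists below are invented, not Wegman’s data): count each author’s distinct co-authors and see how a single hub stands out from the rest of the group.

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Each entry is the author list of one hypothetical paper.
    papers = [
        ["A", "B", "C"], ["A", "D"], ["A", "E", "F"], ["A", "C", "G"],
        ["B", "C"], ["D", "E"], ["F", "G"], ["H", "I"], ["I", "J"],
    ]

    coauthors = defaultdict(set)
    for author_list in papers:
        for x, y in combinations(author_list, 2):
            coauthors[x].add(y)
            coauthors[y].add(x)

    degree = {a: len(cs) for a, cs in coauthors.items()}   # distinct co-authors per author
    mean_degree = sum(degree.values()) / len(degree)

    for a in sorted(degree, key=degree.get, reverse=True):
        print(a, degree[a])
    print("mean degree:", round(mean_degree, 2))
    ```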

  23. ET SidViscous
    Posted Jul 19, 2006 at 7:54 PM | Permalink

    I get to Osama Bin Laden in two, which annoys me, but also gets me Dubya in three.

    Of course Osama has a large family.

  24. John Creighton
    Posted Jul 19, 2006 at 8:00 PM | Permalink

    #19 Good points. That does sound like a small group to represent the climate community; however, it may be a particularly influential group. I am curious what percentage of the climate papers reported by the media, referenced in Al Gore’s movie, or authored by members of the IPCC panel lie within this social network. I am also curious what percentage of the referees of various journals lie within this social group. I confess I haven’t completely read the report, so I am not sure how well he used the idea of social networks to support his arguments. Nonetheless it is an interesting concept.

  25. miss J
    Posted Jul 19, 2006 at 8:06 PM | Permalink

    I’m off to bed… but I just couldn’t help noting one small thing: “popularity” & tenure???? That made me laugh out loud! ummm… trust me… (irrelevant anecdotal evidence coming) I got tenure in a physics department a few years ago and my department head made it clear that popularity was NOT a factor (a good thing, since a paleoclimatologist from my institution (not even on the radar screen of the “75”) wrote an unsolicited, bitter, twisted letter for my file). I’ve also been part of tenure decisions… popularity is irrelevant… and close professional relationships (colleagues of colleagues of colleagues) don’t get a say.

    The Kevin-Bacon six (I thought it was six… maybe it’s seven) degrees of separation is another one of those throwback ideas from the last millennium. Network theory is a lot more sophisticated than that.

    All in all… there’s no question that the report needs to be taken with a whole lot of salt.

    If you are interested in network theory (well-done graphs are beautiful), there are quantitative measures that assess them (and if it wasn’t bedtime I’d hunt a couple of recent Nature articles down for you). It was really very odd that someone would “throw it” (over their shoulder?) into a report to Congress.

  26. John Creighton
    Posted Jul 19, 2006 at 8:20 PM | Permalink

    #25 I have a question. Assuming you are right and social network theory, the way Wegman presented it, contributes nothing to the argument, is it a bad concept to introduce to Congress? Granted, if the information is not a useful piece of the argument the paper should make that clear; however, the idea of network connections is commonly used by warmers to dismiss anyone who disagrees with them as puppets of Exxon.

  27. miss J
    Posted Jul 19, 2006 at 8:53 PM | Permalink

    To respond to the current-#26: As a scientist, if I were making a written statement to Congress regarding networks, I would present those results in a scholarly manner:

    -I would provide citations for my definitions and for the subject matter;
    -I would provide legitimate “control” groups;
    -I would broaden the definition that was used to define the groups (using people who publish on the topic of ‘temperature reconstruction’ only demonstrated that the person who drew up those networks was not familiar enough with the field to know how peers for peer-review might be chosen);
    -I would use quantitative measures to define the network;
    -I would not single out a named individual in the analysis (ad hominem arguments are ineffective);
    -I would make the axes legible (this might (and any of my former undergrads could have told you this) involve a key);
    -I would not claim that a network of co-authorship was related to peer-review (COIs are forbidden);
    -I would have published on the subject beforehand (or at least submitted something… probably gotten a co-author);
    -I wouldn’t back away from it verbally as a “grain of salt.”

    hmmm… that was just off the cuff. It’s nearly 11 in DC and that’s past my bedtime 🙂

  28. nanny_govt_sucks
    Posted Jul 19, 2006 at 9:06 PM | Permalink

    There’s a new post over at RC about “what was missing” in the Wegman report to congress: http://www.realclimate.org/index.php/archives/2006/07/the-missing-piece-at-the-wegman-hearing/

    Let’s see if any one of my 3 posts shows up there (paraphrased):

    1. I think you’re missing the part about the ad hoc selection of PCs from the centered PCA.

    2. I think you’re missing the error bars in your 2 graphics.

    3. I think what was missing was Mann’s presence in defense of his studies.

  29. kim
    Posted Jul 19, 2006 at 9:19 PM | Permalink

    It was incest, it was, and it waren’t right.
    ============================

  30. kim
    Posted Jul 19, 2006 at 9:22 PM | Permalink

    My proof? It produced a monster, that crook’t stick.
    =================================

  31. TCO
    Posted Jul 19, 2006 at 9:50 PM | Permalink

    Posted here in case it is censored at RC.

    Steve M. has posted analysis that shows that straight averaging of the proxies does not give a hockey stick.

  32. TCO
    Posted Jul 19, 2006 at 9:54 PM | Permalink

    Another one on RC

    Also how many PCs do you retain in the centered PCA and is the rationale for number of PCs retained described a priori in MBH98?

  33. John Creighton
    Posted Jul 19, 2006 at 11:01 PM | Permalink

    “Another one on RC

    Also how many PCs do you retain in the centered PCA and is the rationale for number of PCs retained described a priori in MBH98?”

    After correlating the first principal component to the hockey stick by judicious data selection and increasing the standard deviations by off-centering or other means, you retain as many principal components as you need to still get a hockey stick. Then you look for some rule, which need not be relevant, for why you selected that many principal components, even if they do a poor job of representing your proxies (i.e. explained variance), let alone past temperatures.
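
    Since the question of an a priori retention rule keeps coming up, here is a minimal sketch of two standard objective rules one could state in advance (synthetic data and illustrative thresholds, not the rule actually used in MBH98):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 20))           # stand-in proxy matrix
    Xc = X - X.mean(axis=0)                  # conventional full centering

    # eigenvalues of the covariance matrix via singular values
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2 / (len(X) - 1)
    explained = eigvals / eigvals.sum()

    # Rule 1: keep PCs until a cumulative explained-variance threshold is reached
    k_variance = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1
    # Rule 2: Kaiser-style rule, keep PCs with above-average eigenvalue
    k_kaiser = int(np.sum(eigvals > eigvals.mean()))

    print("PCs to reach 90% explained variance:", k_variance)
    print("PCs with above-average eigenvalue  :", k_kaiser)
    ```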

  34. Posted Jul 19, 2006 at 11:47 PM | Permalink

    RE: #33 and others:

    From a PR standpoint, is there a simple / concise retort to the posting on RC that avoids needing to understand the intricacies of PCA?

    The reason I ask is that when I read the RC post as someone without an in-depth understanding of PCA, my initial takeaway is “Oh, OK, this Wegman report found something wrong with the math in one of the main global warming studies done eight years ago, but it looks like even if this error is corrected, the results are the same.”

    Of course, I’m more sceptical than that – as my previous posts regarding my brief encounter with Mann about r2 and RE make clear 😉 But how to refute the post on RC in a manner that the general public following the global warming debate can grasp? Possible?

    -JR

  35. Posted Jul 19, 2006 at 11:50 PM | Permalink

    Sorry, forgot to ask two other quick questions:

    1. Recommended source for learning more about the math behind PCA?
    2. Does there exist a nice “family-tree” like representation showing MBH98 as the first, top node and tracing subsequent AGW studies that rely upon the flawed analysis?

  36. JMS
    Posted Jul 20, 2006 at 12:04 AM | Permalink

    I think that if you look at this it will explain how many PCs need to be chosen to capture all of the variability in the data.

    You might also notice that there are reconstructions using the original MBH convention, the M&M convention, and no PCA at all. All of them are very similar. Also take note of North’s (?) testimony about the “bonehead” method (basically a simple average, although he did not elaborate) which produced very similar results to the MBH analysis. You might also further note in Rutherford and Mann (2005) that RegEM, which does not depend on the questionable PCA methods of MBH9x, produces very similar results to MBH99. The other reconstructions which are cited in the NAS report also produce similar results, most of them showing more variability than MBH9x, but all indicating that the current warmth is likely to be anomalous in the context of the last 1000 years.

    As I have pointed out in other posts here, the classes of proxies which the NRC report considered for the most part indicated similar patterns of temperature change over the last millennium: 11th century warmth followed by a cool period centered around the 16th century and anomalous 20th century warmth. Hmm, seems like most of the proxies have a hockey stick shape.

    Finally, I found the von Storch testimony interesting, he agreed with M&M on the decentering issue, but said that it didn’t make any difference to the reconstruction. As the above link to RC pointed out, this is the missing part of the Wegman report: the fact that the error really was not relevant given the underlying data — yes, it probably suppressed some of the underlying variability (as pointed out in Burger et al. 2005) but it was a pretty good first attempt.

    Now that this has all been settled can you please close this blog down?

  37. Bruce
    Posted Jul 20, 2006 at 12:12 AM | Permalink

    The real issue in all of this is that Mann et al have been shown up as producing shoddy work, and lost cred as a result. Pretty simple really. If you want to be taken seriously, especially when you are aiming to influence public policy with your science, it better be robust, and able to withstand scrutiny. If stats are an issue, you better make sure that your stats are robust. You had better release the data and methods, and allow replication as good science requires.

    And as for peer review: OK, but you had better make sure that the peers you use are a) independent of you, and b) diligent and thorough in their checking of the work.

    Doubtless the above lessons have been learned emphatically by scientists watching this drama unfold, and we will all be better off as a result. All we need now is for the specialist journals to do their job, and we can progress to finding out whether AGW is happening or not, and if so, what can we do about it.

  38. Posted Jul 20, 2006 at 12:13 AM | Permalink

    Mr. McIntyre has previously shown that if you take a straight average of the MBH98 proxies, you get a flat line with noise.

    How is a flat line “very similar” to a hockey stick? Ever tried playing hockey with a cricket bat?

    JMS, enough sour grapes, thanks.

  39. Posted Jul 20, 2006 at 12:17 AM | Permalink

    RE: #37

    Shoddy work, yes. But if this shoddy work is corrected, does it drastically change the outcome of MBH98 and subsequent studies reliant upon MBH98? This is the question that I think is the real kicker from a public awareness perspective.

    Like I said, my knowledge of PCA doesn’t allow me to draw my own conclusion (and hence any conclusion) at this point. As I mentioned, however, the correlation stats have already discredited MBH98 in my mind.

    -JR

  40. Posted Jul 20, 2006 at 12:20 AM | Permalink

    And I’ll go ahead and state the obvious: if you get “very similar” results doing a straight average, why bother with PCA at all? The behaviour of averages is well understood, confidence interval calculations are easier, etc. Why go out on a limb with a new technique if averaging gets you just about the same result?
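
    For readers asking what the “straight average” alternative actually involves, a hedged sketch of the mechanics (the proxy matrix here is synthetic, so this shows the procedure only, not the MBH98 result):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_years, n_proxies = 581, 70
    X = rng.normal(size=(n_years, n_proxies))        # stand-in proxy matrix

    Z = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize each proxy
    composite = Z.mean(axis=1)                       # simple average per year
    stderr = Z.std(axis=1, ddof=1) / np.sqrt(n_proxies)   # naive standard error per year

    print("first 5 composite values :", composite[:5].round(2))
    print("first 5 half-widths (2SE):", (2 * stderr[:5]).round(2))
    ```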

  41. JMS
    Posted Jul 20, 2006 at 12:27 AM | Permalink

    I don’t know what Steve has shown, I can only take North’s (?) sworn testimony at face value. I assume, since he was under oath, that he was not misrepresenting his results.

    All in all, I think that von Storch, North, Karl and Crowley did a good job of defending the current consensus (hockey blade real, the handle is probably wavier than MBH9x). However, when it came to issues about climatology Wegman really embarrassed himself. I would have liked it if they had invited Bloomfield, because he probably would have been able to give Wegman a run for his money.

  42. Jack Lacton
    Posted Jul 20, 2006 at 12:27 AM | Permalink

    My assessment of the description of the social network was that it showed how the ‘peer reviewed’ work, using the same flawed data and methodologies, brought climate science to the point it has reached – being completely destroyed in front of a congressional committee. I didn’t see the analysis as standing alone.

    I keep making this point, and the AGW crowd don’t accept it because they believe their motives were pure – the negative effect of Mann (and his supporters) on funding for worthwhile climate research projects is going to be felt for years to come. There will be some good, of course, in terms of tightening up the requirements for access to data etc in order to get funding.

    One can live in hope that the sobering effect of the Hwang and Mann experience on the editors of Nature et al may drive a change in their process for the common good – and for their own credibility.

  43. JMS
    Posted Jul 20, 2006 at 12:30 AM | Permalink

    #42: check the link I provided. They used PCA so that they might avoid the danger of overfitting during the calibration period, something which they demonstrated with the no-PCA recon. However, as they like to say over at RC: “we’ve moved on”. I think this blog should move on too.

  44. mark
    Posted Jul 20, 2006 at 1:36 AM | Permalink

    I think this blog should move on too.

    Sorry, but these guys have been shown to be not only flawed methodologically, but basically incompetent. To “move on” would give them the benefit of the doubt that maybe they somehow have a chance of pulling a rabbit out of the hat in the future. Not possible, and the Mann cabal should be chastised repeatedly for pushing such crap when they KNEW it was crap. “Moving on” implies that they have some shred of credibility left to get it right in the future. Sorry, but I’m not buying it. The only way to make sure this flawed science gets knocked down is to bash it till the horse is more than dead.

    IMO, there should be legal repercussions, and I think Mann was smart to hire a lawyer (if the rumor is true). He needs one.

    Mark

  45. Jean S
    Posted Jul 20, 2006 at 3:31 AM | Permalink

    re #36 (JMS):

    I think that if you look at this it will explain how many PCs need to be chosen to capture all of the variability in the data.

    No, that’s BS. Why don’t you actually consult a standard book on PCA?

    You might also notice that there are reconstructions using the original MBH convention, the M&M convention, and no PCA at all. All of them are very similar. Also take note of North’s (?) testimony about the “bonehead” method (basically a simple average, although he did not elaborate) which produced very similar results to the MBH analysis. You might also further note in Rutherford and Mann (2005) that RegEM, which does not depend on the questionable PCA methods of MBH9x, produces very similar results to MBH99. The other reconstructions which are cited in the NAS report also produce similar results, most of them showing more variability than MBH9x, but all indicating that the current warmth is likely to be anomalous in the context of the last 1000 years.

    How many times must it be repeated that there are no “conventions”? There is PCA, and then there is a weird procedure used by Mann, whose properties many people, me included, have tried to explain several times (for the latest attempt see Wegman’s Appendix) — seemingly in vain for some people.

    For what happens when you do a simple averaging, see Steve’s testimony (Figure 1); he did elaborate. For what happens with other types of “conventions” and flavors, see the last slide in Steve’s testimony.

    I’m also SO HAPPY that you (and RC!) put this Rutherford (2005) paper on the table! I’m sure Steve will return to those comments in upcoming weeks, just follow this blog! At this point, I’m just asking you: what makes you think that there are no weird “conventions” substantially affecting the reconstruction in the Rutherford et al. paper? Have you tried to replicate their reconstructions?

    As I have pointed out in other posts here, the classes of proxies which the NRC report considered for the most part indicated similar patterns of temperature change over the last millennium: 11th century warmth followed by a cool period centered around the 16th century and anomalous 20th century warmth. Hmm, seems like most of the proxies have a hockey stick shape.

    BS. Only a few proxies have a hockey stick; Steve has kindly plotted so many proxies on this blog that this should have been clear to you if you had bothered to look at any of those posts. If you don’t believe it, why don’t you plot those proxies yourself? You can start with the MBH network, whose data is readily available in many places.

    Now that this has all been settled can you please close this blog down?

    It would be nice to completely “move on”, but it seems to me that there are quite a few people out there with heads so thick that they don’t even get these “old” things correct. Repetitio Est Mater Studiorum.

  46. James Lane
    Posted Jul 20, 2006 at 3:31 AM | Permalink

    The RC response to Wegman is a typical pea and thimble trick. They say that using centred data, “after all the N. American PCs had been put in with the other data and used to make the hemispheric mean temperature estimate”, the reconstruction is the same. They neglect to mention that this means taking the bristlecones in via PC4 (explaining only 8% of the variance).

    The subsequent regression stage, however, doesn’t “care” about the order of the PCs – they all receive the same weight. So under this procedure, it doesn’t matter if the bristlecones are PC1 (dominant component of the variance) or low order PC4 – they have equal weight. So it’s hardly surprising that the centred and decentred methods produce the same results, as long as you admit the low-order PC4.

    The main problem with MBH is that it’s not robust to the presence or absence of bristlecones. It’s impossible to think that the authors at realclimate don’t know this, and it’s disappointing to see them present such a misleading defence.

    Still on the same RC posting, does anyone understand the graph illustrating the “throw out the PCA entirely”? It seems that the reconstructions all stop at about 1850, and then the instrumental record is grafted on. Given that most of the proxies extend up to at least 1980, why the truncation? Could it be that “when the raw data are used directly” they don’t rise in the 20th century?

  47. Demesure
    Posted Jul 20, 2006 at 4:00 AM | Permalink

    However, as they like to say over at RC: “we’ve moved on”. I think this blog should move on too.

    #43: The hockeystick team would have moved on if they had publicly recognized that they didn’t have a satisfactory data & method release policy. If they had thoroughly explained Mann’s CENSORED directory. If they had acknowledged the isolation of their work from the statisticians’ community.

    Also take note of North’s (?) testimony about the “bonehead” method (basically a simple average, although he did not elaborate)

    It’s not true, JMS. You should consult M&M’s work instead of buying North’s (?) (and of course Realclimate’s) version.

  48. Jean S
    Posted Jul 20, 2006 at 4:21 AM | Permalink

    Still on the same RC posting, does anyone understand the graph illustrating the “throw out the PCA entirely”? It seems that the reconstructions all stop at about 1850, and then the instrumental record is grafted on. Given that most of the proxies extend up to at least 1980, why the truncation? Could it be that “when the raw data are used directly” they don’t rise in the 20th century?

    The figure is from Rutherford (2005). They all end in 1855, since that’s the start of the instrumental record, and Rutherford is basically based on “extending” the instrumental record with RegEM, so only MBH98 would have been different from the instrumental record. Why they didn’t plot it, I don’t know.

    Observe also the variance of the “long instrumental record” compared to the actual reconstruction. That’s a “feature” of RegEM; see the summary of Schneider (2001) for an explanation.

  49. Jean S
    Posted Jul 20, 2006 at 4:44 AM | Permalink

    They screwed it up over at RC: the figure they have is Rutherford’s Figure 2, which does not include MBH98. I suppose they wanted to have Figure 3 from the same paper…

    [RC moderators: if you correct this, please credit climateaudit with a link!]

  50. fFreddy
    Posted Jul 20, 2006 at 5:36 AM | Permalink

    Re #36, JMS

    Also take note of North’s (?) testimony about the “bonehead” method (basically a simple average, although he did not elaborate)

    Wasn’t it Crowley who introduced this approach? It wasn’t clear to me if he was referring to an earlier paper or the one that hasn’t been published yet.
    If I remember correctly, I think his simple average requires that you pre-select a limited set of hockey stick shaped proxies. Steve has shown that if you do a simple average of all the proxies in MBH, you get a random sprawl.

    The other reconstructions which are cited in the NAS report also produce similar results, most of them showing more variability than MBH9x, but all indicating that the current warmth is likely to be anomalous in the context of the last 1000 years.

    MBH9X has been shown, by the impeccably credentialled Wegman, to be rubbish. Let’s see what he has to say about the others, shall we?

    Finally, I found the von Storch testimony interesting, he agreed with M&M on the decentering issue, but said that it didn’t make any difference to the reconstruction.

    I think that’s what he used to say. In his testimony, he made some reference to it being true, contingent on some geographical criteria, which I didn’t catch, but it sounded like he was moving away from his previous position. Did anyone catch what he said there?

    Now that this has all been settled can you please close this blog down?

    Heh. In your dreams, dear boy…

  51. fFreddy
    Posted Jul 20, 2006 at 5:37 AM | Permalink

    Re #39, Justin Rietz

    Shoddy work, yes. But if this shoddy work is corrected, does it drastically change the outcome of MBH98 and subsequent studies reliant upon MBH98? This is the question that I think is the real kicker from a public awareness perspective.

    I’d say the real question is: if peer review has failed so catastrophically on this paper, why should we assume it was done any better on any of the subsequent studies? They should all be independently replicated as a matter of urgency. Mr Barton … ?

  52. fFreddy
    Posted Jul 20, 2006 at 5:37 AM | Permalink

    Re #41, JMS

    I would have liked it if they had invited Bloomfield, because he probably would have been able to give Wegman a run for his money.

    I thought he was the guy behind North, who stepped in at one point to say that they had no argument with Wegman’s conclusions, or some such. He certainly didn’t try to argue with Wegman.

  53. fFreddy
    Posted Jul 20, 2006 at 5:43 AM | Permalink

    Re #44, Mark

    The only way to make sure this flawed science gets knocked down is to bash it till the horse is more than dead.

    I wish it weren’t so, but I agree with this. The risk/return for working scientists is all out of balance; there needs to be a substantial readjustment, or this will all happen again in a different area tomorrow.

  54. TAC
    Posted Jul 20, 2006 at 5:47 AM | Permalink

    I was at the hearing, I once worked on the Hill (staff to a middle-of-the-road Democrat), and here are my impressions:

    1. Statistical Issues: There was no defense whatsoever of Mann’s methods. Crowley, selected by Mann to represent him, stated that he would not discuss the issues. North (“inappropriate choices”) and Wegman (“incorrect”) were in agreement that Mann’s methods were wrong, as were all the other witnesses who addressed the question. Most of the Rs stated explicitly that Mann’s methods were wrong; one D used the word “false”, and the other Ds were careful not to suggest they thought Mann’s methods might be correct.

    2. Does the HS matter for AGW? The impression I got was either no (North, Crowley) or unsure (Wegman, others). Although the Ds stayed away from defending Mann, they argued effectively that the question should not be whether Mann was wrong but whether Mann’s errors matter with respect to the larger issue of AGW. I think the Democrats were successful in this regard, with Inslee, Stupak, and Waxman hammering this nail pretty hard.

    3. The Minority (Democrats) seem really angry, apparently about the failure of this Committee to address GW up to now (and about a million other sources of frustration). Rep. Inslee (might have been Rep. Stupak) made a big point about “why you are here” to Wegman, with the clear implication that it was not about statistics.

    4. After the dust settles, my sense is that we have “moved on”. The HS is recognized as discredited (I notice that even RC’s HS graphics no longer extend back past 1400). Barton (and the others) now have a powerful story to tell about how climate scientists can make mistakes (no surprise there) and how hard it is to get them to “‘fess up” (which may actually be a surprise), which he presumably will trot out whenever he wants to justify some policy that runs counter to scientific consensus.

    It will be interesting to see how (if?) this story gets reported in the media. I think I saw Richard Kerr (Science) and Richard Monastersky (Chronicle of Higher Education) in attendance. Ralph Cicerone (president of NAS) was also present.

    Finally, some personal impressions: SteveM was compelling and convincing. North was very smooth; his white hair, midwestern voice, and relaxed manner create the impression of an old sage. He makes a great witness. Wegman would have benefited from some coaching about how to deal with seriously talented interrogators like Rep. Inslee. Jasmine — wearing the head scarf — is brainy and lovely and definitely a woman (in response to a question that came up earlier on this blog).

    It was an interesting day.

  55. Jean S
    Posted Jul 20, 2006 at 5:58 AM | Permalink

    re #54: Thanks for the report.

    It will be interesting to see how (if?) this story gets reported in the media.

    So far, it seems like you attended a hearing that did not exist. 😉

  56. Michael Jankowski
    Posted Jul 20, 2006 at 6:24 AM | Permalink

    Trying to play catch-up, haven’t read all the posts…

    Any discussion anywhere about how Wegman noted Mann’s “PCA” technically isn’t even PCA? It’s kind of non-sensical to discuss PCA with regard to MBH98 if that’s not really what it was in the first place.

  57. Jean S
    Posted Jul 20, 2006 at 6:24 AM | Permalink

    I think I saw Richard Kerr (Science)

    For a comparison, this is what he said earlier:
    GLOBAL WARMING: Millennium’s Hottest Decade Retains Its Title, for Now (Feb 11, 2005)
    CLIMATE CHANGE: Yes, It’s Been Getting Warmer in Here Since the CO2 Began to Rise (Jun 30, 2006/NAS report)

  58. fFreddy
    Posted Jul 20, 2006 at 6:28 AM | Permalink

    Re #54, TAC
    Thank you for that.
    I have to say, I really hope you’re wrong about No. 4 – I had assumed that the natural next step would be to start looking at the other studies, and eventually to clean up this whole area. It hadn’t occurred to me that Barton might be happy to stop at one, but I’m not a politician. I hope the constant din of “this is only one study” will be enough to motivate him to push onwards.

    I think the lady in the headscarf was Dr Jasmine Said, Dr Wegman’s associate – not the same as the person who was posting here under the name Jasmine.

  59. fFreddy
    Posted Jul 20, 2006 at 6:29 AM | Permalink

    Re #46, James Lane

    The subsequent regression stage, however, doesn’t “care” about the order of the PCs – they all receive the same weight. So under this procedure, it doesn’t matter if the bristlecones are PC1 (dominant component of the variance) or low order PC4 – they have equal weight.

    I have never understood how they think they are justifying this. If you have a PC1 with 40% of the variance and a PC2 with 8% of the variance, and you treat them equally in the reconstruction, then surely you are artificially inflating the PC2 by a factor of 5?
    Presumably they have some excuse for throwing away the information in the eigenvalue – does anyone know what it is?
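
    A small numerical illustration of this point (synthetic data, not the MBH98 network): once the retained PCs are rescaled to unit variance before the regression step, a component carrying a small share of the variance enters on the same footing as the dominant one.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    # build data whose PC1 and PC2 carry very different shares of variance
    latent = np.column_stack([rng.normal(scale=5.0, size=n),   # strong mode
                              rng.normal(scale=1.0, size=n)])  # weak mode
    mixing = rng.normal(size=(2, 20))
    X = latent @ mixing + 0.1 * rng.normal(size=(n, 20))

    Xc = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / (s**2).sum()
    print("share of variance, PC1 vs PC2:", explained[0].round(2), explained[1].round(2))

    scores = U[:, :2] * s[:2]                  # PCs with eigenvalue scaling kept
    rescaled = scores / scores.std(axis=0)     # unit-variance PCs as fed to a regression
    print("std of rescaled PCs:", rescaled.std(axis=0).round(2))   # both 1.0
    ```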

  60. Jean S
    Posted Jul 20, 2006 at 6:30 AM | Permalink

    It’s kind of non-sensical to discuss PCA with regard to MBH98 if that’s not really what it was in the first place.

    That’s what we’ve been saying here all along. It is obvious to anyone who actually understands what PCA is doing. It’s just the likes of Lambert (and the RC people) who try to mislead by talking about “conventions” and “non-centered/centered PCA”.

  61. Jean S
    Posted Jul 20, 2006 at 7:00 AM | Permalink

    Presumably they have some excuse for throwing away the information in the eigenvalue – does anyone know what it is?

    Well, the way I see it is that the true justification for the use of true PCA in this manner comes from the closely related subject known in statistics as Factor Analysis. There are some other names for the same thing in other scientific disciplines. So basically, with PCA you get out of the proxies (the tree ring networks) the factors; in this case you can think of the factors as things affecting the growth of the trees in general. So if you have 70 recordings of tree growth, with factor analysis (PCA) you get, say, three dominant factors which represent the “tree growth forcings”. The rest is just immaterial noise.

    Now out of these factors, you should get the temperature. In MBH98, this is essentially done later by regressing these factors (the final proxies) against the instrumental data, i.e., the idea seems to be that correlation decides how much each factor (and the other proxies) contains temperature information.

    But I want to stress again that the above only applies to PCA. With respect to “Mannian PCA”, it is meaningless. If you actually do the analysis with correct PCA (for NOAMER) with objective selection rules, the PC where the hockey stick lies should be considered noise and thrown away.
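
    A hedged sketch of the generic two-stage logic described above (all data synthetic; this is the textbook factor-then-regress idea only, not MBH98’s actual stepwise procedure): extract a few leading PCs from a proxy network, regress them on instrumental temperature over a calibration window, then apply the fitted relation over the full period.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_years, n_proxies, calib = 581, 70, 79
    proxies = rng.normal(size=(n_years, n_proxies))   # stand-in proxy network
    temperature = rng.normal(size=calib)              # stand-in instrumental series

    Xc = proxies - proxies.mean(axis=0)               # conventional full centering
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    pcs = U[:, :3] * s[:3]                            # keep three leading "factors"

    A = np.column_stack([np.ones(calib), pcs[-calib:]])   # calibration-period design matrix
    coef, *_ = np.linalg.lstsq(A, temperature, rcond=None)

    recon = np.column_stack([np.ones(n_years), pcs]) @ coef
    print(recon.shape)    # one reconstructed value per year
    ```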

  62. Posted Jul 20, 2006 at 7:14 AM | Permalink

    Does anyone know anything about today’s (Thursday) hearing? Info here.

    In the absence of hard, empirical data, the public discussion of climate change has devolved into half-truths, polemics, and partisanship.
    To cut through the confusion and bias, Chairman Tom Davis has called Thursday’s hearing so that the public and Congress can better understand the current state of climate change science, the Bush Administration’s policies, and the various policy approaches available to lawmakers.

    Witnesses:

    Panel I

    Jim Connaughton, Chairman, Council on Environmental Quality

    Dr. Thomas Karl, Director, National Climatic Data Center, National Oceanic and Atmospheric Administration

    Panel II

    Dr. John R. Christy, Professor and Director, Earth System Science Center, NSSTC University of Alabama in Huntsville

    Dr. Judith Curry, Chair, School of Earth and Atmospheric Sciences, Georgia Institute of Technology

    Dr. James Hansen, Chief, Goddard Institute for Space Studies, National Aeronautics and Space Administration

    Dr. Roger A. Pielke, Jr., Center for Science and Technology Policy Research, University of Colorado at Boulder

  63. Jean S
    Posted Jul 20, 2006 at 7:19 AM | Permalink

    re #62: I guess it gets a lot more publicity than the one yesterday.

  64. TCO
    Posted Jul 20, 2006 at 7:22 AM | Permalink

    JeanS: is transformation to the correlation matrix by division by standard deviation also something that makes the analysis “no longer PCA”?

  65. bender
    Posted Jul 20, 2006 at 7:32 AM | Permalink

    RE: #61 and others asking why “PCA” was used in the first place. See paragraph 2 of post 294 in thread “mutual admiration society”

  66. fFreddy
    Posted Jul 20, 2006 at 7:45 AM | Permalink

    Re #62
    Am I being dumb, or is this not available on webcast?
    Witness list looks like an AGW cheerleaders’ convention. No wonder Bloom was touting it …

  67. fFreddy
    Posted Jul 20, 2006 at 7:49 AM | Permalink

    Re #66
    Ah, it should be, from here. Not started yet, though.

  68. John Creighton
    Posted Jul 20, 2006 at 7:57 AM | Permalink

    The eigenvalue information is unimportant for the reconstruction because the reconstruction doesn’t care if T=(P1/lambda)*k or T=P1*(k/lambda). There are two issues, I think. The first is whether the principal components represent the data matrix well, and the second is whether the principal components have zero mean. The first isn’t true because the principal components only explain 8% of the data matrix, while the second isn’t true because the first principal component is heavily weighted by the bristlecone pines, whose proxies were off-centered. I am not sure if the hockey stick remains if you include more principal components (I think I read it doesn’t), but if the principal components that are heavily weighted by the bristlecones are the only ones that are not zero mean, and most of the variance in the temperature is due to not removing the mean, then Mann’s “principal component analysis” will select those proxies which are heavily weighted by the mean.

  69. Jean S
    Posted Jul 20, 2006 at 8:09 AM | Permalink

    re #64: I think this is a matter of taste; some authors consider that (i.e., analysis based on the correlation (coefficient) matrix) to also be PCA, some do not. IMO, no, it’s still PCA, but it should not be used if the variables are already in the same units (like tree rings).

    Those wanting a better explanation of what I tried to explain in #61, see here.
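
    A minimal sketch of the covariance-matrix versus correlation-matrix distinction being discussed (synthetic series with deliberately unequal scales; dividing each series by its standard deviation is what turns the covariance matrix into the correlation matrix):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 10)) * rng.uniform(0.5, 5.0, size=10)  # unequal scales
    Xc = X - X.mean(axis=0)

    cov_eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
    corr_eigvals = np.linalg.eigvalsh(np.corrcoef(Xc, rowvar=False))[::-1]

    # With the covariance matrix, high-variance series dominate PC1; with the
    # correlation matrix, every series enters with unit variance.
    print("covariance-matrix eigenvalues :", cov_eigvals.round(2))
    print("correlation-matrix eigenvalues:", corr_eigvals.round(2))
    ```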

  70. fFreddy
    Posted Jul 20, 2006 at 8:19 AM | Permalink

    Re #68, John Creighton

    I am not sure if the hockey stick remains if you include more principal components (I think I read it doesn’t)

    I wouldn’t have thought so. On the basis that throwing away the eigenvalue results in scaling up the importance of the lower order principal components, I’d think that you would end up being dominated by whatever fluff remains around PC8, PC9, etc.

    Jean S, thank you; I have some reading to do.

  71. TCO
    Posted Jul 20, 2006 at 8:22 AM | Permalink

    But is the rationale for saying that Mannian methods are not PCA the same as that for saying that other transforms are no longer PCA? I think it would be more helpful just to say that the Mann transform is unprecedented and unproven in the PCA literature.

  72. Jean S
    Posted Jul 20, 2006 at 8:36 AM | Permalink

    re #71: Yes, but the key issue is the centering. That is what is unprecedented in the PCA literature. It’s not completely unproven anymore as Wegman’s appendix characterizes some of its bias properties 😉

  73. fFreddy
    Posted Jul 20, 2006 at 8:51 AM | Permalink

    Re #67
    It’s live now.
    Just bloviating politicians …

  74. fFreddy
    Posted Jul 20, 2006 at 9:02 AM | Permalink

    Re #73
    Is anyone else listening to this ? Are you having any problems with sound quality ?

  75. Geoff Olynyk
    Posted Jul 20, 2006 at 9:14 AM | Permalink

    I’m having trouble with the sound quality – it’s terrible. Sounds like someone has stuck a high-pass filter between the microphone and the webcast computer or something.

    Hopefully they fix this soon – I had an enjoyable day yesterday watching the Barton hearing in a window at the corner of my screen and I’d like to do it again.

  76. Brooks Hurd
    Posted Jul 20, 2006 at 9:21 AM | Permalink

    The sound quality that I am getting is mostly unintelligible. The video is acceptable. I thought that it was only my computer.

  77. fFreddy
    Posted Jul 20, 2006 at 9:28 AM | Permalink

    I called them – they are aware of the problem and the IT guys are working on it.
    Are they reading this blog ?

  78. kim
    Posted Jul 20, 2006 at 9:31 AM | Permalink

    They couldn’t find a better technical feedback loop. Well, content, either.
    ====================================

  79. fFreddy
    Posted Jul 20, 2006 at 9:33 AM | Permalink

    Heh. Suddenly I wonder about all those notes being passed to Mr Barton yesterday …

  80. fFreddy
    Posted Jul 20, 2006 at 11:20 AM | Permalink

    And on a lighter note …

  81. Posted Jul 20, 2006 at 11:52 AM | Permalink

    For posterity’s sake: I submitted a posting yesterday on RC regarding their statement that, since MBH98, there have been newer and better proxy data and studies. I asked why Mann then continues to rely upon graphs from MBH98 and IPCC 2001 (which heavily uses MBH98) in his presentations, as recently as April of this year, if it is known there is something newer and better. On topic, and a valid question. But….

    Yep, you got it, censored.

  82. Jean S
    Posted Jul 20, 2006 at 1:02 PM | Permalink

    I wonder if M. Mann was also in Northern Virginia yesterday (from Opening statement of Chairman Tom Davis, Climate Change: Understanding the Degree of the Problem -hearings today):

    Nor will we be hearing from Vice President Gore, who has spoken often of Congress’s and the Administration’s “blinding lack of awareness” about this “planetary emergency” and whose spokesperson told the L.A. Times the Vice President would “go anywhere and talk to any audience that wants to learn about climate change and how to solve it.” The Committee asked the Vice President to pick any date in June or July, but apparently ours was not one of the “audiences” he had in mind. While Mr. Waxman and I are disappointed, we understand that movie screenings and book signings are time consuming, and we hope his book signing in Northern Virginia went well yesterday.

  83. Jean S
    Posted Jul 20, 2006 at 1:19 PM | Permalink

    Those interested in climate models may find Christy’s testimony interesting:

    The real surprise was the temperature record of the 23 stations in the Sierra foothills and mountains. Here, there was no change in temperature. Of course irrigation and urbanization have not affected the foothills and mountains to any large extent. Evidently, nothing else had influenced the temperatures either. These results did not match the results given by models specifically downscaled for California where the Sierra’s are shown to have warmed to a greater degree than the Valley (e.g. Snyder et al. 2002).

    These results are provocative, but we performed four different means of determining the error characteristics of these trends and determined that the result that nighttime warming in the Valley was significant but that changes in the Sierras, either day or night, were not. Models suggest that the Sierra’s are the place where clear impacts of greenhouse warming should be found, but the records we produced did not agree with that hypothesis. The bottom line here is that models can have serious shortcomings when reproducing the type of regional changes that apparently have occurred. This also implies that they would be ineffective at projecting future changes with confidence, especially as a test of the effectiveness for specific policies. In other words it will be almost impossible to say that a specific policy will have a predictable or measurable impact on climate.

    [Note: as a follow-up to Christy (2002) on Alabama temperature trends, we examined the output from 10 climate models. All showed a warming trend for 1900 to 2000. The observations revealed a cooling trend (common throughout the SE U.S.). Again, evidence that model reconstructions of the past encounter great difficulty in being correct, and thus future projections should have low confidence attached to them.]

  84. Dane
    Posted Jul 20, 2006 at 1:23 PM | Permalink

    Does anyone recall Lee giving me much grief when I brought up the Sierras? Seems to matter to somebody beside me.

  85. Jean S
    Posted Jul 20, 2006 at 1:52 PM | Permalink

    I just can’t figure out what the expertise of this Marshall Herskovitz (testimony) has to do with climate change. How to produce convincing propaganda?

  86. Jean S
    Posted Jul 20, 2006 at 2:01 PM | Permalink

    re #62: One more thing about the hearings today (as no-one else seems to be interested in that), Hansen did not show up either. Must be busy with Mann and Gore.

    We were looking forward to hearing from Dr. Jim Hansen, NASA’s preeminent climate change scientist. But we learned just days ago that he was no longer available to testify. Let the record show he was not muzzled, not by this Committee at least.

  87. Posted Jul 20, 2006 at 2:26 PM | Permalink

    Hey Jean, Herskovitz was Gore’s stunt double. Jeeze. Talk about parrots!

  88. Jean S
    Posted Jul 20, 2006 at 2:30 PM | Permalink

    Geez, Gavin is complaining about Wegman’s Figure 4 being mislabelled, and they still have the wrong figure in their own post! Gavin, hint: #49

  89. Michael Jankowski
    Posted Jul 20, 2006 at 2:42 PM | Permalink

    Re#60 Jean,

    I’ve known about the centering/de-centering issue all along, and I’ve seen Steve refer to it as “Mannian PCA,” but I’ve (possibly incorrectly) always understood the issue to be “correct PCA vs incorrect PCA.” Wegman et al. say it’s not PCA in the first place, but it doesn’t sound like it’s just the centering issue… or is it just the centering issue that prevents it from being PCA?

    At least in Chapter 11 of the NAS report, Mann’s method is referred to as “PCA.” Is there somewhere else in the report where the NAS indicates that it is not PCA?

  90. Joel McDade
    Posted Jul 20, 2006 at 2:42 PM | Permalink

    RC post title: The missing piece at the Wegman hearing

    Answer: Michael Mann

    plop!

  91. MarkR
    Posted Jul 20, 2006 at 3:14 PM | Permalink

    Re Pielke

    “With this normalization, the trend of increasing damage amounts in recent decades disappears. Instead, substantial multidecadal variations in normalized damages are observed: the 1970s and 1980s actually incurred less damages than in the preceding few decades. Only during the early 1990s does damage approach the high level of impact seen back in the 1940s through the 1960s, showing that what has been observed recently is not unprecedented.

    Normalized Hurricane Damages in the United States: 1925–95

    Click to access resource-168-1998.11.pdf

    Not anywhere near what he’s saying now.

  92. Jean S
    Posted Jul 20, 2006 at 3:22 PM | Permalink

    re #89: Well, mathematically PCA is just a linear (orthogonal) transformation of data. So technically, if you have N input “signals” you have N output “signals”. The first output signal is labeled PC1, the second PC2, and so forth. Now what makes PCA different from some other arbitrary linear transformation of the data? The fact that PCA has some meaning. That is, the output signals have certain properties that other linear transformations do not share. This is the idea of PCA in the first place, and these properties are the reason to use it.

    Now Mann’s “PCA” is just another linear (orthogonal) transformation of the data, which does not possess these properties. In fact, it does not possess any (known) meaningful properties unless you count the mining for hockey stick shapes as such 🙂 So why would it be called PCA? It’s kind of like calling a horse carriage a “Ferrari” since they both have four wheels.

    The NAS report calls it a “type of PCA” or something like that. I’m yet to see a reference to literature where this “type of PCA” has been defined or used except MBH9X.

    About PCA use in MBH9X, there is an issue which both Wegman and NAS missed: Mann did use (according to Steve, I haven’t checked) the “correct” PCA in the temperature PC calculations, i.e., subtracting the mean of the whole series, not only the calibration period, which is shorter. This raises (at least in my head) some questions…
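
    A small numerical check of the “properties” point above (synthetic data; a sketch, not MBH98’s code): with conventional full-mean centering the PC scores come out mutually uncorrelated, while the same construction with calibration-period-only (“short”) centering loses that guarantee.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.normal(size=(400, 8))
    calib = 80

    def scores(data, short=False):
        mean = data[-calib:].mean(axis=0) if short else data.mean(axis=0)
        U, s, _ = np.linalg.svd(data - mean, full_matrices=False)
        return U * s

    def max_offdiag(M):
        # largest absolute off-diagonal entry of a correlation matrix
        return float(np.abs(M - np.diag(np.diag(M))).max())

    full = scores(X, short=False)
    shortc = scores(X, short=True)

    print("max off-diagonal correlation, full centering :",
          round(max_offdiag(np.corrcoef(full, rowvar=False)), 4))
    print("max off-diagonal correlation, short centering:",
          round(max_offdiag(np.corrcoef(shortc, rowvar=False)), 4))
    ```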

  93. Jean S
    Posted Jul 20, 2006 at 3:36 PM | Permalink

    re #89: I forgot, yes, the centering is the main issue. One can argue for not centering at all (I’ve seen that a few times), but this “short centering” seems to be a true novelty of MBH98! Of course, that’s a clever trick; why not play around with centering in later works as it seems to produce the “right” results 😉

  94. Mark T.
    Posted Jul 20, 2006 at 3:38 PM | Permalink

    Unfortunately, it seems Marshall Herskovitz is just another alarmist stating that we have to bite the bullet and drastically reduce emissions. Despite the lack of evidence that this is a real problem, he is starting a group to wage war (his words) on global warming. Wake up, fool.

    Mark

  95. Mark T.
    Posted Jul 20, 2006 at 3:40 PM | Permalink

    I forgot, yes, the centering is the main issue.

    Well, the main methodological issue. There’s an issue about what the PCs actually represent, since they don’t even correlate to local temperature trends. I’m sorry, but even if a tree ring can be used as a proxy, it won’t be representative of global temperature. Heck, they’re at best only a summer trend indicator anyway (trees don’t grow in the winter).

    Mark

  96. Jean S
    Posted Jul 20, 2006 at 3:57 PM | Permalink

    re #95: Yes, you are right, I meant the main issue in the PCA question. Then it is a completely different ball game if we start discussing if the fundamental assumptions in (standard) PCA like linearity, stationarity, finite second order statistics etc. hold for tree ring proxies… And if not, how good an approximation we get by still using it… and so forth. But you shouldn’t worry about those things; I’m sure Mann did not either, and he was named by Scientific American (2002) as one of “50 leading visionaries in science and technology” 🙂

  97. fFreddy
    Posted Jul 20, 2006 at 4:05 PM | Permalink

    Re #96, Jean S

    if the fundamental assumptions in (standard) PCA like linearity, stationarity, finite second order statistics etc. hold for tree ring proxies…

    Jean, are you up on this RegEM stuff ? Are these assumptions also required there ?

  98. Posted Jul 20, 2006 at 4:10 PM | Permalink

    # 83. WOO HOO! Dr. Christy may just make Fresno / San Joaquin Valley famous as the region that killed the AGW hoax!

    P.S. I don’t think it’s a hoax, just overstated and exaggerated by alarmists.

  99. Jean S
    Posted Jul 20, 2006 at 4:18 PM | Permalink

    re #97: Yes, more or less. About R05, I’ll just say (for now): download the RegEM code from Schneider’s web page and try to reproduce the R05 results according to the description in R05. Can’t do it?!? Don’t worry, I think B&C could not either. Maybe they used a “type of RegEM” in R05… 😉

  100. John Hekman
    Posted Jul 20, 2006 at 4:23 PM | Permalink

    Jean S:
    There is a more fundamental problem with the way that Mann used PCA that bothers me. My understanding of PCA is that it can be used when you have a large number of DIFFERENT variables. Not the same variable such as ring width with different geographic observations of it.

    For example, say you want to test whether milk consumption affects adult height. You could have a multivariate model with height being a function of consumption of milk, meat, beans, candy and lots of other variables. You could then estimate a PC1 index of these variables.

    But what Mann has done can be compared to correlating height with a whole lot of observations of milk consumption by people all over the world, then treating different parts of the world as DIFFERENT variables and forming principal components from the various observations of milk consumption. Clearly, given normal variation, there will be at least one cluster of milk consumers who grow up to be tall, and the hockey-stickomizer will give that group all the weight in the PC1.

    This has to be wrong. If it is not, then any time I want to find a significant relationship between two variables, all I have to do is to divide my data into enough sub-datasets so that PC can find the one sub-set that has a spurious correlation with the dependent variable.

    To beat this one last time, the point is that Mann was supposed to be testing whether tree ring widths explain temperature, and what he did instead was to divide the tree ring observations into “different” proxies and find the one or two that “explained” recent temperature trends.

    Comments?

  101. Mark T.
    Posted Jul 20, 2006 at 4:25 PM | Permalink

    Yes, you are right, I meant the main issue in PCA-question.

    I fully realize what you meant, and that you knew the other issues. I just wanted to point it out to others that the rabbit-hole is deeper than just the method itself.

    Then it is a completely another ball game, if we start discussing if the fundamental assumptions in (standard) PCA like linearity, stationarity, finite second order statistics etc. hold for tree ring proxies…

    Yeah, very different. In the end, the method is meaningless in the absence of proper assumptions. Mann himself said, in MBH98 I believe, that in the absence of stationarity and a linear relationship between tree-rings and temperature, all bets are off.

    Mark

  102. Mark T.
    Posted Jul 20, 2006 at 4:30 PM | Permalink

    My understanding of PCA is that it can be used when you have a large number of DIFFERENT variables. Not the same variable such as ring width with different geographic observations of it.

    Uh, different SOURCES, not output variables in that sense. The tree rings are the OBSERVATIONS of the SOURCES. So, in a sense, you get x = Ws, where x is the observation vector (tree rings), s is the source vector (different factors or influences on the observations) and W is the so-called “mixing matrix” that linearly (well, mostly) combines the sources. Since you don’t know W a priori, you use PCA (among other methods) to extract a best guess.
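
    A rough numerical sketch of that x = Ws picture (entirely my own illustration; the dimensions, source variances and noise level are made-up assumptions): mix a few sources through an unknown W, then check that the leading principal components of the observations recover the subspace spanned by W’s columns.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_src, n_obs = 1000, 3, 10
        s = rng.standard_normal((n, n_src)) * np.array([3.0, 2.0, 1.0])  # sources with distinct variances
        W = rng.standard_normal((n_obs, n_src))                          # unknown "mixing matrix"
        x = s @ W.T + 0.1 * rng.standard_normal((n, n_obs))              # observations (e.g. tree rings)

        xc = x - x.mean(axis=0)
        V = np.linalg.svd(xc, full_matrices=False)[2][:n_src].T          # top right singular vectors

        # Project the columns of W onto span(V); a small residual means PCA has
        # recovered the mixing subspace (not the individual sources themselves).
        residual = W - V @ (V.T @ W)
        print(np.linalg.norm(residual) / np.linalg.norm(W))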

    OK, Jean, if I’m wrong about this quickie explanation, feel free to slap me around a bit. 🙂

    Mark

  103. Jean S
    Posted Jul 20, 2006 at 4:48 PM | Permalink

    re #100: See the link I gave in #69. If I understood correctly, you are basically referring to the “classification” property of PCA (the second main thing in the link), whereas the (supposed) use of PCA in MBH was more in the data-reduction sense (the first property in the link). See also the model described in the Wikipedia link in #61.

    But I think now I go to sleep, it’s rather late here…

  104. John Hekman
    Posted Jul 20, 2006 at 5:10 PM | Permalink

    Well, I hope you have a pleasant night. If you look at this again in the morning, I would like to continue to discuss this.

    I understand the point about Mann using PCA for data reduction. However, the way the reduction occurs uses the variance-covariance matrix to select the data series that correlate most closely with the dependent variable (temp). So my point is still the same. The method is being used improperly unless you believe, like some of these guys, that tree rings DO explain temperature and the problem is just to find the ones that give the best “signal.” THAT is bogus, because what they are supposed to be doing is testing whether tree rings can explain temps in the “calibration” stage, and if you are allowed to pick the sub-series of tree rings that works best, you always win.

    I sense that you do not agree with me. If you think that the type of PC1 estimated by Mann COULD be proper if it is just being used for “data reduction”, then try to address the situation I first posed. That is, why can I not divide any and all datasets up into sub-sets, calling them “Illinois milk consumption” and “Tennessee milk consumption” and then use “data reduction” methods of PCA to estimate my hockey stick?

  105. Mark T.
    Posted Jul 20, 2006 at 5:33 PM | Permalink

    John,

    I think your first paragraph is essentially correct. They are putting the cart before the horse (using results as a selection criteria).

    Your second paragraph, I think, is saying “why not use tree-ring observations on a local scale first, then apply these results globally to find the true ‘average’,” right? Then, the way Mann is using PCA is to assume that all tree-rings are somehow receptive to global temperature trends, and he is then applying the method to calculate the underlying trend. The one trend I get out of this, btw, is that these tree-rings seem to correlate better with global CO2 concentration in the atmosphere, and the PC he’s choosing as “temperature” is actually CO2! Based on what I’ve been reading, the ultimate PC being used would otherwise be considered noise, since it is significantly smaller than the largest eigenvalues.

    If you implemented the method as you state, locally first, then find an average of some sort from them, I’d be willing to bet the results would look a lot like an actual average of all the observations: sort of flat.

    Mark

  106. Posted Jul 20, 2006 at 6:18 PM | Permalink

    Jean S. Thank you for Christy’s Testimony in #83. It’s really fascinating, especially since I grew up in that area in Northern California.

    John A. So how about a current “hit report” for CA. How do we compare to RC and Scientific American?

    Steve M. Looking forward to your return and your future posts! As always, nice work! Now how about those ice cores and retreating glaciers?

    If Gavin and Mann have moved on, where is the new/replacement graph that represents their most current or future thinking? And what are its PCs?

  107. Jim Edwards
    Posted Jul 20, 2006 at 6:56 PM | Permalink

    The final witness at today’s reform committee – Gulledge’s PowerPoint presentation has a familiar spaghetti graph w/ Mann et al.’s results presented. (See slide 6.) Slide 14 notes that uncertainties have been reduced because “warming has reached historic proportions”.

  108. Jim Edwards
    Posted Jul 20, 2006 at 6:58 PM | Permalink

    Click to access Pew%20Center%20-%20Gulledge%20Testimony.pdf

  109. Paul Penrose
    Posted Jul 20, 2006 at 7:22 PM | Permalink

    I’ve been reading about PCA for a while now, and I just don’t think that Mann’s method is valid. As far as I can tell, the proper way to use PCA in this case would be to determine a priori how many PCs to use in the final stage, ideally based on a good physical theory of how temperature affects tree-ring width (including the other major contributing factors like precipitation, CO2, availability of nutrients, etc.).
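
    Paul’s point is about fixing the number of retained PCs before looking at the results. Purely as an illustration of one mechanical way to do that (this is a Kaiser-Guttman-style “above-average eigenvalue” count of my own choosing, not the rule MBH98 used, and not a substitute for the physical reasoning Paul asks for):

        import numpy as np

        def n_pcs_to_keep(X):
            """Count PCs whose variance exceeds the average eigenvalue, decided up front."""
            Xc = X - X.mean(axis=0)                               # standard full centering
            var = np.linalg.svd(Xc, full_matrices=False)[1] ** 2  # PC variances (up to a 1/(n-1) factor)
            return int(np.sum(var > var.mean()))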

  110. John Creighton
    Posted Jul 20, 2006 at 7:31 PM | Permalink

    “If you implemented the method as you state, locally first, then find an average of some sort from them, I’d be willing to bet the results would look a lot like an actual average of all the observations: sort of flat.”

    I think that is a very good test of the method. If the weights are essentially flat then you have a good representative set of proxies. If a few proxies are heavily weighted then you don’t have a good well distributed set of proxies to give you good global representative coverage.

  111. Jeff Weffer
    Posted Jul 20, 2006 at 7:44 PM | Permalink

    Why are we pussy-footing around the real issue?

    Mann et al. and all of the other climate scientists are deliberately misstating the real evidence to further their own agendas (environmental, or grant-funding mining).

    That is not science and it is, in fact, the ultimate crime a scientist can undertake.

    It is, in fact, an even bigger crime, since it influenced a major public policy initiative that was adopted by many nations internationally. They have done untold harm to the world economy and to the people affected.

    While this does not happen often, they should be held to account by the academic community and they should be charged with a criminal act.

  112. John Creighton
    Posted Jul 20, 2006 at 8:16 PM | Permalink

    #100 (Jean) I was thinking about what you suggested: principal component analysis should be used so that the principal components represent the subspace of the influential factors well. Consider three factors: solar (S), volcanic (V) and greenhouse gases (G).

    If we then find the regression coefficients (X) for each proxy (P) with respect to the drivers [S V G],
    P = [S V G] X

    then when we form our data matrix
    A = [P1 … Pn]

    we can add up the regression coefficients of each proxy component-wise. If there are N factors and the component-wise sum of the regression coefficients gives equal weight to these N driving factors, then the principal components should represent the subspace of the driving factors.
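
    A hypothetical sketch of this bookkeeping (all series below are synthetic stand-ins, not real forcing data): regress every proxy on the driver matrix [S V G] by least squares and sum the coefficients component-wise to see how evenly the network weights the three drivers.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500                                   # years
        S, V, G = rng.standard_normal((3, n))     # stand-in driver series (assumptions)
        D = np.column_stack([S, V, G])            # driver matrix [S V G]

        # synthetic proxies, each a different mix of the drivers plus noise
        true_X = rng.uniform(-1, 1, size=(3, 20))
        P = D @ true_X + 0.5 * rng.standard_normal((n, 20))

        # least-squares regression coefficients X for every proxy at once: P ≈ D X
        X_hat, *_ = np.linalg.lstsq(D, P, rcond=None)

        # component-wise sum over proxies; roughly equal sums would suggest the
        # proxy network weights the three drivers about evenly
        print(X_hat.sum(axis=1))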

  113. TAC
    Posted Jul 20, 2006 at 8:44 PM | Permalink

    In the end, historians of science will have the challenge of deconstructing and then explaining this story. Until then, it is up to us to try to understand what we have all seen. In so many interesting ways, the hockey team was fundamentally wrong about science: Problems with data, with correspondence between observed variables and the variable of interest, with interpretation of results, with appreciating uncertainty; and, to top it off, employing an invalid statistical method almost guaranteed to yield “hockey sticks”.

    The amazing thing is they nearly got away with it. One wonders what would have happened if SteveM had not come along. How long would it have been before someone conducted an audit and documented the errors?

    Perhaps the hockey team is right: There is something out there we ought to be alarmed about.

  114. bender
    Posted Jul 20, 2006 at 9:50 PM | Permalink

    Re: #113
    They would have been tripped up soon enough to suit the slow wheels of science. But not soon enough for the fast wheels of policy. For that you are to be credited. These guys were aware that the two worlds evolve on different time-scales and they sought to slip one through that crack in time.

    Do not underestimate the pressure on an untenured young professor to publish in the academic world. Bridgers in particular are prone to floating too far afield and abusing data and data-analysis methods. A little knowledge is a dangerous thing. It will happen again.

  115. TCO
    Posted Jul 20, 2006 at 10:13 PM | Permalink

    I’m very proud of Steve. But, you do have to realize that he is a bit of an advocate. Will avoid questions like PC1 versus overall reconstruction…doesn’t really want to just let the chips fall where they may in the cases that they make his “case” not as dramatic looking.

  116. Dave Dardinger
    Posted Jul 20, 2006 at 10:47 PM | Permalink

    TCO,

    Will avoid questions like PC1 versus overall reconstruction

    Just who else agrees with you there? I think it’s strictly a case of your not understanding the science/statistics. I haven’t seen anyone else here who both understands exactly what you’re trying to say (I certainly don’t get it), agrees, and knows the theory well enough to explain it.

    The funny thing is that I’ve found that the people who know the theory best, and who you’d think would be most opaque, are quite often actually the most able to make their subject accessible to the general public. For instance, I was amazed how easy it was to read Einstein’s popular presentation of his theory of relativity.

    Now personally I think Steve M does a fine job most of the time making his stuff understandable, though he does sometimes slip up. But you’ve harped so many times on this “PC1 vs reconstruction” that if it were valid someone would have come along and cleared it up. So, IMO, you’re most likely confused.

  117. Jack Lacton
    Posted Jul 20, 2006 at 10:55 PM | Permalink

    #111

    Certainly, Mann et al. are guilty of egregiously poor scientific research and of working hard to further their own ambitions.

    However, there are two real crimes here. One worse than the other.

    The major criminal is the UN IPCC, as it’s the one that promoted Mann’s work the hardest and used it to prove AGW and thus support the inanity that is Kyoto.

    The second lot of criminals is the editors of Nature, Science, etc., who must have known something was up a long while ago but allowed their personal prejudices to prevail. The Congressional committees would do well to roundly criticise these publications, which, of course, would be written into the public record and might make them sit up and take notice that they are responsible for Hwang, Mann, etc. gaining the prominence they did.

  118. James Lane
    Posted Jul 20, 2006 at 10:59 PM | Permalink

    “Will avoid questions like PC1 versus overall reconstruction”

    Just who else agrees with you there?

    Ahem, Tim Lambert. I’ve never understood TCO’s point either, Dave.

  119. TCO
    Posted Jul 20, 2006 at 11:12 PM | Permalink

    It’s a simple concept and I’ve explained it ad nauseam. Steve has not debated the substance of the difference.

  120. Dave Dardinger
    Posted Jul 20, 2006 at 11:36 PM | Permalink

    re: #119

    Probably because he can’t figure out what you’re talking about either. Of course if you could explain it, you’d also be able to work out the answer to your own question.

    Re: #118

    And Lambert is a quasi-troll. IOW, if someone says something anti Steve M, he’ll agree just to stir up trouble.

    End result. I’m looking for someone like Jean S to chime in if there’s a real problem.

  121. bender
    Posted Jul 21, 2006 at 12:25 AM | Permalink

    Re: #119-120

    if you could explain it, you’d also be able to work out the answer to your own question

    Generally, yes. But not necessarily, and maybe not in this instance. Sometimes you can intuit a problem, but the formal analysis of what’s bugging you is just beyond your reach.

    questions like “PC1 versus overall reconstruction”

    It’s a simple concept and I’ve explained it ad nauseam

    Ok, I’ll bite. Post a few links. No sense letting an unresolved question burn you up. (It is very strange that MBH98 would use PCA on the *instrumental* data and then use the instrumental PCs, not temperature, in the calibration. And maybe this is a closely related issue. Must keep reading …)

  122. bender
    Posted Jul 21, 2006 at 1:26 AM | Permalink

    Re: #111 How could MBH98 happen? Who’s to blame?
    When I look at MBH98 I don’t see an active conspiracy to deceive*. It appears to be an honest case of the “blind watchmaker”. They started with a pattern in mind. They fiddled and fiddled, and the methods grew incrementally more and more convoluted, the patterns started to come out of the clouds, they started to believe, they suspended skepticism, sent it out for review, fiddled some more, maybe another round of review where each reviewer thought the other would cover off on the stats (at least the individual referees’ reviews are always highly independent of one another)… and there you have it: the blind watchmaker’s unlikely product.

    This is clearly not a case of Intelligent Design. This is a monstrosity that should have been aborted in the first trimester. The irony is that if they hadn’t done the fancy spatiotemporal PCA – it never, ever would have got published at that high a level. Nature only likes things that are clever, and a “bonehead average” reconstruction (e.g. Crowley) just wouldn’t make that grade. (It is idealistic to think that the stature of Nature will take any kind of hit over this. Academics are inculcated to believe it is the most prestigious journal there is. That can’t be changed, especially in this case, as very few academics work in paleoclimatology.)

    *What happened after 1998 may be a different story altogether.

    And that’s where I say: be careful how you present the social network analysis. The links are based on co-authorship. But it is not the number of links among HT co-authors & peers at the time of 1998 that’s the BIG problem. It’s the tightness (i.e. the strength of the ties) that emerged AFTER the first MM criticism. HT will try to misrepresent what Wegman’s social network is *trying* to point to by simply talking about what it *does* point to. It’s not the review process ca. 1998 that’s really troubling; it’s the lack of disclosure afterward. And the social network as it stands only really hints at that.

    Changing that culture will constitute a major step forward. That being said, it might be more energy than it’s worth to make Nature your enemy. They were definitely not at the helm of this ship.

  123. James Lane
    Posted Jul 21, 2006 at 2:14 AM | Permalink

    For the record, I submitted a polite query to RC about the strange second graph in their “Wegman” thread. I simply asked why the reconstructions were all truncated about 1850. It didn’t get posted.

    Jean S thinks they have posted the wrong chart from Rutherford 2005. That may be the case – certainly the chart in no way supports what they are arguing in the text (that the “result” is the same whether you use PCA or not). Weird.

  124. T J Olson
    Posted Jul 21, 2006 at 4:12 AM | Permalink

    Re #122
    “It’s the tightness (i.e. the strength of the ties) that emerged AFTER the first MM criticism. HT will try to misrepresent what Wegman’s social network is *trying* to point to by simply talking about what it *does* point to. It’s not the review process ca. 1998 that’s really troubling; it’s the lack of disclosure afterward. And the social network as it stands only really hints at that.”

    This whole episode reminds me of a controversy that arose in US historical scholarship, circa 1996-2003. Building on a journal article, Emory University historian Michael A. Bellesiles published a book that argued that American gun culture was a recent post-Civil War creation – that gun ownership was rare when the Second Amendment became enshrined in the US Constitution. Therefore, the right to bear arms could not have been intended by the framers to be an individual freedom; expanding restrictions on gun ownership would not be unconstitutional.

    The book won praise among historians, winning the field’s most important prize, sponsored by Columbia University.

    But a graduate student, Clayton Cramer, was piqued by the original article. Doing related research on gun culture and law after the book appeared in 2000, he attempted to find Bellesiles’ sources in places like San Francisco. Bellesiles claimed his notes were lost in a flood. Or that the sources really were there. But every time the former student looked, they were not there. In one case, the source cited had burned in the San Francisco fire after the Great Earthquake of 1906, and thus could not have been the archive consulted! Similarly, numerical investigations to verify his probate and census records by Northwestern University law professor James Lindgren failed, and instead raised many questions about the author’s apparent fabrications.

    After making waves in the National Review, the establishment nonetheless ignored doubts and awarded the Bancroft Prize to Bellesiles for “Arming America, The Origins of a National Gun Culture.”

    But, as wikipedia summarizes it: “After a reporter from the Boston Globe investigated Bellesiles’ claims that early American firearms were of poor quality and operational condition, and found that the reports on Bellesiles’s website systematically altered the condition of guns in Vermont’s probate records in a way to fit his thesis, Bellesiles responded by claiming that his website had been hacked and the documents altered by someone out to destroy his reputation.”

    More historians investigated prior claims, routinely finding fault with Bellesiles’ research. His publisher, Knopf, withdrew the book. Emory University followed up, later resulting in his dismissal from a tenured post. Columbia rescinded his award.

    The lesson? First, amateurs like Steve can do us invaluable service in truth-seeking; and second, would that the prizes of prestige were all that was at stake in doing Big Science! Instead, the prizes are $6.5 billion next year in federal funding for climate change research – and $29 billion from 2001-06. Obscene amounts compared to the quest merely for historical truth.

    Which goes to the utility of the social network analysis Wegman did. What remains to be seen are the government funding flows and these associations. How did these work? Do they reveal self-interest? Was science prostituted for more than mere prestige and reputation-building? Contrary to what early commenters above suggest, this possibility cannot be discounted as insignificant until investigators look for and evaluate the facts. As my tale above shows, people have done more for less in academe.

    An ultimate question might be: has Senator and Vice President Al Gore corrupted science in his quest for environmental purity?

  125. Jean S
    Posted Jul 21, 2006 at 5:20 AM | Permalink

    re #123: They are not “truncated”; it is the way the RegEM method works. Unlike MBH-type methods, Rutherford’s reconstructions are not direct linear combinations of proxies. The basic idea is that you have some known values (proxies and grid cell (instrumental) temperatures for 1856-1971); then you use the RegEM algorithm to “fill in” the missing values, which are the grid cell temperatures for 1400-1855. The temperature reconstruction is then calculated from these grid cell temperatures as if they were measured. In other words, all versions (“hybrid”, “nonhybrid”, etc.) of their RegEM-based algorithms have the same grid cell values (the instrumental record) for the years 1856-1971, and therefore all reconstructions (calculated from the grid cells) have the same end.

    RegEM (= “Regularized EM”) itself is Schneider’s modification of the well-known EM algorithm. RegEM basically modifies the covariance estimation in a version of EM by a “regularization” which is supposed to help when there are a lot of missing values. To understand the procedure of Rutherford’s paper, it is enough to understand how the standard EM algorithm works (a “gentle” tutorial for math-oriented people here).
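
    To give a feel for the fill-in idea, here is a toy iterative regression imputation of my own, with one proxy and one temperature series and arbitrary made-up numbers. It is not Schneider’s RegEM, which works on many series at once, regularizes the covariance estimate, and in true EM fashion also carries the imputation uncertainty into the covariance update.

        import numpy as np

        rng = np.random.default_rng(2)
        n, n_cal = 400, 150                       # 400 "years", temperature known only in the last 150
        proxy = rng.standard_normal(n).cumsum() * 0.1
        temp_true = 0.8 * proxy + 0.3 * rng.standard_normal(n)

        temp = temp_true.copy()
        temp[:-n_cal] = np.nan                    # pre-instrumental temperatures are "missing"

        miss = np.isnan(temp)
        temp_fill = np.where(miss, np.nanmean(temp), temp)   # initialize with the calibration mean

        for _ in range(20):                       # alternate: fit the relationship, re-impute the gaps
            slope, intercept = np.polyfit(proxy, temp_fill, 1)
            temp_fill = np.where(miss, intercept + slope * proxy, temp)

        # temp_fill[:-n_cal] is the "reconstruction"; the instrumental segment is left
        # untouched, which is why all such reconstructions share the same modern end.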

    The figure is still wrong in the post. You can download Rutherford (2005) from Mann’s web page 🙂 See Figure 2 (the same as in the RC post). Red is “hybrid-20”, which refers to their “filtering modification” of RegEM with a 20-year “cut frequency” (see here for some more details). Green is “Allproxy Hybrid-20”, which is the same as red, but they have used all proxies (the MBH98 and Briffa’s MXD networks), not only the MBH98 proxies. The figure does not include the MBH98 graph, whereas Figure 3 in Rutherford is a comparison of the “hybrid-20” and MBH98 reconstructions (red and blue, respectively).

    RC moderators, see #49 in this thread for properly crediting this finding.

  126. TCO
    Posted Jul 21, 2006 at 6:04 AM | Permalink

    Dave, he really does understand the difference and I’m really surprised that you don’t. Bender, the issue is as simple as the distinction between the impact of a method (off-centering) on PC1 and the impact of that method on the ACTUAL HOCKEY STICK RECONSTRUCTION (the graph that Houghton stood behind). Impact on PC1 is not the same as impact on the reconstruction, because the reconstruction IS NOT a PC1:
    1. More than one PC is retained in the Mannian method (dilution effect).
    2. PC2 may respond differently than PC1 to a method change (it could be a counterbalance; we don’t know, but it might).
    3. The reconstruction is more than a set of PCs. There is some combination with other things (i.e. the “Mannomatic”).

    The point is that the impact of a flaw may be overstated by looking at an intermediate step. Also, it is disingenuous to do so if one allows the reader to remain confused as to the difference and does not clarify the difference (as you apparently are… but I am surprised; I would think you would have learned from my beating the issue… heck, Lambert understands it). The places where it has been disingenuous have been:
    a. One completely false description (calling PC1 “the reconstruction”, with John Cross).
    b. Failure to answer my question for a clarification (until I beat the issue to a pulp).
    c. Answering questions about the impact of a method (from posters) with the impact on the intermediate result rather than the final result (and not caveating or clarifying). Ross is guilty of this.

    N.B. The impact on PC1 itself has some interest, as PC1 is referred to as the “dominant mode of variance” in the MBH text. I’m not saying that Steve shouldn’t look at that. I am saying that he should not overstate his case in any disingenuous manner.

  127. Brooks Hurd
    Posted Jul 21, 2006 at 7:59 AM | Permalink

    Re: 125 Jean S.
    Does this mean that the application of RegEM is somewhat like using a GCM to “create” temperatures in the past?

  128. TCO
    Posted Jul 21, 2006 at 8:28 AM | Permalink

    How does it differ from interpolation/extrapolation? Is it superior, and if so why, to any modeling exercise where you create a regression (a curve, linear or nonlinear) and then use that to impute expected values at points inside or outside the range?

  129. beng
    Posted Jul 21, 2006 at 8:28 AM | Permalink

    RE 116:

    For instance, I was amazed how easy it was to read Einstein’s popular presentation of his theory of Relativity.

    Dave, I hadn’t read your post & I’d been thinking the same thing. The written prose of Einstein seems as much genius as his physics understanding, because it’s remarkably easy to read & understand (in a fatherly, old-European style). Isaac Newton was also a brilliant writer.

  130. Jean S
    Posted Jul 21, 2006 at 8:52 AM | Permalink

    TCO, I’ve been trying really hard to understand what you are after, but I’m not yet completely sure. So correct me if I still got it wrong.

    As I understand it, you would like to see the total impact of using real PCA instead of “Mannian PCA” on the final reconstruction. The question is reasonable, but how can anyone answer it in a satisfactory manner:

    a) PCA calculations are used in several places to reduce the tree ring networks. I do not know if all of the ORIGINAL data is still available (or if even Steve knows which tree ring indices were used), so how can one calculate the correct PCs for each step?
    b) Even if someone could, how do you decide how many PCs to keep? Evidence indicates that, contrary to RC claims, Mann did not actually use the N-rule. In any case, no real statistician would use the N-rule.
    c) With the best info and the most reasonable assumptions available, M&M already demonstrated (2005, E&E) the effect on the final reconstruction with their emulated MBH98 (see the center and bottom panels of figure 1 especially). Why is that not good enough for you?

  131. Mark T.
    Posted Jul 21, 2006 at 8:58 AM | Permalink

    Non-stationarity of the data will affect RegEM the same way it does PCA. Garbagio in, garbagio out.

    Mark

  132. Jean S
    Posted Jul 21, 2006 at 9:02 AM | Permalink

    re #127: In blunt terms: yes. In simplistic terms, it tries to “learn” the relationships between proxies and grid cell temperatures where the data are available, and then, when temperature is not available, it tries to “guess” those values based on the available proxy values.

    re #128: It would be a lot easier to explain in mathematical terms how RegEM works than to answer your questions. So why don’t you read and understand it from Schneider’s paper, and then we can discuss whether we can find an answer to your questions? The second question especially hardly has a known answer, as the properties of RegEM are yet to be established. So if you have even a partial answer, consider writing a journal article.

  133. Mark T.
    Posted Jul 21, 2006 at 9:08 AM | Permalink

    I wonder if I can do RegEM for beamforming…? 🙂

    Mark

  134. Jean S
    Posted Jul 21, 2006 at 9:16 AM | Permalink

    re #133, Sure, but I don’t know if you’d be happy with the results 🙂

  135. bender
    Posted Jul 21, 2006 at 9:21 AM | Permalink

    Re: #128 REGEM
    The Rutherford et al paper, thankfully, puts a name, REGEM, to the creative “training” step in MBH98 (p. 786) that I was just complaining about, and provides a reference: Schneider (2001) – though I note this latter paper was NOT published in the statistical literature, but J. Climate. Red flag.

    TCO’s question is a good one. On the surface it looks like REGEM was designed for estimating missing data from within a space-time series (interpolation). If it is being used to extrapolate, I don’t see how it’s going to have any spatial and temporal autocorrelation structure to work from in estimation.

    Can’t help much beyond that. REGEM is new to me, and I don’t know exactly how it works. Though I’ve always wanted a tool that could do what it purports to do.

  136. Mark T.
    Posted Jul 21, 2006 at 9:31 AM | Permalink

    Hehe, I take it you’re not all that thrilled with RegEM as a statistical method for analyzing any data?

    What I’m thinking about doing, btw, is taking the data and putting it into an on-line adaptive PCA algorithm. Then I can track the weighting over time and, ultimately, calculate the stationarity coefficient (alpha, as I recall, in Cichocki-Amari). This will be good practice for my research anyway.

    Mark

  137. Jean S
    Posted Jul 21, 2006 at 9:36 AM | Permalink

    re #135: Yes, and even in Schneider’s paper its properties are not really analyzed. But bender and others, no need to put too much effort into trying to understand the properties of RegEM; it was not really used in Rutherford et al. …

  138. bender
    Posted Jul 21, 2006 at 9:52 AM | Permalink

    Re #136:
    I just think the Schneider (2001) paper should be reviewed by a few real statisticians like Wegman. I can’t comment on it, or the RegEM algorithm, myself because this is the first I’ve ever heard of it. The question I’m raising is not the integrity of the algorithm, but this particular application of it. Interpolation seems reasonable. Extrapolation?? I can’t see how an “overdetermined optimization” method is going to have any more predictive “skill” than regression when it comes to the dicey game of extrapolation. Maybe you get some benefit in terms of obtaining a nonlinear link function instead of a linear one. But still, as Mark T says, homogeneity of the link function is assumed when “inhomogeneity of the link function” (von Storch) is the problem.

    Personally, I think decoding RegEM may just be sand in your wheels. What you really want are honest error bars on that mean reconstruction curve that everyone is after. And one of the key ways that I think the dendroclimatology people are neglecting to consider error is how they do a single RegEM training run against a single proxy set treated as error-free. They should be doing hundreds of training runs against hundreds of bootstrapped sample proxies. The bootstrapped sample proxies created by resampling would be so ridden with error (because, remember, these proxy values are yearly estimates, not knowns) that the RegEM training coefficients would be highly variable among runs and the confidence envelope on the reconstruction curves would accordingly explode. Then you’ll see the true range of possibilities for the MWP, such as the ridiculous possibility that the MWP was colder than the LIA.

    The key to policy doubt is scientific uncertainty.
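
    A minimal sketch of that bootstrap idea (entirely my own toy setup: a synthetic proxy network, a crude mean composite instead of RegEM, and made-up sizes), just to show the mechanics of resampling the proxy network and collecting an envelope of reconstructions:

        import numpy as np

        rng = np.random.default_rng(3)
        n_years, n_proxies, n_cal, n_boot = 600, 30, 120, 200
        proxies = rng.standard_normal((n_years, n_proxies)).cumsum(axis=0) * 0.05
        temp_cal = proxies[-n_cal:, :5].mean(axis=1) + 0.2 * rng.standard_normal(n_cal)

        recons = np.empty((n_boot, n_years))
        for b in range(n_boot):
            cols = rng.integers(0, n_proxies, n_proxies)      # resample proxies with replacement
            comp = proxies[:, cols].mean(axis=1)              # crude composite of the resampled network
            slope, intercept = np.polyfit(comp[-n_cal:], temp_cal, 1)   # recalibrate each time
            recons[b] = intercept + slope * comp

        lo, hi = np.percentile(recons, [2.5, 97.5], axis=0)   # bootstrap envelope on the reconstruction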

  139. David Miller
    Posted Jul 21, 2006 at 9:54 AM | Permalink

    The apostles of Global Warming are preaching that I must accept that the evils of human activity are causing great harm to the planet. “Human-produced CO2 is causing extreme and unprecedented warming”, so sayeth the Word of GW. Let it be known that I accept the scripture that the earth is warming – it is, after all, coming out of the Little Ice Age, and the earth has undergone repeated and significant periods of NATURAL climate change for ages and ages. But I am not without sin. I confess that I do not believe that man is the cause of this fifth horseman of the apocalypse – GASP!

    Indeed, some of us who refuse conversion actually question the Word of GW because we see politics and emotion instead of good clean science (Exhibit A: Al Gore). I for one am very troubled by the fact that the GW clergy can’t simply say “Yep, you are right. The Mann hockey stick is based on invalid technical work. It got picked up as a political football, but we must now cast it aside as a technical cripple.” The inability to acknowledge bad work is always a red flag.

    But please don’t think that I have sinned only this little bit, for I have even more sins that must be confessed here. I don’t think anyone (including the clergy of GW) has a solid understanding of the extremely complex natural system which drives global climate. I have never heard anyone say something along the lines of “The Medieval Warm Period occurred because of X Y and Z natural forces, and the Little Ice Age occurred because of A B and C natural forces”. Basically, if we don’t understand why these events occurred in the past due to natural causes, then how can we isolate out current manmade influences and then predict temperatures 50 to 100 year into the future?

    My sins are too great to number for I also doubt the great and sacred global climate models. I assume these models have the natural system worked out very clearly in order to perform the forecasts. So I further assume these things have been all worked out in the historical record too. This is the critical issue – are these things actually nailed down in the historical period or are the climate predictions hanging out there as point forward extrapolations? A reliable simulation model will have been tested and refined by robust history matching before it is used for forecasting. I think it is fair to ask if the models used for these predictions can also match a substantial period of historical temperature cycles. Are the models calibrated to historical cycles? Are the models run starting from an initialization point a few thousand years ago with the resulting output tested against actual data? Or are they run starting from today with no history match?

    Can anyone provide a plot with three datasets: (a) global climate model simulation output of the predicted global average temperature covering a significant historical period (starting back at least as far as the Medieval Warm Period); (b) actual average historical temperature data for the same time period (this will let us judge the quality of the history match coming out of the model); and (c) the simulation output for the predicted future? More simply, these are the actual history, the modelled history, and the modelled future. We generally only get to see “c”, since that is where the politics is; as per the topic of this post, “b” appears to be contested; and I have never seen “a” presented anywhere.

    And yes, I have more sins of doubt. Can anyone explain how it is known that the causality is that increased CO2 is causing warming, and not the reverse (that NATURAL warming is causing increased CO2)? Is it actually possible to divine the impact of human activity out of such a huge, complex and poorly understood natural system? And just how reliable are the assumptions made regarding worldwide human demographics, economics, technical innovations, etc. that go into the model predictions of future global climate change?

    Father, my tithing and faith rest upon answers to these questions (or the taxes the GW political clergy impose upon me)! Please don’t condemn me to roast in climate hell with these questions unanswered!

  140. jae
    Posted Jul 21, 2006 at 10:00 AM | Permalink

    Great post, David. I agree.

  141. Posted Jul 21, 2006 at 10:20 AM | Permalink

    #137. This seems relevant.

    Limitations on regression analysis due to serially
    correlated residuals: Application to climate
    reconstruction from proxies
    Peter Thejll and Torben Schmith

    Click to access ThejllSchmithJGRAtmospheres2005.pdf

    September 2005.
    [1] The effects of serially correlated residuals on the accuracy of linear regression are considered, and remedies are suggested. The Cochrane-Orcutt method specifically remedies the effects of serially correlated residuals and yields more accurate regression coefficients than does ordinary least squares. We illustrate the effects of serially correlated residuals, explain the application of the CO method, and evaluate the gains to be achieved in its use. We apply the method to an example from climate reconstruction, and we show that the effects of serial correlation in residuals are present and show the significantly improved result.
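
    For readers who want to see the mechanics, here is a bare-bones single-regressor version of the Cochrane-Orcutt iteration the abstract describes (my own sketch, not the authors’ code; no convergence test, no standard errors): estimate rho from the OLS residuals, quasi-difference both series, refit, and repeat.

        import numpy as np

        def cochrane_orcutt(y, x, n_iter=20):
            """Toy Cochrane-Orcutt fit of y = a + b*x with AR(1) residuals."""
            rho = 0.0                                      # first pass is plain OLS
            for _ in range(n_iter):
                ys = y[1:] - rho * y[:-1]                  # quasi-differenced series
                xs = x[1:] - rho * x[:-1]
                slope, intercept = np.polyfit(xs, ys, 1)
                intercept = intercept / (1.0 - rho)        # undo the transform of the constant
                resid = y - (intercept + slope * x)
                # re-estimate rho from the lag-1 autocorrelation of the residuals
                rho = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
            return slope, intercept, rho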

  142. welikerocks
    Posted Jul 21, 2006 at 10:20 AM | Permalink

    #138 David Miller
    Well said and a tad testy!
    (the restraint we have to have sometimes is unbearable LOL)

    I like it!

  143. Steve Sadlov
    Posted Jul 21, 2006 at 10:23 AM | Permalink

    RE: #124 – It’s interesting (and typically disappointing) that the mainstream media made quite a big deal about Bellesiles (at the time, I heard him in an interview on NPR) but made little mention of it when he was exposed as a fraud. To read RC, you’d think the media were some vast right-wing conspiracy, but I find zero circumstantial or hard evidence of such claims; in fact the evidence suggests exactly the opposite is true. The mainstream media are a major negative factor in propagating junk science and agenda-driven “research” results into the public arena.

  144. Steve Sadlov
    Posted Jul 21, 2006 at 10:26 AM | Permalink

    RE: #138 – “We are the priests of the Temples of Syrinx, our great computers fill the hallowed halls” – LOL!

  145. bender
    Posted Jul 21, 2006 at 10:32 AM | Permalink

    “All the gifts of life all held within these walls.”

  146. TCO
    Posted Jul 21, 2006 at 10:48 AM | Permalink

    130 (JeanS). I’m not “after” anything, JeanS. I’m making a judgement on ethical communication. Sorry about your misunderstanding, but this is not a technical issue, it is a logical and ethical one.

    a. My point is that promoting (or even allowing) a confusion of impact on PC1 versus impact on “the hockey stick” of the off-centering flaw is wrong. It is dishonest. In particular it is vexing since the impact on PC1 is likely an overstatement of the impact on “the hockey stick” (actual reconstruction).
    b. Proper action in EE does not excuse misleading communication elswhere.
    c. Lack of ability to perform the actual reconstruction does not excuse misleading communication.

  147. Steve Sadlov
    Posted Jul 21, 2006 at 10:52 AM | Permalink

    RC are ranting about the hearing. Here is a simple post I have attempted into the queue, not likely to be posted, but nonetheless:

    “What impact would excluding Bristlecones have on the final result? That is the REAL question that REAL Climate needs to discuss.
    by Steve Sadlov”

  148. Michael Jankowski
    Posted Jul 21, 2006 at 10:57 AM | Permalink

    RE #145 – As with everything else in the debate since the start: “It doesn’t make a difference.” No demonstration, no quantification, etc. Mann will point out that in MBH99 they removed the portion of 20th-century bristlecone growth attributed to CO2. And then they’ll discuss all the other “independent” studies with similar conclusions, etc.

  149. jae
    Posted Jul 21, 2006 at 10:59 AM | Permalink

    Steve will probably publish a study that takes care of all the other “independent” studies. He has posted more than enough for a great paper.

  150. jae
    Posted Jul 21, 2006 at 11:00 AM | Permalink

    In fact, I’ll bet the study will be submitted soon, if it has not been already. Maybe that will make TCO smile.

  151. nanny_govt_sucks
    Posted Jul 21, 2006 at 11:01 AM | Permalink

    #143: Darn it! I wanted to be the first with a relevant Rush lyric!

    How about:

    “There is unrest in the forest
    There is trouble with the trees”

    Indeed!

  152. kim
    Posted Jul 21, 2006 at 11:05 AM | Permalink

    It is not a new human tendency to blame the actions of the Gods on man’s misbehaviour. I think Gore has tapped into an ancient superstition and is this era’s spiritual leader in this old fallacy. It’s too bad that Divinity School didn’t educate him against such prejudices. He is a desperately false prophet. Don’t sacrifice my virgins for your superstitions.
    =============================

  153. Jim O'Toole
    Posted Jul 21, 2006 at 12:01 PM | Permalink

    143, 151
    Good ones. How about:
    “Science, like Nature, must also be tamed.
    With a view toward its preservation.”

  154. bender
    Posted Jul 21, 2006 at 12:17 PM | Permalink

    “The most endangered species – the honest Man (!) –
    will still survive annihilation”

  155. per
    Posted Jul 21, 2006 at 5:49 PM | Permalink

    this is becoming a rush fest 🙂
    per

  156. Posted Jul 21, 2006 at 6:26 PM | Permalink

    Rush lyrics?

    An ill wind comes arising
    Across the cities of the plain
    There’s no swimming in the heavy water
    No singing in the acid rain
    Red alert
    Red alert

    Notice how so many of them are alarmist in nature?

  157. Allan M.R. MacRae
    Posted Jul 21, 2006 at 9:07 PM | Permalink

    Here is the full article, written in 2002, in which I said it would get colder by 2020-2030. This hypothesis was based on conversations with Dr. Tim Patterson, Carleton University paleoclimatologist. In May 2006, NASA said that solar cycle 25 will be an extremely weak one. It will peak in ~2022.

    I am still comfortable with the points in this article. If I changed anything, I would be even more pessimistic about most alternative energies, particularly wind power and corn ethanol. I’m less sure about solar but it still has a long way to go.

    Regards, Allan

    P.S. For all those budding kid detectives out there – I have spent about half my career in the fossil fuel industry – does this make me an unreliable source of information? No, but it does mean that people like me, who provide 87% of global primary energy, keep you and your loved ones from starving and freezing.
    This article is partially based on another written with Sallie Baliunas, Harvard U astrophysicist, and Tim Patterson, Carleton U paleoclimatologist, at http://www.apegga.org/whatsnew/peggs/WEB11_02/kyoto_pt.htm

    KYOTO HOT AIR CAN’T REPLACE FOSSIL FUELS
    Many critics of the Kyoto protocol say the science does not back up the accord

    Allan M.R. MacRae
    Calgary Herald

    Sunday, September 1, 2002

    The Kyoto accord on climate change is probably the most poorly crafted piece of legislative incompetence in recent times.

    First, the science of climate change, the treaty’s fundamental foundation, is not even remotely settled. There is even strong evidence that human activity is not causing serious global warming.

    The world has been a lot warmer and cooler in the past, long before we ever started burning fossil fuels. From about 900 to 1300 AD, during the Medieval Warm Period or Medieval Optimum, the Earth was warmer than it is today.

    Temperatures are now recovering from the Little Ice Age that occurred from about 1300 to 1900, when the world was significantly cooler. Cold temperatures are known to have caused great misery — crop failures and starvation were common. Also, Kyoto activists’ wild claims of more extreme weather events in response to global warming are simply unsupported by science. Contrary to pro-Kyoto rhetoric, history confirms that human society does far better in warm periods than in cooler times.

    Over the past one thousand years, global temperatures exhibited strong correlation with variations in the sun’s activity. This warming and cooling was certainly not caused by manmade variations in atmospheric CO2, because fossil fuel use was insignificant until the 20th century.

    Temperatures in the 20th century also correlate poorly with atmospheric CO2 levels, which increased throughout the century. However, much of the observed warming in the 20th century occurred before 1940, there was cooling from 1940 to 1975 and more warming after 1975. Since 80 per cent of manmade CO2 was produced after 1940, why did much of the warming occur before that time? Also, why did the cooling occur between 1940 and 1975 while CO2 levels were increasing? Again, these warming and cooling trends correlate well with variations in solar activity.

    Only since 1975 does warming correlate with increased CO2, but solar activity also increased during this period. This warming has only been measured at the earth’s surface, and satellites have measured little or no warming at altitudes of 1.5 to eight kilometres. This pattern is inconsistent with CO2 being the primary driver for warming.

    If solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.

    The last big Ice Age, when Canada was covered by a 1.5-kilometre-thick ice sheet, ended only about 10,000 years ago, and another big one could start at any time in the next 5,000 years. Mankind clearly didn’t cause the rise and fall of the last big Ice Age, and we may not have any ability to control the next big one either.

    It appears that increased CO2 is only a minor contributor to global warming. Even knowing this is true, some Kyoto advocates have tried to stifle the scientific debate by deliberate misinformation and bullying tactics. They claim to be environmentalists — why do they suppress the truth about environmental science?

    Some environmental groups supporting Kyoto also lack transparency in their funding sources and have serious conflicts of interest. Perhaps they are more interested in extorting funds from a frightened public than they are in revealing the truth.

    Do they not know or care that Kyoto will actually hurt the global environment by causing energy-intensive industries to move to developing countries, which are exempt from Kyoto emission limits and do not control even the most harmful forms of pollution?

    The Canadian government wants to meet its Kyoto targets by paying billions of dollars a year for CO2 credits to the former Soviet Union. For decades, the former Soviet Union has been the world’s greatest waster of energy. Yet it will receive billions in free CO2 credits because of the flawed structure of Kyoto. No possible good can come to the environment by this massive transfer of wealth from Canadians to the former Soviet Union.

    Kyoto would be ineffective even if the pro-Kyoto science was correct, reducing projected warming by a mere 0.06 degrees Celsius over the next half-century. Consequently, we would need at least 10 Kyoto’s to stop alleged global warming. This would require a virtual elimination of fossil fuels from our energy system. Environment Canada knows this but doesn’t really want to tell you all the economic bad news just yet.

    What would the economic impact of 10 Kyoto’s be? Think in terms of 10 times the devastating impact of the oil crisis of the 1970s (remember high unemployment, stagflation and 20 per cent mortgage rates) or 10 times the impact of Canada’s destructive and wasteful National Energy Program. Be prepared for some huge and unpleasant changes in the way you live.

    Fossil fuels (oil, natural gas and coal) account for 87 per cent of the world’s primary energy consumption, with 13 per cent coming from nuclear and hydroelectricity. Is it possible to replace such an enormous quantity of fossil fuels?

    Hydrogen is not an answer — it is a clean secondary energy currency like electricity, but it is made from primary energy such as fossil fuels, nuclear or hydro.

    Kyoto advocates want expanded renewable energy such as geothermal, wind, and solar power and biomass to provide our future needs. Is this possible?

    In 2001, there was a total global installed capacity of eight gigawatts (GW) of geothermal power and 25 GW of wind power. Even assuming the wind blows all the time, this equals only one quarter of one per cent of worldwide primary energy consumption. The contribution of solar electrical power generation is so small as to be inconsequential. To replace fossil fuels, we would need to increase all these renewables by a staggering 33,000 per cent.

    Of course, wind doesn’t blow all the time — wind power works best as a small part of an electrical distribution system, where other sources provide the base and peak power. Although wind power has made recent gains, it will probably remain a small contributor to our overall energy needs. A 1,000-megawatt wind farm would cover a land area of 1,036 square kilometres, while the same-size surface coal mine and power plant complex covers about 36 square kilometres. Wind farms cover a much bigger area, are visible for miles due to the height of the towers and kill large numbers of birds.

    What about solar? The electricity generated by a photovoltaic solar cell in its entire lifetime does not add up to the energy used to manufacture it, not to mention the requirement for vast areas for solar farms. These solar cells make sense only in limited special applications or in remote locations.

    Hydroelectric power is another renewable, but environmental activists don’t want more hydro because it dams rivers.

    What about biomass solutions such as ethanol? Canada, the United States and a few other countries may have available crop land for ethanol to partially meet our local needs, but it is clearly not a global solution.

    Many developing countries will reject renewable energy due to higher costs, since renewables usually require subsidies to compete with fossil fuels.

    Conventional nuclear fission or, someday, fusion are the only two prospects that could conceivably replace fossil fuels. But Kyoto activists hate nuclear.

    Conservation is a good solution, but Canada has been improving its energy efficiency for decades, in response to rising energy prices. Significant improvements have been achieved in heating and insulation of homes, automotive mileage and industrial energy efficiency. However, Canadians live in a cold climate and our country is vast. There are practical limits to what we can achieve through energy conservation.

    So where will all the energy come from if we eliminate oil, natural gas and coal? Kyoto supporters have provided no practical answers, they just want to ratify this flawed treaty. It would be nice if our energy supply solutions were simple, but they’re not. In the long run, if we implement Kyoto we will have only two choices — destroy our economy and suffer massive job losses and power blackouts, or break the terms of Kyoto, which will be international law.

    Instead of Kyoto, a new global anti-pollution initiative should be drafted by people who have a much better understanding of science, industry and the environment. It should focus, not on global warming and CO2, but on real atmospheric pollutants such as SO2, NOx and particulates as well as pollutants in the water and soil — and no country should be exempt.

    Then there might be a chance to actually improve the environment, rather than making it worse and wasting billions on the fatally flawed Kyoto accord.

    __________________________________________________________

    Allan M.R. MacRae is a professional engineer, investment banker and environmentalist.

  158. Allan M.R. MacRae
    Posted Jul 21, 2006 at 9:15 PM | Permalink

    Meant to post #157 above at ~#320, “Whitfield SubCommittee: Witnesses to be questioned”.

  159. kim
    Posted Jul 22, 2006 at 2:50 AM | Permalink

    Pebble bed nuclear power plants, franchised from China.
    =======================

  160. John Creighton
    Posted Jul 22, 2006 at 1:22 PM | Permalink

    #157 Unfortunately I have to check the facts on both sides of the argument. This link suggests that the energy payback of solar cells is much faster than was suggested in the article that you quoted.

    Click to access 35489.pdf

  161. John Creighton
    Posted Jul 22, 2006 at 1:27 PM | Permalink

    Two more supporting links:
    Link 1
    Link 2

  162. Greg F
    Posted Jul 22, 2006 at 3:03 PM | Permalink

    RE: 160

    This link suggests that the energy payback of solar cells is much faster than was suggested in the article that you quoted.

    John,

    The paper you cited is just more of the same nonsense from the National Renewable Energy Laboratory. The 4-year payback is based on:

    1) “…did not include the energy that originally went into crystallizing microelectronics scrap…”

    2) “… frameless PV …”

    3) “Assuming 12% conversion efficiency (standard conditions)”.

    One is no longer valid. There is a shortage of silicon and will be for some time. The demand by solar panel manufacturers has now exceeded the supply of scrap.

    Two is just silly.

    Three is also optimistic. “Standard conditions” is defined as a cell temperature of about 25 degrees C. The reality is, what isn’t converted to electricity will be turned into heat and the silicon will be at a much higher temperature. Look at the specifications for any solar panel. You will find that power output declines with increased temperature.

    I wish I had more time. Here are some real solar cells operating under real-life conditions. I calculated one of the systems and it had a financial payback time in excess of 50 years. How much of the cost do you think is the energy needed to manufacture and install it? 50%? One more thing to note is the inverters: look at any of the manufacturers' spec sheets and you will find the MTBF on the components is 25 years.
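
    For what it's worth, here is a rough back-of-envelope sketch of both points in Python. Every number below (temperature coefficient, system cost, output, electricity price) is an illustrative assumption, not taken from the linked systems:

      # Back-of-envelope solar derating and payback sketch (all numbers are assumptions).

      # Temperature derating: crystalline silicon typically loses roughly 0.4-0.5%
      # of output per degree C of cell temperature above the 25 C standard condition.
      p_rated_kw = 3.0        # assumed DC rating at standard conditions
      temp_coeff = -0.004     # assumed -0.4% per degree C power temperature coefficient
      cell_temp_c = 55.0      # assumed real-world cell temperature
      p_hot_kw = p_rated_kw * (1 + temp_coeff * (cell_temp_c - 25.0))
      print(f"Output at {cell_temp_c:.0f} C: {p_hot_kw:.2f} kW vs {p_rated_kw:.1f} kW rated")

      # Simple (undiscounted) financial payback.
      system_cost = 25000.0   # assumed installed cost, dollars
      annual_kwh = 4500.0     # assumed annual AC output after inverter losses
      price_per_kwh = 0.10    # assumed retail electricity price, dollars per kWh
      payback_years = system_cost / (annual_kwh * price_per_kwh)
      print(f"Simple payback: {payback_years:.0f} years")

    With these assumed numbers the simple payback works out to roughly 55 years, in the same ballpark as the "in excess of 50 years" figure above.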

  163. Gerald Machnee
    Posted Jul 22, 2006 at 7:25 PM | Permalink

    Re #157 – **Instead of Kyoto, a new global anti-pollution initiative should be drafted by people who have a much better understanding of science, industry and the environment. It should focus, not on global warming and CO2, but on real atmospheric pollutants such as SO2, NOx and particulates as well as pollutants in the water and soil — and no country should be exempt.**
    I agree with this. I have been trying to make this point on RC and here. We should do more research into the possible effects of pollution on water vapour and cloud increase. I think the current Canadian government is going to review Kyoto and they have at least “stated” that they will reform pollution laws.

  164. Paul Penrose
    Posted Jul 24, 2006 at 9:00 PM | Permalink

    When talking about the efficiency of solar power, the proponents always want to look at the DC output of the panels, but what you really need to be measuring is the AC power output from the inverter. By the time those losses are included, solar is a loser. In fact, I think that with current manufacturing techniques it takes more energy to make a complete solar system (including the inverter) than the system will produce in its lifetime. So it's a double loser.

  165. David H
    Posted Jul 25, 2006 at 2:07 AM | Permalink

    Re #164

    Paul, I also remember reading that so much energy went into solar panels that they would never produce enough to repay it, but I have searched for the article in vain. Does anyone know where it is?

    Also, nuclear was recently criticised on a radio programme for the amount of concrete needed to build stations. However, another programme pointed out that wind generators have tons of concrete below ground to stop them being blown over. Does anyone have a link to credible info on lifetime carbon footprints?

    If sceptics were awash with oil industry money I would expect someone would have put together a database of these various facts. I remember long ago when CDC persuaded IBM to settle out of court they also had to agree to destroy their database, which had proved to be a devastating weapon.

  166. Jim Edwards
    Posted Jul 25, 2006 at 2:41 AM | Permalink

    #164

    You mention that solar panels should be evaluated by the amount of usable AC the system produces, not the DC output of the panels. This makes sense. Are you giving solar systems credit for not having the significant transmission losses that distant hydro or coal-fired power plants exhibit? A significant benefit of solar [for residential installations in the Southwest] is that the electricity is produced on-site. What is the economic benefit, if any, of being able to increase a state's generation capacity without having to run additional long-distance transmission lines?

  167. Posted Jul 25, 2006 at 5:14 AM | Permalink

    At the risk of being very off-topic…

    You can hook DC appliances up to solar panels. But clearly it’s not “free” to do so: you have to buy new appliances, and then you pay a penalty if you try to run them off the grid.

    I don’t think it’s true any more that it takes more energy to make solar panels than they produce, but it’s not far from it. I researched the costs vs. benefits of having solar panels: spending $5000 on panels and the electronic equipment necessary to use them would make about $3000 worth of power over their lifetime. I think the only way to make them properly economical is to use a solar furnace rather than panels.

  168. Allan M.R. MacRae
    Posted Jul 25, 2006 at 6:02 AM | Permalink

    Here is an update on wind power – it is much less efficient than I implied in my Calgary Herald article, above.

    The excellent report which provided this insight is the E.On 2005 Wind Report:

    http://www.eon-netz.com/EONNETZ_eng.jsp

    E.On states that in 2004 it had an installed wind power capacity of 7050MW and an average feed-in capacity of 1295MW or 18.4% capacity factor. E.On operates over 40% of the wind power in Germany (more than the entire installed USA wind capacity), so one assumes their numbers must be representative for their grid size, layout, etc. Reported wind power capacity factors in the USA are substantially higher than 18%, perhaps due to design differences or higher wind quality.

    Figure 7 of the E.On report, entitled “Falling substitution capacity”, states:

    “Guaranteed wind power capacity below ten percent — traditional power stations essential.”

    “The more wind power capacity is in the grid, the lower the percentage of traditional generation it can replace.”

    E.On further states on page 9:

    “In concrete terms, this means that in 2020, with a forecast wind power capacity of over 48,000MW, 2,000MW of traditional power production can be replaced by these wind farms.”

    Re-stating:

    Substitution Capacity = [Predicted 2020 traditional power actually replaced by wind]/[Predicted 2020 total nameplate wind capacity] = only 4%!

    Germany is now at 8% Substitution Capacity, which is bad enough, but will decline to 4% if all goes according to plan.
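
    The arithmetic can be checked directly from the figures quoted above; a minimal Python sketch using only those numbers:

      # Check of the E.On figures quoted above.
      installed_mw_2004 = 7050.0    # installed wind capacity, 2004
      avg_feed_in_mw = 1295.0       # average feed-in capacity, 2004
      capacity_factor = avg_feed_in_mw / installed_mw_2004
      print(f"Capacity factor, 2004: {capacity_factor:.1%}")        # ~18.4%

      nameplate_mw_2020 = 48000.0   # forecast wind capacity, 2020 (Dena grid study)
      replaced_mw_2020 = 2000.0     # traditional capacity that can be replaced
      substitution_capacity = replaced_mw_2020 / nameplate_mw_2020
      print(f"Substitution capacity, 2020: {substitution_capacity:.1%}")  # ~4.2%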

    So an analysis by a German industry leader, operating more wind power than the entire USA, says it will install 12 to 24 times more wind power capacity than the conventional power that the wind power replaces.

    On this basis, I suggest they would have to give away a wind turbine free with every Bratwurst to make wind power economic.

    Best, Allan

    Full quotation below.

    Page 9 of E.On 2005 Wind Power Report

    In order to also guarantee reliable electricity supplies when wind farms produce little or no power, e.g. during periods of calm or storm-related shutdowns, traditional power station capacities must be available as a reserve. This means that wind farms can only replace traditional power station capacities to a limited degree.

    An objective measure of the extent to which wind farms are able to replace traditional power stations, is the contribution towards guaranteed capacity which they make within an existing power station portfolio. Approximately this capacity may be dispensed within a traditional power station portfolio, without thereby prejudicing the level of supply reliability.

    In 2004 two major German studies investigate the size of contribution that wind farms make towards guaranteed capacity. Both studies separately came to virtually identical conclusions, that wind energy currently contributes to the secure production capacity of the system, by providing 8% of its installed capacity.

    As wind power capacity rises, the lower availability of the wind farms determines the reliability of the system as a whole to an ever increasing extent. Consequently the greater reliability of traditional power stations becomes increasingly eclipsed.

    As a result, the relative contribution of wind power to the guaranteed capacity of our supply system up to the year 2020 will fall continuously to around 4% (FIGURE 7). In concrete terms, this means that in 2020, with a forecast wind power capacity of over 48,000MW (Source: Dena grid study), 2,000MW of traditional power production can be replaced by these wind farms.

    Figure 7, “Falling substitution capacity”: Guaranteed wind power capacity below ten percent — traditional power stations essential. The more wind power capacity is in the grid, the lower the percentage of traditional generation it can replace.

  169. Greg F
    Posted Jul 25, 2006 at 6:32 AM | Permalink

    Re:166

    Transmission and distribution losses in the US are about 7.2%, with 40% of those losses coming from transformers. The larger transformers are the most efficient; the smallest, the pole-top transformers that feed your house, are the least efficient. The power that is fed back into the grid from a solar panel will go through this less efficient pole-top transformer.

    Typical peak efficiency of inverters ranges from 93% to 95%. As with transformers, the larger the inverter, the more efficient it is. Assuming the pole-top transformer is 99% efficient, the difference in efficiency is pretty much a wash.

    The problem with increasing generating capacity with solar is that it does little to reduce the traditional infrastructure. Electricity has to be produced on demand; there is no practical way to store the amount of power we're talking about. Additionally, the distribution infrastructure has to support the peak demand. In southern California peak demand is at 6:00 in August, not exactly the time of day when solar-generated power is at its maximum.
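
    A small sketch of that "pretty much a wash" comparison, with the loss figures above treated as round assumptions:

      # Delivered-power comparison: central generation through the grid vs
      # rooftop solar through an inverter and a pole-top transformer.
      # All percentages are round assumptions.
      grid_loss = 0.072        # assumed overall US transmission/distribution loss
      inverter_eff = 0.94      # assumed mid-range inverter efficiency
      pole_top_eff = 0.99      # assumed pole-top transformer efficiency

      delivered_from_grid = 1.0 - grid_loss
      delivered_from_solar = inverter_eff * pole_top_eff
      print(f"Delivered fraction, grid:  {delivered_from_grid:.1%}")   # ~92.8%
      print(f"Delivered fraction, solar: {delivered_from_solar:.1%}")  # ~93.1%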

  170. Allan M.R. MacRae
    Posted Jul 25, 2006 at 6:58 AM | Permalink

    Lower Troposphere vs. Surface Temperatures

    The June 22, 2006 US NAS Report on Climate Reconstruction is located at http://fermat.nap.edu/catalog/11676.html

    On the bottom of Page 30 of this report, it states:
    “Since 1978, instruments on satellites have monitored the temperature of the deep atmospheric layer above the surface, and though regional differences occur, global average trends agree with the surface warming of +0.16 degrees C per decade within +/-0.04 degrees C per decade (CCSP and SGCR 2006).”

    I cannot agree with this statement. It might be a worthwhile exercise for some of the statisticians on this site to examine the data and comment.

    I suspect that the above analysis simply performed a linear regression fitted through all the Global Lower Troposphere (LT) temperature anomaly data from Dec 1978 to July 2005 and produced a trend in the troposphere of +0.12 degrees C per decade. But just look at the LT data: there is no warming trend for the majority of the period, from inception in December 1978 up to April 1997.

    LT temperature data was sourced from Roy Spencer at University of Alabama at Huntsville (UAH), incorporating UAH’s latest corrections for satellite drift, at http://vortex.nsstc.uah.edu/data/msu/t2lt/tltglhmam_5.2
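
    For anyone who wants to reproduce the trend, here is a minimal sketch of the kind of fit I suspect was done. Parsing of the UAH file is left out; the sketch simply assumes you already have the monthly global anomalies in an array:

      import numpy as np

      def trend_per_decade(monthly_anomalies):
          """OLS linear trend of a monthly anomaly series, in degrees C per decade."""
          t_months = np.arange(len(monthly_anomalies))
          slope, _intercept = np.polyfit(t_months, monthly_anomalies, 1)  # slope per month
          return slope * 120.0  # 120 months per decade

      # Usage: load the UAH LT global anomaly column into `anoms` (one value per
      # month, Dec 1978 onward) and call trend_per_decade(anoms). Fitting only a
      # sub-period, e.g. trend_per_decade(anoms[:221]) for roughly Dec 1978 to
      # Apr 1997, shows how sensitive the headline number is to the chosen window.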

    Summary of LT Global Temperature Anomaly Chart:

    No warming trend in LT from 12/1978 to 04/1997, just oscillation around zero. Then the huge 1997-98 El Nino spike, which peaked in 04/1998, quickly reversed itself. There is possibly 0.2 degrees C of warming from 2000 to 2005, but note the complete lack of correlation with atmospheric CO2 levels, which have been rising consistently at least since measurements began in 1958 (see http://cdiac.esd.ornl.gov/trends/co2/graphics/mlo145e_thrudc04.pdf).

    Note that the “possible” warming from 2000-2005 may still reverse itself, as past upward oscillations have done. Also note that there is no reliable evidence that the possible 2000-2005 warming was caused by increased atmospheric CO2 levels; it was more likely caused by solar variation.

    Also, even this alleged warming, if caused by greenhouse gases, is not linear but logarithmic (warming flattens with increased atmospheric CO2 concentration), so linear extrapolations of 0.0x degrees C per decade greatly exaggerate future warming (unless we assume excessively high increases in future CO2 levels, as the IPCC has done).
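
    The logarithmic point can be illustrated with the commonly cited simplified forcing expression, Delta F = 5.35 ln(C/C0) W/m2. The sensitivity parameter in the sketch below is an assumption for illustration, not a measured value:

      import math

      def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
          """Simplified CO2 radiative forcing: Delta F = 5.35 * ln(C/C0), in W/m^2."""
          return 5.35 * math.log(c_ppm / c0_ppm)

      lam = 0.5  # assumed climate sensitivity parameter, degrees C per (W/m^2)
      for c in (280, 380, 480, 560):
          f = co2_forcing_wm2(c)
          print(f"{c} ppm: forcing {f:.2f} W/m^2, implied equilibrium warming {lam * f:.2f} C")
      # Each successive increment of CO2 adds less forcing than the one before,
      # which is why straight-line extrapolation of a recent decadal warming rate
      # overstates the CO2-driven component.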

    Regards, Allan

  171. Paul Penrose
    Posted Jul 25, 2006 at 8:28 AM | Permalink

    As much as I like the idea of using satellites to obtain high-precision temperature data over time, and although I believe this data is much more useful than the land-station data, it's still much too short to conclude much about climate. In fact, the higher-quality portion of the land-station data is barely long enough to be informative. Climate is about long time periods, and anything finer than the centennial scale is mostly noise. Talking about decadal trends seems like a waste of time, for the most part. These long time periods are one reason that I consider climate science to be in its infancy.

  172. ET SidViscous
    Posted Jul 25, 2006 at 9:06 AM | Permalink

    Concerning feasibility and lifetime generation of Solar cells.

    If solar cells were such a wonderful thing and generated more energy than they require to make, wouldn’t it make sense for the solar panel manufacturers to use solar panels to run the production facility? (They can get them cheaper than anyone else.) Production does require electricity, even at the assembly level (soldering irons and the like), so why do they run off the grid?

    I do sell solar panels as part (small part) of my job. They definitely have uses, running your refrigerator is not one of them.

  173. fFreddy
    Posted Jul 25, 2006 at 9:33 AM | Permalink

    Re #172, ET SidViscous

    They definitely have uses, …

    What sort of use are they good for ?

  174. ET SidViscous
    Posted Jul 25, 2006 at 9:37 AM | Permalink

    In my case, powering data loggers left out in the middle of nowhere with power not available.

    Basically low power, remote locations kind of thing. Say if you had a remote cabin. Would be handy to charge batteries to power a radio. Wouldn’t be handy for powering a hot water heater.

    Which gives me a thought: has anyone yet successfully mated solar panels to the current fast-charging batteries? I’ve noticed that some of the new batteries can charge in a very short period of time (an hour or so) and be very powerful. For small uses (radio and the like) this could be a very good combo.

  175. Lance Harting
    Posted Jul 25, 2006 at 9:27 PM | Permalink

    I agree that AGW research is highly suspect and is not a good motivation to find alternatives to a hugely energy-dense resource that sprouts out of the ground under pressure. Unfortunately, much of it lies under countries with unstable governments that are not too keen on us at the moment.

    I have confidence that the current price spike in oil will provide market incentives for other energy sources. The current wealth transfer to these countries cannot be in our long-term best interests.

  176. Allan M.R. MacRae
    Posted Jul 26, 2006 at 9:29 AM | Permalink

    It was 45 degrees C in parts of California yesterday. Electrical power systems were overburdened and power outages occurred, so air conditioners could not operate. Some people reportedly died from the heat.

    I am a so-called “climate skeptic”. I believe that over 80% of the current warming trend is naturally-caused.

    As one who has worked most of his career in energy, I suggest that the problem is not simply too much heat – the major problem in California is insufficient electricity to supply peak demand. Many parts of the world routinely experience higher temperatures than California – temperatures in excess of 50 degrees C are common in summertime in North Africa, for example. People cope, and air conditioning helps where electricity is available and affordable.

    The “climate alarmists” claim that the current warming trend is largely human-made and is due to the burning of fossil fuels. These alarmists believe that society should decrease the use of fossil fuels and increase the use of alternative energy technologies, particularly wind power.

    However, wind power is very inefficient, and also tends to destabilize the electrical power grid, causing even more power outages. In Germany one company, E.On Netz, generates more wind power than the entire USA. E.On’s 2005 Wind Power Report states: “As wind power capacity rises, the lower availability of the wind farms determines the reliability of the system as a whole to an ever increasing extent. Consequently the greater reliability of traditional power stations becomes increasingly eclipsed.”

    If the climate alarmists continue to have a strong influence on electrical power policy, there is a real risk that our electrical power generating systems will become increasingly destabilized by the introduction of more wind power.

    This is what happens when foolish ideas are accepted by a gullible public, supported by their politicians, and then rammed through the legislative and implementation process. Further destabilization of our electrical power generating systems by climate alarmist policies could result in much greater loss of life in the future.

    We have seen this phenomenon at least once before. When DDT was banned, malaria again became a major killer in Sub-Saharan Africa. Although the ban on DDT was never adequately justified, it was rushed into implementation. The net result: insignificant improvement in the environment, but tens of millions of avoidable deaths due to resurgent malaria.

    In the big picture, the environmental movement has made some huge strides, particularly in the 1970’s and 1980’s with the reduction of industrial pollution. However, it has made huge and costly mistakes, like the DDT disaster and the more recent global warming fiasco. The key problem in both cases has been the triumph of propaganda and hysteria over sound science. In this regard, the environmental movement has much to answer for.

    Regards, Allan

    Background information

    Wind power is not only unstable – the power generated increases with the cube of the wind speed – but it is remarkably inefficient. E.On reports that its Substitution Capacity is now 8% and will drop to 4% by 2020. This Substitution Capacity is the percentage of conventional power generating capacity that can be permanently idled when new wind power capacity is introduced into the grid. In the USA, it is often called Capacity Credit. This all means that when E.On installs wind power, it still needs over 90% backup by conventional power stations.
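
    The instability point follows directly from the turbine power equation, P = 0.5 * rho * A * Cp * v^3. A minimal sketch; the rotor size and efficiency below are assumptions for illustration only:

      import math

      def wind_power_kw(v_ms, rotor_diameter_m=80.0, cp=0.40, air_density=1.225):
          """Turbine power, P = 0.5 * rho * A * Cp * v^3, returned in kW."""
          area_m2 = math.pi * (rotor_diameter_m / 2.0) ** 2
          return 0.5 * air_density * area_m2 * cp * v_ms ** 3 / 1000.0

      for v in (4, 6, 8, 10, 12):
          print(f"{v} m/s: {wind_power_kw(v):.0f} kW")
      # Halving the wind speed cuts output by a factor of eight, which is why grid
      # operators cannot count most of the nameplate capacity as firm capacity.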

    Repeat of previous post, for completeness:

    Readers are referred to E.On’s 2005 Wind Report, at: http://www.eon-netz.com/EONNETZ_eng.jsp

    From Page 9 of the E.On 2005 Wind Power Report:

    In order to also guarantee reliable electricity supplies when wind farms produce little or no power, e.g. during periods of calm or storm-related shutdowns, traditional power station capacities must be available as a reserve. This means that wind farms can only replace traditional power station capacities to a limited degree.

    An objective measure of the extent to which wind farms are able to replace traditional power stations, is the contribution towards guaranteed capacity which they make within an existing power station portfolio. Approximately this capacity may be dispensed within a traditional power station portfolio, without thereby prejudicing the level of supply reliability.

    In 2004 two major German studies investigate the size of contribution that wind farms make towards guaranteed capacity. Both studies separately came to virtually identical conclusions, that wind energy currently contributes to the secure production capacity of the system, by providing 8% of its installed capacity.

    As wind power capacity rises, the lower availability of the wind farms determines the reliability of the system as a whole to an ever increasing extent. Consequently the greater reliability of traditional power stations becomes increasingly eclipsed.

    As a result, the relative contribution of wind power to the guaranteed capacity of our supply system up to the year 2020 will fall continuously to around 4% (FIGURE 7). In concrete terms, this means that in 2020, with a forecast wind power capacity of over 48,000MW (Source: Dena grid study), 2,000MW of traditional power production can be replaced by these wind farms.

    Figure 7, “Falling substitution capacity”: Guaranteed wind power capacity below ten percent — traditional power stations essential. The more wind power capacity is in the grid, the lower the percentage of traditional generation it can replace.

  177. Steve Sadlov
    Posted Aug 22, 2006 at 10:24 AM | Permalink

    There were a number of unanswered posts here prior to the thread going off into the weeds. Hoping to continue this discussion back on topic.

  178. bender
    Posted Aug 22, 2006 at 11:04 AM | Permalink

    Re #177 Reviewing this thread … maybe we need another Rush lyric? 🙂