IPCC AR4: No skill in scientific forecasting

John A writes: After a brief search, I found the paper “Global Warming: Forecasts by Scientists versus Scientific Forecasts”.

This paper came to my attention via an article in the Sydney Morning Herald. It was written by two experts on scientific forecasting, who perform an audit of Chapter 8 of WG1 in the latest IPCC report.

The authors, Armstrong and Green, begin with a bombshell:

In 2007, a panel of experts established by the World Meteorological Organization and the United Nations Environment Programme issued its updated, Fourth Assessment Report, forecasts. The Intergovernmental Panel on Climate Change’s Working Group One Report predicts dramatic and harmful increases in average world temperatures over the next 92 years. We asked, are these forecasts a good basis for developing public policy? Our answer is “no”.

So where is the problem? The problem, according to the authors, is that the IPCC, like everyone else, does not distinguish between forecasts that merely express the opinions of experts and forecasts produced by scientific forecasting methods (emphasis added):

Much research on forecasting has shown that experts’ predictions are not useful. Rather, policies should be based on forecasts from scientific forecasting methods. We assessed the extent to which long-term forecasts of global average temperatures have been derived using evidence-based forecasting methods. We asked scientists and others involved in forecasting climate change to tell us which scientific articles presented the most credible forecasts. Most of the responses we received (30 out of 51) listed the IPCC Report as the best source. Given that the Report was commissioned at an enormous cost in order to provide policy recommendations to governments, the response should be reassuring. It is not. The forecasts in the Report were not the outcome of scientific procedures. In effect, they present the opinions of scientists transformed by mathematics and obscured by complex writing. We found no references to the primary sources of information on forecasting despite the fact these are easily available in books, articles, and websites. We conducted an audit of Chapter 8 of the IPCC’s WG1 Report. We found enough information to make judgments on 89 out of the total of 140 principles. The forecasting procedures that were used violated 72 principles. Many of the violations were, by themselves, critical. We have been unable to identify any scientific forecasts to support global warming. Claims that the Earth will get warmer have no more credence than saying that it will get colder.

Armstrong and Green further point out that those principles of forecasting sometimes run counter to what most people, scientists included, expect. They also point to various failings of scientists who regard themselves as experts (with some emphasis added):

…here are some of the well-established generalizations for situations involving long-range forecasts of complex issues where the causal factors are subject to uncertainty (as with climate):
• Unaided judgmental forecasts by experts have no value. This applies whether the opinions are expressed by words, spreadsheets, or mathematical models. It also applies regardless of how much scientific evidence is possessed by the experts. Among the reasons for this are:
a) Complexity: People cannot assess complex relationships through unaided observations.
b) Coincidence: People confuse correlation with causation.
c) Feedback: People making judgmental predictions typically do not receive unambiguous feedback they can use to improve their forecasting.
d) Bias: People have difficulty in obtaining or using evidence that contradicts their initial beliefs. This problem is especially serious for people who view themselves as experts.

• Agreement among experts is weakly related to accuracy. This is especially true when the experts communicate with one another and when they work together to solve problems. (As is the case with the IPCC process).

• Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. That is, they tend to magnify one another. Ascher (1978) refers to the Club of Rome’s 1972 forecasts where, unaware of the research on forecasting, the developers proudly proclaimed, “in our model about 100,000 relationships are stored in the computer.” (The first author [Armstrong] was aghast not only at the poor methodology in that study, but also at how easy it was to mislead both politicians and the public.) Complex models are also less accurate because they tend to fit randomness, thereby also providing misleading conclusions about prediction intervals. Finally, there are more opportunities for errors to creep into complex models and the errors are difficult to find. Craig, Gadgil, and Koomey (2002) came to similar conclusions in their review of long-term energy forecasts for the US made between 1950 and 1980.
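The point about multiplying errors is easy to demonstrate numerically. Here is a minimal toy sketch of my own (not anything from the paper): treat a model’s output as the product of many uncertain inputs, each known only to within about 10%, and watch the relative spread of the result grow as the number of interacting parts increases.

```python
import random
import statistics

random.seed(42)

def forecast_spread(n_inputs, n_trials=10_000):
    """Relative spread (stdev / mean) of a forecast built by multiplying
    n_inputs uncertain factors, each with roughly 10% relative error
    (modelled here as a lognormal multiplicative factor)."""
    outcomes = []
    for _ in range(n_trials):
        value = 1.0
        for _ in range(n_inputs):
            value *= random.lognormvariate(0.0, 0.10)
        outcomes.append(value)
    return statistics.stdev(outcomes) / statistics.mean(outcomes)

# The spread grows with the number of interacting uncertain parts:
for n in (1, 10, 100):
    print(n, round(forecast_spread(n), 3))
```

With one uncertain input the forecast is good to about 10%; chain a hundred such inputs together and the relative spread is larger than the forecast itself, which is the sense in which errors in complex models “magnify one another”.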

• Given even modest uncertainty, prediction intervals are enormous. For example, prediction intervals expand rapidly as time horizons increase so that one is faced with enormous intervals even when trying to forecast a straightforward thing such as automobile sales for General Motors over the next five years.
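That claim about expanding intervals can be checked with a toy simulation (again my own sketch, not the authors’): even for the simplest possible process, a pure random walk, the width of an 80% prediction interval grows roughly with the square root of the forecast horizon.

```python
import random

random.seed(0)

def interval_width(horizon, step_sd=1.0, n_sims=5000):
    """Width of a central 80% prediction interval for a pure random walk
    after `horizon` steps, estimated by Monte Carlo simulation."""
    finals = []
    for _ in range(n_sims):
        x = 0.0
        for _ in range(horizon):
            x += random.gauss(0.0, step_sd)
        finals.append(x)
    finals.sort()
    lo = finals[int(0.10 * n_sims)]  # 10th percentile
    hi = finals[int(0.90 * n_sims)]  # 90th percentile
    return hi - lo

# The interval widens as the horizon stretches out:
for h in (1, 5, 25):
    print(h, round(interval_width(h), 2))
```

Going from a 1-step to a 25-step horizon widens the interval roughly five-fold, the √25 factor the random-walk theory predicts; anything with genuine parameter uncertainty on top of that does worse.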

• When there is uncertainty in forecasting, forecasts should be conservative. Uncertainty arises when data contain measurement errors, when the series is unstable, when knowledge about the direction of relationships is uncertain, and when a forecast depends upon forecasts of related (causal) variables. For example, forecasts of no change have been found to be more accurate than trend forecasts for annual sales when there was substantial uncertainty in the trend lines (e.g., Schnaars & Bavuso 1986). This principle also implies that forecasts should revert to long-term trends only when such trends have been firmly established, do not waver, and there are no firm reasons to suggest that the trends will change. Finally, trends should be damped toward no change as the forecast horizon increases.
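Damping trends toward “no change” is a standard device in the forecasting literature (the damped-trend method of Gardner and McKenzie). A minimal sketch, with illustrative numbers of my own choosing:

```python
def damped_trend_forecast(level, trend, horizon, phi=0.9):
    """Forecast `horizon` steps ahead while damping the trend toward
    'no change'. With phi in (0, 1), the trend contributes phi, phi^2, ...
    of itself at successive steps, so the forecast flattens out as the
    horizon grows instead of extrapolating without limit."""
    return level + trend * sum(phi ** i for i in range(1, horizon + 1))

# One step ahead: close to an ordinary one-step trend extrapolation.
print(damped_trend_forecast(100.0, 2.0, 1))
# Fifty steps ahead: well below the undamped 100 + 2 * 50 = 200.
print(damped_trend_forecast(100.0, 2.0, 50))
```

However far out you push the horizon, the damped forecast never exceeds level + trend × φ/(1−φ), which is precisely the conservatism the principle calls for.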

Of course, this isn’t the behavior that a lot of us have seen from the IPCC. Much of the criticism leveled at the IPCC has been that its forecasts were too conservative, rather than the reverse.

Armstrong and Green don’t exactly endorse the notion of “scientific consensus”, since it is clear to them that consensus, when it emerges among close-knit groups of people working in the same general field, tends to reinforce bias rather than remove it. I seem to remember Edward Wegman saying much the same thing about group reinforcement.

What of forecasting by experts? Well, it turns out that this appears to be no better a guide to the future than asking your mates down the pub:

The first author’s [Armstrong’s] review of empirical research on this problem led to the “Seer-sucker theory,” stating that, “No matter how much evidence exists that seers do not exist, seers will find suckers” (Armstrong 1980). The amount of expertise does not matter beyond a basic minimum level. There are exceptions to the Seer-sucker Theory: When forecasters get substantial amounts of well-summarized feedback about the accuracy of their forecasts and about the reasons why the forecasts were or were not accurate, they can improve their forecasts. This situation applies for short-term (e.g., up to five days) weather forecasts, but it does not apply to long-term climate forecasts.

Research since 1980 has added support to the Seer-sucker Theory. In particular, Tetlock (2005) recruited 284 people whose professions included, “commenting or offering advice on political and economic trends.” He asked them to forecast the probability that various situations would or would not occur, picking areas (geographic and substantive) within and outside their areas of expertise. By 2003, he had accumulated over 82,000 forecasts. The experts barely if at all outperformed non-experts and neither group did well against simple rules.

This method of forecasting by expert opinion was very popular in the 1970s in climate science:

In the mid-1970s, there was a political debate raging about whether the global climate was changing. The United States’ National Defense University addressed this issue in their book, Climate Change to the Year 2000 (NDU 1978). This study involved 9 man-years of effort by Department of Defense and other agencies, aided by experts who received honoraria, and a contract of nearly $400,000 (in 2007 dollars). The heart of the study was a survey of experts. It provided them with a chart of “annual mean temperature, 0–80° N. latitude,” that showed temperature rising from 1870 to early 1940 then dropping sharply up to 1970. The conclusion, based primarily on 19 replies weighted by the study directors, was that while a slight increase in temperature might occur, uncertainty was so high that “the next twenty years will be similar to that of the past” and the effects of any change would be negligible. Clearly, this was a forecast by scientists, not a scientific forecast. However, it proved to be quite influential. The report was discussed in The Global 2000 Report to the President (Carter) and at the World Climate Conference in Geneva in 1979.

Such was the state of the art back then. Now, with the advent of personal computers, canvassing experts to report their impressions of data has been transformed through the use of computer models. But are they any better at forecasting?

The methodology used in the past few decades has shifted from surveys of experts’ opinions to the use of computer models. However, based on the explanations that we have seen, such models are, in effect, mathematical ways for the experts to express their opinions. To our knowledge, there is no empirical evidence to suggest that presenting opinions in mathematical terms rather than in words will contribute to forecast accuracy. For example, Keepin and Wynne (1984) wrote in the summary of their study of the IIASA’s “widely acclaimed” projections for global energy that, “Despite the appearance of analytical rigour… [they] are highly unstable and based on informal guesswork”.

All right, that was the 1980s. What about much more recently?

Carter et al. (2006) examined the Stern Review (Stern 2007). They concluded that the Report authors made predictions without any reference to scientific forecasting.

I’m sure there’s lots more to be said about Stern’s methodology in other areas, but we must press on.

Pilkey and Pilkey-Jarvis (2007) concluded that the long-term climate forecasts that they examined were based only on the opinions of the scientists. The opinions were expressed in complex mathematical terms. There was no validation of the methodologies. They referred to the following quote as a summary on their page 45: “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation and eventually build a structure which has no relation to reality. (Nikola Tesla, inventor and electrical engineer, 1934.)”

I assume the reference to Nikola Tesla isn’t meant to be complimentary.

Carter (2007) examined evidence on the predictive validity of the general circulation models (GCMs) used by the IPCC scientists. He found that while the models included some basic principles of physics, scientists had to make “educated guesses” about the values of many parameters because knowledge about the physical processes of the earth’s climate is incomplete. In practice, the GCMs failed to predict recent global average temperatures as accurately as simple curve-fitting approaches (Carter 2007, pp. 64–65) and also forecast greater warming at higher altitudes when the opposite has been the case (p. 64). Further, individual GCMs produce widely different forecasts from the same initial conditions and minor changes in parameters can result in forecasts of global cooling (Essex and McKitrick, 2002). Interestingly, modeling results that project global cooling are often rejected as “outliers” or “obviously wrong” (e.g., Stainforth et al., 2005).

Was Stainforth et al. a reference to that ridiculous modelling exercise where they emphasized the top-end 11 °C rise without mentioning all of the runs that fell into deep cooling? Yes it was. Obviously Stainforth knows which ones are outliers and therefore “obviously wrong” and which are not, because he’s an expert.

Taylor (2007) compared seasonal forecasts by New Zealand’s National Institute of Water and Atmospheric Research with outcomes for the period May 2002 to April 2007. He found NIWA’s forecasts of average regional temperatures for the season ahead were, at 48% correct, no more accurate than chance. That this is a general result was confirmed by New Zealand climatologist Dr Jim Renwick, who observed that NIWA’s low success rate was comparable to that of other forecasting groups worldwide. He added that “Climate prediction is hard, half of the variability in the climate system is not predictable, so we don’t expect to do terrifically well.” Dr Renwick is an author on Working Group I of the IPCC 4th Assessment Report, and also serves on the World Meteorological Organisation Commission for Climatology Expert Team on Seasonal Forecasting. His expert view is that current GCM climate models are unable to predict future climate any better than chance.

Now clearly this is a serious problem with climate modelling at the regional level, but is it being reported that regional climate forecasts for even three months ahead do no better than flipping a coin?

Then there’s the Hurricane Forecasting Debacle of 2006:

…the US National Hurricane Center’s report on hurricane forecast accuracy noted, “No routinely-available early dynamical model had skill at 5 days” (Franklin 2007). This comment probably refers to forecasts for the paths of known, individual storms, but seasonal storm ensemble forecasts are clearly no more accurate. For example, the NHC’s forecast for the 2006 season was widely off the mark. On June 7, Vice Admiral Conrad C. Lautenbacher, Jr. of the National Oceanic and Atmospheric Administration gave the following testimony before the Committee on Appropriations Subcommittee on Commerce, Justice and Science of the United States Senate (Lautenbacher 2006, p. 3):

“NOAA’s prediction for the 2006 Atlantic hurricane season is for 13-16 tropical storms, with eight to 10 becoming hurricanes, of which four to six could become major hurricanes. … We are predicting an 80 percent likelihood of an above average number of storms in the Atlantic Basin this season. This is the highest percentage we have ever issued.”

By the beginning of December, Gresko (2006) was able to write “The mild 2006 Atlantic hurricane season draws to a close Thursday without a single hurricane striking the United States”.

That’s just in the first seven pages. On page 8 they begin their audit of scientific forecasting at the IPCC, and it goes downhill from there.

Full paper at http://www.forecastingprinciples.com/Public_Policy/WarmAudit31.pdf

240 Comments

  1. John F. Pittman
    Posted Jul 7, 2007 at 5:15 PM | Permalink

    Thank you for the more complete comments and link. Someone on CA had listed this or an abbreviated version. Thank you for making a thread for it.

    We found enough information to make judgments on 89 out of the total of 140 principles. The forecasting procedures that were used violated 72 principles. Many of the violations were, by themselves, critical.

    No matter how many times I read this, about the 6th time now, it just gets worse. If I came out and said 89 of the last 140 professional statements I made violated 72 principles and many of the violations were by themselves critical, I would soon be unemployed. Even worse, if someone else pointed this out to my bosses. Perhaps not only fired but unable to secure professional work in my field.

  2. Posted Jul 7, 2007 at 5:50 PM | Permalink

    I can just here RC Gavin or Mann now: “These people just don’t understand. They do not have the proper scientific knowledge to evaluate our work”.

  3. Posted Jul 7, 2007 at 5:50 PM | Permalink

    Oops. should have been “hear”.

  4. Bob Koss
    Posted Jul 7, 2007 at 5:54 PM | Permalink

    Forecasting Bet

    Last week we told you about an expert in forecasting who challenged Al Gore to a $10,000 bet over who could more accurately predict global temperature increases.

    Professor Scott Armstrong contends that most climate change forecasts use bad methodology, and that global temps will not rise dramatically as Gore predicts.

    Now the professor has received his answer from Gore: thanks, but no thanks.

    A Gore representative said the former vice president is too busy to take on any new projects at this time.

    Article

    What kind of excuse is that? It’s not a project. It’s a freakin’ bet! What few details are necessary to work out shouldn’t take any time at all and could be handled by one of his representatives.

    Even Gore doesn’t believe what he’s hyping.

  5. John Baltutis
    Posted Jul 7, 2007 at 6:04 PM | Permalink

    See my comment (#14) at Comments On A Review Of “Useless Arithmetic”

  6. Judith Curry
    Posted Jul 7, 2007 at 6:08 PM | Permalink

    Interesting post. However IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases. These models do not project solar variability or volcanic eruptions, which would be required for actual forecasts.

    • Posted Dec 21, 2009 at 2:23 AM | Permalink

      Dr. Curry is quite right in stating that the IPCC models make no forecasts. That they do not make them has the consequence that the IPCC’s models are not “scientific” under the philosophy of science.

      A forecast has a property that is called its “truth-value.” A truth-value is a variable which takes on the values of “true” and “false.” That a forecast can be false has the consequence that the associated model is falsifiable. That it is falsifiable satisfies a condition for the model to be “scientific,” under the philosophy of science.

      According to the IPCC, its models make “projections.” A projection lacks a truth-value and it follows that the IPCC models fail to satisfy the falsifiability requirement.

  7. Posted Jul 7, 2007 at 6:15 PM | Permalink

    It is these very “scenarios” that governments around the world are using to determine policy. Of course they are not forecasts, because that would take science, as opposed to something akin to the I Ching.

  8. Dave Dardinger
    Posted Jul 7, 2007 at 7:26 PM | Permalink

    re: #6 Dr. Curry,

    They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases.

    But even within that limitation they have many other assumptions built in, such as how cloud cover varies with CO2 (via temperature) and how water vapor varies with temperature (via CO2), etc. So unless the phrase “assumptions about the increase of greenhouse gases” includes not just how much increase there is but also what the physical results of such an increase would be, there’s a problem. And if the physical results of a greenhouse gas increase are included, the entire structure of a given GCM is at issue, not just a couple of numbers here and there.

    Note I’ve not got a problem with how CO2 increases by themselves would affect atmospheric, and thus surface, temperature. It’s the interaction of this isolated temperature increase with both water vapor and cloud, yielding a final temperature change, which needs to be tied down before a scenario simulation has any heuristic value. IOW, you can’t assume what you’re trying to demonstrate.

  9. Stan Palmer
    Posted Jul 7, 2007 at 7:31 PM | Permalink

    re 6

    Interesting post. However IPCC does not make “forecasts”. They conduct scenario simulations

    Prof. Curry:

    How does the IPCC assess the quality of the output of these scenarios? How do they assess the utility of these outputs as guidance for policy makers? Do they use these outputs to create modal statements such as likely, very likely, etc.? If these are not forecasts, then what are they?

  10. Posted Jul 7, 2007 at 7:55 PM | Permalink

    re #6 Judith Curry

    Table 10.1, page 756, of the IPCC report shows the various models and the forcing agents they use.

    Please note the “other” column, which lists “Land use” and “solar”. They are used in some models.

    I wonder what the solar variations and land use changes are.

  11. Posted Jul 7, 2007 at 7:57 PM | Permalink

    Re #6
    For the TAR, IIASA did not even consider limitations to the supply of fossil fuel in developing their scenarios, and as has often been noted, all scenarios seem to have the same unknown probability. Given present knowledge about fossil fuel reserves, there is not enough fuel to supply the worst 50% of the scenarios even under the most extreme assumptions. Are you suggesting that scenarios are in some way better than forecasts as a basis for decision making?

  12. steven mosher
    Posted Jul 7, 2007 at 8:19 PM | Permalink

    Which “scenarios” simulated support this action by New Jersey?

    “The Global Warming Response Act mandates cuts of greenhouse gas emissions throughout
    New Jersey’s economy by about 16 percent by 2020 and 80 percent by 2050 in the country’s most densely populated state.

    Scientists say heat-trapping emissions need to be cut by that much to prevent the worst
    effects of global warming including deadly storms, flooding and droughts. “

  13. steven mosher
    Posted Jul 7, 2007 at 8:49 PM | Permalink

    Re 6.

    I am a bit confused, Dr Curry. You wrote

    However IPCC does not make “forecasts”. They conduct scenario simulations,
    based upon different assumptions about the increase of greenhouse gases.
    These models do not project solar variability or volcanic eruptions, which would be required for actual forecasts.

    I’m familiar with the SRES, so if you had not added the last sentence it would have been clear.

    So, I’m confused by the “actual forecast” language.

    So, if someone projected Zero solar variability and zero volcanoes, would it be a forecast?

    Another way to look at it is this. On a 100-year scale, some might believe that volcanic activity does not matter. A significant event may cause a couple of years or so of cooling, but on the 100-year scale it does not matter. If volcanoes don’t matter on the 100-year scale, one might as well “project” zero volcanoes. Which is what the GCMs do. The other option would be to “inject” a random number of volcanoes between now and 2100. The temp curves would respond and wiggle a bit, but eventually the effect disappears as aerosols rain out. (Once upon a time Hansen did include “volcanoes” in his projections.)

    The same with solar variability. On a 100-year scale this variability is assumed by some to be unimportant. So they ignore it. Put another way, I’ve seen some folks discuss sub-century solar processes. Sunspot cycles and such stuff. My sense (I have no opinion on sunspots) is that no GCM takes notice of this because they think it unimportant. In short, this type of solar variability is not modelled and not projected because they think it unimportant. They project its impact as ZERO.

    PUT ANOTHER WAY, the underlying assumption of some folks is that on the 100-year scale, these two variables are of no consequence. So they are PROJECTED, projected to be unimportant, on the century scale. Now, you seemed to indicate that a FORECAST requires a projection of these two variables.

    Projecting them as inconsequential on a century scale would seem to turn the simulation into a forecast. A forecast with a projection of zero solar variability and zero long-term impact from volcanic activity.

    I don’t think this is what you meant. So, could you explain “forecast” a bit more?

  14. Posted Jul 7, 2007 at 10:10 PM | Permalink

    re 6 and 14

    I am a bit confused too. Table 10.1 lists 23 models and 18 forcing agents. Three of the forcing agents are “urban, land use, and solar”

    Two of the models used urban, 8 used land use and 16 used solar for the 20th century, and these are “…set to constant or annually cyclic distribution for scenario integrations”. So they used these forcing factors to test against history, and they may or may not be used in some way for the 21st century. None of the models included urban as a forcing agent in the 21st century.

    Three models used land use and two models used solar as “Y:forcing agent is included”

    To add to the confusion, I didn’t see a definition of urban, land use or solar.

    It is like making a football model and including turnovers in the history and getting good results, but then not using turnovers when making projections.

  15. John Norris
    Posted Jul 7, 2007 at 10:13 PM | Permalink

    re #6

    Sorry Dr. Curry, from a section title in the 4AR Summary for Policy Makers:

    Projections of Future Changes in Climate

    If it walks like a forecast and talks like a forecast, you can call it model simulation, but it’s still a forecast.

  16. Jaye
    Posted Jul 7, 2007 at 10:20 PM | Permalink

    However IPCC does not make “forecasts”. They conduct scenario simulations,
    based upon different assumptions about the increase of greenhouse gases.

    That’s called plausible deniability. Give the impression that “scenario simulations” are forecasts but when appropriate hide behind the technicality of what constitutes an “actual” forecast.

  17. Ian Castles
    Posted Jul 7, 2007 at 11:48 PM | Permalink

    In a presentation to an IPCC Expert Meeting in Amsterdam in January 2003, I pointed out that, although the authors of the IPCC Special Report on Emissions Scenarios (SRES) had stated repeatedly that the scenarios ‘are neither predictions nor forecasts’, the blurb on the back cover of the document stated explicitly that the report ‘describes new scenarios of the future, and PREDICTS greenhouse emissions associated with such developments’ (EMPHASIS added). In reply to this criticism, 15 lead authors of the SRES explained that ‘unfortunately the publisher mistakenly used the word “prediction” in the short text on the back of the jacket … that we as authors unfortunately caught too late to correct’ (Nakicenovic et al, 2003, “IPCC SRES Revisited: A Response”, Energy & Environment, 14, 2 & 3: 194).

    Cambridge University Press was in good company in stating that the IPCC had made predictions. In a joint statement published in “Science” on 18 May 2001 (p. 1261), seventeen national science academies led by the Royal Society (UK) recognised the IPCC as ‘the world’s most reliable source of information on climate change and its causes’, and said that in the contribution of Working Group I to the Panel’s Third Assessment Report ‘The average global surface temperature is PREDICTED to increase by between 1.4 and 3 °C above 1990 level by 2100 for low-emission scenarios and between 2.5 and 5.8 °C for higher emission scenarios’ (EMPHASIS added).

  18. John Baltutis
    Posted Jul 8, 2007 at 12:41 AM | Permalink

    Re: #6

    However IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases.

    Call them what you will, forecasts or projections. However, you left out the part that the projections are biased since they’re based solely on the assumption that “increasing greenhouse gases implies increased warming.” Anything else causing warming is discounted or ignored.

    From the Final Draft Summary for Policymakers, IPCC WG1 Fourth Assessment Report, Page 11 [My comments in brackets.]

    PROJECTIONS OF FUTURE CHANGES IN CLIMATE

    A major advance of this assessment of climate change projections compared with the TAR is the large number of simulations available, which together with new approaches to constraints from observations provide a quantitative basis for estimating likelihoods of expected warming [not likelihoods of expected climate change, but warming; no biases there]. Model simulations consider a range of possible futures [but we shan’t call them forecasts] including idealised emission or concentration assumptions. These include SRES illustrative marker scenarios [right! storylines designed to explore the uncertainties behind potential trends in global developments and GHG (i.e., CO2) emissions; once again, no bias there] for the 2000–2100 period and model experiments with greenhouse gases and aerosol concentrations held constant after year 2000 or 2100. This Working Group I assessment does not consider the plausibility or likelihood of any specific emission scenario [but we’ll use them anyway to estimate likelihoods of expected warming projections].

  19. John A
    Posted Jul 8, 2007 at 1:08 AM | Permalink

    Judith Curry:

    Interesting post. However IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases. These models do not project solar variability or volcanic eruptions, which would be required for actual forecasts.

    Really? Here’s what Armstrong and Green have to say about that one (Page 8):

    In apparent contradiction to claims by some climate experts that the IPCC provides “projections” and not “forecasts”, the word “forecast” and its derivatives occurred 37 times, and “predict” and its derivatives occurred 90 times in the body of Chapter 8. Recall also that most of our respondents (29 of whom were IPCC authors or reviewers) nominated the IPCC report as the most credible source of forecasts (not projections) of global average temperature.

    So I call BS, Judith. Try reading the document first, and then you won’t make such demonstrably untrue statements that even a non-scientist like myself can easily debunk.

  20. maksimovich
    Posted Jul 8, 2007 at 1:28 AM | Permalink
  21. Bob Koss
    Posted Jul 8, 2007 at 2:10 AM | Permalink

    Dr. Curry,

    From the Chapter 8 FAQ:

    How Reliable Are the Models Used to Make Projections of Future Climate Change?

    From Chapter 10:

    The use of ensembles of AOGCMs developed at different modelling centres has become established in climate prediction/projection on both seasonal-to-interannual and centennial time scales.

    Seems clear to me they are implying a prediction/projection equivalence.

    You’ll find forecast listed here. http://www.webster.com/dictionary/prediction

    Can you honestly say the modelling community doesn’t consider what they do as attempted predictions?

    If scenarios are not to be considered predictions/projections why are they heavily included in a chapter entitled “Global Climate Projections”? Why is the chapter not named “Global Climate Scenarios”?

    Do you reject the idea that semantic dissembling is being used to encourage the less astute members of the public to make the leap to believing these “scenarios” are the reality of the future unless we change our purportedly profligate ways?

  22. Roger Pielke, Jr.
    Posted Jul 8, 2007 at 2:44 AM | Permalink

    Judy Curry is correct in her description of predictions in the IPCC. Compare IPCC participant Kevin Trenberth:

    In fact there are no predictions by IPCC at all. And there never have been. . . None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond.

    http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html

    Armstrong and Green (and Ian Castles above) are also correct in pointing out in their paper that the IPCC seeks to have things both ways — it presents its work as a look at the future (a forecast) in the context of political advocacy, and its supporters dismiss criticisms (as Curry does here) of those forecasts on the basis that they are not really forecasts. (Armstrong and Green are serious scholars of forecasting from outside the climate community whose views deserve attention.)

    When treated properly as an analysis of sensitivities to a range of assumptions, the real message of the IPCC is that the choice of development pathway is the most important variable for beneficial future outcomes (at least according to the metrics used by the IPCC), across all social scenarios and climate model results — for those interested in the details see:

    Pielke, Jr., R.A., 2007. Statement to the House Committee on Science and Technology of the United States House of Representatives, The State of Climate Change Science 2007: The Findings of the Fourth Assessment Report by the Intergovernmental Panel on Climate Change (IPCC), Working Group III: Mitigation of Climate Change, 16 May.

    Click to access resource-2521-2007.17.pdf

  23. Posted Jul 8, 2007 at 2:52 AM | Permalink

    It seems to me that this projections vs predictions issue is really important, and that there has been a lot of muddled thinking and muddying of waters about this by climate modellers and the IPCC. Climate modellers know in their hearts that they cannot predict with any precision, if only because they cannot predict some factors that will be crucial drivers of results (e.g. economic growth, technology change), let alone limitations of their models. So they soften their claims a bit by saying that they are producing projections.

    However, a projection is really just a conditional prediction. IF a certain scenario (a set of assumptions) is true, THEN the likely climate outcome would be this. The value of the conditional prediction then depends on the relevance of the scenario and the quality of the model.

    Now which scenarios do they choose to run the models for? Not a random set of scenarios, clearly. Hopefully, it is a set that reflects their judgement about the realistic range of possible scenarios. In putting forward their projections, they are implicitly claiming that the scenarios for which the models are run are realistic and relevant. In that sense, the scenarios are predicted. They cannot predict a single scenario, only a range, but that still constitutes a prediction in my book. (If they aren’t predicting which scenarios are relatively realistic, why don’t they look at scenarios like one that has a machine to produce abundant free renewable energy becoming available in 2009? Because they consider it less likely than the scenarios they do look at. That’s a prediction.)

    Thus, while they may try to claim otherwise, the “projections” published in the IPCC reports are in fact predictions. They predict a range of scenarios, and then conditional on those scenarios, they predict climate.

    If they are not making predictions, then their work should have no more status in policy debates than any other random projection, such as the “abundant free renewable energy soon” projection. If the IPCC claims that their projections have policy relevance, then they are predictions.
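The “a projection is really just a conditional prediction” point above can be made concrete with a few lines of code. This is only an illustrative sketch: the scenario names, forcing numbers, and sensitivity value are invented for the example, not taken from the IPCC.

```python
# Sketch: a "projection" as a conditional prediction. All values here are
# hypothetical placeholders chosen to illustrate the IF-scenario-THEN-outcome
# structure, not real climate numbers.

SCENARIOS = {
    "low_emissions": 2.0,    # assumed added forcing, W/m^2
    "mid_emissions": 4.5,
    "high_emissions": 8.0,
}

def project_warming(scenario, sensitivity=0.5):
    # IF this scenario holds, THEN the projected warming (in C) is
    # sensitivity * forcing. Note that deciding which scenarios belong in
    # SCENARIOS at all is itself an implicit prediction about which
    # futures are plausible -- the commenter's point.
    return sensitivity * SCENARIOS[scenario]

for name in SCENARIOS:
    print(name, project_warming(name))
```

The value of each conditional prediction then depends on two separate things, exactly as the comment says: whether the scenario (the dictionary entry) is realistic, and whether the model (the function) is any good.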

    • Posted Dec 21, 2009 at 3:21 AM | Permalink

      I agree that the “projections” vs “predictions” issue is really important but wish to clarify some of the details. The IPCC’s case for regulation of carbon dioxide emissions is a specious one whose apparent validity rests upon confusion of “projections” with “predictions.” In logic, the two entities have different characters.

      A “prediction” is a proposition that states the outcome of a statistical event. A prediction has a property that is called its “truth-value.” The “truth-value” is a variable which takes on the values of “true” and “false.” By virtue of the fact that it has a truth-value, a prediction can be false. That it can be false satisfies the condition called “falsifiability” for the associated model to be “scientific.”

      A “projection” is a mathematical function that maps the time to the computed global average temperature. As such, a projection lacks a truth-value. Three conclusions follow: A) a projection cannot be false; B) the associated model is non-falsifiable; and C) this model is not “scientific,” by the definition of “scientific.” A “scientific” model is one that provides “scientia,” the Latin word for “verifiable knowledge.”

  24. John A
    Posted Jul 8, 2007 at 3:21 AM | Permalink

    Roger Pielke Jr:

    When treated properly as an analysis of sensitivities to a range of assumptions, the real message of the IPCC is that the choice of development pathway is the most important variable for beneficial future outcomes (at least according to the metrics used by the IPCC), across all social scenarios and climate model results

    I’m sorry but this is nonsense. We cannot judge whether choice of development is the most important variable, nor even rank that variable in the context of others, without proper analysis of climate modelling and the assumptions of modellers. Since climate models are representations of the modellers’ subjective opinions, we can draw no conclusions in terms of future climate.

    If climate models were really independent, and tested multiple theories of climate change (rather than just one) and presented a range of testable forecasts in a reasonable timeframe, then we might make some headway.

    But none of those things have happened.

    Here are some of the conclusions of Armstrong and Green (page 14, with my emphasis):

    To provide forecasts that are useful for policy-making, one would need to prepare forecasts not only of global temperature, but also of the net effects of any temperature change; then on the effects of policy changes aimed at reducing temperature changes or the negative effects of it, the costs of such changes, and the likelihood of successful implementation. A failure at any stage would nullify any value to the forecasts.

    We have shown that failure occurs at the first stage of analysis. Specifically, we have been unable to find a scientific forecast to support the currently widespread belief in “global warming.” Prior research on forecasting suggests that a naïve (no change) forecast would be superior to current predictions which are, in effect, experts’ judgments only.

    Based on our Google searches, those forecasting long-term climate change have no apparent knowledge of evidence-based forecasting methods, so we expect that the same conclusions would apply to the other three necessary parts of the forecasting problem.

    By relying on evidence-based forecasting methods, we conclude that policies founded on predictions of man-made global warming from models based on the opinions of scientists will be harmful.

    Given the conditions involved in long-term global forecasts and the high uncertainty involved, prior research on forecasting suggests that even if the forecasting methods were properly applied, it may not be possible to improve upon the naïve, “no-change,” forecast. We do not even have evidence that it is possible to make useful medium term (e.g., one to five year) forecasts.

    Null hypotheses are not presented by the IPCC in order to make even preliminary estimates of the models’ validity. But null hypotheses are essential to the practice of empirical science – it’s just that the IPCC isn’t presenting empirical science at all.
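The naïve (no-change) benchmark that Armstrong and Green invoke is easy to state in code. As a sketch only — the series below is invented and this is not their audit procedure — one can compare a persistence forecast against a fitted-trend forecast on held-out data:

```python
# Sketch: scoring a naive (no-change) forecast against a linear-trend
# forecast on a held-out tail of a series. The data are illustrative
# placeholders, not real temperatures.

def naive_forecast(history, horizon):
    # Persistence: the last observed value simply repeats unchanged.
    return [history[-1]] * horizon

def trend_forecast(history, horizon):
    # Ordinary least-squares line fit, extrapolated forward.
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return [intercept + slope * (n + h) for h in range(horizon)]

def mae(forecast, actual):
    # Mean absolute error over the holdout period.
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

series = [0.0, 0.1, 0.05, 0.2, 0.15, 0.1, 0.25, 0.2, 0.3, 0.25]
train, test = series[:7], series[7:]
print("naive MAE:", mae(naive_forecast(train, 3), test))
print("trend MAE:", mae(trend_forecast(train, 3), test))
```

On short, noisy series the persistence forecast is often hard to beat, which is exactly why forecasting researchers use it as the benchmark any proposed method must outperform.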

  25. Roger Pielke, Jr.
    Posted Jul 8, 2007 at 3:34 AM | Permalink

    John A-

    Probably best to follow your own advice:-)

    “Try reading the document first, and then you won’t make such demonstrably untrue statements”

    We can indeed prepare for the future without having predictions, accurate or otherwise.

  26. John A
    Posted Jul 8, 2007 at 3:40 AM | Permalink

    Roger,

    You’re going to have to be more specific than quoting my line to Judith. You’re also going to have to produce any statement I have made where I advocate not preparing for the future, or claim that such preparations are futile, because as far as I can recall I haven’t made such statements.

    Go for it.

  27. Roger Pielke, Jr.
    Posted Jul 8, 2007 at 4:09 AM | Permalink

    John A-

    Sure, you write:

    We cannot judge whether choice of development is the most important variable, nor even rank that variable in the context of others, without proper analysis of climate modelling and the assumptions of modellers.

    And I am saying that this is tantamount to saying that we cannot prepare for the future without a “proper analysis of models” — which is nonsense.

    We can indeed assess the relative importance of development pathway without knowing the skill of climate predictions, simply by looking across a wide range of assumptions about the future. For an example of this approach, which indicates that the hurricane-global warming debate is largely irrelevant to policy priorities regardless of which science proves correct in the end, see:

    Pielke, Jr., R. A., 2007 (accepted). Future Economic Damage from Tropical Cyclones: Sensitivities to Societal and Climate Changes, Philosophical Transactions of the Royal Society.

    Click to access resource-2517-2007.14.pdf

    More generally, see Steve Rayner’s chapter on climate policy and climate prediction in this book:

    Rayner, S., ‘Prediction and other approaches to climate change policy’, in: Sarewitz, D. et al (Ed.), Prediction: Science, Decision Making, and the Future of Nature, Island Press, 2000.
    http://sciencepolicy.colorado.edu/about_us/meet_us/roger_pielke/prediction_book/toc.html

    and also

    Dessai,S. and Hulme,M.(2004) Does climate adaptation policy need probabilities? Climate Policy 4 107-128.

    Click to access 2004-dessai-hulme-probabilities.pdf

    Bashing climate models is fun, and maybe even healthy for the science, but good climate policy does not depend upon it.

  28. John A
    Posted Jul 8, 2007 at 4:58 AM | Permalink

    Roger,

    I am saying that this is tantamount to saying that we cannot prepare for the future without a “proper analysis of models” — which is nonsense.

    The problem is in your reading of what I said, not in what I said. You imply that I made a statement to the effect that without proper analysis of models, no preparation for the future can be made.

    That isn’t what I meant, nor can I work out why you think I did.

  29. Posted Jul 8, 2007 at 5:01 AM | Permalink

    re: #22

    An excellent suggestion Bob: “Why is the chapter not named “Global Climate Scenarios”?”

    I think this provides an accurate description of the process. Many of the important data supplied to the GCMs are obtained from scenarios and storylines; the output should then be labelled the same way.

  30. Peter Hartley
    Posted Jul 8, 2007 at 6:16 AM | Permalink

    The discussion of whether the IPCC is making forecasts or proposing scenarios is critical for the policy discussion. According to the dictionary, a “scenario” is “a postulated sequence of possible events” while a “prediction” is “a statement made about the future” and a “forecast” is “a calculation predicting future events.” In order to make policy in a rational way, we need to be able to calculate potential benefits from the policy action and compare them to the potential costs. If the IPCC is only giving us scenarios where the events are merely “possible” but there is no likelihood attached to them, we have no basis to make rational policy. I fear, however, that the real situation is even worse than that. I suspect that the IPCC authors could not even honestly say that each of the calculated “scenarios” is known to have a non-zero probability of occurring.

  31. pete
    Posted Jul 8, 2007 at 6:55 AM | Permalink

    Judith Curry writes in #6

    Interesting post. However IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases. These models do not project solar variability or volcanic eruptions, which would be required for actual forecasts.

    Interesting claim. Did you read the article or are you shooting from the hip?

    pg.8:
    In apparent contradiction to claims by some climate experts that the IPCC provides “projections” and not “forecasts,” the word “forecast” and its derivatives occurred 37 times, and “predict” and its derivatives occurred 90 times in the body of Chapter 8.

  32. John Lang
    Posted Jul 8, 2007 at 7:01 AM | Permalink

    to #6, Dr. Curry, Hansen includes periodic volcanic events in his models. So apparently, there are solar, urban, land use and volcanic impacts built into the forecast models.

  33. pete
    Posted Jul 8, 2007 at 7:02 AM | Permalink

    Bob asks in #22

    Can you honestly say the modelling community doesn’t consider what they do as attempted predictions?

    If they aren’t making predictions that can be compared with observations…. it ain’t science.

  34. John F. Pittman
    Posted Jul 8, 2007 at 7:12 AM | Permalink

    #28 I read your statements with interest. Don’t both imply that, more important than mitigation at present, the real effort should be on economic development, especially development that includes or allows for future mitigation? In fact this appears to be an underlying assumption of the IPCC: that economic ability translates to mitigation ability. Even though the IPCC indicates that mitigation benefits outweigh costs, don’t both the ability to mitigate and the amount saved by mitigation increase as the economy grows? This also implies a ranking that should be used for the general implementation of policies. The best ranking goes to policies that grow the economy with mitigation effects that are certain; the worst to policies that shrink the economy with uncertain mitigation. The majority of solutions will be in the middle, but economic benefits alone have an adjusted value of 99 to 1 over mitigation alone, and this weighting factor should be used by policy makers.

  35. RomanM
    Posted Jul 8, 2007 at 7:14 AM | Permalink

    Prof. Pielke says

    Bashing climate models is fun, and maybe even healthy for the science, but good climate policy does not depend upon it.

    What you should say is that “good climate policy should not depend on it”. Unfortunately, these models are presented by the IPCC as scientifically sound projections with the distinct purpose of advocating for a particular policy direction by creating a sense of urgency which moves the considerations to an emotional level. The policy makers are not for the most part in a position to make sound rational decisions without good scientific advice. Unless the believability of these models is put into a proper context, they have little choice other than to act on their “projections” rather than properly evaluating all of the evidence and forming “good climate policy”. That this is the case is evident from some of the misguided courses of action already being followed.

    RomanM

  36. pete
    Posted Jul 8, 2007 at 7:26 AM | Permalink

    John says in #33

    Dr. Curry, Hansen includes periodic volcanic events in his models. So apparently, there is solar, urban, land use and volcanic impacts built into the forecast models.

    What Judith doesn’t mention is that GCMs aren’t sensitive to solar variability.

    Agreement between observations and model simulations of Sun-Earth system variability differs markedly among different regimes. A major enigma is that general circulation climate models predict an immutable climate in response to decadal solar variability, whereas surface temperatures, cloud cover, drought, rainfall, tropical cyclones, and forest fires show a definite correlation with solar activity. For example, when responses to the observed 11-year cycle in total radiative output are modeled, the resulting surface-temperature changes at Earth are a factor of five smaller than those deduced from empirical deconstruction of the surface-temperature record (Figure 3). Either the empirical evidence is deceptive or the models are inadequate, in their parameterization of feedbacks such as cloud processes and atmosphere-ocean couplings, for instance, or in their neglect of indirect responses by the stratosphere and amplification of naturally occurring internal modes of climate variability. In contrast, general circulation models of the coupled thermosphere and ionosphere predict dramatic responses to changing solar energy inputs (Figure 4), but a lack of global datasets precludes comprehensive validation.

    Lean, J. “Living with a variable star,” Physics Today, 58, 6, 32, 2005.
    Whether or not solar variability plays an important role in recent climate change is an open question. However, it is certainly odd for Judith to claim GCM outputs aren’t predictions because they don’t include solar variability “forecasts” when GCMs are apparently insensitive to such effects.

  37. Steve Milesworthy
    Posted Jul 8, 2007 at 7:48 AM | Permalink

    #20 John A

    In apparent contradiction to claims by some climate experts that the IPCC provides “projections” and not “forecasts,” the word “forecast” and its derivatives occurred 37 times,

    The keyword here is “apparent”: each time the word forecast appears, it refers to weather and seasonal forecasts. So essentially this statement is rubbish.

    Basically the whole report is a big old pile of nonsense built around a smokescreen of semantic confusion about the meaning of the word “forecast”. Eg. they say because there are uncertainties, we must assume there will be no change at all. Well no! The certainty is that GHGs cause warming, the uncertainties are the feedbacks – so without the models we should assume warming will happen.

  38. James Erlandson
    Posted Jul 8, 2007 at 7:59 AM | Permalink

    Judith Curry on IPCC (comment six):

    They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases.

    Robert A. Heinlein on science fiction:

    … realistic speculation about possible future events, based solidly on adequate knowledge of the real world, past and present, and on a thorough understanding of the nature and significance of the scientific method.

  39. paminator
    Posted Jul 8, 2007 at 8:23 AM | Permalink

    #38- If the seasonal and weather forecasts are for future years, decades or centuries, then you are wrong.

    You say

    The certainty is that GHGs cause warming, the uncertainties are the feedbacks – so without the models we should assume warming will happen.

    Since the feedbacks are uncertain, even in their sign, they should be set to zero. This results in a climate sensitivity of 0.1 – 0.2 C/ W/m^2, or a forecast temperature rise of
    Basically the whole report is a big old pile of nonsense built around a smokescreen of semantic confusion about the meaning of the word “forecast”.

    Which report are you referring to, the IPCC or Armstrong and Green?

  40. Matei Georgescu
    Posted Jul 8, 2007 at 8:25 AM | Permalink

    RE#36, really just semantics but I would take it one step further: “good climate policy should not depend entirely on it”. In a nutshell, that’s my main beef with this report and its forecasts, namely that they are presented as imminent occurrences (e.g. see the recent Science paper on “imminent” SW US drought) when there is so much science out there waiting to be done.

  41. Jaye
    Posted Jul 8, 2007 at 8:32 AM | Permalink

    If climate models were really independent, and tested multiple theories of climate change (rather than just one) and presented a range of testable forecasts in a reasonable timeframe, then we might make some headway.

    That’s really it in a nutshell. Until you get there, then preparing for the future is just a guessing game. I suppose there are some easy things to do that don’t really depend on GCM’s, climate change or the IPCC. Make more efficient use of the energy we have and develop cleaner sources. Beyond that though how can anybody seriously advocate carbon caps and drastic de-industrialization that the Goristas would have us do based on what is an immature and incomplete branch of science?

  42. Steve Milesworthy
    Posted Jul 8, 2007 at 8:41 AM | Permalink

    #40
    No, they are in reference to short term forecasts, so I am right 🙂 To be clear, the mentions of “forecast” I saw were in relation to pointing out that climate models were also used as forecast models; the implication being that the basic representation of the science was the same.

    Your argument that the feedbacks should be assumed to be zero would be more justifiable if the uncertainty were very large, but that is not what they said. And having said that, they even have the cheek to give a positive reference to Svensmark, for which the jury certainly is out.

  43. steven mosher
    Posted Jul 8, 2007 at 9:07 AM | Permalink

    In chapter 10 of WG1 the notion of commitment comes up: regardless of which SRES scenario we select, we are COMMITTED to 0.36C of warming by 2030 because of warming already in the pipeline.

    Is this commitment a fact, a projection, a prediction, a forecast?

    Hmm. On a related note, Dan Hughes and others might enjoy …

    Slightly off topic, but I’ve always been bothered by this notion of “20” models. There are not really 20 independent models. There are groups of models (three from GISS, for example): essentially models from the same team run at different levels of complexity.

    One thing I thought might be interesting is comparing individual results and then seeing what happens when they run in “multi model” mode.

    One tidbit
    http://ipcc-wg1.ucar.edu/wg1/Report/suppl/Ch10/Ch10_indiv-maps.html

    Check figure 10.8..

  44. Roger Pielke, Jr.
    Posted Jul 8, 2007 at 9:07 AM | Permalink

    RomanM (#36)- You make a great point, and I agree. However, for good policy to result requires not just showing the flawed basis for a particular, dominant proposal, but also providing an alternative. In other words, you can’t beat something with nothing.

    Critiques of climate models, absent the introduction of another basis for action, are simply endorsements of business as usual. I think that we can do better than that.

  45. jae
    Posted Jul 8, 2007 at 9:14 AM | Permalink

    Judith says:

    Interesting post. However IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases. These models do not project solar variability or volcanic eruptions, which would be required for actual forecasts.

    The report cited in the post says:
    “Further, individual GCMs produce widely different forecasts from the same initial conditions and minor changes in parameters can result in forecasts of global cooling (Essex and McKitrick, 2002). Interestingly, modeling results that project global cooling are often rejected as “outliers” or “obviously wrong” (e.g., Stainforth et al., 2005)”

    This is the crux of the problem. The results of the models are being “filtered” by the bias of the modelers. Why aren’t the cooling scenarios shown?

    Anyone who thinks the IPCC process is scientific doesn’t have a clue about the scientific method, IMHO.

  46. jae
    Posted Jul 8, 2007 at 9:17 AM | Permalink

    Roger: absent some good evidence of harm, the “business as usual” scenario ain’t bad. The world is steadily getting better (see Lomborg’s book). Why screw it up with poor science and Gaia crap? The precautionary principle doesn’t make any sense here.

  47. Steve Milesworthy
    Posted Jul 8, 2007 at 9:38 AM | Permalink

    Jae,
    The Stainforth paper refers to the climateprediction.net experiment, where a large number of models were run at low resolution on PCs and were identical apart from tweaks to control parameters. Each parameter was tweaked within a physically plausible range, but the outcomes can be viewed as outliers if their results do not conform to the current observed climate, not because the experimenter is “biased” against them.

    These experiments are more about understanding which parameters are important and need to be researched more carefully, rather than offering useful and plausible projections.

    Haven’t read the Essex and McKitrick book, but they’re hardly modelling experts so I doubt this is a primary reference.

  48. John A
    Posted Jul 8, 2007 at 10:00 AM | Permalink

    Roger Pielke:

    Critiques of climate models, absent the introduction of another basis for action, are simply endorsements of business as usual. I think that we can do better than that.

    That again is nonsense. Let me translate what you’ve just said by changing two words for an equivalent:

    Critiques of tarot cards, absent the introduction of another basis for action, are simply endorsements of business as usual. I think that we can do better than that.

    Now where are we? Do we stop using tarot cards now or continue until something better comes along?

    There is something in the prophecy business (or is it the projection business?) that simply assumes that in the light of current forecasting techniques and if current trends continue, we face a certain apocalypse. This kind of “certain knowledge” has been happening over and over for thousands of years. The Bible is full of prognostications using the data of legends and oral histories of times past together with current events (usually weather related) to project future apocalypse unless everyone reverts to a fundamentalist creed of self-denial, restrictions on liberty and acquiescence to an unelected permanent elite for the common good of all.

    If you think that people in times past weren’t in the “projection business” or the “scenario business” then you’d better read the Old Testament or Revelation again.

    I don’t advocate “business as usual”, I simply request that unless those in the prediction business start winning some short term bets, then we should not be spending limited resources trying to bet on long term effects far into the future.

  49. steven mosher
    Posted Jul 8, 2007 at 10:16 AM | Permalink

    re #38.

    SteveM the worthy. I would not call the entire report rubbish. Some is, some isn’t.

    Let’s grant that the IPCC does a series of “projections” rather than forecasts and lay the semantic issue to the side.

    Now comes the question: are the projections of any consequence? Well, clearly, since the subsequent WGs take notice of the projections, they view the projections as credible, believable, a basis for action. Otherwise, the IPCC are engaged in mere climatological scholasticism, calculating the number of Gores that can dance on the head of a polar bear.

    Perhaps projections don’t rise to the level of forecast. A forecast, after all, could be disconfirmed by future observation. Weather forecasts are often wrong. IPCC can’t afford to be “wrong” or disproven. The difference between a forecast and a projection is this:

    A “projection” is insulated from disconfirmation. If you get a projection wrong,
    you can wave your arms; If you get a forecast wrong, you hang your head.

    So, cover your forecast with a condom and call it a projection: you get safe science. Science free from disconfirmation.

    Now, what was NOT rubbish in the report? Two things:

    1. the discussion of matching models to historical records.
    2. The discussion of NAIVE forecasts.

    I have not seen any good papers on how GCMs can reproduce historical records. I’m sure they exist. Links, please.

    On naive prognostications: I’ll take the observation record at face value for this. I think GISS shows 0.8C warming for the last century, with acceleration in the last 25 years.

    NAIVE prognostication: 1C for the next 100 years. So, yes, I believe in warming.

    If you want me to believe anything other than this naive prognostication, then you have to forecast. Especially if you want me to change my behavior today for a benefit 100 years from now that I will not enjoy.

    Unless there is some environmentalist heaven where I will be rewarded with 72 virgins.

  50. L Nettles
    Posted Jul 8, 2007 at 10:31 AM | Permalink

    David Ermer first linked to this article on July 28 in the IPCC Comments now online thread.

  51. Steve Milesworthy
    Posted Jul 8, 2007 at 10:36 AM | Permalink

    #50 steven M the osher

    I’m afraid my observation of reports such as these and, say, the ISPM (Fraser version of the IPCC report), or that Essex and McKitrick nonsense about global temperatures, is that they are purely vehicles for publicising the incorrect strawman arguments in their abstracts and conclusions. A large pile of manure concealing something useful and interesting is still a large pile of manure.

    There is a large literature of model hindcasts on recent history and more distant prehistory, including test cases such as model behaviour following Pinatubo or strong El Niños. Given that the current range of models diverge within the same IPCC emissions scenarios, there is hardly likely to be a perfect fit, but I would argue that there is enough evidence to suggest a strong enough positive feedback to take remedial action now. If the manure heaps turn out to be hiding a gold mine, we’ll know long before the costs have mounted noticeably.

  52. Peter Hartley
    Posted Jul 8, 2007 at 11:34 AM | Permalink

    #52 What can it mean to say “I would argue that there is enough evidence to suggest a strong enough positive feedback to take remedial action now”? It can only mean that there is enough evidence to conclude that the expected costs of not taking remedial action of some sort (unspecified) are large enough to outweigh the expected costs of taking that (unspecified) action.

    In order to make a claim like this, one needs more than evidence that there are positive feedback effects from anthropogenic CO2 emissions — assuming for the sake of argument that indeed we have such evidence. We also need evidence that there are not negative feedback effects sufficient to substantially offset the positive ones. We also would need evidence that the natural fluctuations are small enough that we can confidently conclude that the future in the absence of anthropogenic emissions would not involve cooling. We also need to attach probabilities to all these possible claims so we can determine expected values.

    Then we need to have probabilistic statements about the possible effects of any future warming. It is not obvious that warming would be bad on net. Do we happen to live in the best of all possible climates? In this regard, for example, a recent article in the American Economic Review provided evidence that projected climate change in the US would most likely be quite positive for US agriculture, and I would expect that conclusion to hold even more strongly for Canada, Russia, Poland, Germany and many other places of significant agricultural production.

    We also need to take account of other possible effects of altering CO2 emissions — in particular the aerial fertilizer effects which appear to be strongly positive on net. Hence, the expected climate effects of prospective CO2 emissions need to be sufficiently negative to offset the expected direct benefits of those same emissions. On top of that, we need to measure the expected direct costs of whatever actions are proposed to limit CO2 emissions.

    There are actually many steps to take when going from climate science to policy and climate “scenarios” and “storylines” are not a sufficient basis for rational policy prescriptions.
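The expected-value reasoning in this comment can be sketched numerically. The probabilities and damage figures below are pure placeholders invented to show the structure of the calculation — attaching probabilities to outcomes, then comparing expected costs with and without action — not estimates of anything real:

```python
# Sketch of an expected-cost comparison between policy options. Every
# number here is a hypothetical placeholder; only the structure of the
# calculation is the point.

def expected_cost(outcomes):
    # outcomes: list of (probability, cost) pairs; probabilities sum to 1.
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * c for p, c in outcomes)

# Hypothetical outcome distributions (probability, cost in arbitrary units).
no_action = [(0.3, 100.0),   # strong net warming, large damages
             (0.5, 30.0),    # mild warming, modest damages
             (0.2, 0.0)]     # no net harm (or net benefit)

# Under action, climate damages are assumed partly avoided, but a fixed
# policy cost of 20 units is paid in every outcome.
action = [(0.3, 60.0 + 20.0),
          (0.5, 25.0 + 20.0),
          (0.2, 0.0 + 20.0)]

print("expected cost, no action:", expected_cost(no_action))
print("expected cost, action:   ", expected_cost(action))
```

Whether action or inaction wins such a comparison depends entirely on the probabilities and costs fed in, which is precisely the commenter’s point: scenarios without attached likelihoods cannot populate this table at all.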

  53. steven mosher
    Posted Jul 8, 2007 at 11:44 AM | Permalink

    RE 52.

    Ever the worthy.

    I’m afraid my observation of reports such as these and, say, the ISPM (Fraser version of the IPCC report), or that Essex and McKitrick nonsense about global temperatures, is that they are purely vehicles for publicising the incorrect strawman arguments in their abstracts and conclusions. A large pile of manure concealing something useful and interesting is still a large pile of manure.

    When some see a pile of manure they ask “where is the Pony?”

    I like ponies.

    Well, I was underwhelmed by the paper. Still, I think there needs to be much more transparency on the skill models have or do not have in matching a historical record, and on how the various “levels” of models are coordinated.

    There is a large literature of model hindcasts on recent history and more distant prehistory, including test cases such as model behaviour following Pinatubo or strong El Ninos. Given that the current range of models diverge within the same IPCC emissions scenarios there is hardly likely to be a perfect fit, but I would argue that there is enough evidence to suggest a strong enough positive feedback to take remedial action now. If the manure heaps turn out to be hiding a gold mine, we’ll know long before the costs have mounted noticeably.

    Lots here:

    1. I’ve only found a couple of papers. I’ll search harder. Pointer to the BEST please and thank you.
    2. I read one of Gavin’s. I struggled with the notion of performance metrics. I did not see any clearly stated. I’ll read it again; maybe I was impatient.
    3. I don’t think reasonable folks are looking for a perfect fit. I think a lot of the presentation (the damn spaghetti graphs) hides the misfits, leading to suspicions. Who is systematically low, who is systematically high, what is this attributed to, how was it corrected… That whole process is fraught with uncertainty. I think it should be better documented.
    4. Remedial action. Here is what I was going to say in the last post. Every time I see the US government pour another cent into repopulating New Orleans I am gobsmacked. If one believed in AGW, if one believed that the sea would rise again (does Gore have an Ark?), if one believed that hurricanes would get either more frequent, or more frequently strong, or more frequently landfalling, or more rare but freakishly strong, WHY would you have a policy of returning people to a death zone? Why? Gobsmacked I am.

    5. Whatever actions are taken now to fight AGW will not benefit me. The warming is already in the pipeline. So why, even if I believed in this religion, would I take action today for which there is no benefit?

  54. oconnellc
    Posted Jul 8, 2007 at 12:06 PM | Permalink

    steven mosher, you write a well-reasoned post, except for #5. For example, being male, I will never get breast cancer, yet I see benefits to breast cancer research. Being caucasian, I am unlikely ever to get sickle-cell anemia, yet I see benefits to finding a cure. I have no children, yet I see benefits to research into juvenile diabetes. I will never live in Africa, yet I gladly donate money to charities that fight hunger there. Perhaps you did not mean to sound the way you sounded?

  55. joshua corning
    Posted Jul 8, 2007 at 12:22 PM | Permalink

    Interesting post. However, the IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases. These models do not project solar variability or volcanic eruptions, which would be required for actual forecasts.

    I think the major disconnect or misunderstanding here is that although these are not forecasts the IPCC and its proponents want these scenario simulations to have the same legitimacy as forecasts would.

  56. John A
    Posted Jul 8, 2007 at 12:43 PM | Permalink

    Given that the current range of models diverge within the same IPCC emissions scenarios there is hardly likely to be a perfect fit, but I would argue that there is enough evidence to suggest a strong enough positive feedback to take remedial action now. If the manure heaps turn out to be hiding a gold mine, we’ll know long before the costs have mounted noticeably.

    Got faith?

  57. John Baltutis
    Posted Jul 8, 2007 at 6:54 PM | Permalink

    Here is a news item from the latest Science. It is reporting on a Commentary in Nature that argues the modeling uncertainty is greater than the IPCC admits.

    CLIMATE CHANGE: Another Global Warming Icon Comes Under Attack, by Richard A. Kerr

    Climate scientists are used to skeptics taking potshots at their favorite line of evidence for global warming. It comes with the territory. But now a group of mainstream atmospheric scientists is disputing a rising icon of global warming and researchers are giving some ground.

    The challenge to one part of the latest climate assessment by the Intergovernmental Panel on Climate Change (IPCC) “is not a question of whether the Earth is warming or whether it will continue to warm” under human influence, says atmospheric scientist Robert Charlson of the University of Washington, Seattle, one of three authors of a commentary published online last week in Nature Reports: Climate Change.

    Instead, he and his co-authors argue that the simulation by 14 different climate models of the warming in the 20th century is not the reassuring success IPCC claims it to be. Future warming could be much worse than that modeling suggests, they say, or even more moderate. IPCC authors concede the group has a point, but they say their report—if you look in the right places—reflects the uncertainty the critics are pointing out.

    Twentieth-century simulations would seem like a straightforward test of climate models. In the run-up to the IPCC climate science report released last February (Science, 9 February, p. 754), 14 groups ran their models under 20th-century conditions of rising greenhouse gases. As a group, the models did rather well (see figure). A narrow range of simulated warmings (purple band) falls right on the actual warming (black line) and distinctly above simulations run under conditions free of human influence (blue band).

    Not so certain. The uncertainty range in the modeled warming (red bar) is only half the uncertainty range (orange) of human influences.

    But the group of three atmospheric scientists—Charlson; Stephen Schwartz of the Brookhaven National Laboratory in Upton, New York; and Henning Rodhe of Stockholm University, Sweden—says the close match between models and the actual warming is deceptive. The match “conveys a lot more confidence [in the models] than can be supported in actuality,” says Schwartz.

    To prove their point, the commentary authors note the range of the simulated warmings, that is, the width of the purple band. The range is only half as large as they would expect it to be, they say, considering the large range of uncertainty in the factors driving climate change in the simulations. Greenhouse-gas changes are well known, they note, but not so the counteracting cooling of pollutant hazes, called aerosols. Aerosols cool the planet by reflecting away sunlight and increasing the reflectivity of clouds. Somehow, the three researchers say, modelers failed to draw on all the uncertainty inherent in aerosols so that the 20th-century simulations look more certain than they should.

    Modeler Jeffrey Kiehl of the National Center for Atmospheric Research in Boulder, Colorado, reached the same conclusion by a different route. In an unpublished but widely circulated analysis, he plotted the combined effect of greenhouse gases and aerosols used in each of 11 models versus how responsive each model was to a given amount of greenhouse gases. The latter factor, called climate sensitivity, varies from model to model. He found that the more sensitive a model was, the stronger the aerosol cooling that drove the model. The net result of having greater sensitivity compensated by a greater aerosol effect was to narrow the apparent range of uncertainty, as Schwartz and his colleagues note.

    “I don’t want certain interests to claim that modelers are dishonest,” says Kiehl. “That’s not what’s going on. Given the range of uncertainty, they are trying to get the best fit [to observations] with their model.” That’s simply a useful step toward using a model for predicting future warming.

    IPCC modelers say they never meant to suggest they have a better handle on uncertainty than they do. They don’t agree on how aerosols came to narrow the apparent range of uncertainty, but they do agree that 20th-century simulations are not IPCC’s best measure of uncertainty. “I’m quite pleased with how we’re treating the uncertainties,” says Gabriele Hegerl of Duke University in Durham, North Carolina, one of two coordinating lead authors on the relevant IPCC chapter, “but it’s difficult to communicate” how they arrived at their best uncertainty estimates.

    Hegerl points out that numerical and graphical error ranges in the IPCC report that are attached to the warming predicted for 2100 are more on the order expected by Schwartz and his colleagues. Those error bars are based on “a much more complete analysis of uncertainty” than the success of 20th-century simulations, she notes. It would seem, as noted previously (Science, 8 June, p. 1412), IPCC could improve its communication of climate science.
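The compensation Kiehl describes can be sketched numerically. The following is a toy illustration only: the sensitivity range, the forcing values, and the exact form of the anticorrelation are invented for the sketch, not taken from any actual model.

```python
# Toy illustration of the compensation effect Kiehl describes: models with
# higher climate sensitivity tend to pair with stronger (more negative)
# aerosol forcing, so their simulated 20th-century warmings agree despite
# large forcing uncertainty. All numbers here are invented.
import random

random.seed(0)

def simulated_warming(sensitivity, net_forcing):
    """Toy linear response: warming (K) = sensitivity (K per W/m^2) * forcing."""
    return sensitivity * net_forcing

independent = []   # sensitivity and aerosol forcing chosen independently
compensated = []   # aerosol forcing anticorrelated with sensitivity

for _ in range(1000):
    sens = random.uniform(0.4, 1.2)        # K per W/m^2 (toy range)
    ghg = 2.5                              # greenhouse forcing, W/m^2 (toy)
    aerosol = random.uniform(-1.5, -0.5)   # aerosol cooling, W/m^2 (toy)
    independent.append(simulated_warming(sens, ghg + aerosol))

    # Stronger aerosol cooling paired with higher sensitivity, mimicking
    # the anticorrelation Kiehl found across the 11 models he examined.
    aerosol_c = -1.5 + (1.2 - sens) / (1.2 - 0.4) * 1.0
    compensated.append(simulated_warming(sens, ghg + aerosol_c))

def spread(xs):
    return max(xs) - min(xs)

print(f"independent spread: {spread(independent):.2f} K")
print(f"compensated spread: {spread(compensated):.2f} K")
```

The compensated ensemble shows a much narrower spread of simulated warmings than the independent one, which is the sense in which the 20th-century match "conveys more confidence than can be supported."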

  58. Bob Koss
    Posted Jul 8, 2007 at 7:29 PM | Permalink

    It seems to me that if any of the models were considered reliable by the modelling community, there wouldn’t be a couple dozen groups using different models. Instead there would only be a couple models, with a couple dozen or so groups concentrating on perfecting them.

    Surely some of these groups must now be backing a dead horse. All the models can’t be right. If they can’t all be right, why do the modellers using the wrong ones not realize this and instead spend their time helping perfect the best of the rest?

    Evidently the understanding of the subject is so convoluted that everyone thinks the other models have so many inaccuracies as to be irreparable; and that their model is the one staying closest to the yellow brick road.

    Not very confidence-inspiring. Especially considering these are all numerical programs and nobody is willing to put a number on the future accuracy of their own particular model, e.g. “This model is accurate to within 0.5% per year.”

  59. Posted Jul 8, 2007 at 8:26 PM | Permalink

    I call this use of climate models “scientific money laundering”. In A Skeptical Layman’s Guide to Anthropogenic Global Warming I wrote:

    The models produce the result that there will be a lot of anthropogenic global warming in the future because they are programmed to reach this result. In the media, the models are used as a sort of scientific money laundering scheme. In money laundering, cash from illegal origins (such as smuggling narcotics) is fed into a business that then repays the money back to the criminal as a salary or consulting fee or some other type of seemingly legitimate transaction. The money he gets back is exactly the same money, but instead of just appearing out of nowhere, it now has a paper-trail and appears more legitimate. The money has been laundered.

    In the same way, assumptions of dubious quality or certainty that presuppose AGW beyond the bounds of anything we have seen historically are plugged into the models, and, shazam, the models say that there will be a lot of anthropogenic global warming. These dubious assumptions… are laundered by being passed through these complex black boxes we call climate models and suddenly the results are somehow scientific proof of AGW. The quality hasn’t changed, but the paper trail looks better, at least in the press. The assumptions begin as guesses of dubious quality and come out laundered as “settled science.”

  60. airmouton
    Posted Jul 8, 2007 at 8:26 PM | Permalink

    Re: 50

    There is a large literature of model hindcasts on recent history and more distant prehistory, including test cases such as model behaviour following Pinatubo or strong El Ninos. Given that the current range of models diverge within the same IPCC emissions scenarios there is hardly likely to be a perfect fit, but I would argue that there is enough evidence to suggest a strong enough positive feedback to take remedial action now.

    So you’re saying that models are predictive of historical temperatures? In which case they should be predictive of future temperatures? In which case they should follow normal forecasting procedure, a la Green/Armstrong?

    the certainty is that GHG’s cause warming

    In what? A glass tube? Has anyone adequately proved that they cause an equal amount of warming in a complex climatic system? Is it possible to prove such a thing without a “predictive” climate model? Isn’t there also a certainty that aerosols cause cooling, particulates global dimming, and so on?

    Re: 45

    Critiques of climate models, absent the introduction of another basis for action, are simply endorsements of business as usual. I think that we can do better than that

    If you’re lost in the woods, the conventional wisdom is to stay put.

    Growing environmental awareness is “business as usual.” Doing better may mean improving things faster. It may also mean dumping iron into the oceans, or deforesting the Amazon to grow corn.

  61. Posted Jul 8, 2007 at 8:30 PM | Permalink

    Wimpy says: I will gladly pay you Tuesday for a hamburger today!

    Or, translated into IPCC speak: I will gladly face the music in 2050, for the bad predictions I make today!

    You see, they are not so irresponsible after all!

  62. Steve McIntyre
    Posted Jul 8, 2007 at 9:09 PM | Permalink

    I obviously think that a thorough audit of at least one GCM is long overdue, something that Armstrong endorses, but I think that there is too much rhetoric in Armstrong’s article and the subsequent commentary. Phrases like the following are amusing, but are no substitute for detailed analysis:

    They referred to the following quote as a summary on their page 45: “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation and eventually build a structure which has no relation to reality. (Nikola Tesla, inventor and electrical engineer, 1934.)”

    Also, just because GCMs may not be very good doesn’t mean that AGW isn’t an issue. Personally I think that any connection between increased CO2 and 2.5 deg C should be arguable without involving GCMs and that GCMs probably introduce a plethora of irrelevant problems, which people have a tough time solving. I’m surprised that no one has been able to identify a clear exposition of this linkage, but just because IPCC hasn’t addressed this, doesn’t mean that it can’t be done. Why it hasn’t been done is a different question.

    One more time, I urge people to be less angry. It’s a tone that I try to avoid and I urge others to as well.

  63. Posted Jul 8, 2007 at 10:27 PM | Permalink

    Unfortunately, Steve, whilst some of the article may be rhetoric, it is a fact that much of the AGW debate has been just that. The involvement of people like Gore has not allowed the scientific debate to take place on a rational basis, and I am afraid it is likely that only when public opinion begins to turn will some of the real work begin. Why do an audit when they don’t have to? How many companies would forego an audit if they could?

  64. tetris
    Posted Jul 8, 2007 at 10:45 PM | Permalink

    Re:63
    Steve,
    We need to bear in mind that, while it has become the norm in Western societies to treat being angry as undesirable, it remains a perfectly normal emotion, experienced by most of us under certain circumstances. Fact is, CA and other groups’ good works notwithstanding, we’re faced with governments around the world making policy decisions involving trillions of taxpayers’ dollars, and casually talking about 20-100 basis point reductions in GDP as if we’re dealing with “one or two scoops of ice cream”. All of this based on IPCC GIGO [Garbage In, Garbage Out] “recommendations”, “projections”, “forecasts” or whatever else they may be called in Orwellian newspeak. As someone with a background in scientific and corporate due diligence, having served on boards and as CEO internationally, this is a political shambles and good cause for feeling angry, indeed. Your comment about tone is well taken.

  65. Steve McIntyre
    Posted Jul 8, 2007 at 10:50 PM | Permalink

    If anyone feels angry about this sort of stuff, then take a valium before posting here. Anger comes across very poorly: I don’t like reading it, I generally tune it out, and it gives a bad impression.

    As to people worrying about politicians spending money foolishly, it’s fine to worry about it, but please worry about it elsewhere so that this site can maintain its niche focus on scientific issues. We can have a little amusement with the Al Gore concert and such from time to time, but I don’t want to discuss policy here.

  66. Nate
    Posted Jul 8, 2007 at 11:18 PM | Permalink

    Roger(45)

    You make a great point, and I agree. However, for good policy to result requires not just showing the flawed basis for a particular, dominant proposal, but also providing an alternative. In other words, you can’t beat something with nothing.

    What happened to “We don’t know”?

    To reference the last post on here about Gore, I bet anyone $1,000 that I can use the movement of my three cats over an hour time period to more accurately predict average global temperatures for the next 100 years than can the IPCC(pick one scenario of your choice).

  67. Paul G M
    Posted Jul 9, 2007 at 12:17 AM | Permalink

    Anger and Judith Curry

    Steve you are perhaps right that anger has no place on a scientific website like this.

    Well, despite all the semantic musings above about the meaning of forecasts versus projections etc., in the UK the “scenarios” of the IPCC are treated as facts. They have replaced all known scientific laws with one: increased CO2 and other GHGs = increased temperature, and that is bad.

    Challenge this new religion and you will face anger and derision – 2,500 or is it 2,500,000 scientists can’t be wrong.

    I note that you use humour yourself. I’m glad that you think the entire perversion of science and the squandering of vast sums of public and private money on some spectre is funny.

    The models don’t work because the functioning of the climate system is still not understood and so the models cannot reproduce it.

    The amazing thing, as you say in 63, is that the IPCC has not yet presented anything credible to prove the assertion that increasing CO2 = increasing temperature yet this is now taken as a fact. In our schools, proper science teaching is being replaced by material that is “relevant”. Here “relevant” means the AGW myth presented as a fact.

    I like the Gorefest post. No anger there then and all totally scientific.

    Best wishes

    Paul

  68. John A
    Posted Jul 9, 2007 at 1:29 AM | Permalink

    Steve McIntyre:

    …just because GCMs may not be very good doesn’t mean that AGW isn’t an issue.

    Since multiproxy studies are deeply flawed (as you have shown) and GCMs “aren’t very good”, on what basis do you judge AGW to be an issue? What is the issue with AGW? Or even GW?

  69. Jean S
    Posted Jul 9, 2007 at 1:58 AM | Permalink

    Déjà vu:

    We found no references to the primary sources of information on forecasting despite the fact these are easily available in books, articles, and websites.

  70. STAFFAN LINDSTRÖM
    Posted Jul 9, 2007 at 2:28 AM | Permalink

    #55
    oconnellc, FYI, males CAN get breast cancer! It’s rare, but it does occur, mostly in men aged 60-70! Just so you know… And how do you know you’ll never live in Africa?? Your personal “GCM”??

  71. DaveR
    Posted Jul 9, 2007 at 3:51 AM | Permalink

    #66

    This is a scientific website? Maybe McIntyre’s posts are, but the majority of the space is taken up by posts from various loons bursting with “geek rage”. Just look at John A’s response to Judith Curry in #20. You have a real, live atmospheric physicist posting here and all she gets is sneers and anger. Way to go guys. Very mature.

  72. MarkW
    Posted Jul 9, 2007 at 5:06 AM | Permalink

    I’m still waiting for someone to demonstrate this “evidence” of strong positive feedback. Even if 100% of the current warming were due to CO2 and feedbacks, it would argue for a neutral feedback. Since the evidence suggests that only a fraction, perhaps even a small fraction, of the warming is due to CO2 and its feedbacks, the evidence is that the feedback is, on net, negative.

  73. MarkW
    Posted Jul 9, 2007 at 5:14 AM | Permalink

    DaveR,

    So when a “real scientist” posts something that is demonstrably wrong, in your opinion us “geeks” are supposed to bow down and worship her anyway?

  74. John A
    Posted Jul 9, 2007 at 5:16 AM | Permalink

    Just look at John A’s response to Judith Curry in #20. You have a real, live atmospheric physicist posting here and all she gets is sneers and anger. Way to go guys. Very mature

    Judith Curry posted something that was demonstrably untrue. I cited accurately and to the point. By the way, another real, live geoscientist complimented me for doing so.

    In this case, BS is how I called it. It would have been immature not to.

  75. TonyN
    Posted Jul 9, 2007 at 5:58 AM | Permalink

    Let’s not forget Lahsen, M., “Uncertainty Distribution Around Climate Models”, Social Studies of Science, 2005. I’m surprised that Armstrong and Green didn’t cite this paper, as it suggests that the 19 models used by the IPCC are not independent, as they claim, but closely related to each other.

  76. RomanM
    Posted Jul 9, 2007 at 6:13 AM | Permalink

    Yes, DaveR (#72), this IS a scientific website and some of us “loons” are also “real, live scientists” in our own fields. When we see our areas misused and misunderstood in some areas of climate study, we tend to question the entire result.

    JohnA has raised many good points in his initial posting on this thread. Climate models for the most part are not sufficiently advanced to be trusted as reflecting reality. Many of them are basically deterministic and do not include multiple external factors which are not completely understood. The shot-gun approach of varying parameters in all directions until the result says what you want it to say has no credibility. Putting error bounds or a probability structure on the range of results in these circumstances is just nonsensical. The modeler needs to demonstrate a priori that a model has the “skill” to reflect the recent past and the present situation. THEN, projections forward from the present with that particular model might have some believability. Statistical methodology exists for including unpredictable factors in these models and should be used. However, I am somewhat dismayed by the reluctance of many of the climate research groups to include professional statisticians in the process, instead relying on a tightly knit circle of people who do not always have the proper expertise to apply the methods.
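The sort of a priori skill demonstration RomanM asks for can be made concrete with a standard skill score that compares a model's hindcast errors against a naive baseline. This is a minimal sketch: the "observed" and "hindcast" anomaly series below are invented for illustration, and the baseline chosen is a simple climatological mean.

```python
# Minimal hindcast skill check: does the model beat a naive baseline?
# A Murphy-style skill score: 1 = perfect, 0 = no better than the
# baseline, negative = worse than the baseline. All data are invented.

def rmse(pred, obs):
    """Root-mean-square error between two equal-length series."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def skill_score(pred, obs, baseline):
    """Skill relative to a baseline forecast of the same series."""
    return 1.0 - rmse(pred, obs) / rmse(baseline, obs)

observed  = [0.10, 0.05, 0.20, 0.15, 0.30, 0.25, 0.40]  # anomalies, K (invented)
hindcast  = [0.12, 0.08, 0.17, 0.18, 0.27, 0.28, 0.36]  # model output, K (invented)
climatology = [sum(observed) / len(observed)] * len(observed)  # naive baseline

print(f"skill vs climatology: {skill_score(hindcast, observed, climatology):.2f}")
```

A score near 1 would support trusting projections from that model; a score near 0 would mean the hindcast adds nothing over the long-term mean, which is the believability threshold RomanM describes.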

    I appreciate having this website as a venue to hear from both “real, live scientists” and the many others who participate in the discussion. Maybe we can all not only learn something from it, but also have some input in improving future models.

    RomanM

  77. Steve Milesworthy
    Posted Jul 9, 2007 at 6:27 AM | Permalink

    #54 Steven Mosher the Pony lover

    I’m afraid I do most of my reading for my own benefit, and I’m very bad at making lists of papers. Typing “climate attribution” into Google Scholar gives a reasonable list. Presumably the IPCC report will also cite appropriate papers. Hegerl et al “Climate Change Detection and Attribution: Beyond mean temperature signal” J Clim 2005 has just caught my eye, and seems to cover some issues with models.

    “Global Cooling After the Eruption of Mount Pinatubo: A Test of Climate Feedback by Water Vapor” (Soden et al. 2002) looks at feedbacks identified by models.

  78. Steve Milesworthy
    Posted Jul 9, 2007 at 6:44 AM | Permalink

    #77 RomanM and others
    With a forecast, you look, after the event, at a number of locations and decide whether the forecast agreed with the actual temperature/humidity/rainfall.

    You can’t do this with a climate model because by the time you’ve validated it a) a new and better model has come along, b) three large volcanoes and a major development of new coal-fired power stations have fouled up your projected emission scenarios, or c) the runaway greenhouse has obliterated mankind (joke!). It’s hard to validate on past climate since there is a question about how “tuned” the model is and how good the past observations are.

    As noted in #78 see the “climate attribution” papers for the statistical tests that are used for looking for the “anthropogenic fingerprint” in the temperature record.

    So that is why the 2100 projections are projections, and why the IPCC report never refers to them as forecasts, however many times the word “forecast” is used in chapter 8.

  79. JP
    Posted Jul 9, 2007 at 7:24 AM | Permalink

    The 2007 IPCC SPM has 4 emission scenarios on which it bases many of its projections. The SRES, located on page 18, gives Scenarios A1, A2, B1, B2, which project likely GHG emissions based on projected future economic activity, as well as demographic projections.

    The most human-friendly projection, A1, is based on the current dynamic economic trend, which is beneficial to most people. However, it is the one scenario in which the rate of CO2 emissions continues to rise, and with it global temperatures.

    The other 3 scenarios are based on a combination of slower population growth in 1st World nations, but continued population growth in the lesser developed ones. The B1 scenario is the most “eco-friendly”; the world economies continue to grow, but most of the growth is centered on technology and services, and not heavy industry….

    What fascinates me about these scenarios is that there is little mention of the huge demographic shifts that are well on their way in the 15 wealthiest nations. The IPCC does have a scenario (B1) in which the world population peaks at mid-century but continues to grow at a slower rate (they call this convergence). However, even if the 3rd world continues to grow population-wise, how can they expect anything but economic problems if 12 or 13 of the wealthiest nations begin to see their populations not only age, but dwindle in numbers?

    I know this is more of a demographic-economic question, but many of the scenarios the IPCC builds are based upon economic and demographic projections. How can there be increased CO2 levels (and their attendant temp rises) if 10 of the 15 wealthiest nations will be producing and consuming less? Even China is totally dependent upon wealthy Europe and North America for its exports. Ditto for India. Africa and South America obviously do not have the infrastructure, political regimes, nor a consumer base to pick up the slack.

    If we are to assume that GHGs drive our climate, then the skill of making climate projections is subject to economic and political forecasts.

  80. John Davis
    Posted Jul 9, 2007 at 7:48 AM | Permalink

    I think Niels Bohr had it right:

    Prediction is very difficult, especially if it’s about the future.

  81. MarkW
    Posted Jul 9, 2007 at 7:55 AM | Permalink

    #79,

    How do you know if the newer model is better, if you don’t bother comparing the results of either to the real world? Or do you just assume that this year’s best guess MUST be better than last year’s best guess?

  82. Hans Kelp
    Posted Jul 9, 2007 at 8:04 AM | Permalink

    # 79

    It is nice to see someone admit that the IPCC screwed it. Thank you.

    H.K.

  83. Hans Erren
    Posted Jul 9, 2007 at 8:22 AM | Permalink

    re 80:
    A2 has a world population of 15 billion in 2100. I wouldn’t call that a “likely scenario”. The SRES scenarios are possible scenarios. The one they left out is an ongoing global recession. So the SRES scenario suite doesn’t even cover all possible scenarios.

  84. Steve Milesworthy
    Posted Jul 9, 2007 at 8:24 AM | Permalink

    #80 JP
    You seem to be confusing levels of emission and CO2 concentration. CO2 stays around a looong time. So even if emissions are reduced (which some of the scenarios assume) levels in the atmosphere can continue to rise.
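The stock-versus-flow distinction here can be illustrated with a toy accumulation model. All parameters (initial concentration, retained fraction, ppm-per-GtC conversion, and the emissions path) are invented for the illustration and carry no claim about real carbon-cycle numbers.

```python
# Toy stock-vs-flow sketch: atmospheric CO2 concentration can keep rising
# even while annual emissions fall, because concentration responds to the
# accumulated stock of past emissions, not the current flow.

def concentration_path(emissions, c0=380.0, retained=0.5, ppm_per_gt=0.47):
    """Concentration (ppm) after each year of emissions (GtC/yr); toy parameters."""
    levels = [c0]
    for e in emissions:
        # Each year, a fraction of the year's emissions stays airborne
        # and adds to the accumulated concentration.
        levels.append(levels[-1] + retained * ppm_per_gt * e)
    return levels

# Emissions fall steadily from 10 toward 5 GtC/yr over 20 years...
falling_emissions = [10.0 - 0.25 * year for year in range(20)]
levels = concentration_path(falling_emissions)

# ...yet concentration rises every single year, because emissions stay positive.
print(f"start: {levels[0]:.1f} ppm, end: {levels[-1]:.1f} ppm")
print("rises every year:", all(b > a for a, b in zip(levels, levels[1:])))
```

The point of the sketch is only the qualitative behaviour: as long as net emissions remain positive, the concentration keeps climbing regardless of whether the emission rate is falling, which is why reduced-emission scenarios can still show rising CO2 levels.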

    #82 MarkW
    I didn’t say that you don’t bother to look at results of the old model, but this is a secondary consideration when you have a newer model that reflects current climatology more realistically.

    #83 90% of the contributors here think the IPCC “screwed it” so you’re in the right place to hear what you want to hear 🙂

  85. Ross McKitrick
    Posted Jul 9, 2007 at 8:28 AM | Permalink

    #48: Steve, Chris is a physicist who has worked in atmospheric radiation and thermodynamics all his career, including a stint helping develop the CO2 radiation code for the Canadian Climate Model. A primary reference to the discussion of parameterization is Essex, Chris (1991). “What Do Climate Models Teach Us about Global Warming?” Pageoph 135(1) 125-133. Your comment reminds me of a run-in he had with a Canadian “science” reporter who was preparing a blacklist for his fellow journalists of who not to talk to about global warming. The premise was that anyone who was not qualified to talk about climate science should be excluded from the media, but he noted Chris had published op-eds and had given interviews. The would-be gatekeeper informed Chris that he had examined his CV and only found reference to papers on radiation, thermodynamics and numerical computation, so what right did Chris have to present himself as an expert on climate? It took some effort to explain to the reporter why he was not enough of an expert to judge who is an expert. I think the same conclusion applies in your case.

  86. Ross McKitrick
    Posted Jul 9, 2007 at 8:33 AM | Permalink

    #28: Roger, when you say policy does not depend on it, are you referring to adaptation or emissions abatement? I agree that adaptation policy hardly needs GCM precision, since we are merely talking about resilience to a range of weather patterns that we can specify based on historical experience. But on the question of the optimal emissions reduction target, which is the policy debate attracting considerably more attention (rightly or *wrongly*) there’s no way to get around questions involving GCM precision. If CO2 has only trivial effects, that obviously matters for the calculation of the marginal damages function and the implied optimal policy target.

  87. steven mosher
    Posted Jul 9, 2007 at 8:33 AM | Permalink

    SteveMilesworthy, re #79

    you wrote

    “#77 RomanM and others
    With a forecast you look after the event at a number of locations and decide whether the forecast agreed with the actual temperature/humidity/rainfall
    You can’t do this with a climate model”

    Isn’t this exactly what Gavin did a while back on RC, looking at Hansen’s 1988 forecast? Hansen took proposed emission scenarios (3 scenarios: BAU bracketed on the top and bottom), threw in a volcano in the 1990s, and “projected”. Now, looking back, Gavin analyzed how well the model hit the mark. So it can be done, because it has been done.

    Continuing, you wrote

    “because by the time you’ve validated it a) a new and better model has come along,
    b) three large volcanoes and a major development of new coal-fired power stations have fouled up
    your projected emission scenarios, or c) the runaway greenhouse has obliterated mankind (joke!). ”

    Well, Gavin just performed this analysis of Hansen’s old model, despite the improvements since, so I don’t
    see the issue. The only issue would be this: if your old model missed the mark, then people might
    question the new model. On issue (b), emission scenarios being different from your projection:
    the SRES emission scenarios have a large spread. And I refer back to Hansen. He ran
    3 profiles and “guessed” at a major volcano. His guess at emissions was good and his guess at
    a volcano came in pretty close. Consequently, his projection/forecast/simulation was accurate
    to a point. So, there is nothing impossible in doing this. It’s just unlikely you will guess the
    emissions scenario and the volcanoes very accurately, the latter especially.

    I think the issue is the GCM crew don’t want to be on the hook for an emissions scenario. And they
    shouldn’t be. But they need to be a bit more forthcoming about it. The simulations produced
    global temperatures that ranged from about 1.5C to 6.5C. This spread is uncertainty in the SRES.

    Finally, a while back I stumbled on some data showing an update to the SRES where they actually
    put probabilities on them. Like you, I don’t keep reading lists, so if you find something similar,
    shout back.

  88. Joe Ellebracht
    Posted Jul 9, 2007 at 8:39 AM | Permalink

    OK, I will be the 99th person to weigh in on forecasts versus scenarios.

    As everyone knows the UN forecasts not just temperatures, but also population. These forecasts also employ scenarios, mostly about fertility, but also incorporating specific assumptions about other things, e.g. HIV. Just because the scenarios are published as scenarios doesn’t make them non-forecasts. They are treated as forecasts by the UN. See for example the press release language here:
    http://www.un.org/News/Press/docs//2007/pop952.doc.htm

    Here is the headline from the press release:

    WORLD POPULATION WILL INCREASE BY 2.5 BILLION BY 2050

    So having scenarios and relying on explicit named assumptions still allows a forecast to exist. But IMO if you publish a scenario and state “this is not a forecast” then indeed it is not a forecast. For example, if the UN had published a scenario for world population with the explicit assumption of “sun explodes on July 10, 2007”, they would probably also have stated “this is not a forecast, this is a scenario.”

    The UN wouldn’t do such a thing. A scenario that is not a forecast must have some utility to be worthy of publication. Scary scenarios without utility are not published by respectable people. A useful, non-forecast scenario might perhaps be contingent on some really critical assumption, but the assumption is one that is unlikely or not knowable. Good then for contingency planning. Other non-forecast scenarios might be exploratory, looking into theoretical relationships, to see if there exists evidence to support the relationships. For example, what if latex paint on weather sheds causes the temperatures reported to rise on sunny days above those reported in the past when whitewash was used? What would be the impact on temperatures reported across the US, given that 5% of the sheds were repainted each year?

    But the IPCC reports are for policy makers, so the scenarios are not scientific theories being considered, because the policy makers have nothing to add to the consideration of these kinds of scenarios. (Even after inventing the internet, I doubt that A. Gore, Jr. knows how to use html.) Perhaps then they are for contingency planning. I think in this regard, they have utility. What if our climate models are right? How might we adjust to this view of the world? Should we clear northern forests to plant corn and use the wood to construct boat residences for all Bangladeshi?

    However, the IPCC reports are not presented as contingency planning, but as the most likely explanations of how the future climate will be under various scenarios of future human activity. And not just activity, but activity that can be adjusted by policy decisions. Thus they are marketed as policy dependent forecasts. Here for example is a short excerpt from a speech by Prof. Jean-Pascal van Ypersele, Vice-Chair of IPCC Working Group II, located on the IPCC website.

    Click to access Jean-Pascal_van_Ypersele_may07.pdf

    Before the end of this century, (without particular emission reduction policies) global temperature is likely to increase by 1.1 to 2.9°C (2 to 5.2°F) if we follow the emission scenario B1, or 2.4 to 6.4°C (4.3 to 11.5°F) if we follow the fossil intensive scenario A1FI.

    Besides indicating the sheer dynamism of this man as a speaker, it is clear to me at least that he is using the scenarios as policy dependent forecasts.

    IMO trying to hide under the invisibility cloak of scenarios versus forecasts isn’t really an honorable response to the criticism made, unless the scenarios are specifically denied to be policy dependent forecasts.

  89. Steve Milesworthy
    Posted Jul 9, 2007 at 9:18 AM | Permalink

    #88 Steven Mosher
    OK yes you can do these reassessments and yes if temperatures had stayed much the same since 1988 we wouldn’t be having this blogversation. Really what I meant to say is that this can’t be your primary method of validating models because of the time delay involved.

    #86 Ross
    Apologies to Essex, I was thinking of someone else who made a similar quote. I found the following quote in a Fraser Institute document. Is this still a relevant concern following all the work gone into satellite observations…?

    The differential equations describing temperature change due to variations in the optical depth of the atmosphere are so sensitive to minor changes in the lapse rate (the rate of cooling as you gain altitude) and the surface albedo (reflectivity) that actual temperatures could go up or down in response to CO2 increases (Essex, 1991).

    As an aside, I was contacted by a Canadian software engineer looking into developing better auditing and metrics of climate models, and I heard on the grapevine that this was related to something going horribly wrong with a Canadian climate model…

  90. Kenneth Fritsch
    Posted Jul 9, 2007 at 9:33 AM | Permalink

    Re: #46

    Critiques of climate models, absent the introduction of another basis for action, are simply endorsements of business as usual. I think that we can do better than that.

    That is a God-awfully weak defense of climate models and/or any new mitigation policies based on these models. There are some of us who obviously think that we can actually do a lot worse than “doing nothing”, and particularly so if “doing nothing” means allowing a freer market of ideas and actions to occur.

    I am sorry to see this thread get sidetracked into definitions of projections, predictions and scenarios when the critical issue, as I saw it, was the formulation of expert opinion and what that means in terms of certainty and likelihood of conclusions on which critical mitigation might be based. I have been continually asking the question: How did the authors of the AR4 determine the levels of uncertainty and likelihood that they placed on their conclusions listed in the reports? So far no one has ventured an answer, not even a reply.

    Further in this regard, we know that the AR4 uses/selects published reports to support its positions but does not in all cases pass judgment on the validity of the papers’ contents. In some cases, as revealed here recently, they use methods not traceable to peer-reviewed publications. One can put together scenarios until hell freezes (or bubbles) over, but the important points are: can we place odds on the occurrences actually happening and if we can what are they and how are they calculated. Otherwise we have something more akin to daydreaming with a scientific bent.

  91. Pat Frank
    Posted Jul 9, 2007 at 9:41 AM | Permalink

    #6 — Judith wrote, “Interesting post. However IPCC does not make “forecasts”. They conduct scenario simulations, based upon different assumptions about the increase of greenhouse gases.”

    Except somehow the IPCC fail to communicate the fictional nature of their projections to the media or to the public, and despite the fictional character of their non-predictions the IPCC claim to detect a high probability of impact on climate by anthropogenic CO2.

    Meanwhile climate scientists, like you Judith, discuss their results in the context of the same anthropogenic warming; likewise failing to inform the media or the public of the fictional character of your discursive context.

    The result is a kind of ersatz-science bait-and-switch, where you only admit of ‘what-if’ projections when you’re pressed, but as soon as the pressure is off immediately speak as though real climate predictions had been made.

    The same sort of carnival atmosphere pervades dendrothermometry. The entire climate science enterprise is rife with this sort of sloppy and tendentious thinking.

  92. steven mosher
    Posted Jul 9, 2007 at 10:30 AM | Permalink

    RE 90.

    Well, as always, the Steves are coming to a convergence of sorts.

    You note that “if the temp had remained the same as in 1988” we would not be having this
    conversation.

    I bet we would, but the explanation is long and I’m laconic. (OK, that’s a fib.)

    Gavin brought out Hansen’s old “projection”/“forecast”/“simulation”
    because it suited his interest. Simply, confirmation bias. Hansen hit the mark. Huzzah!
    Therefore, trot out that pony. My point: either you commit to a policy of reassessment and
    publish your successes AND failures, or not.

    So, doing a reassessment only when it suits you is, how shall I say, not exactly cricket.

    Commit to reassessment, or pipe down. (Not you, just generally speaking.)

    Now, the next question. Would we be having this conversation if temps had been constant since
    1988? I say we would. I say we would still have the AGW conversation, because AGW is held immune from
    disconfirmation. Some excuse can always be made. “It wasn’t a forecast.” “We didn’t understand all
    the processes.” Quite simply, no one who holds the theory puts forward the following:

    1. conditions under which I will give up the theory.

    In the mid-20th century, A.J. Ayer put forward an interesting question to theists: is there any
    evidence that would change your belief in the existence of God? While I am not a logical positivist, Ayer’s
    challenge has merit. If a theory is not open to a practical disconfirmation, then its scientific status is suspect.
    (Ayer was more harsh than this.)

    Finally, you said that reassessment is not the primary method of validating the models. It should be.

    VALIDATION means that you BUILT the right model. Models are valid when they represent the reality
    they are intended to represent. The primary validation test is: does the model represent reality?
    Can it predict the future? Does the missile hit the target?

    Now, the issue with GCMs is that they are untestable. Not in the abstract, just practically. A model
    that predicts how water will boil is easy to test. A model that predicts Greenland will boil 1000
    years from now is testable on its face but, in practice, untestable. Science fiction. An interesting problem for
    the philosophy-of-science types. It is an even more interesting problem for policy. If the GCMs are
    valid, then they would seem to indicate that abatement taken today won’t show up for decades.
    In short, politicians take action with no measurable short-term consequence. This is not a good
    thing, for obvious reasons, especially if the policies cause short-term pain but only long-term benefit.

  93. Steve Milesworthy
    Posted Jul 9, 2007 at 11:21 AM | Permalink

    #93 Steven Mosher
    We wouldn’t be having this conversation because I’d have implicitly accepted that global warming is not a big issue. I’d probably have the same job as now, but with a focus on forecasting rather than climate requirements.

    I suspect Gavin’s study is purely in response to the Patrick Michaels hatchet job rather than in response to the needs of science.

    GCMs are testable and tested now. While a warmer world will introduce more uncertain feedbacks, there is plenty of climate variation in the existing world over which the GCM must work. Exactly the same subroutines run whether the grid cell is over the southern polar ocean or the tropical ocean.

  94. steven mosher
    Posted Jul 9, 2007 at 11:40 AM | Permalink

    RE 94.

    Fair enough. So here is what I glean: from 1988 (Hansen’s projection) to present, you have seen the observation record reflect
    an increase of, say, 0.15C (???). Is that about right? 0.08C/decade, but accelerating in the past 25 years.
    So, let’s say you’ve seen 0.2C since ’88. (Somebody will check this.)

    And the physics of the models are “coherent” with this.

    Now, comes the question. What will make you disbelieve?

    I’ll use an analogy that I hope won’t offend you.

    Some people believe God loves them. When good things happen, they see this as verification of God’s love.
    When bad things happen, they don’t stop believing in God. They come up with epicycles, explanations
    like “God loves me, but he is testing me.”

    So, from a pragmatic viewpoint, they hold their belief in the face of evidence to the contrary because
    they need to believe. Or they have benefits from believing. Or overturning a system of belief is
    very hard.

    So, now comes the question: what observation would make you give up your belief?
    One cooler year? 2 years of cooling? 3? 5? 10?

    This isn’t a prelude to a bet. Just a question. Do you have a clear conception of what facts
    would cause you to give up AGW?

  95. Steve McIntyre
    Posted Jul 9, 2007 at 11:57 AM | Permalink

    What are the statistical odds of 3 consecutive posts by 3 different Steve Ms? IF the Hockey Team were doing the stats, they would say that this is 99% significant; Juckes would say that the odds were in one in a milllll-yun 👿

    In keeping with Team statistical methodology, I’ve improved the chances of this happening by deleting a post on the grounds that the intervening post was “inappropriate”. (Sorry Mark, your post wasn’t really “inappropriate”, it’s just that you weren’t a Steve M.)

  96. MarkW
    Posted Jul 9, 2007 at 12:16 PM | Permalink

    If you split your check from Exxon with me, I might be willing to change my name.

  97. steven mosher
    Posted Jul 9, 2007 at 1:02 PM | Permalink

    MarkW just needs a homogeneity adjustment to his name.

    First I perform a 180 deg rotation on the first initial of his last name. His W is REALLY an M.

    Then I perform a 90 degree rotation on the M in his first name, yielding a sigma, which translates into
    an S. The remaining letters “ark” become “teve” under a new transformation we have defined called “de-arking”.

    So MarkW is really a SteveM.

  98. Barclay E. MacDonald
    Posted Jul 9, 2007 at 1:14 PM | Permalink

    At any rate a valuable post Dr. Curry. Whatever we might say about it, it highlights a distinction I certainly did not recognize. In future readings my understanding will be slightly broadened. Thank you.

  99. MarkW
    Posted Jul 9, 2007 at 1:15 PM | Permalink

    Aaargh, ye’ve discovered me secret identity.

  100. Dave Dardinger
    Posted Jul 9, 2007 at 1:33 PM | Permalink

    BTW, in case anyone doubts the validity of S. Mosher’s “de-arking” note that “Ark!” is what God commanded of Noah while the Hebrew word “tsavah”, quite cognate with “teve” I’m sure, means to order or give a command. QED

  101. steven mosher
    Posted Jul 9, 2007 at 1:50 PM | Permalink

    RE 101.

    In fact, I’m going to run my Bible code software on the IPCC report
    and search for “signals” that relate to hockey stick, Mann, Gore.

    When I get time, in unthreaded of course, I’ll muse about the apocalyptic
    imagination and its avatars in AGW.

  102. Sam Urbinto
    Posted Jul 9, 2007 at 1:50 PM | Permalink

    To the top, however you explain it, these are guesses. They are not science. Some of the guesses will be correct. Pointing them out later is bogus, because there are a lot of the guesses. You have to know which one(s), if any, are valid beforehand for them to be anything worthwhile.

    The answer that’s assumed is based upon certain beliefs: the temps are really rising, the CO2 is causing it, and the new temps will have worse and worse effects as time goes on. (That this is stated more as fact, and (therefore?) is probably wrong.) But based upon the “fact”, the explanation is that we have to do something to lower CO2 output, a lot, and now.

    Everything is set up to make people think the guesses are fact and the underlying assumptions leading to them are fact. It’s put forward that way, and when it’s thought of that way, nothing’s done to clarify it. In fact, steps are purposely taken to make it as sure as possible that it’s not clarified. This is no accident!

    As far as policy, however, if the apparent climate change is indeed bad, what is the issue with developing useful, cost efficient and beneficial ways to take care of the “problem”? Or in other words, we should be doing things that will be a benefit, since they’re probably things we should be doing regardless of the answer. We just have to be smart about it.

    Or look at it another way. We don’t need to know the answer. So rather than spending time trying to find what the answer is (since we probably can’t anyway) let’s spend the time and money we spend on it now for other things that are of some use. And also, think about this; it’s probably already too late to fix the attitudes (or at least the way they’re going), so let’s use the time and money to shape the way the results of the attitudes take us to those things that are useful to do.

    When presented with two bad choices, take the one that’s the least bad. Don’t spend time trying to make them both go away when you can’t.

  103. Steve McIntyre
    Posted Jul 9, 2007 at 1:56 PM | Permalink

    #98. Now that you mention it, the name MarkW is (in the climate science phrase) “remarkably similar” to SteveM.

  104. steven mosher
    Posted Jul 9, 2007 at 2:09 PM | Permalink

    RE 99.

    I think we kinda all piled on Dr Curry, with everyone trying to take their best shot.
    Not exactly good form. Next time I will take a deep breath.

    Not that the good Dr. didn’t have a defensible position; it was just
    a MANY-on-one fight. One person who has handled this kind of melee with aplomb has been
    Neal King, back on the Parker post. He took the shin-kicking pretty well.

    Milesworthy stepped in as a suitable second both in Parker and here. So, I think huzzahs
    for him, even though he doesn’t like ponies.

    Dibs on his monogrammed towels.

  105. tetris
    Posted Jul 9, 2007 at 2:13 PM | Permalink

    Re: 66
    Steve,
    Like it or not, when it comes to the IPCC we’re not only dealing with politicized [and therefore very questionable] science, but increasingly with the foolish policy decisions politicians are now making based on the IPCC GIGO. You may wish it to be different, but in fact quite a number of your participants are talking policy on your site [ref: Ross M at #87 above]. Maybe you should consider opening a separate “policy” thread to keep it out of the science-oriented discussions. BTW, valium is used to manage anxiety, not anger. Cheers.

  106. Ross McKitrick
    Posted Jul 9, 2007 at 2:57 PM | Permalink

    #90 The issues in Chris’ Pageoph paper wouldn’t be affected by an accumulation of satellite data. The main issue is the necessary but simplifying assumption of invariance for parameters that are, in reality, variable. He gives examples of parameterizations of the lapse rate and albedo effects in which minor changes in parameter values within the known range of natural variability can lead to qualitatively different predictions, including surface cooling in response to increased CO2. He doesn’t argue that that’s what would happen, only that if models always predict surface warming, they have to be tuned to achieve that.
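    The qualitative point here, that a small parameter change within its natural-variability range can flip the sign of the predicted response, can be illustrated with a toy linear-feedback calculation. To be clear, this is not Essex’s model; every number below is made up purely for illustration.

```python
def toy_equilibrium_response(forcing=3.7, planck=3.2,
                             lapse_feedback=1.5, albedo_feedback=1.5):
    """Toy linear-feedback estimate of equilibrium warming (K) for a
    CO2-doubling forcing (W/m^2):

        dT = forcing / net_restoring,
        net_restoring = planck - lapse_feedback - albedo_feedback  (W/m^2/K)

    Illustrative numbers only; not a climate model and not Essex's equations.
    """
    net_restoring = planck - lapse_feedback - albedo_feedback
    return forcing / net_restoring
```

    With the default parameters the toy model predicts warming (about +18.5 K, absurdly large because the net restoring term is near zero); nudge `lapse_feedback` from 1.5 to 1.8 and the denominator changes sign, so the same “model” predicts cooling. The numbers are meaningless; the point is that a tuned parameterization sitting near such a boundary can give qualitatively different predictions for parameter changes well inside their uncertainty.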

  107. MarkW
    Posted Jul 9, 2007 at 3:06 PM | Permalink

    I guess this means my chances of talking Steve out of some of his Exxon loot are now shot?

  108. Kenneth Fritsch
    Posted Jul 9, 2007 at 3:43 PM | Permalink

    Re: #105

    I think we kinda all piled on Dr Curry, with everyone trying to take their best shot.
    Not exactly good form. Next time I will take a deep breath.

    Re: #105

    Please, Steven, spare me the apologies. She gives as good as she takes. My problem is that the tenor of the thread changed at that point from a discussion of expert opinion and measuring scientific likelihood to a semantics debate over what it was that the IPCC was actually doing. It ain’t about the differences in meaning between presenting scenarios and making predictions; it’s about how the results of the AR4 report are used by the media, scientists and policy makers.

    I do not blame Judith for that. She made a simple point that could have been easily ignored in regards to the bigger picture context of the discussion. Unfortunately boys will be boys.

  109. Sam Urbinto
    Posted Jul 9, 2007 at 4:37 PM | Permalink

    It did turn nasty, especially since you could argue (or at least I think it could be argued) that if you don’t include (or set to 0) things like what she mentioned, it’s not a forecast. She didn’t say it wasn’t being treated as one, so the interpretation of what she was saying was out of bounds. In my opinion. We don’t want the level of discourse here to turn into what you see on other sites, where people are viciously attacked for perceived meanings, do we?

  110. Steve Milesworthy
    Posted Jul 9, 2007 at 4:56 PM | Permalink

    #105, 109
    I suspect Judith was long gone before the pony manure hit the fan. Her tone suggests she has seen this strawman argument before.

    Steven Mosher asks what observation would make me give up my belief. Well a jolly good drop in temperatures over a 4-5 year period that can’t be explained by a volcano or other significant source of aerosols would help. Alternatively, observations of weird changes in lapse rate supporting a different choice of certain model parameters 🙂

  111. Sam Urbinto
    Posted Jul 9, 2007 at 5:01 PM | Permalink

    Straw man? What, that they can call them whatever they want, but it’s how they are used that determines what they are? Look at the intentions; I’d say it’s pretty clear what they are trying to do.

  112. Steve Milesworthy
    Posted Jul 9, 2007 at 5:30 PM | Permalink

    #112 Sam
    I’m not suggesting that it should be obvious to everyone what the distinction between a “forecast” and an IPCC “projection” is, or even that the media will always fully understand the nuance, since they are terms used in a technical way.

    But someone such as Armstrong, who is prepared to invest time to write a paper, will (or should) understand the difference. Even if they do not think the difference is valid, they should have the honesty to explain why it is not before piling in with their criticisms.

  113. Kenneth Fritsch
    Posted Jul 9, 2007 at 6:07 PM | Permalink

    Re: #112 and #113
    You miss the point. The discussion, as it too frequently does, got sidetracked onto a much less important peripheral issue. While we are discussing the issue, would either of you care to comment on the methods used by the AR4 authors in determining the levels of uncertainty and likelihood reported for their conclusions?

  114. steven mosher
    Posted Jul 9, 2007 at 6:33 PM | Permalink

    Thanks SteveM.

    You wrote:

    “Steven Mosher asks what observation would make me give up my belief.
    Well a jolly good drop in temperatures over a 4-5 year period
    that can’t be explained by a volcano or other significant source of aerosols would help.
    Alternatively, observations of weird changes in lapse rate supporting a different
    choice of certain model parameters ”

    1. Volcano, etc. We all know that a major eruption gets a GCM off the hook.
    (Side question: can tree-ring reconstructions identify volcano signals?)

    2. Weird lapse-rate changes… OK.

    So, you want a DROP over 4-5 consecutive years. A jolly good drop.

    Now comes the question: what is the distribution of
    year-to-year temp changes? What is a jolly good drop? For the past 100 years
    we have seen, per GISS, +0.08C per decade ±0.002C over the past century.

    AGW argues that this rate is increasing. What is the probability, given the
    TRUTH of GW, given the rate of observed increase, that you will see 4-5 years of jolly good
    decrease?

    (Hmm, do we need a Bayesian to look at this? Dunno.)

    Well, if we are supposed to see 0.08C/decade (±0.02, 95% CI), then I would think ANY yearly decrease
    would be a rare event, and 4 or 5 in a row… Given the decadal estimates and yearly increases, that looks almost impossible.
    (OK, I know this is naive, but let’s start from here.)

    You can’t have your estimate of growth (0.08C/decade) and your tight CI, and also argue that you need 4-5 years
    of temp decrease. The data and the models and the error estimate make a year-to-year decrease
    a rare event, a 1% event, and you want 4-5 of these events in a row before you give up your belief.

    OK.

    So, you set the bar high. That’s OK. You set the bar.
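    Whether a run of 4-5 consecutive yearly decreases really is a rare event under a warming trend depends almost entirely on the assumed interannual variability, not just on the decadal trend and its confidence interval. A minimal Monte Carlo sketch; the trend and noise figures are illustrative assumptions, not GISS values:

```python
import random

def prob_of_decreasing_run(trend=0.008, sigma=0.1, years=100, run=5,
                           trials=20000, seed=0):
    """Estimate the probability that a temperature record with a linear
    warming trend (degC/year) plus independent Gaussian interannual noise
    (std dev sigma, degC) contains `run` consecutive year-to-year decreases
    somewhere in a `years`-long record.  All figures are illustrative."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # one synthetic record: linear trend plus white noise
        temps = [trend * t + rng.gauss(0.0, sigma) for t in range(years)]
        streak = 0
        for prev, cur in zip(temps, temps[1:]):
            streak = streak + 1 if cur < prev else 0
            if streak >= run:
                hits += 1
                break
    return hits / trials
```

    With interannual noise of this rough size, a 0.008C/yr trend is swamped year to year: single-year decreases occur roughly every other year, and even a 5-year run is far from a 1-in-100 event over a century-long record. The “1% event” arithmetic above therefore hinges on an assumption about year-to-year scatter that the decadal CI alone does not pin down.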

  115. Ian Castles
    Posted Jul 9, 2007 at 11:56 PM | Permalink

    Re #114. As a number of the posts above show, there are differing views about the importance or otherwise of the projections vs. predictions distinction. In 2003, Dr. John Zillman, Principal Delegate of Australia to the IPCC, said that “No aspect of the work of the IPCC has brought me closer to despair about our ability to use science wisely to inform policy-making” than the failure to distinguish between scenarios, projections and predictions, which Zillman described as “fundamental to the whole process” (“Our Future Climate”, World Meteorological Day Address 2003).

  116. Hans Erren
    Posted Jul 10, 2007 at 12:47 AM | Permalink

    re 104:

    MarkW is (in the climate science phrase) “remarkably similar” to SteveM

    If you write MarkW upside down (flip the sign of the principal component), then the W matches the M, which is a 20% correspondence! (Of course you need to truncate the S from Steve, but that is
    standard practice anyway.)

  117. John Baltutis
    Posted Jul 10, 2007 at 1:04 AM | Permalink

    Re: #85

    CO2 stays around a looong time.

    “Models trump measurements” says otherwise.

  118. Hans Erren
    Posted Jul 10, 2007 at 2:59 AM | Permalink

    I don’t agree with the conclusions of Dr Tom Segalstad.

    http://www.climateaudit.org/?p=820#comment-47075

    Tom Segalstad needs to study Fick’s law of diffusion.
    ref: Fick, A., Ueber Diffusion, Ann. Physik Chem., 94, 59, 1855

  119. MarkW
    Posted Jul 10, 2007 at 5:06 AM | Permalink

    #117,

    Did I tell you that my middle initial is O – MOW, flip that 180 degrees for fun.

  120. Joe Ellebracht
    Posted Jul 10, 2007 at 8:34 AM | Permalink

    OK, I will be the 99th person to weigh in on MarkW’s name. It is clearly not orthogonal enough to SteveM to be used for a skillful estimate of height, so perhaps it can be used for age, provided the residuals are Durbin-Watson proof.

  121. Kenneth Fritsch
    Posted Jul 10, 2007 at 9:18 AM | Permalink

    John A, I want to personally thank you for presenting the article “Global Warming: Forecasts by Scientists versus Scientific Forecasts” for discussion in this thread. In my mind, this article puts in play some very important issues that need to be better understood and given more space in intelligent discussions of how we should judge the outputs of the AR4.

    I find what the discussion deteriorated into a bit disconcerting, but with the blogosphere there is always another time and/or another place.

  122. Steve Milesworthy
    Posted Jul 10, 2007 at 10:41 AM | Permalink

    #122 Kenneth
    I’m not terribly knowledgeable about politics, but does this put things back on track?

    My basic understanding is that each scenario is a what-if scenario – what if the population goes up by X and car ownership goes up by Y etc.

    They provide a menu of sorts to the rest of the process: calculations of emissions scenarios, which are input into climate models, which then feed the assessments of physical and economic pluses and minuses.

    The idea is that the policy makers should then assess the scenarios and use their judgement as to what is likely, what is acceptable, and what measures should be taken.

    Once you work out what the costs/benefits for each scenario are you can look at how the emissions are actually evolving, and you can look at how the understanding of sensitivity is improving, and target your policies accordingly.

    So picking a 2C rise, as the German Chancellor did, is not an attempt to change the laws of nature, but a statement of commitment to follow the B1 emissions scenario for now, while allowing for it to be exceeded later should we decide that the climate is at the lower level of sensitivity.

  123. Sam Urbinto
    Posted Jul 10, 2007 at 11:05 AM | Permalink

    #114 That’s rather what I was saying in #110 and #112: who cares what it’s called, it’s still the same thing; the discussion of whether ‘it’ should be called apples or divination or fluffy bunnies is a side issue, yes. It’s certainly arguable it is a forecast, for the simple reason that a prediction is supposed to be in more certain terms. But forecast, guess, estimate, and foretell are all synonyms of predict, so there’s really nothing to argue over anyway!

    But what IS it, that’s the real question. It’s a deep dark look into a crystal ball dressed in a lab coat, and then every trick in the book used to make it seem as if the empty coat is not filled with air, but with 25,000 scientists.

    I think they put it pretty well, as regards what to think about the methods used by the AR4 authors. Make up your own words for that process of gleaning the future.

    The {words} in the Report were not the outcome of scientific procedures. In effect, they present the opinions of scientists transformed by mathematics and obscured by complex writing.

    Some of the well-established generalizations for situations involving long-range {words} of complex issues where the causal factors are subject to uncertainty:

    Unaided judgmental {words} by experts have no value, due to, among other things, complexity, coincidence, feedback and bias.

    Agreement among experts is weakly related to accuracy.

    Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply.

    Given even modest uncertainty, prediction intervals are enormous.

    When there is uncertainty in {words}, {words} should be conservative.
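    The generalization above, that complex models harm accuracy because their errors multiply, can be made concrete with a toy error-propagation calculation. The four-stage chain and the 10% figure below are illustrative assumptions, not numbers from Armstrong and Green:

```python
def compounded_relative_error(stage_errors):
    """Worst-case relative error after chaining model stages whose outputs
    multiply: (1 + e1)(1 + e2)...(1 + ek) - 1.  Each e is a stage's
    relative error (e.g. 0.1 for 10%)."""
    total = 1.0
    for e in stage_errors:
        total *= 1.0 + e
    return total - 1.0
```

    Four multiplicative stages, each individually off by only 10%, compound to roughly 46% ((1.1)^4 - 1 = 0.4641), which also hints at why, given even modest uncertainty, prediction intervals become enormous.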

  124. Sam Urbinto
    Posted Jul 10, 2007 at 11:14 AM | Permalink

    And by the way, how did this particular “expert opinion” turn out? And what would you call what they’re doing?

    “NOAA’s prediction for the 2006 Atlantic hurricane season is for 13-16 tropical storms, with eight to 10 becoming hurricanes, of which four to six could become major hurricanes. … We are predicting an 80 percent likelihood of an above average number of storms in the Atlantic Basin this season. This is the highest percentage we have ever issued.”

    Here’s what American Heritage has to say about the synonyms of ‘predict’:

    Synonyms: These verbs mean to tell about something in advance of its occurrence by means of special knowledge or inference: predict an eclipse; couldn’t call the outcome of the game; forecasting the weather; foretold events that would happen; prognosticating a rebellion.

    And Random House:

    To predict is usually to foretell with precision of calculation, knowledge, or shrewd inference from facts or experience: The astronomers can predict an eclipse; it may, however, be used without the implication of underlying knowledge or expertise: I predict she’ll be a success at the party.

  125. Kenneth Fritsch
    Posted Jul 10, 2007 at 12:23 PM | Permalink

    Re: #123 and #124

    Sam, I think you have essentially grasped what I judge to be (or what should have been) the essence of the topic of this thread.

    Steve Milesworthy, I think you have missed my point again so let me attempt to rephrase it.

    As you note, policy makers can take the scientific judgments of the AR4 authors into consideration when making policy. The scientist/authors are making judgments about future climate changes and their effects (mainly adverse, as the case would be). The scientists/authors use terms such as “very probable” or “very certain” or “very likely” in attempts to place a probability on what they are predicting/expecting for the future and the past. Without placing any odds on these projections/predictions/scenarios, the report becomes more like a review article and not useful for policy makers. Therefore, the use of these terms in giving levels of certainty and likelihood for the authors’ conclusions becomes, at least in my mind, the most critical feature of the entire process.

    The AR4 report notes that there are some outlined procedures for attempting to make the probability assessments, and it asks for what it calls “traceable accounts” of the method each chapter’s authors actually used in arriving at these certainty and likelihood levels. Since I find (and I assume most other interested parties do also) that this process is critical to how the public, media and other scientists and statisticians view and accept the report’s findings, I have requested that the IPCC make these traceable accounts public.

    My simple question to you would be: Do you have any interest in how the AR4 authors arrived at their published levels of likelihood and certainty for their published conclusions?

    The link that John A offered to start this thread was about expert opinion and its shortcomings for forecasting/predicting for policy makers, and contrasted scientifically established probabilities with the opinions of scientists.

  126. Steve Sadlov
    Posted Jul 10, 2007 at 2:41 PM | Permalink

    RE: #123 – Nothing irks me more than a fire breathing radical who pretends to not know anything about politics or claims to have no agenda. I call that “Dr. Judith Curry Syndrome.”

  127. John A
    Posted Jul 10, 2007 at 3:23 PM | Permalink

    Kenneth Fritsch:

    John A, I want to personally thank you for presenting the article “Global Warming: Forecasts by Scientists versus Scientific Forecasts” for discussion in this thread. In my mind, this article puts in play some very important issues that need to be better understood and given more space in intelligent discussions of how we should judge the outputs of the AR4.

    I find what the discussion deteriorated into a bit disconcerting, but with the blogosphere there is always another time and/or another place.

    Thanks for the compliment.

    I don’t think the commentary has degraded that much, and I think it’s interesting that there is a discussion about whether the IPCC intended its “projections” and “scenarios” to be seen as predictions or not. I believe it’s a case where the mouth says no and the eyes say yes.

    The take home message that I get from the IPCC forecasts is that they appear to want to create the strong impression of near unanimity that the range of possible futures is limited to these scenarios but like Al Gore, they’re not going to actually bet the farm on them, any of them.

    If you look at the language of the IPCC, it’s filled with weasel words that appear to imply something bad without actually coming out and saying it. It’s this slipperiness of language that I simply find counter to my experience of reading hard science in other areas. Behind the facade there’s a fear that events, public opinion and the weather could turn against them – hence the dense layer of linguistic fog that may or may not contain predictions, forecasts, scenarios, best bets, strongly held beliefs, or maybe nothing at all.

    I have written to Armstrong and Green requesting an op-ed or commentary on these issues. If and when they reply, I’ll post it up.

  128. aurbo
    Posted Jul 10, 2007 at 8:57 PM | Permalink

    A few comments:

    Scenarios are basically what-ifs. Predictions involve selecting one scenario over another.
    Scenarios are indispensable to one’s well-being as we proceed through life. When driving a car on a winding road, one’s mind is constantly creating scenarios: what if there is a car stalled in my lane just around the bend; or a child wandering into the street between cars; or a police car hiding behind that next advertising sign? None of these mental creations are predictions; they’re scenarios.

    In the field of science, especially in academia, unforced disconfirmation is a rare event. A more likely action when a thesis is failing is to first try to convince others that you were right all along. The late Leon Festinger, a renowned social scientist, in one of his first works while still in grad school, wrote a book entitled “When Prophecy Fails”. It recounts his observations as part of a group of investigators examining the behavior of people who had made an irrevocable commitment to a prediction which would ultimately turn out to be demonstrably false.

    Their target was a doomsday cult who had given away all their earthly belongings to follow their charismatic guru to the mountain top where they would be saved when doomsday arrived. When the appointed day came and went with no consequences, the reactions of the various members of the cult were about as follows. First they assumed that they had made a few small errors in the prediction and picked a new date a short time ahead. When this, too, failed, they reworked their predictions once again. The most interesting behavior during this period of disconfirmation was that they began to proselytize passionately in an attempt to convince others of the urgency and rectitude of their beliefs in order to avoid admitting to themselves that they were wrong. Finally, disillusion set in and some eventually turned on their self-styled prophet.

    Festinger went on to become renowned in his chosen field. He coined the term “cognitive dissonance”.

    I don’t think I have to describe the relevance of Festinger’s work to the global topic at hand.

  129. Keith
    Posted Jul 10, 2007 at 9:36 PM | Permalink

    I see what you’re getting at overall.

    But being aware of the possibility of a police car hiding down the road is not a scenario and is not a prediction. Should one be in a state of mind to consider that possibility, it is simply an awareness of a possibility until you pass by and find out there wasn’t a police car there. Not unlike wondering if you’ll run out of petrol until you look at the gauge, or wondering if one day you’ll be hit by a meteor until you realize the odds and stop wondering.

    One doesn’t need to build a scenario or a prediction that it’s possible the next number will come up black, and it doesn’t take a model to know that it will come up red or black unless it comes up green.

  130. Jaye
    Posted Jul 11, 2007 at 4:17 AM | Permalink

    RE: 92

    Except somehow the IPCC fail to communicate the fictional nature of their projections to the media or to the public, and despite the fictional character of their non-predictions the IPCC claim to detect a high probability of impact on climate by anthropogenic CO2.

    Meanwhile climate scientists, like you Judith, discuss their results in the context of the same anthropogenic warming; likewise failing to inform the media or the public of the fictional character of your discursive context.

    The result is a kind of ersatz-science bait-and-switch, where you only admit of “what-if” projections when you’re pressed, but as soon as the pressure is off immediately speak as though real climate predictions had been made.

    The same sort of carnival atmosphere pervades dendrothermometry. The entire climate science enterprise is rife with this sort of sloppy and tendentious thinking.

    Is right on. However, if they set up the model, run it, and publish the estimates for how the temperature changes over time, then they are making a prediction based on the inputs. Period. Suppose I have a very good model of a real missile and I want to see what happens to the kinematic boundary if I make the motor bigger. That adjusted missile doesn’t really exist; it’s part of a what-if study, but it is a prediction of the missile’s performance with a change in initial conditions (the size of the motor). Calling it a scenario doesn’t change the fact that we are making predictions of the missile’s behavior if we take 2 lbs out of the warhead and add it to the motor. The comment by Dr. Curry was an unfortunate exercise in CYA.

  131. Hans Erren
    Posted Jul 11, 2007 at 4:32 AM | Permalink

    re 49:
    What would business-as-usual have been in 1907, extrapolated a hundred years?

  132. Steve Milesworthy
    Posted Jul 11, 2007 at 5:14 AM | Permalink

    #126 Kenneth
    The WG1 report discussion of model results indicates that certain things are “very likely” or “likely”, because they occur in many of the models for all the scenarios, and because previously these things have been formally attributed to warming. The “likely” temperature projections are given for each scenario which reflect the distribution of the model runs for that scenario. “very likely” and “likely” have numbers of 90% and 66% attached to them. I assume the words and meanings have been developed in conjunction with political representatives.

    #128 John A
    Having just reread page 17 of the WG1 SPM, which summarises the projections, I don’t see any weasel words.

    #130
    In the UK, the police are obliged to declare locations of all places where they may site a mobile speed camera. When approaching these places I will check my speed even though there is a small chance that they have set up a camera in that location at that time.

  133. MrPete
    Posted Jul 11, 2007 at 5:39 AM | Permalink

    #89 says

    However, the IPCC reports are not presented as contingency planning, but as the most likely explanations of how the future climate will be under various scenarios of future human activity. And not just activity, but activity that can be adjusted by policy decisions. Thus they are marketed as policy dependent forecasts.

    This is exactly correct, and reminds me of what happened in the run-up to Y2K. Back then, I observed a similar fallacy as people confused expectation and preparation. Because the public (and even many experts) failed to distinguish the two, we ended up with a spiral of fear and silly “preparations,” supposedly based on informed opinion.

    * Joe Power Guy expected no problem, but prudently prepared for hours of trouble;

    * Jane Medical thought Joe expected hours, so she prudently prepared for days of trouble;

    * Jack Biz thought Jane expected days, and prudently prepared for weeks;

    * Jill Homeowner thought Jack expected weeks, and prudently prepared for months…

    …and Joe wondered why everyone was going nuts!

    Unfortunately, we’re not preparing for a one-time catastrophic event, so the consequences of confusion and error are much higher. Sadly, I see no evidence that anyone in the media or policy spheres learned a lesson from the Y2K fear-mongering.

  134. Kenneth Fritsch
    Posted Jul 11, 2007 at 9:12 AM | Permalink

    Re: #134

    Steve Milesworthy, by re-stating some of the WG1 contents you are not answering my questions. The authors of all the AR4 chapters have used terms connected to certainty and likelihood for the conclusions they reach in their reports. This is not limited to scenarios and climate predictions, but is interspersed throughout the report text. It is what gives the report legitimacy in presenting the information for policy makers, whether merely in appearance or in fact. I will repeat below, from an earlier post of mine, some excerpts from AR4 that explain all this and lead in the end to the statement about “Traceable Accounts”. My questions to you again are: Would you like to see the details of the Traceable Accounts, and do you think the IPCC should make them publicly available?

    6 In this Summary for Policymakers, the following terms have been used to indicate the assessed likelihood, using expert judgement, of an outcome or a result: Virtually certain > 99% probability of occurrence, Extremely likely > 95%, Very likely > 90%, Likely > 66%, More likely than not > 50%, Unlikely
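    The quoted footnote maps verbal likelihood terms onto probability thresholds; a minimal sketch of that mapping (the thresholds are from the footnote as quoted; the terms below “More likely than not” are cut off in the excerpt and omitted here):

```python
# Probability thresholds from the quoted SPM footnote. Terms below
# "More likely than not" are cut off in the excerpt, so only these appear.
LIKELIHOOD = {
    "virtually certain": 0.99,
    "extremely likely": 0.95,
    "very likely": 0.90,
    "likely": 0.66,
    "more likely than not": 0.50,
}

def meets(term, p):
    """Does probability p satisfy the stated threshold for the term?"""
    return p > LIKELIHOOD[term.lower()]

print(meets("Very likely", 0.92))   # a 92% chance qualifies as "very likely"
```

    Note the mapping runs only one way: the footnote says how to translate an assessed probability into a word, not how the probability itself was assessed, which is exactly the traceable-accounts question.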

    From Box TS 1.1 we have:

    The importance of consistent and transparent treatment of uncertainties is clearly recognised by the IPCC in preparing its assessments of climate change. The increasing attention given to formal treatments of uncertainty in previous assessments is addressed in Section 1.6. To promote consistency in the general treatment of uncertainty across all three Working Groups, authors of the Fourth Assessment Report have been asked to follow a brief set of guidance notes on determining and describing uncertainties in the context of an assessment. This box summarises the way that Working Group I has applied those guidelines and covers some aspects of the treatment of uncertainty specific to material assessed here.

    Uncertainties can be classified in several different ways according to their origin. Two primary types are ‘value uncertainties’ and ‘structural uncertainties’. Value uncertainties arise from the incomplete determination of particular values or results, for example, when data are inaccurate or not fully representative of the phenomenon of interest. Structural uncertainties arise from an incomplete understanding of the processes that control particular values or results, for example, when the conceptual framework or model used for analysis does not include all the relevant processes or relationships. Value uncertainties are generally estimated using statistical techniques and expressed probabilistically. Structural uncertainties are generally described by giving the authors’ collective judgment of their confidence in the correctness of a result. In both cases, estimating uncertainties is intrinsically about describing the limits to knowledge and for this reason involves expert judgment about the state of that knowledge. A different type of uncertainty arises in systems that are either chaotic or not fully deterministic in nature and this also limits our ability to project all aspects of climate change.

    This box also referenced 1.6 of the 4AR where I found a reference to Moss and Schneider, 2000.

    The paper references the use of Bayesian approaches and, in its outline of procedures under point 6, refers to a traceable account of how the estimates were constructed. How would one go about obtaining/tracing that “account”?

    Thus in the TAR, we expect Bayesian approaches to be what is most often meant when probabilities are attached to outcomes with an inherent component of subjectivity or to an assessment of the state of the science from which confidence characterisations are offered.

    6. Prepare a “traceable account” of how the estimates were constructed that describes the writing team’s reasons for adopting a particular probability distribution, including important lines of evidence used, standards of evidence applied, approaches to combining/reconciling multiple lines of evidence, explicit explanations of methods for aggregation, and critical uncertainties.

  135. Kenneth Fritsch
    Posted Jul 11, 2007 at 9:41 AM | Permalink

    What would be business-as-usual in 1907, extrapolated hundred years?

    Which says much about the limitations of scenario writing in general. They cannot account for all the feedback that we know will occur and they certainly cannot handle all the innovations and adaptations that will be made on the way.

    My classic rejoinder in the prediction game is that of Nobel prize-winning economist Paul Samuelson, who once predicted that the USSR’s GDP would in a few years equal that of the US. In the 1980s all we heard was that Japan’s economy was poised to overtake that of the US, and from many experts in the field. It was not so much that the US economic potential was underrated (though there were a number of missed steps there) but that the potential problems with the other economies were entirely missed.

    From an economic point of view one might feel that an entirely planned and tightly regulated economy would be easier to write scenarios for, and I see some truth in that: it would be controlled within the framework of a limited imagination of the future with respect to the current state of things. But even under these conditions man’s adaptive powers and creativity are not completely extinguished.

  136. Sam Urbinto
    Posted Jul 11, 2007 at 9:52 AM | Permalink

    Re: 1907 extrapolation

    Hans, I’m sure it really requires some sort of logarithmic scaling or some kind of special algoreithm, but if I’m in 1907 and extrapolate the GHCN-ERSST trend lines since 1880, BAU is a -0.037 C drop in temperature per decade and 2.22 ppm of CO2 being added per decade. Which means right now, the global mean should be -0.67 C on the anomaly and there should be a CO2 level of 321 ppm in the ice cores.

    Instead we have about 65 more ppm in the atmosphere, are trending the other direction on temps and are 1.18 C higher than what the extrapolation says. And?

    Is there some other meaningless period of time you’d like to extrapolate off the 1961-1990 baseline anomaly numbers before during or after that period, to predict the future or the current levels? How about 1909-1965 or 1944-1956 or 1937-1981? Or 1977-2007 to tell me what it will be in 2011?

    Woooo 65 ppm and 1.2 C yay!!!! lol
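    Sam’s back-of-envelope can be reproduced as a plain linear extrapolation (a sketch only: the decadal trends are his figures, while the 1907 baseline values of -0.30 C and 298.8 ppm are my own assumptions, back-solved from his stated 2007 endpoints):

```python
def extrapolate(base, trend_per_decade, decades):
    """Naive linear extrapolation: base + trend * number of decades."""
    return base + trend_per_decade * decades

# Trends per decade are Sam's figures; the 1907 baselines are assumed
# (hypothetical values chosen so that the 2007 results match his post).
anom_2007 = extrapolate(-0.30, -0.037, 10)  # anomaly in C, 10 decades out
co2_2007 = extrapolate(298.8, 2.22, 10)     # CO2 in ppm, 10 decades out
print(round(anom_2007, 2), round(co2_2007, 1))  # prints: -0.67 321.0
```

    The exercise shows how sensitive any such extrapolation is to the baseline period chosen, which is Sam’s point about 1909-1965 versus 1937-1981 and the rest.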

  137. Sam Urbinto
    Posted Jul 11, 2007 at 9:54 AM | Permalink

    #134 Yes, that would be prudent. To check the sign. Especially if you weren’t from the UK and/or didn’t know that. Or you or they got the place wrong that day. On the other hand, sometimes I drive really fast down the freeway and don’t hardly even pay attention.

    So is that a forecast or a prediction? 😀

  138. Sam Urbinto
    Posted Jul 11, 2007 at 10:18 AM | Permalink

    By the way, if you extrapolate 1937-1981, we have no warming, stay at -0.05 C, and CO2 goes up 6 ppm per decade. So we should be at 356 ppm and -0.05 C by that. That’s 30 ppm and +0.55 different from reality.

    I must have used a better model this time! Time to tweak it some more. But of course, you can’t have my source code when I’m done, nyah.

  139. Steve Milesworthy
    Posted Jul 11, 2007 at 11:17 AM | Permalink

    #136 Kenneth
    I thought I did answer your question (for the model projections). I said that it seems that the likelihood depends on the number of models exhibiting the particular trait.

    For example, if you check out the section on the Atlantic Meridional Overturning Circulation in Chapter 10, all models show some weakening of the MOC, and the few that show a shut-down show that it takes a few centuries. This is used to say that the MOC will very likely slow down but is very unlikely to shut down this century.

  140. Paul G M
    Posted Jul 11, 2007 at 3:19 PM | Permalink

    Scenarios, Predictions etc

    The philosophical discussion is interesting. However, to repeat myself, in the UK the gospel of CO2 rise = temperature rise and the doom-laden forecasts/scenarios/projections/predictions are taken on faith, every other advert on the TV announces the latest corporate CO2 reduction programme, vast sums are wasted on wind farms, etc. etc.

    Mike Lockwood’s pronouncement about solar variation gets top billing on the news and the statement “They concluded that the rapid rise in global mean temperatures seen since the late 1980s could not be ascribed to solar variability, whatever mechanism was invoked” is unchallenged. So what rapid rise is that then? A Mannian one no doubt.

    Regards

    Paul

  141. Kenneth Fritsch
    Posted Jul 11, 2007 at 6:08 PM | Permalink

    Re: #141

    Steve Milesworthy, I have to assume you do not want to answer my direct questions. I put the same questions to Boris and he did not even attempt a reply.

  142. Kenneth Fritsch
    Posted Jul 11, 2007 at 7:21 PM | Permalink

    Sam U, I think Hans Erren’s reference to business as usual was with regards to an economic scenario and not one necessarily involving climate change. If I am wrong then my reply was off the mark.

  143. Hans Erren
    Posted Jul 12, 2007 at 1:31 AM | Permalink

    re 144, 138:
    Indeed the SRES A1B is considered the business as usual economic scenario.
    100 years is a very, very long time over which to make a realistic economic forecast; I am sure everybody here remembers the doom and gloom from the Club of Rome.

    Why a century ahead? Because the twenty-year results are not catastrophic enough?

  144. John A
    Posted Jul 12, 2007 at 2:11 AM | Permalink

    If someone has bought a copy of Lockwood’s paper, could they send me a copy?

  145. Paul G M
    Posted Jul 12, 2007 at 2:17 AM | Permalink

    More from the UK

    Now we have a proposal that AGW/CC should become a part of our National Curriculum for teaching teenagers. No doubt there will be a fair representation of “The Science”. Some hope. What makes this truly absurd is that nearly all schools already preach the Goregospel.

    In Britain, the IPCC could announce the discovery of perpetual motion and it would be swallowed whole.

    Cheers

    Paul

  146. Paul G M
    Posted Jul 12, 2007 at 2:27 AM | Permalink

    John A Re Lockwood

    I didn’t have time to get the paper but try this link to the RS’s website. You might also want to look at their overview of “The Science”

    http://www.journals.royalsoc.ac.uk/content/h844264320314105/

    His bio below

    Professor Michael Lockwood
    Professor Lockwood is Chief Scientist (Individual Merit Band 1) in the Space Science and Technology Department at Rutherford Appleton Laboratory and Professor in the Department of Physics and Astronomy at the University of Southampton. He is distinguished for major advances in understanding the connections between Earth’s ionosphere and magnetosphere and the interplanetary magnetic field. His discovery that open magnetic flux from the Sun is highly correlated with total irradiance had a crucial result for climate change studies.

  147. Ian Blanchard
    Posted Jul 12, 2007 at 2:32 AM | Permalink

    Paul G M

    Of course it would, we pay for the IPCC.

    I haven’t yet looked at the Lockwood paper, but had a quick look at the release on the BBC website. Good use of graphics, in that they showed the 5 year smoothed (surface) temperature graph, which continues to climb despite the flat trend over the last three or so years.

    A real case of not comparing like with like in an effort to prove the pre-determined conclusion.

  148. John A
    Posted Jul 12, 2007 at 3:39 AM | Permalink

    Paul G M:

    Paying £25 for one paper is rather steep. I wonder if the journal breaks the terms of the Banff Protocol by charging so much.

  149. Rod
    Posted Jul 12, 2007 at 4:21 AM | Permalink

    Re: 150

    I just clicked on “Open: Entire document” under the heading “Text” “PDF” in the right column.

  150. John A
    Posted Jul 12, 2007 at 4:41 AM | Permalink

    Rod,

    Yesterday it tried to charge me £25 to see the text. Today it’s free. Go figure.

    Can anyone translate this for me (pages 9,10)?

    For the cosmic ray mechanism, it has been proposed that the long-term decline in cosmic rays over much of the twentieth century (seen in figure 4d and caused by the rise in open solar flux seen in figure 4c) would cause a decline in global cover of low-altitude clouds, for which the radiative forcing caused by the albedo decrease outweighs that of the trapping effect on the outgoing thermal long-wave radiation. We here do not discuss these mechanisms in any detail. Rather, we look at the solar changes over the last three decades, in the context of the changes that took place over the most of the twentieth century.

    This I don’t understand. From my reading of “The Chilling Stars” the long term decline in cosmic rays over the 20th Century is the key finding of the Svensmark team relating to changes in albedo from fewer clouds and hence temperature rise and the key component behind the presentation in TGGWS. So how does Lockwood “settle it once and for all” when he doesn’t tackle it in this paper?

  151. Steve Milesworthy
    Posted Jul 12, 2007 at 5:25 AM | Permalink

    #143 Kenneth

    Your question was:

    How did the authors of the AR4 determine the levels of uncertainty and likelihood that they placed on their conclusions listed in the reports?

    And you also indicated that you had asked the IPCC for the “traceable” accounts.

    My answer is that the authors should have included the traceable account within the report itself. They should have described how they used their judgement of the literature to make their assessments. By following their citations, you should be able to understand how they came to the conclusions they did.

    In the example I gave above, they have done all of this.

  152. Armin
    Posted Jul 12, 2007 at 5:46 AM | Permalink

    #152,

    He doesn’t attack the mechanism. He tries to show it does not play a major role. Remember that – correct me if I’m wrong – so far there is no direct link between T and cosmic rays, only between clouds and cosmic rays. Of course clouds affect T, but does it do so significantly? In this paper they look at the cosmic rays directly compared to T. And they don’t match…

    Don’t get me wrong, I’m not claiming this proves or disproves anything, but this is how I think it is intended. And I must admit it looks pretty strong, but I’m also looking forward to the responses from Shaviv, Svensmark, etc.

    Some comments do apply, however:

    – Climax data goes up to around 13 GeV, where Shaviv claims (as I understood it) the required energies go up much further, so it may not be the best measurement. The paper says they included data from as low as 3 GeV, which would mean they are looking at the wrong energy levels (which Shaviv claimed in the past was also the case for other rebuttals of the theory).
    – Climax readings differ from station to station (as RC once conveniently cherry-picked to show there was no link). They use only one, as far as I can tell. On the other hand, there are not as many of them as, e.g., weather stations 🙂
    – The T data is smoothed, which hides the stall of the last few years.
    – The T data is “primarily from meteorological station data.” Well, just look at the CA articles from the last few weeks. I’d have preferred satellite measurements, but I guess we can do the comparison ourselves.

    Note, I’m not claiming anything about the article, as I have only looked at it briefly so far. I’ll read it later this week.

  153. Posted Jul 12, 2007 at 8:11 AM | Permalink

    #152, 154

    The only neutron counters that can be used to replicate the cosmic ray connection are near the geomagnetic equator, as the depth of the Earth’s magnetic field shields the lower energy cosmic rays best there while allowing the higher energy galactic cosmic rays through.

    CLIMAX at Boulder is not near the geomagnetic equator and the high energy CR are swamped by low energy CR, so it is comparing apples to oranges.

  154. Rod
    Posted Jul 12, 2007 at 8:12 AM | Permalink

    I note that Lockwood and Frohlich do not reference Veizer or Shaviv. Svensmark in The Chilling Stars suggests muons as the precursor to low-level maritime cloud formation. These require very high energies for their formation. There’s an explanation of their formation by cosmic rays here: http://en.wikipedia.org/wiki/Muon.

    I’ve still got to read the Lockwood paper. Notwithstanding the importance or not of cosmic rays, I suspect the amount of low-level cloud is still very important to climate.

  155. Bill F
    Posted Jul 12, 2007 at 9:03 AM | Permalink

    I posted some information about the various cutoff energies of the neutron detectors and their relationship to GCR measurement on the unthreaded #14 thread in case anybody reading here has not seen it.

  156. Armin
    Posted Jul 12, 2007 at 10:08 AM | Permalink

    #155, #156 and #157

    Bill, Cal, thank you for the update.

    I noticed also that they did not reference Veizer or Shaviv. Assuming this was not done intentionally, it explains why they perhaps missed the important fact that the energies need to be high enough. As said, Shaviv gave this as a comment on several (attempted) rebuttals already.

    Worse, I also see no reference to Soon, e.g. “Variable solar irradiance as a plausible agent for multidecadal variations in the Arctic-wide surface air temperature record of the past 130 years” (Geophysical Research Letters, Vol. 32, L16712, doi:10.1029/2005GL023429, 2005) or “Solar activity, cosmic rays, and Earth’s temperature: A millennium-scale comparison” (I. G. Usoskin et al., Journal of Geophysical Research 110 (A10), 10102 (2005)). Lest one suspect cherry-picking: articles that refute a link are missing too, e.g. “No evidence for effects of global warming or modulation by galactic cosmic ray” (Harrison, Geophysical Research Letters, Vol. 33, L10814, doi:10.1029/2006GL025880, 2006). This although they do list other work by Harrison.

    Perhaps there are good reasons for this, as one cannot possibly reference everything, but I have noticed that in discussions about solar influence scientists often battle with each other without actually listening to the other’s point. Not infrequently they are talking about different topics. Seeing the comments about energy levels here, I fear this may again be the case…

  157. Bill F
    Posted Jul 12, 2007 at 10:21 AM | Permalink

    Armin,

    I suspect that some climate scientists simply don’t understand the complexity of the interactions between the magnetic fields of the earth and sun, solar flux, and cosmic rays well enough to grasp the intricacies of the different theories. I don’t blame them for that, as it isn’t their specialty, and unless they have delved deeply into the research papers of the various scientists involved, they almost certainly don’t understand the details of the differences between the various theories. I will give them the benefit of the doubt and say that the guys at RC clearly don’t grasp the theory behind the Svensmark GCR theory, as they continuously use TSI data and Climax data to try to debunk the Svensmark GCR connections. The alternative is to suspect that they deliberately use those data hoping to fool the naive souls who go to RC looking for an explanation of what is being said in the news about GCRs and climate.

    I don’t know for sure if Svensmark is right or not, but I do know that the data presented by guys like Lockwood and those at RC surely haven’t done anything to “debunk” the theory. Lockwood is a guy who should know better, because solar mechanics is his specialty. That leaves open to question what his motives were in using Climax data and the odd smoothing methodology that removes most of the significant changes in things like sunspots, and takes out the recent flat-to-declining temperature record. I don’t know him and don’t want to cast aspersions on him, but his selection of data and smoothing, and the conclusions he reached, are quite odd in light of his previous papers trumpeting a rise in solar magnetic flux from the 60s to the late 90s and other changes in solar output.

  158. Mark T.
    Posted Jul 12, 2007 at 11:42 AM | Permalink

    They don’t seem to have a very strong understanding of signal processing methods nor statistics, but that doesn’t stop them, either.

    Mark

  159. Bill F
    Posted Jul 12, 2007 at 11:57 AM | Permalink

    Mark,

    Guys like Mann and Hansen clearly have an agenda they are pushing, so there is no question about why they select the methods they use. They are not ignorant of the problems with the methods they are using… they just assume that most of their targeted population (the general public) will be, and that is good enough for them. Having a website like RC where they can attack their rivals and then censor any dissenting comments, while hyping the site as the best source for climate change info, only sucks in the uninformed reader that much further.

    For guys like Lockwood, I have read some of his past papers that are not unsympathetic towards solar connections to climate, and the caveats he throws in this paper about not attacking solar theories in general suggest he is trying to narrow the focus of the paper to data evaluation. So it makes me question why he chose such a bizarre methodology for his smoothing. There are clearly relevant statistically sound methods that could have been used to compare the various data sets, but he chose instead to use the odd smoothing that removes inflections in the temperature trend and shifts peaks and lows in the solar data away from the actual data. I would love to see somebody with statistical skills re-evaluate the same data sets using some other more standard methodology to see if his methods are appropriate.

  160. Mark T.
    Posted Jul 12, 2007 at 12:44 PM | Permalink

    Oh, I fully realize this, Bill F., which is why I used the word “seem.” These guys aren’t stupid, nor uneducated. Every signal processing specialist, statistical analysis specialist, or otherwise mathematically inclined professional that I’ve run into, when presented with the obscure methods of the Team, gasps.

    Mark

  161. Kenneth Fritsch
    Posted Jul 12, 2007 at 12:52 PM | Permalink

    Re: #153

    Steve Milesworthy, my questions were posed more generally, in terms of the AR4 report as a whole, about the authors' uncertainty and likelihood assignments. Your MOC reply is curiously specific and relates to the likelihoods and uncertainties in the excerpt below from Chapter 10, which I think most readers would judge so lacking in anything quantitative as to be almost useless on its face for any policy considerations. The graph in Figure 10.15 shows a wide variation of modeling results that by itself must put additional uncertainty into any cumulative or average results. Although you have supposed the method by which the authors arrived at the levels of uncertainty and likelihood, they have not stated how they arrived at these levels, as is the case in all the parts of the AR4 report which I have read.

    I personally would have much different views of the likelihoods that were determined in these two cases:

    Case 1: The authors took an average of the likelihood assessments of 7 authors with those assessments varying by less than 10%.

    Case 2: The authors used a majority vote that by a margin of 4 to 3 favored a likelihood of 95% (to be published), whereby the breakdown was 4 authors at 95% and 3 authors at ranges from 40% to 60% (or even without any knowledge of the losing authors' percentages).

    Taken together, it is very likely that the MOC, based on currently available simulations, will decrease, perhaps associated with a significant reduction in Labrador Sea Water formation, but very unlikely that the MOC will undergo an abrupt transition during the course of the 21st century.

  162. Posted Jul 12, 2007 at 1:59 PM | Permalink

    John A,

    I blogged this report at Principles of Forecasting on June 26 and left a note here at CA. I have a link in my blog post to the paper.

    This may have been what you saw.

    There were also several other mentions of the paper in the comments so it is also possible you are referring to one of them.

    It got short shrift originally. Glad to see you have taken it up.

  163. Posted Jul 12, 2007 at 2:16 PM | Permalink

    Of course we don’t need to be able to predict what the climate will do in the future to adopt policies that will help.

    Since it will be getting either warmer or cooler or staying the same the only viable policy is to promote economic development so we will have the resources to cope if things change.

    However, the IPCC folks and their acolytes are suggesting that we strangle economies with energy taxes (disguised as CO2 taxes) when energy use is critical to development.

  164. Posted Jul 12, 2007 at 2:28 PM | Permalink

    I'm wondering if the climate modelers take into account breakthroughs in technology (Lomborg does).

    If Tri Alpha Energy or Dr. Bussard’s group makes a breakthrough in fusion energy we could be entirely off fossil fuels in 50 years or possibly much less.

    Does the IPCC consider investment in such technology research as essential as the money going into climate science? If not, considering the dire forecasts, why?

    Why is the answer always the strangulation of the global economy with energy taxes?

  165. Posted Jul 12, 2007 at 2:37 PM | Permalink

    #155 Carl,

    Proper detectors can differentiate between low energy CRs and high energy CRs. Such detectors have been around a long time.

    Some one must have used them somewhere. If not there still ought to be data from various experiments where the background has to be subtracted out to measure the signal. The background of course would be CR data.

  166. Posted Jul 12, 2007 at 2:41 PM | Permalink

    re 145

    Why 100-year forecasts? One reason might be that 20-year forecasts could be verifiable. After 10 years of a 20-year forecast, one might have a clue whether the forecast was realistic. After 10 years of a hundred-year forecast, any differences could be attributed to short term variation or to weather.

  167. Bill F
    Posted Jul 12, 2007 at 3:24 PM | Permalink

    167,

    It's not a function of the technology of the detector, but of the energy needed to reach the location of the detector. At the poles, particles with very low energies can reach the surface, while at the equator, particles with less than ~15 GV cannot. It has to do with the orientation of the field lines of the earth's magnetic field. The lines at the poles are nearly perpendicular to the earth's surface, so particles can enter the earth's atmosphere parallel to the lines and reach the surface. At the equator, the lines are parallel to the earth's surface, and incoming particles are deflected as they cross the magnetic field. Those with less than ~15 GV cannot reach the surface at the equator.

    The “geomagnetic cutoff rigidity” of a particular neutron monitor is a function of its position relative to the poles as well as its altitude. Monitors near the poles such as Oulu have low cutoff energies (

  168. Mark T.
    Posted Jul 12, 2007 at 3:26 PM | Permalink

    Why is the answer always the strangulation of the global economy with energy taxes?

    There’s a one word answer to that question, though the motive behind that one word is debatable.

    Mark

  169. Bill F
    Posted Jul 12, 2007 at 3:38 PM | Permalink

    Looks like I got truncated. Here is the rest:

    Oulu near the poles has a cutoff of approx. 1 GV, while Huancayo at 12S latitude and Haleakala at 19N latitude have cutoffs of approx. 13 GV. Particles below those energy levels simply don't reach the detector. The very high energy CRs that produce muons in the lower troposphere are a very small fraction of the total CR flux, so a detector such as Oulu or Climax that reads all CRs down to 1 GV or 3 GV will swamp the reading with low energy CRs, making trends in the high energy CRs impossible to detect.
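    For rough intuition, the latitude dependence described above can be sketched with the standard dipole ("Störmer") approximation, in which the vertical cutoff rigidity falls off as the fourth power of the cosine of the geomagnetic latitude. This is only a back-of-envelope sketch: real station cutoffs depend on the full geomagnetic field model and altitude, and geomagnetic latitude differs from geographic latitude.

    ```python
    import math

    def stormer_cutoff_gv(geomag_lat_deg):
        """Approximate vertical geomagnetic cutoff rigidity in GV using the
        dipole (Stormer) approximation: R_c ~= 14.9 * cos^4(geomagnetic latitude)."""
        return 14.9 * math.cos(math.radians(geomag_lat_deg)) ** 4

    # Geomagnetic equator: ~14.9 GV, consistent with the ~15 GV figure above.
    print(round(stormer_cutoff_gv(0.0), 1))   # 14.9
    # High geomagnetic latitude (~60 deg, roughly Oulu): ~0.9 GV, i.e. ~1 GV.
    print(round(stormer_cutoff_gv(60.0), 1))  # 0.9
    ```

    This reproduces the ballpark numbers quoted here: near the geomagnetic equator only rigidities above roughly 13–15 GV get through, while near the poles almost everything does.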

    The paper by Usoskin that I linked on the unthreaded #14 post discovered a period of time in the early 1970s where flux at Huancayo did not correlate with solar activity or the flux at Climax and Oulu. They speculated that the weakness of the solar activity in the waning period of Cycle 20 was not sufficient to modulate the higher energy particles measured at Huancayo, resulting in no correlation of the flux at Huancayo with the solar activity at that time. Taking that speculation one step further would lead to the conclusion that higher energy solar cycles would have a stronger modulation of higher energy particles. If so, and if the GCR cloud hypothesis is true, we should expect high energy solar cycles to have fewer high energy GCRs, and by extension, fewer lower tropospheric muons, which would result in higher temperatures than would be expected from the change in TSI during the cycle alone. The two highest recent cycles were 21 and 22, from 1976 to 1997… and guess what… we had unprecedented warming during that time period that scientists have found cannot be explained by TSI increases alone. They like to attribute the extra to GHGs… I prefer to attribute it to fewer high energy GCRs due to the strength of the solar cycles.

  170. Sam Urbinto
    Posted Jul 12, 2007 at 5:37 PM | Permalink

    #144 and #145 Climate in 100 years, economics in 100 years. Whatever. Simple extrapolations, or complex models you can’t verify or whatever. Guesses do as well (maybe better!) heh

    #148 I would think that is just too transparent or obvious for anyone to agree with. 😀

    And in general….

    I'm trying to figure it out. Do people not usually make direct comments on what I write because it's too boring, too long, too difficult, makes too much sense to argue with, not enough sense to bother with, or do I just seem too neutral or wishy-washy to pick a fight with?

    Maybe I should go to RC and see which one it is!

    heh^2

  171. Steve Milesworthy
    Posted Jul 12, 2007 at 5:57 PM | Permalink

    #163 Kenneth
    I think what you are saying is that you would like the authors to be a bit more explicit in the technique they used to come to their conclusions. Maybe they should. But I doubt whether the politicians would ever read the report in detail. If they see a projection that concerns them they should get their own scientists to assess it rather than rely on the report. Obviously all the really sensitive stuff has had a lot of scrutiny when the SPMs have been prepared, and a lot of the scrutiny will have been politically motivated (in both directions).

  172. Steve Milesworthy
    Posted Jul 12, 2007 at 6:19 PM | Permalink

    #172 Sam
    Rereading your posts in this thread there was a mixture of rhetorical questions (which don’t require an answer), a lot of double negatives (which can be hard to follow, imply that you are not confident in what you say and so don’t invite an answer) and some quite reasonable things that few would argue with.

    Be more direct! I personally have a tendency toward being indirect and overqualifying what I say, which means my writing can have less impact (unless it is controversial). Even on this blog, where I do have a contrary opinion to most, I irritate the heck out of Mr Sadlov by not saying out loud enough what he thinks I really believe 🙂

  173. Ian Foulsham
    Posted Jul 13, 2007 at 5:45 AM | Permalink

    re #159, Bill F, “So it makes me question why he chose such a bizarre methodology for his smoothing.”

    It is monumentally unfair to make a connection on this, but if you were in Greenpeace, swapping "Royal Society" for "Exxon" would make it a legitimate reason to discount anything that the guy says. :o)

    This was in May last year.

  174. Bill F
    Posted Jul 13, 2007 at 8:10 AM | Permalink

    #175 Ian,

    Unlike some of the very aggressive AGW proponents, I don’t take exception to others and ascribe motives to their actions simply because they don’t see a given issue my way. Lockwood has a pretty distinguished history in solar mechanics research, so I won’t blow off his conclusions and dismiss him as an agenda driven alarmist the way I would with guys like Mann or Hansen who have shown over and over again that they will use whatever shoddy science is necessary to get to the “right” answer. I am inclined to give Lockwood the benefit of the doubt regarding his motives and hope that a response to his paper by somebody like Svensmark or Shaviv will elicit a better explanation from him about why he chose this particular method of smoothing and visual comparison.

  175. Ian Foulsham
    Posted Jul 13, 2007 at 8:39 AM | Permalink

    Bill,
    I agree entirely, I was googling to try to find the paper and came across this. It just occurred to me that if he had worked for Exxon at some time in the past, and had published the opposite conclusions, his work would have been discounted.

    I had a really hard time reading the graph posted in the BBC article. It showed the last 20 years of cosmic ray activity going down against the last 20 years of temperature rises. It didn't show any historical context, where there have been other 20-year lags between solar activity and temperature changes. My hope is that, rather than this being the main thrust of the paper, it is probably just the RS press release trying to simplify the message for the MSM.

    If I wanted to be that unscientific and promotional, I could point to the 20 year period to 1975 and map temps against CO2 increase.

  176. Bob Meyer
    Posted Jul 13, 2007 at 10:05 AM | Permalink

    Re: #167

    M Simon wrote:

    Proper detectors can differentiate between low energy CRs and high energy CRs. Such detectors have been around a long time.

    Some one must have used them somewhere. If not there still ought to be data from various experiments where the background has to be subtracted out to measure the signal. The background of course would be CR data.

    Until this week I would have agreed with you. While I was out hunting down weather stations I came upon Priest Mountain, Idaho, where I found, along with the MMTS and rain gauge, a pyrheliometer made by Belfort Instruments. The address on the manufacturer's label did not use zip codes, so the instrument had to be at least 40 years old. At the time, I had no idea what this instrument was and it took a couple of emails to Belfort to find out. I contacted the Forest Service to see if there were any data available from this instrument, and it turns out that there are a great many strip charts that recorded the solar output in watts per square meter over the last forty years or so. However, the very helpful member of the Forest Service who answered my emails stated:

    “I am not aware of any scientist that has used this raw data.”

    The Priest Mountain Experimental Station is among the better situated rural stations. The temperature data is taken very consistently and there is a set of solar radiance data to go with it, yet it is unlikely that the solar radiance data has ever been used by climate scientists. (Anthony: I will get this and a couple of other station surveys uploaded this weekend)

    Here was an opportunity to see if solar radiance correlates with temperature (at least locally) yet the opportunity was ignored. While it’s true that putting strip chart data in a format usable by computers is a big ugly job, that isn’t an excuse to ignore the data especially when you consider the amount of money spent on GW research.

    Supposedly, solar output has not varied enough to influence climate and that would explain why no one has used existing solar radiance data (with corresponding temperature data) to investigate a relationship. Would cosmic ray data be treated any differently if the prevailing theory is that there is no relationship between CRs and climate?

  177. Posted Jul 13, 2007 at 10:16 AM | Permalink

    Bill F.,

    Yes I got all that (re: magnetism etc.). However, even with a spread of particle energies there are detectors that can give a read out of the energy spread.

    I just looked it up and this appears to be the first solid state neutron detector that can differentiate neutron energy. It uses a grating. 1999.

    Click to access pp117.pdf

    I was thinking gamma ray detectors which have been able to differentiate energies for 60 to 80 years. I had assumed something like that had been worked out for neutrons in the same time frame. Evidently not.

    What is surprising is that something simple could have been done to eliminate most of the low energy neutrons: shielding. It can be calculated. In fact, I have a book on nuclear reactor design from 1953 which says that high energy neutrons are hard to stop, i.e. shielding filters out low energy neutrons.

    BTW I’m familiar with the Shaviv paper on the subject and have blogged the cosmic ray/clouds connection extensively.

  178. Kenneth Fritsch
    Posted Jul 13, 2007 at 10:23 AM | Permalink

    Re: #173

    I think what you are saying is that you would like the authors to be a bit more explicit in the technique they used to come to their conclusions. Maybe they should. But I doubt whether the politicians would ever read the report in detail.

    Steve Milesworthy, in my judgment one cannot properly decipher how the critical levels of likelihood and uncertainty were assigned by the authors of the AR4 reports without a more complete disclosure of the methods used by individual groups of authors. That is why I judge the subject of this thread so important and lament it not being discussed more directly and in detail.

    I have a personal curiosity about matters such as these that perhaps stems from my years working as a scientist, albeit many years ago. I also have a curiosity about the manner in which posters react to these matters. I find that oftentimes those more in agreement with the conclusions of the AR4 tend to show a surprising lack of interest in and curiosity about these details of likelihood and uncertainty assignments, particularly when those posters are scientists. On the other hand, I see posters who expressed little agreement with the AR4 results seemingly also having little curiosity about the assignment details, apparently because they have discounted the scientific motivations for these assignments more or less out of hand.

  179. Bill F
    Posted Jul 13, 2007 at 10:48 AM | Permalink

    M. Simon, I assumed too much from the question, I guess. The problem with a lot of the things we can gather data about now is that we don't have a long enough record of any of it to make heads or tails out of it. The data sets that are long enough and continuous enough to look at most types of solar activity are too non-specific to separate the wheat from the chaff on GCR energies. I still have a gut feeling that there is a significant connection somewhere in there between the polarity of the magnetic fields and the overabundance of sunspots and flare activity in one solar hemisphere or the other. I need some more time and better statistical knowledge in order to really dig deep enough to start sorting the random correlations from the significant connections.

  180. DeWitt Payne
    Posted Jul 13, 2007 at 7:29 PM | Permalink

    Re: #178

    Forty years of good pyrheliometer data from a rural site would help a lot with evaluating whether atmospheric aerosols are really declining or not (or are a significant factor at all). Fifty years would have been better, as that would put the start well into the cooling (or at least flat) phase of global temperatures. You would need daily integrated total irradiance data that could then be analyzed for trends. I would worry about long term calibration drift, though. Otherwise, it sounds like a treasure trove.
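    As a sketch of the kind of analysis described here, assuming the strip charts could be reduced to timestamped irradiance samples: integrate each day's samples into a daily total, then fit a least-squares slope to the yearly means. The function names, units, and data shapes below are illustrative assumptions, not taken from any actual Forest Service dataset.

    ```python
    import numpy as np

    def daily_total_mj(times_h, irradiance_wm2):
        """Trapezoid-integrate irradiance samples (W/m^2), taken at times_h
        (hours since midnight), into a daily total in MJ/m^2."""
        t_s = np.asarray(times_h, dtype=float) * 3600.0         # hours -> seconds
        y = np.asarray(irradiance_wm2, dtype=float)
        joules = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t_s))  # W*s = J per m^2
        return joules / 1e6

    def trend_mj_per_year(years, yearly_mean_totals):
        """Least-squares slope of yearly mean daily totals (MJ/m^2 per year)."""
        slope, _intercept = np.polyfit(years, yearly_mean_totals, 1)
        return slope

    # A flat 1000 W/m^2 over 10 hours integrates to 36 MJ/m^2.
    print(daily_total_mj([0, 5, 10], [1000, 1000, 1000]))  # 36.0
    ```

    Calibration drift would show up as a spurious component of the fitted slope, which is exactly why the drift worry above matters before reading anything into a trend.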

  181. Bob Meyer
    Posted Jul 13, 2007 at 11:03 PM | Permalink

    Re #182

    DeWitt:

    I said 40 years because the instrument is at least 40 years old. The records may go back farther than that, or maybe not that far. My friendly Forest Service correspondent didn’t say just how much data there was although he did say that some records were missing.

    Long term drift might be a problem. When I contacted Belfort Instruments they said that they hadn’t made these devices for “quite a while” so they may no longer offer calibration for this kind of instrument. Since the data wasn’t used by any scientists it is unlikely that calibration was maintained.

    Unfortunately, the data are entirely on strip charts, which are not easily converted to a computer-friendly form. There is a program called UN-SCAN-IT, around $350, that claims it can scan strip charts and other hard copy graphs and produce ASCII files, but I don't know how good it is. If anyone has experience with this program I'd like to know if it works.
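    For what it's worth, a crude version of what such a program does can be sketched in a few lines: treat the scanned chart as a grayscale array, take the darkest pixel in each column as the pen position, and map pixel rows linearly onto the chart's scale. This is a toy sketch assuming a clean scan with a single dark trace; gridlines, smudges, and skew need real preprocessing, which is presumably what the commercial tool charges for.

    ```python
    import numpy as np

    def trace_from_scan(img, top_value, bottom_value):
        """Recover a strip-chart trace from a grayscale scan.
        img: 2-D array where low values = dark ink. Returns one reading per
        column, mapping the top pixel row to top_value and the bottom row
        to bottom_value."""
        pen_rows = np.argmin(img, axis=0)      # darkest pixel in each column
        frac = pen_rows / (img.shape[0] - 1)   # 0.0 at top, 1.0 at bottom
        return top_value + frac * (bottom_value - top_value)
    ```

    For a chart like the pyrheliometer records, top_value and bottom_value would come from the chart's printed scale (say, 1400 and 0 W/m² as hypothetical endpoints).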

    I doubt that the Forest Service would be willing to turn over this data to just anyone. However, if a legitimate researcher made the request they might be amenable. If anyone here is interested I'll send Steve or Anthony the contact's name.

  182. nevket240
    Posted Jul 14, 2007 at 3:30 AM | Permalink

    Sorry for being late, it's COLD & WET here in Melbourne, Oz (Global Freezing).
    #108
    MarkW, what's your fascination with Exxon? As an economic entity & a legal one, they have every right to defend themselves. All the money has been driving in a one-way street. So, since the science is settled, why don't we put the $billions into the Heretic side to give their scientists a fair return?
    Besides, you obviously did not know of Al Gore's mentoring by one Armand Hammer (Occidental), or the funding of both Goracle Snr & Jnr into office. Or that, to the best of my knowledge, Jnr receives $20k p.a. from Oxy. So why the hypocrisy??
    http://www.independence.net/gore/ even if only half is true, God help us.

  183. John A
    Posted Jul 14, 2007 at 4:11 AM | Permalink

    nevket240:

    I’m pretty sure MarkW was being facetious. I’ve never seen any Exxon loot, or frankly any loot at all.

  184. PeterS
    Posted Jul 14, 2007 at 9:18 AM | Permalink

    If climate-change theory is an object (which it is) then it cannot be anything other than a ‘made-up’ object. We can fine-tune and swap, at will, the various names we apply to its ‘made-up-ness’, (prediction, projection, prophecy, scenario etc) but the fact remains that it differs from all real objects (resources) present in our space in that it is made-up. It is nothing more than a story about our future. And, like all stories we tell ourselves about the future, its intention is to excite a state of either hope or fear in those who occupy the space into which it is placed (ie everyone) and, as a result, disrupt – or become an obstacle to – our use of real objects.

    ‘Business-as-usual’ is a very good way of describing our ongoing developmental use of real objects – it could be said we are as advanced as we are today because we’ve been able to sustain business-as-usual for long periods of time without allowing this type of made-up object (when placed in the space) to seriously disrupt, or influence, our use of real ones. The most obvious historical example of this being religious objects.

    The current inflation, elaboration and misrepresentation of a minuscule amount of genuinely 'known' scientific data can be understood as an abuse of that material in order to manufacture a new made-up object to place into the space and therefore perpetuate the attack on human, business-as-usual, object use. In this sense, a comparison with religious objects is a pertinent one.

    When an object is a made-up one (a story guessing the future), as the AGW object cannot help being, there is no 'consensus' in its authenticity… only a 'collusion' in its intended use.

  185. Kenneth Fritsch
    Posted Jul 15, 2007 at 11:21 AM | Permalink

    I’m trying to figure it out. Do people not usually make direct comments to what I write because it’s too boring, too long, too dificult, make too much sense to argue with, not enough sense to bother, or do I just seem too neutral or wishy-washy to pick a fight with?

    Sam U, as you probably already know, picking a fight in internet discussions is an all too easy task, and while it might get you some attention, it is hardly worth the time and effort for you or the other posters. There have been some rather memorable and profound posts made here that received little in the way of replies. You should not judge or compose your posts with the intent of engaging in an extended conversation. Say what you have to say, and say it in your natural tone. It is not a wasted effort if no one replies. Asking direct questions of a particular poster will most often, but not always, produce a reply. You should not repeat a proposition over and over in the hope of obtaining a reply. We had a well known poster by the name of TCO who used to post here and indulge in repeated criticisms and suggestions, and I would repeatedly remind him of this trait.

  186. Steve Milesworthy
    Posted Jul 17, 2007 at 5:49 AM | Permalink

    #180 Kenneth
    Not wishing to start on social-networking, but if you develop a trust for a person or group of people, and/or if you understand a subject, you don’t need to forensically examine every conclusion they come to even if you disagree with them. If you did so, you’d never get any work done.

    As it happens, here’s another take on climate forecasting: looking at what needs to be done to make a climate forecast rather than a projection, and where the limitations lie. (Hopefully you don’t need a subscription…)

    http://www.sciencemag.org/cgi/content/full/317/5835/207

    (disclosure: I have distant professional links with one of the authors).

  187. MarkW
    Posted Jul 17, 2007 at 6:28 AM | Permalink

    So you are saying that people you trust, can’t make mistakes, so you don’t have to look at their work very closely?

  188. Steve Milesworthy
    Posted Jul 17, 2007 at 7:17 AM | Permalink

    #189 MarkW
    No.

  189. MarkW
    Posted Jul 17, 2007 at 8:19 AM | Permalink

    Sounds like it to me.

  190. Steve Milesworthy
    Posted Jul 17, 2007 at 8:29 AM | Permalink

    How so?

  191. Kenneth Fritsch
    Posted Jul 17, 2007 at 9:29 AM | Permalink

    Not wishing to start on social-networking, but if you develop a trust for a person or group of people, and/or if you understand a subject, you don’t need to forensically examine every conclusion they come to even if you disagree with them. If you did so, you’d never get any work done.

    Let us be certain that what we are discussing is revealing the methods used by the authors of the AR4 reports in assigning the levels of certainty and likelihood to their published conclusions, using prescribed "traceable accounts" of those methods. That is something rather simple to accomplish, and something that would be a tremendous asset in building confidence in the most critical parts of the report's evaluations, i.e. certainty and likelihood. Since publishing the traceable accounts would be most useful as evidence of a consensus for the more skeptical, and since, if most all of us were already convinced of the consensual evidence, the AR4 would become a rather superfluous endeavor, I find your defense of the IPCC in this regard rather weak and devoid of content. Trust but verify.

  192. MarkW
    Posted Jul 17, 2007 at 10:17 AM | Permalink

    #193,

    But since they came up with the correct answer, looking into their data and methods would be a violation of consensus, which is punishable by immediate expulsion from the club.

  193. PeterS
    Posted Jul 17, 2007 at 11:19 AM | Permalink

    #191

    What Steve appears to be saying is that he can only entrust himself to people (or groups) who have a need to arrive at conclusions. Whether he agrees with the conclusions or not seems to be immaterial to that trust, as is the actual trustworthiness – or robustness – of the methods used in arriving at them.

    In other words, people (or groups) who don't have such a need to arrive at conclusions (those who are quite capable of keeping their options open for a prolonged period of time, otherwise known as 'sceptics') are likely to leave Steve 'untrusting' of them (or, for want of a more rounded word, 'insecure').

    Steve goes on to point out that for him to actually examine conclusions would be a frustration – as he’d ‘never get any work done’ on the urgent project of arriving at them.

  194. Sam Urbinto
    Posted Jul 17, 2007 at 11:59 AM | Permalink

    #174 I know I’m too indirect, I’m trying to be non-confrontational, fair, neutral, and have the discourse be one of thoughts and civility rather than rhetoric and hostility. I was just wondering if anyone was angry at me! :^}

    #187 Point taken. That’s why I asked, you know, a little constructive criticism, some positive feedback (rather than a negative forcing, lol). I mostly just write it all for me. Too bad for me I tend to like to write about the same thing all the time. I’ll cut down on that!

  195. Kenneth Fritsch
    Posted Jul 17, 2007 at 12:40 PM | Permalink

    Too bad for me I tend to like to write about the same thing all the time. I’ll cut down on that

    I did not particularly have you in mind with that comment on repeating oneself. That was more a memo to myself.

    When you do not receive much feedback on internet postings, one can mistakenly think one was not "heard", hence the temptation to repeat a point more than once. Repetition can also be a symptom of aging, which would make me much more susceptible than you. Repetition is a problem I have to constantly beat back — in fact again and again.

  196. Kenneth Fritsch
    Posted Jul 17, 2007 at 12:50 PM | Permalink

    Steve goes on to point out that for him to actually examine conclusions would be a frustration – as he'd 'never get any work done' on the urgent project of arriving at them.

    That’s worth a chuckle or two, but my point has little to do with Steve Milesworthy’s predilection towards trust and a lot to do with what is proper for the IPCC to do to allow for a complete understanding of what the levels of certainties and likelihoods used by the AR4 authors mean.

  197. Steve Milesworthy
    Posted Jul 17, 2007 at 1:19 PM | Permalink

    #195
    As with MarkW, your analysis of what I say is predictably wrong. Reread and take note of the “Ifs” and the “every”.

    You’ve misunderstood the reasons for why the conclusions are being given. The conclusions are being given because the politicians demand them. Policy is always being made in an uncertain world and experts have to give their best judgement using the information available.

    Scientists would like to keep plugging away till they know the answer to the question of the life, the universe and everything, but the people who are paying them won’t wait the seven and a half million years it would take to find it.

    Nobody deserves trust. But they can earn it. So let's turn this around. Do you trust Steve McIntyre? Many here do without knowing much about PCA. Have you ever made a life-changing decision based on uncertain evidence? If you never have, I feel sorry for you. Do you live and work with other people? How do they take the fact that you continually undermine them with your distrust?

  198. MarkW
    Posted Jul 17, 2007 at 1:31 PM | Permalink

    Now we are back to the old, even bad data is better than no data argument.

    I find it amazing that everyone except Milesworthy manages to misread what he writes.

  199. Steve Sadlov
    Posted Jul 17, 2007 at 2:01 PM | Permalink

    Scientific forecasting… even weather forecasting is highly error prone. Ask the survivors of those who have perished needlessly from hypothermia or other issues related to "unexpected winter-like conditions." In probably half such cases, multiple weather forecasts downplayed significant outbreaks of polar air. On that note, there is, today, a very dangerous situation arising in the Cascades and even some mountains in California. The earliest "fall storm" to occur during my lifetime is bearing down on the West coast – radar returns indicate it has already arrived near the CA-OR border. I really hope we don't hear of more sad stories on Mt. Hood or Mt. Shasta, in particular.

  200. Sam Urbinto
    Posted Jul 17, 2007 at 2:04 PM | Permalink

    #197, cool.

    #200, I have to disagree with you on this one. There are cases where you have to run with what you have, because, either singly or in combination, a) you will never know the exact answer or b) you have to do something at that point in time.

    Even if that weren’t true, it doesn’t matter in this case. I think Steve’s correct in #197: “The conclusions are being given because the politicians demand them.” In that context, we can either say “We don’t know”, which I can pretty much promise isn’t going to be accepted, or make the best estimate we can, because we have to give some sort of answer. Even then, if there are contradictory conclusions among the answers on offer, the person asking is going to take whichever one they are biased towards.

    It’s very logical. If I don’t give you what I think to be true, how do I know my viewpoint is going to be represented? At least that gives me a chance to have my hypothesis or theory considered! Anyway, nothing wrong with making estimates and giving percentages. As long as they’re qualified.

  201. MarkW
    Posted Jul 17, 2007 at 3:02 PM | Permalink

    Sam,

    You are creating a circular argument.

    The only reason the politicians want to do something NOW is because highly politicized scientists have been feeding the politicians and the public bad data from the beginning.

    Bad data is used to create a demand for action, then since bad data is all we have, that’s what we have to go with.

    We also have a case where we know that the data from the models is not an accurate reflection of reality, yet those in charge of the models are lying to the public by declaring that the models are creating good data. If the “scientists” were up front with the politicians and the public and told everyone exactly how uncertain their data really is, there would not be a demand for action.

  202. Sam Urbinto
    Posted Jul 17, 2007 at 3:16 PM | Permalink

    Can’t disagree with that. The issue is that this is the situation we’re in; no, we shouldn’t be in it, but we are. So we have to deal with it. Why we’re in it and what to do about it are two different things, I think.

  203. Kenneth Fritsch
    Posted Jul 17, 2007 at 4:39 PM | Permalink

    Scientists would like to keep plugging away till they know the answer to the question of life, the universe and everything, but the people who are paying them won’t wait the seven and a half million years it would take to find it.

    Nobody deserves trust. But they can earn it. So let’s turn this around. Do you trust Steve McIntyre? Many here do without knowing much about PCA. Have you ever made a life-changing decision based on uncertain evidence? If you never have, I feel sorry for you. Do you live and work with other people? How do they take the fact that you continually undermine them with your distrust?

    Having a three-way conversation causes problems like this, I guess, but if you are responding to my proposition you are simply avoiding the main issue. It certainly is not an impractical, tedious or difficult task that I am requesting and expecting here. It is simply something that has been, or should have been, done by the AR4 authors, and I would like to see it made public. Steve Milesworthy, you have neatly brought the issue of trust into the discussion when in fact it has little to do with my point. Whether it be the IPCC or Steve M, why do I need to trust when I can verify? Do people make decisions without verifying? Of course they do, and frequently, when the process of verification is either not available to them or impractical to carry out; but to take a pass on it when it is available is either to depend on blind faith and trust as a matter of principle, or to be so timid as to not want to offend someone with a request for verification, or at least better verification. The latter should not be a problem when it comes to a scientific question amongst scientists.

  204. Sam Urbinto
    Posted Jul 17, 2007 at 4:54 PM | Permalink

    Hello, can I see your data and check the model you used? No? Thank you very much, I guess your work is fake. Once you prove it isn’t, maybe I’ll trust you. Until then, I’ll consider that you’re a liar. Thank you very much.

  205. Posted Jul 17, 2007 at 5:08 PM | Permalink

    Day after day, week after week, month after month and now year after year we hear the cries of “we must do something now before it’s too late”. Well, on the basis of looking back to when these cries were first heard, it is already too late. Governments around the world are stalling. They are making lots of speeches and funny noises but actually not much is being done. If this were an urgent matter I personally believe it would already have been done. The uncertainty is not lost on those who make the decisions. They may say we are in danger but their lack of action speaks volumes. They may shout down the “deniers” but if it were that urgent they wouldn’t have time to argue.

  206. Kenneth Fritsch
    Posted Jul 17, 2007 at 5:10 PM | Permalink

    Hello, can I see your data and check the model you used? No? Thank you very much, I guess your work is fake. Once you prove it isn’t, maybe I’ll trust you. Until then, I’ll consider that you’re a liar. Thank you very much.

    What, pray tell, are you talking about here, and to whom? You might clarify this by quoting the post to which you are replying.

  207. Steve Milesworthy
    Posted Jul 17, 2007 at 5:39 PM | Permalink

    #205 Kenneth

    Perhaps I would understand your point better if you identified a part of WG1 that you feel does not reflect the suggested traceability guidelines.

    Click to access AR4_UncertaintyGuidanceNote.pdf

  208. Steve Milesworthy
    Posted Jul 17, 2007 at 5:42 PM | Permalink

    #206 Sam
    The model I’m familiar with is licensed for research purposes and widely used as such.

  209. Bob Koss
    Posted Jul 17, 2007 at 5:57 PM | Permalink

    Steve,

    I didn’t realize the models get licensed. Who is the licensing authority and what sort of validation and verification do they employ?

  210. PeterS
    Posted Jul 17, 2007 at 6:09 PM | Permalink

    Your conclusions cannot be given before they have been arrived at – even to authority figures. I wonder if the enormous need to reach a conclusion may result in having a somewhat premature one?

    Your ‘trust’ AND ‘distrust’ are both, of course, conclusions – arrived at about a person (or a group). If the cost of ‘earning’ your concluded trust is to have an identical need for arriving at conclusions as you do, then I can fully understand your distrust of Steve McIntyre – a person who clearly has a preference for keeping his options open and tolerating in-conclusion for long enough to allow his curiosity some space to shine through.

    Curiosity may well have killed the cat – but conclusion murders curiosity.

  211. Kenneth Fritsch
    Posted Jul 17, 2007 at 6:10 PM | Permalink

    Re: #209

    From your link we have:

    Be prepared to make expert judgments and explain those by providing a traceable account of the steps used
    to arrive at estimates of uncertainty or confidence for key findings – e.g. an agreed hierarchy of information,
    standards of evidence applied, approaches to combining or reconciling multiple lines of evidence, and
    explanation of critical factors.

    My point has always been that those traceable accounts should be made public.

  212. PeterS
    Posted Jul 17, 2007 at 6:19 PM | Permalink

    Oops. #212 above is a reply to #199 further above.

  213. MarkW
    Posted Jul 18, 2007 at 5:50 AM | Permalink

    paul,

    I’ve always liked the descriptor “watermelon”. Green on the outside, red to the core.

  214. MarkW
    Posted Jul 18, 2007 at 5:51 AM | Permalink

    bob,

    It’s a usage license. Like when you buy MS Word, you get a key that activates your license. The license gives you permission to use, but not to modify or duplicate.

  215. Bob Koss
    Posted Jul 18, 2007 at 6:07 AM | Permalink

    MarkW,

    Yeah. I thought about that after I commented.

  216. Steve Milesworthy
    Posted Jul 18, 2007 at 9:13 AM | Permalink

    #213 Kenneth
    The expert judgements have been explained as well as they can be within the reports. You may be asking too much if you want full traceability. What do you want? Transcripts of meetings and phone calls?

    #217
    Yes, I did mean a software license, so others can use and modify the code for research purposes, rather than an accreditation license that allows it to be used for research purposes.

    #212 PeterS
    Conclusions are sometimes made prematurely. That’s what balancing risk is all about.

    But I don’t see where you got the odd impression that I only trust people if they reach conclusions! As it happens Steve McIntyre has said a number of things (conclusions) with which I agree. Few people are entirely trustworthy or the opposite.

  217. Sam Urbinto
    Posted Jul 18, 2007 at 10:41 AM | Permalink

    Excuse me for being so unclear. That was a notional statement about how some think about trust and it wasn’t directed to anyone at all or anything anyone said.

  218. Sam Urbinto
    Posted Jul 18, 2007 at 10:42 AM | Permalink

    Excuse me for being so unclear. I apologize. That was a notional statement about how some think about trust (or how some actions create anti-trust) and it wasn’t directed at anyone at all or anything anyone said.

  219. steven mosher
    Posted Jul 18, 2007 at 10:55 AM | Permalink

    Global warming Precrime

    re 218.

    “The expert judgements have been explained as well as they can be within the reports.
    You may be asking too much if you want full traceability. What do you want?
    Transcripts of meetings and phone calls?”

    Well, let’s look at this. The opinions have been explained as well as they can be within the reports?
    Hmmm. This implies no room for improvement. This implies no room for cogent criticism. This implies
    the report is a sacred text of sorts. We can all well imagine ways to improve the report. Further,
    consider the simple addition of a Minority Report. (Ever see the movie?)

    Full traceability. Well, we require this of other fields. I would think that if the subject
    is so important we should require things like signed-off meeting notes, video logs, audio logs, phone logs, and email records.

    Information is a good thing.

  220. Kenneth Fritsch
    Posted Jul 18, 2007 at 11:33 AM | Permalink

    The expert judgements have been explained as well as they can be within the reports. You may be asking too much if you want full traceability. What do you want? Transcripts of meetings and phone calls?

    Steve Milesworthy, you appear to me to have a proclivity for avoiding facing this issue head on and I doubt that from your POV this will change any time soon. The authors had previously been instructed to produce a traceable account using the generalized and recommended process for determining the published levels of certainty for their critical conclusions. The only interpretation of that request that would make sense to me would be for the authors to record an account that is traceable at some future time. What is so difficult to understand about that?

  221. Kenneth Fritsch
    Posted Jul 18, 2007 at 11:38 AM | Permalink

    Re: #219 and #220

    Sorry, Sam U, I did not realize that my affliction was contagious. I did not realize that my affliction was contagious. Sorry, Sam U.

  222. Steve Milesworthy
    Posted Jul 18, 2007 at 11:40 AM | Permalink

    #221 Steve
    No, it doesn’t imply any of the things you state. It implies that it is easy to see how the conclusions were reached based on the evidence used.

    It seems as if it would suit you down to the ground to have some sort of Sarbanes-Oxley rule book thrown at climate scientists to stop them from doing their work.

    I read the minority report and it was full of cr*p 🙂

  223. Steve Sadlov
    Posted Jul 18, 2007 at 11:51 AM | Permalink

    Oh yeah, Sarbox to stop Climate Science, a plot by the Bushies and Big Oil, Inc, to silence St. Hansen, the martyr…. /s

  224. Matthew Drabik
    Posted Jul 18, 2007 at 12:31 PM | Permalink

    #186

    Dude, put down the Chomsky. That way lies madness.

    There are more than sufficient scientific grounds upon which to question AGW.

  225. steven mosher
    Posted Jul 18, 2007 at 12:51 PM | Permalink

    RE 224.

    Well, since the models used CRU land surface data, and since Jones refuses to release the sites
    used or the models used, one cannot “SEE” that the reports are based on the evidence used;
    to “SEE” this one would have to have access to data sources and methods.

    One “sees” nothing. Look at what you see: you have a report. Ink on pages. Nothing more,
    nothing less. You ask for the comments. Stonewall. You ask for the data. Stonewall.

    WG1 is a Jackson Pollock. Ink on pages.

    The writers need to work like we all have to work.

    1. Keep a phone log.
    2. Keep meeting notes.
    3. Sign documents validating meeting notes.
    4. Keep all raw data and document it.
    5. Keep all analysis programs and document them.
    6. Keep all drafts of reports and comments.
    7. Answer all objections to the satisfaction of objectors OR offer them a minority report.

    Nobody wants to, in your words, stop scientists from doing their work. Their JOB is to produce
    REPRODUCIBLE results. Do I need to spell PONS AND FLEISCHMANN out to you?

    Drug scientists understand this. Weapons scientists understand this. Auto safety scientists
    understand this. Every field of science understands the rationality and morality of producing
    reproducible results.

    Finally, I think the requirements for climate science should exceed SARBOX. If I am to believe
    the simulations, global warming could be WAY MORE DANGEROUS than Enron. So the results require
    much more scrutiny.

  226. PeterS
    Posted Jul 18, 2007 at 1:15 PM | Permalink

    #218

    But I don’t see where you got the odd impression that I only trust people if they reach conclusions! As it happens Steve McIntyre has said a number of things (conclusions) with which I agree. Few people are entirely trustworthy or the opposite.

    Steve, you quite clearly state that you are disinclined to examine conclusions arrived at by people you trust – even if you disagree with those conclusions. As the quality of their conclusions appears to have very little influence on the trust you place in the people making them… I understand this to mean your trust is decided by the QUANTITY of conclusions arrived at – rather than their QUALITY. This impression is further encouraged by your apparent reluctance to have (or disinterest in having) those conclusions (and the material used in arriving at them) ‘forensically examined’ by anyone else either.

    Perhaps you feel that, by surrendering the conclusions for external examination, YOU would become the distrusted accomplice in this cosy arrangement? – especially if they were discovered to be poor quality.

    Hence my observation that the quantity of trust you permit yourself to share with people appears to be inextricably linked to the quantity of conclusions they can arrive at.

  227. John F. Pittman
    Posted Jul 18, 2007 at 3:35 PM | Permalink

    #227 You forgot that we have to certify on penalty of law (5 year felony) that the data is correct, accurate, and true…

  228. steven mosher
    Posted Jul 18, 2007 at 5:30 PM | Permalink

    re 229.

    Hell yes. The first time I got pulled in to answer the questions of an auditor
    ( what is this INVENTORY WORTH?) I broke into a cold sweat.

    I do believe I had an onset of short-term memory failure. I also quibbled about forecasts,
    projections, simulations and models of inventory value, and claimed the dog ate my homework.

    The prospect of going to jail for your claims RIVETS your attention, and widens your error estimates
    as well.

  229. steven mosher
    Posted Jul 18, 2007 at 5:45 PM | Permalink

    re 226.

    Chomsky of TG, agreed.

  230. Steve Milesworthy
    Posted Jul 19, 2007 at 6:22 AM | Permalink

    #228 PeterS
    I think my head just exploded in a puff of logic.

    Where we differ is this. The conclusions are based on a peer reviewed expert assessment of peer reviewed science. If I want, I can reread the science and decide whether my assessment of it matches that of the author. I don’t need an audit trail to do this. The conclusions draw my attention to what is important to me. If I live in Tuvalu, I will want to further assess sea-level projections. If I live in Turkey, it’s precipitation projections I’m interested in. I won’t commission a single sea wall or desalination plant without reassessing the evidence for myself and looking for other evidence, and I’ll keep all the phone logs I think I need for that process.

    Any mismatches between assessments and evidence will be identified by me and others, and I can take a view on that too.

    The prospect of ending up with the reputation of Pons and Fleischmann is also attention-riveting.

    As it happens, my personal experience of audits is that they do not always get you what you want. My office building has a high environmental rating. When I checked the environmental audit, it turned out that the building is average, but the procedures for running the building were marked at 100% because they have meetings at the right frequency and they’ve got all the manuals in the right place.

  231. MarkW
    Posted Jul 19, 2007 at 8:06 AM | Permalink

    I see we’re back to the old, “If it’s peer reviewed it must be good” myth.

  232. MarkW
    Posted Jul 19, 2007 at 8:56 AM | Permalink

    Wasn’t the Hockey Stick peer reviewed?

  233. Sam Urbinto
    Posted Jul 19, 2007 at 9:11 AM | Permalink

    Re #223 (re 219 and 220 re 210 and 208 re 206) Double post, argghh!

    I hate it when that happens.

  234. Steve Milesworthy
    Posted Jul 19, 2007 at 9:26 AM | Permalink

    #233 MarkW
    No, the comment is related to whether the conclusions drawn by the IPCC authors can be traced to the peer reviewed literature. It is no secret that just because a paper is peer reviewed does not mean it is correct (ideally it should be correct within the bounds of what is known) or that it will stand the test of time.

  235. Steve Sadlov
    Posted Jul 19, 2007 at 9:32 AM | Permalink

    Yes-man reviewed. Worthless.

  236. MarkW
    Posted Jul 19, 2007 at 10:44 AM | Permalink

    The test of time is now down to a few months?

    Milesworthy, you were the one who was making the claim that the peer review process adds a mark of excellence to the paper that has been peer reviewed.

    Such a belief cannot be validated by real world experience.

  237. PeterS
    Posted Jul 19, 2007 at 12:14 PM | Permalink

    Seer reviewed? (then just find the suckers).

  238. Jacket
    Posted Oct 12, 2009 at 6:21 PM | Permalink

    Interesting. Prediction markets work because they accumulate and reward experts’ predictions. How should prediction markets work in general if expert opinion can’t be trusted?
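
    [Editor’s note: for readers unfamiliar with the mechanism Jacket alludes to, here is a minimal sketch of how a prediction market can “accumulate and reward” predictions, using Hanson’s logarithmic market scoring rule (LMSR), a standard automated market maker. The two-outcome framing, the liquidity parameter b, and all function names are illustrative assumptions, not anything taken from this thread.]

    ```python
    import math

    def lmsr_cost(q, b=100.0):
        # Market maker's cost function: C(q) = b * ln(sum_i exp(q_i / b)),
        # where q_i is the number of shares outstanding on outcome i.
        return b * math.log(sum(math.exp(qi / b) for qi in q))

    def lmsr_prices(q, b=100.0):
        # Instantaneous price of each outcome; prices always sum to 1
        # and can be read as the market's aggregate probability estimate.
        z = sum(math.exp(qi / b) for qi in q)
        return [math.exp(qi / b) / z for qi in q]

    def buy(q, outcome, shares, b=100.0):
        # A trader pays C(q') - C(q) to move the market; if the backed
        # outcome occurs, each share pays out 1, rewarding good forecasts.
        new_q = list(q)
        new_q[outcome] += shares
        return new_q, lmsr_cost(new_q, b) - lmsr_cost(q, b)

    # Two outcomes (say, "warmer" vs. "not warmer"), no trades yet:
    q = [0.0, 0.0]
    print(lmsr_prices(q))      # [0.5, 0.5] — uninformative prior
    q, paid = buy(q, 0, 50.0)  # an expert backs outcome 0 with 50 shares
    print(lmsr_prices(q))      # price of outcome 0 rises above 0.5
    ```

    The design answers part of the question: the market does not require any single expert to be trusted, because traders who are wrong lose the money they paid in, and the price aggregates whatever information survives that filter.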
