Consensus Report on North American Climate Extremes

NOAA has released a well-manicured and comprehensive report on observed and conjectured changes in North American weather and climate extremes.
[report cover image]
The Final Report of CCSP 2008 provides an up-to-date scientific collation of many peer-reviewed studies along with a consensus interpretation, much like the UN IPCC AR4 reports. Some of the main findings are summarized in the handy “brochure” provided on the website. I quote here from the NOAA press release:

* Abnormally hot days and nights, along with heat waves, are very likely to become more common. Cold nights are very likely to become less common.
* Sea ice extent is expected to continue to decrease and may even disappear in the Arctic Ocean in summer in coming decades.
* Precipitation, on average, is likely to be less frequent but more intense.
* Droughts are likely to become more frequent and severe in some regions.
* Hurricanes will likely have increased precipitation and wind.
* The strongest cold-season storms in the Atlantic and Pacific are likely to produce stronger winds and higher extreme wave heights.

Along with attribution of the above observed changes to human activity, the report provides a likelihood estimate of future changes. Based upon model projections and expert judgment, it goes without saying that it is “very likely” that the extremes will continue into the future.

From the press release on the NOAA website, report co-chair Tom Karl of NCDC explains the motivation for this report and goes on to answer the age-old question: is this flood or rain shower or hurricane caused by global warming? It is usually stated matter-of-factly that an individual weather event cannot be attributed to global warming per se. However, it is likely that with global warming we will see more of these events. Karl says as much:

This report addresses one of the most frequently asked questions about global warming: what will happen to weather and climate extremes? This synthesis and assessment product examines this question across North America and concludes that we are now witnessing and will increasingly experience more extreme weather and climate events.

This is a landmark document coming from NOAA, which has been lambasted in the past for allegedly censoring or silencing its scientists. Yet it is an amalgamation of differing viewpoints on such issues as hurricanes and climate change, the obvious hot-button concern going forward into the 2008 Atlantic hurricane season. With the terrible Midwest/Iowa flooding (of a magnitude not seen since 1993) ongoing, the report will get plenty of publicity, in the same way that Emanuel’s 2005 Nature paper did after Hurricane Katrina. However, before attributing all observed phenomena to unnatural climate changes, we must not forget that natural climate variations exist and generate extremes all the time including plenty of weather systems. For instance, the tornado numbers as well as the Midwest flooding were largely expected from the record La Nina conditions seen in late 2007 to early 2008. With the continued negative values of the Pacific Decadal Oscillation (PDO) and large uncertainty in future ENSO conditions, natural climate variations are providing plenty of climate extremes all on their own.

[table not reproduced]

136 Comments

  1. deadwood
    Posted Jun 19, 2008 at 10:27 PM | Permalink

    I hope their certainty on other matters is better than it is for ENSO.

  2. SteveC
    Posted Jun 19, 2008 at 10:43 PM | Permalink

    Note from the graph on extreme rainfall events:
    “Figure 4 Increase in the amount of daily precipitation over North
    America that falls in heavy events (the top 5% of all precipitation
    events in a year) compared to the 1961-1990 average. Various emission
    scenarios are used for future projections*. Data for this index at the
    continental scale are available only since 1950.”

    Data is only available since 1950, so why cherry-pick the 1961-1990 average?

  3. crosspatch
    Posted Jun 19, 2008 at 11:06 PM | Permalink

    “why cherry pick the 1961-1990 average”

    NOAA likes 30-year periods for some reason.

  4. Nylo
    Posted Jun 20, 2008 at 12:57 AM | Permalink

    “For instance, the tornado numbers as well as the Midwest flooding were largely expected from the record La Nina conditions seen in late 2007 to early 2008”

    I don’t understand why you say “record La Niña conditions”. This La Niña is not weak at all, but it is not especially strong either and has reached no record for La Niña conditions. Even the previous La Niña in 1999-2000 was stronger.

    ryanm: go to any variety of ENSO sites on the web like MEI and compare the strength of the previous La Niña with historical events

  5. John A
    Posted Jun 20, 2008 at 2:52 AM | Permalink

    I must be getting very sad in my old age or just delirious with the ear infection I’ve got at the moment, but when I saw this [ENSO forecast graph, not reproduced]

    …I LOL’d

    It’s a sad day for science when groups of scientists can make a statement on climate and only their friends and acolytes think it’s worth the electrons used to produce it.

    So I produced this in tribute to ensemble climate model predictions everywhere.

  6. rwnj
    Posted Jun 20, 2008 at 3:32 AM | Permalink

    These forecasts are short term. Are they specific enough to be validated in the next year?

  7. Phillip Bratby
    Posted Jun 20, 2008 at 3:37 AM | Permalink

    I counted two “very likely”, one “expected”, one “may” and four “likely” in the first quote. Now that’s what I call good science! Is there a scientific definition of what these terms mean? Is there a probability plus uncertainty associated with each term and if so what is it?

  8. MarkW
    Posted Jun 20, 2008 at 4:30 AM | Permalink

    The Iowa floods were a lot worse than the 1993 floods.
    For example, here in Cedar Rapids, the Cedar River’s normal level is around 12 feet. The previous record flood was 1993, when the water hit 20 feet. Last Friday morning, the water hit 31.5 feet. This flood completely covered the 500-year flood plain. I’m fine, the water didn’t come within several miles of my place, but I have several friends whose homes are not going to be salvageable, and another who lost his job when his place of employment was destroyed.
    In many buildings along the river, the water reached the second floor.
    They’re expecting the water to drop back to 20 feet sometime today. It’s not supposed to drop below flood stage until sometime next week.

    I never knew this before, but they have been warning people returning to their homes to have the area inspected before pumping out their basements. Apparently, if the ground surrounding your home is too saturated, the hydrostatic pressure of all that water can collapse your basement walls if you pump the water out too fast.

  9. EW
    Posted Jun 20, 2008 at 5:00 AM | Permalink

    MarkW

    Apparently, if the ground surrounding your home is too saturated, the hydrostatic pressure of all that water can collapse your basement walls if you pump the water out too fast.

    Exactly. In the Czech floods in 2002, the basement of the governmental building my husband works in was filled with water. They started to pump it out, but to no avail. The building sits on the bank of the river Vltava, so the basement water was there to stay until the water in the surrounding ground drained back to the river.

  10. Michael Smith
    Posted Jun 20, 2008 at 5:34 AM | Permalink

    Well, at least the authors of this “SAP” are up front about how they came to their conclusions. In the synopsis, they tackle the issue of “uncertainty”:

    In doing any assessment, it is helpful to precisely convey the degree of certainty of important findings. For this reason, a lexicon expressing the likelihood of each key finding is presented below and used throughout this report. There is often considerable confusion as to what likelihood statements really represent. Are they statistical in nature? Do they consider the full spectrum of uncertainty or certainty? How reliable are they? Do they actually represent the true probability of occurrence, that is, when the probability states a 90% chance, does the event actually occur nine out of ten times?

    Good questions indeed. But after acknowledging that these are important issues, they immediately tell us not to expect any answers.

    It is important to consider both the uncertainty related to limited samples and the uncertainty of alternatives to fundamental assumptions. Because of these factors, and taking into account the proven reliability of weather forecast likelihood statements based on expert judgment, this SAP relies on the expert judgment of the authors for its likelihood statements.

    Translation: Those questions don’t apply because we are relying on experts.

  11. Steve Geiger
    Posted Jun 20, 2008 at 6:36 AM | Permalink

    MarkW – that sounds terrible for Cedar Rapids. My wife’s aunt lives there and apparently her residence is OK but her church completely flooded. Do you know, though, if stages have exceeded the ’93 levels in the majority of locations? About two days ago I was looking at the USACE web site and at that time most records along the different midwest watersheds were still held by the 93 floods. Anyhow, our thoughts go out to those affected by the current flooding.

  12. MarkW
    Posted Jun 20, 2008 at 6:44 AM | Permalink

    All time records have been set all over the state. It doesn’t matter which river either. Cedar Falls and Waterloo were hard hit. Iowa City has been shut down all week. The University of Iowa was particularly hard hit, as the Iowa River goes right through the campus. Many small towns are completely wiped out. Some friends live in Palo, and every single building in that town is flooded.

    Now the water has gotten to the Mississippi. I’ve seen reports this morning that records are being set in St. Louis, with many dikes collapsing.

  13. Joe Black
    Posted Jun 20, 2008 at 7:01 AM | Permalink

    Note that as any complex, chaotic system is observed over time, it is quite likely that new “record” extremes will be frequently observed, likely without bounds.
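
    A quick way to see this is to count record highs in a simulated series with no trend at all: for an independent, identically distributed series, the expected number of records after n observations is the harmonic sum 1 + 1/2 + … + 1/n (roughly ln n), so new records keep arriving indefinitely, just ever more slowly. A minimal sketch, with purely synthetic data:

```python
import numpy as np

def count_records(series):
    """Count how many values set a new running maximum."""
    records, running_max = 0, -np.inf
    for x in series:
        if x > running_max:
            records, running_max = records + 1, x
    return records

rng = np.random.default_rng(0)
n_years, n_trials = 150, 2000

# Stationary (trend-free) synthetic "annual extreme" series
counts = [count_records(rng.normal(size=n_years)) for _ in range(n_trials)]

expected = sum(1.0 / k for k in range(1, n_years + 1))  # harmonic number, ~ln(n)
print(f"simulated mean number of records in {n_years} yr: {np.mean(counts):.2f}")
print(f"theoretical expectation: {expected:.2f}")
```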

  14. counters
    Posted Jun 20, 2008 at 7:05 AM | Permalink

    John A, what are you attempting to illustrate with that graph? If it’s a lack of consensus among the models, think about what you’re showing – models of a climate oscillation with understood effects but less-understood causes. Trying to extrapolate from the lack of consensus among El Nino forecasts to lack of consensus for 100 year climate models is extreme, to say the least; they’re completely different, and the “model consensus” comes from the fact that they all tend to converge on the same trend, not necessarily precisely the same values.

    BTW, since John neglected to cite the graph, you can find it on IRI’s page here, along with some interesting (although apparently extraneous considering John’s omission of them) tidbits: “The following graph [John’s] and table show forecasts made by dynamical and statistical models for SST in the Nino 3.4 region for nine overlapping 3-month periods. Note that the expected skills of the models, based on historical performance, are not equal to one another. The skills also generally decrease as the lead time increases. Thirdly, forecasts made at some times of the year generally have higher skill than forecasts made at other times of the year–namely, they are better when made between June and December than when they are made between February and May. Differences among the forecasts of the models reflect both differences in model design, and actual uncertainty in the forecast of the possible future SST scenario.”

    Furthermore, check out the historical performance of those models:

    Seems they’ve done pretty well at nailing the trend in the past… Perhaps we should read the predicted trend for the future, namely that it will remain roughly constant at an SST anomaly around 0 degrees, rather than discrediting them altogether?

  15. jryan
    Posted Jun 20, 2008 at 7:16 AM | Permalink

    #15 – When you consider that the argument is over an observed 0.63 C increase in global temperature, that list of models stops looking accurate… or accurate enough to determine fractions of a degree.

    They also all trend higher than the observed temperature.

  16. counters
    Posted Jun 20, 2008 at 7:31 AM | Permalink

    jryan,

    A systematic bias in magnitude is one of the results of running certain types of models. If the trends match, though, then the models are still doing something right, although obviously overestimating certain factors. It’s all about the trends; there are no concrete predictions for what the “future temperature” will be – just wide error bars for possible temperatures based on the derived trend. One should never read too much into those maxes and mins, but should always be concerned with the trend which leads to them.

    BTW, just to add – the graph that John and I each discuss is not a global temperature… it’s an SST forecast based solely on El Nino prediction models.

  17. Kosmos
    Posted Jun 20, 2008 at 7:43 AM | Permalink

    We have The Old Farmer’s Almanac and the Canadian Almanac; now we will have the NOAA Almanac…

    K.

  18. James Chamberlain
    Posted Jun 20, 2008 at 8:16 AM | Permalink

    Counters, I’d say you cheated a bit with your plots there. The implication from your post is that the mass of points shown on your plot comes from forecasts that ALL start on Sept 6. This is not the case. It looks as if your forecasts start at any and all given months, which is how NOAA updates their ENSO plots anyway, as far as I know. If the forecasts start at or near the given current month, where the forecasters know the conditions at that time, of course the mass of data will look like a trend that follows the actual data that occurs(ed).

    I agree that the forecasts are better than random noise, but they are not as good as the mass tracking of points in your accumulation of plots suggests, either.

  19. counters
    Posted Jun 20, 2008 at 8:31 AM | Permalink

    James Chamberlain,

    The graphs I referenced aren’t my own… they’re straight from the IRI website. Your point is precisely one of the reasons why John’s misconstruing of the graph is inaccurate – the models are not all built equally, built on the same data, or even started at the same point in time. That the models vary so dramatically yet still shape the same general trend is an intriguing thing.

    A caveat, though – starting a model right at the current point in time isn’t the best way to go because you need to account for model spin-up and allow things to equilibrate. I don’t know if this is a problem with the ENSO models referenced here, but I do know it is a problem with climate models.

    I think the model data speaks for itself, and all critical eyes will immediately recognize the vast shortcomings of the ENSO models here as a litmus for predicting exactly what will happen. At the same time, however, it provides pretty solid evidence (based on past matchings) on what we can expect the range of SST anomalies to be due to El Nino. In no way can one extrapolate from these noisy model data that GCM’s are flawed, as John did.

  20. JP
    Posted Jun 20, 2008 at 8:42 AM | Permalink

    The problem is that no organization can with any precision forecast the strength or the duration in changes to ENSO. The last El Nino event (2006) crept up on many people. The folks at Hadley and Hansen himself expected it to last through a good portion of 2007 and give the globe record warm temperatures. Instead, ENSO went neutral in early 2007, before the La Nina event took hold in the summer of 2007.

    Another thing that should be noticed is how AGW theory is making its way into operational weather forecasting. AGW was until recently the purview of Climate Science, which looked at the “big picture”. Many people have been lectured by climate scientists to ignore the noise and look at the trends. Are the Alarmists now in the “noise” business?

  21. Kenneth Fritsch
    Posted Jun 20, 2008 at 8:54 AM | Permalink

    When a group of mostly like-minded scientists, from both technical and policy mind sets, collaborate to publish this review, would one expect anything but what the scientists’ general comments contained? If we have learned anything from analyzing these papers/reviews here at CA, it is that we really do not gain much new information from these broad and general comments from the involved scientists about these papers. One needs to do some rather detailed analysis of the specific issues and then make one’s own judgment on the validity and certainty of such statements.

    The general comments are a bit of a giveaway when all of these aspects of future warming are predicted to be nearly 100% bad. It indeed makes the assumption that the climate average of the present and near past was somehow nearly perfect and that a degree or two of difference will turn almost all the climate effects into detrimental events. Tom Karl’s comment that we can expect more extreme events with this predicted future climate is right out of any marketing promotional playbook for selling immediate mitigation of AGW. That climate model predictions of extreme events are very uncertain is almost universally agreed, even though such events may come out of some model predictions; this makes the extreme reference an obvious call that has potentially more appeal to the voting public for immediate mitigation than would a matter-of-fact prediction that it will probably get a little warmer in the future.

    And finally, I think the use of the term likelihood in these predictions is a bit disingenuous, as its use gives the matter the character of an objective measure when in fact it is almost always determined by a show of hands of the prevailing experts gathered for a particular review. Why not simply state, as a matter of scientific transparency, who voted how in these matters?

  22. Posted Jun 20, 2008 at 8:55 AM | Permalink

    Another perspective on the CCSP report:

    http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001462what_the_ccsp_extrem.html

  23. Posted Jun 20, 2008 at 9:07 AM | Permalink

    Apparently, if the ground surrounding your home is too saturated, the hydrostatic pressure of all that water can collapse your basement walls if you pump the water out too fast.

    This is also why you never drain a swimming pool after a rain storm. As you drain it, excess water in the ground will float the pool, breaking the plumbing and making the pool useless for anything but a skate board park.

  24. counters
    Posted Jun 20, 2008 at 9:10 AM | Permalink

    From JP:

    Another thing that should be noticed is how AGW theory is making its way into operational weather forecasting. AGW was until recently the purview of Climate Science, which looked at the “big picture”. Many people have been lectured by climate scientists to ignore the noise and look at the trends. Are the Alarmists now in the “noise” business?

    I’m not sure quite what you’re getting at. Are you referencing the use of ensemble models in short-term weather forecasting? Although I’m not an operational meteorologist, I participate in forecast competitions at school and elsewhere, and let me tell you – ensemble models are really helpful, and have been for some time! I usually begin my forecasting by doing a surface analysis (I used to HATE isoplething, but having it done for me makes it a bit more fun) and an upper air analysis to get the “big picture,” as you put it, of the situation. From there, depending on what I’m forecasting, I make a “best guess” of what I think will happen, based on pattern recognition, and then jump to certain models. I like to see what all the different ideas are, and then if there is a pattern in the models that I failed to note, I’ll analyze whether or not to include those details in my forecast. This seems to be a widely used technique among meteorologists I’ve met.

    I’m sorry you wish to characterize me and other AGW proponents as “alarmists,” but we’ve always been in the trend business. Climatology is inherently a statistical science, as it is the study of a complex, chaotic system for which we have an incomplete physical modeling and incomplete physical data. From day one it has been about the probability, the standard deviations, the variance, and above all, the noise. The trends are important because our models don’t produce the temperature for May 2073 – they produce a possible lead up to the temperature for May 2073, and you must take all the data into account when determining its accuracy.

    Climate models, like the ensemble weather models, are all basically just perturbations of the same general idea. A tenet of chaos theory is that of attractors, and the model’s output – based on our current understanding of the atmosphere – tend to settle into a general attractor resembling a trend of temperature increase. They’re very useful as long as we recognize the caveats.

  25. Posted Jun 20, 2008 at 9:15 AM | Permalink

    Roger #22, I like your perspective. The report’s hurricanes section is a little too dependent upon one specific study…

    Have you examined the comments by the peer-reviewers located here? Second draft review comments

    I do not see the final review comments located on the website as of yet.

    Page 19 onwards of the PDF has an illuminating discussion on the Hurricanes debate that hardly exudes certainty. Perhaps Tom Chalko can figure out this mess for us while taking a break from his anti-gravity research.

  26. M.Villeger
    Posted Jun 20, 2008 at 9:15 AM | Permalink

    Glossy brochures cannot substitute for intelligent reasoning: read Marcel Leroux, and then the NOAA compilation might have some interest as data rather than interpretation.

  27. Kenneth Fritsch
    Posted Jun 20, 2008 at 9:20 AM | Permalink

    Re: #19

    I think the model data speaks for itself, and all critical eyes will immediately recognize the vast shortcomings of the ENSO models here as a litmus for predicting exactly what will happen. At the same time, however, it provides pretty solid evidence (based on past matchings) on what we can expect the range of SST anomalies to be due to El Nino. In no way can one extrapolate from these noisy model data that GCM’s are flawed, as John did.

    Counters, I believe you make a good case for why scientists (perhaps with some self interest in the matter) can look at these spaghetti graphs and conclude what they do in matters of expert judgments about future predictions. You do note that not all models are equal or processed equally or optimally. So what is one, who looks at the spaghetti and expects some objective measure of predictive skill, to do?

    One major concern of mine would be to plot a number of model results against the observed data in one instance and note that a particular subset of these results were fairly close to the observed data and then in another instance do the same and point to a different subset of model results being close to the observed data.

    Nothing short of a detailed analysis of each model is going to shed much light on this matter.

  28. Posted Jun 20, 2008 at 9:21 AM | Permalink

    From NOAA: [graphic of the report’s likelihood terminology, not reproduced]

  29. counters
    Posted Jun 20, 2008 at 9:33 AM | Permalink

    Kenneth,

    Nothing short of a detailed analysis of each model is going to shed much light on this matter.

    Precisely. Which is why it’s very frustrating discussing models on blogs because the data is so complex with so many subsets and mini-trends going on that a set of data can be cherry picked to massage any conclusion out of the whole. The diverse, emergent behavior of the model output fascinates me to no end, which is why I’m trying to push my education in the direction where I can be directly studying it.

    I just wish that commentors on the wide variety of climate blogs out there understood even the most trivial, basic concepts of chaos theory so that they could understand why a) model data should always be taken with an educated grain of salt, and b) there are solid, fundamental reasons why the models are imperfect.

  30. James Chamberlain
    Posted Jun 20, 2008 at 9:45 AM | Permalink

    Counters,

    Agreed, it’s not your data or plot, and I realized that right after I posted. Sorry for pinning that on you mistakenly.

    James

  31. counters
    Posted Jun 20, 2008 at 9:49 AM | Permalink

    No problem, James!

    I’ve rarely if ever commented on ClimateAudit in particular (although I read it daily), so am I just seeing things, or does the order of comments seem to change? I swear that I see posts in between my responses to others that I don’t remember seeing before… perhaps I should just go outside and watch the thunderstorms trying to develop around my county.

  32. henry
    Posted Jun 20, 2008 at 10:05 AM | Permalink

    counters said:

    I’m sorry you wish to characterize me and other AGW proponents as “alarmists,” but we’ve always been in the trend business.

    Trend goes up, supports the “theory”. Trend goes down, it’s due to noise…

    “Climatology is inherently a statistical science, as it is the study of a complex, chaotic system for which we have an incomplete physical modeling and incomplete physical data. From day one it has been about the probability, the standard deviations, the variance, and above all, the noise.”

    Then why have we heard some climate scientist say “I am not a statistician”, or why do climate scientists refuse the help of established statisticians when working with the data?

    “The trends are important because our models don’t produce the temperature for May 2073 – they produce a possible lead up to the temperature for May 2073, and you must take all the data into account when determining its accuracy.”

    And here again, if we could GET the data from the climate scientists, or be sure that the data already used didn’t keep changing, or if we could get the code, we could also determine the accuracy of their model results.

    I like to see what all the different ideas are, and then if there is a pattern in the models that I failed to note, I’ll analyze whether or not to include those details in my forecast. This seems to be a widely used technique among meteorologists I’ve met.

    Climate science appears to be different. There, they’ll look at all the different ideas, and if they show an upward trend, it’s accepted. If it’s a lower trend, it’s rejected. This seems to be a widely used technique among Climate Scientists…

  33. Pofarmer
    Posted Jun 20, 2008 at 10:06 AM | Permalink

    All time records have been set all over the state.

    Since when?

  34. Pofarmer
    Posted Jun 20, 2008 at 10:07 AM | Permalink

    When NOAA can give me an accurate 10-day forecast, maybe I’ll believe them on stuff more than 10 days out. Everybody was calling for this summer to be hotter and drier than last summer in the Midwest.

    Guess what?

  35. Paul Linsay
    Posted Jun 20, 2008 at 10:29 AM | Permalink

    Counters

    Climatology is inherently a statistical science, as it is the study of a complex, chaotic system for which we have an incomplete physical modeling and incomplete physical data. From day one it has been about the probability, the standard deviations, the variance, and above all, the noise.

    Thanks for admitting that. Now please explain why anyone should take 30- and 100-year predictions seriously with “incomplete physical modeling and incomplete physical data” in mind.

    Climate models, like the ensemble weather models, are all basically just perturbations of the same general idea. A tenet of chaos theory is that of attractors, and the model’s output – based on our current understanding of the atmosphere – tend to settle into a general attractor resembling a trend of temperature increase.

    It’s also true that a chaotic system can have multiple attractors. A bump here, a perturbation there, and suddenly you’re on a different attractor with different properties. Low-dimensional systems can have two or three that coexist simultaneously. The climate, with millions of dynamical variables, can have a huge number. How do you know which one you’re on? With time scales of hundreds and even thousands of years, how can you even map out one attractor for the climate?
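
    The coexistence of attractors is easy to demonstrate with a toy system that has nothing to do with a GCM: a damped particle in a double-well potential has two point attractors, and which one finally captures the trajectory depends sensitively on the initial push. A minimal sketch (illustrative only):

```python
import numpy as np
from scipy.integrate import odeint

def double_well(state, t, damping=0.15):
    """Damped motion in the double-well potential V(x) = -x**2/2 + x**4/4.
    The system has two point attractors, at x = -1 and x = +1."""
    x, v = state
    return [v, x - x**3 - damping * v]

t = np.linspace(0.0, 300.0, 6001)
for v0 in np.arange(0.1, 1.3, 0.1):
    # Start at the barrier top (x = 0) with a slightly different initial velocity
    final_x = odeint(double_well, [0.0, v0], t)[-1, 0]
    print(f"initial push v0 = {v0:.1f}  ->  settles near x = {final_x:+.1f}")
```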

  36. stan
    Posted Jun 20, 2008 at 10:43 AM | Permalink

    28,

    Now that’s funny. I wonder how many advanced degrees it took to make that graph.

    Has the graph been peer-reviewed?

  37. jc-at-play
    Posted Jun 20, 2008 at 11:58 AM | Permalink

    Re: #12

    If you go to http://www.weather.gov/ahps/, you can compare current flood crests with the all-time records. In brief, Mark W is largely correct in his statement that “All time records have been set all over the state [of Iowa].” However, contrary to his statement that “records are being set in St. Louis”, the current flood stage in St. Louis is only 37.08 feet, which is FAR short of the record crest of 49.58 feet. In fact, along the Mississippi River, NONE of the 2008 flood crests are higher than the records set in 1993.

  38. anna v
    Posted Jun 20, 2008 at 12:42 PM | Permalink

    counters 24

    A tenet of chaos theory is that of attractors, and the model’s output – based on our current understanding of the atmosphere – tend to settle into a general attractor resembling a trend of temperature increase.

    I find that it is easy for people modeling climate to hand-wave chaos about, but I have not seen many hands-on papers using chaos theory.

    Actually I have seen one, by Tsonis et al,

    http://www.agu.org/pubs/crossref/2007/2007GL030288.shtml

    After all, what is chaos theory but a method to find solutions of coupled nonlinear many-variable differential equations? I agree with Linsay 35. Where are the guarantees that multiple attractors are not lurking in these GCMs?

    Where are the guarantees that all of the phase space of the variables and the parameter space has been covered? Not in the IPCC reports with the spaghetti diagrams.

    All this would make for interesting discussions among scientists. Unfortunately, though, this hand-waving has become the bible for some politicians and all environmentalists, and countries are being pushed to destroy their economies just in case these apocalyptic projections are right.

    Fortunately, the data coming in show that other attractors are at work than the ones the IPCC patronises.

    Let’s vote on attractors, then. (sarcasm on)

  39. W F Lenihan
    Posted Jun 20, 2008 at 12:47 PM | Permalink

    The current flooding in the mid-west and Mississippi Basin is a form of the classic rain-on-snow event we have every winter in the Pac NW. This winter’s snow accumulations were record-breaking. Snow melting was delayed by a prolonged, cold spring. Add heavy rains to the melt water and you get a flood bonanza.

    It is noteworthy that 500- or 100-year floods can occur as frequently as annually. The reference to years describes an annual probability of exceedance, not a fixed schedule. It all depends upon the weather.
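
    For anyone who wants the arithmetic behind the “N-year” label: it means an annual exceedance probability of 1/N, so (assuming independence from year to year) the chance of at least one such flood over a span of years is 1 - (1 - 1/N)^span, which is far from negligible. A quick sketch:

```python
# Probability of at least one "N-year" flood in a span of years,
# assuming independent years, each with exceedance probability 1/N.
def prob_at_least_one(return_period_years, span_years):
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** span_years

for n in (100, 500):
    for span in (1, 30, 100):
        p = prob_at_least_one(n, span)
        print(f"{n}-year flood, {span}-year span: P(at least one) = {p:.1%}")
```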

  40. Scott Lurndal
    Posted Jun 20, 2008 at 1:14 PM | Permalink

    One must also consider, when looking at the 2008 flood events, the impact that amelioration efforts after the 93 floods had on the 2008 floods. For example, if levees downstream were raised after 93, wouldn’t that cause potentially higher flood levels upstream?

  41. Kenneth Fritsch
    Posted Jun 20, 2008 at 1:17 PM | Permalink

    Re: #28

    Thanks, Ryan, for the graphical presentation of the likelihood levels; however, my dilemma is related to what that means in terms of a show of hands by the gathered experts.

    Does each scientist give a level of likelihood, with the final tally then determined by a mean or median of the individual votes? Or, alternatively, are the votes weighted a priori, so that the individual scientists perceived to have more familiarity with the topic up for voting carry a more highly weighted vote? Or does the final arbiter merely get a good feel for the likelihood on the particular issue at hand by knocking it around at the conference table?

    I have never seen it revealed how the likelihood has been determined in any given instance, so maybe it is by a secret handshake. Seriously, I would like to know how the likelihood is determined – and this comes from one who attempted to get an answer on this from the AR4 authorities.

  42. Kenneth Fritsch
    Posted Jun 20, 2008 at 1:28 PM | Permalink

    Re: #40

    One must also consider, when looking at the 2008 flood events, the impact that amelioration efforts after the 93 floods had on the 2008 floods. For example, if levees downstream were raised after 93, wouldn’t that cause potentially higher flood levels upstream?

    Your comment touches on the long-standing criticism of the Army Corps of Engineers’ claim of making the Mississippi River plain “flood free” with a system of dams, levees and locks, when in fact it may have decreased annual flooding frequency but turned those floods into major disasters when a tall levee was breached. A less than totally forthcoming or forgetful scientist might mistake this situation for a higher frequency of extreme climate events due to anthropogenic causes – even though the cause here would indeed be anthropogenic.

  43. MarkW
    Posted Jun 20, 2008 at 1:33 PM | Permalink

    There wasn’t any snow in Iowa when these rains started; it had all melted back in March/April. However, the heavy snow pack and more frequent than usual spring rains meant that the ground was saturated. So when the heavy rains came, all of it ended up in the rivers.

  44. MarkW
    Posted Jun 20, 2008 at 1:34 PM | Permalink

    I could have sworn I heard the newsman talking about record water in Missouri. My bad.

  45. Tolz
    Posted Jun 20, 2008 at 2:04 PM | Permalink

    #22 (Roger Pielke, Jr.)
    From your posting on the NOAA report on your site:
    “Finally, let me emphasize that anthropogenic climate change is real, and deserving of significant attention to both adaptation and mitigation”

    As merely an interested reader, I find this editorial comment more provocative than your analysis of the report itself. While the CA site is mainly about testing various claims/studies regarding AGW rather than advocating for a particular conclusion (other than that the science is definitely NOT “settled”), there sure are a lot of competent posters here on a whole range of AGW factors, and the conclusion I come to from being an avid reader here is that anthropogenic climate change, particularly with regard to human CO2 emissions, isn’t proved, isn’t significant, and isn’t deserving of the gargantuan bureaucratic “attention” it is getting.
    As someone who respects your guest comments here, as well as elsewhere, I’d love to have you expand on that comment. Thanks.

  46. Posted Jun 20, 2008 at 2:08 PM | Permalink

    #41, Kenneth…

    I will endeavor to find out how these likelihood estimates were determined. My guess is that it is indirectly buried within the peer-review comments. I put the likelihood of this at 50%.

    I think the chance that there will be climate extremes in the future is 100%, since the Earth’s atmosphere-ocean system is constantly evolving. But backing up — at what point did global warming begin affecting the climate/weather extremes? I seriously ask what date is prior to the effects of human induced climate change? If the changes are accelerating and trending in one direction, where do we begin this trend? 1950, 1970, 1990? If the 2008 Midwest flooding is related to climate change, what about 1993? How much of this is land-use change? If Hurricane Katrina 2005 was an example of human climate tampering, what about Andrew 1992 or Gilbert 1988 or Carla 1961?

  47. Kenneth Fritsch
    Posted Jun 20, 2008 at 3:37 PM | Permalink

    Re: #46

    How much of this is land-use change? If Hurricane Katrina 2005 was an example of human climate tampering, what about Andrew 1992 or Gilbert 1988 or Carla 1961?

    Ryan, I think the point I read from Pielke Jr’s site was that the report under discussion here talked about the changing cost of extreme events without referencing the details that could affect that cost, like population density changes, inflation factors and other non-climate-related anthropogenic factors, like the Mississippi floods over tall levees and Katrina, at a Cat 3 level, breaching a levee that the Army Corps of Engineers proclaimed could withstand it – if it had been constructed properly.

    It would seem to me that fossil fuel use gets beat up rather badly and often as an anthropogenic cause of adverse events and some of these other anthropogenic causes are let off the hook rather easily. Makes one wonder why this would be.

  48. maksimovich
    Posted Jun 20, 2008 at 3:42 PM | Permalink

    There is a significant distance, in both understanding of ENSO and of quasi-periodic “states”, between the “consensus” and what is seen in the literature from those one could describe as “beautiful minds” (those who are thinkers and innovators), as opposed to the “responders” whose scientific expertise is in a constant feedback loop of citing and peer reviewing each other’s papers, similar to some Appalachian hillbilly “family”.

    Vladimir Arnold differentiated this as maniacs vs. geniuses, the former producing copious papers about nothing, the latter providing innovative evolution of the scientific problems.

    Here on ENSO we see an excellent example from some “beautiful minds”. The geometric distance here is apparent.

    Cryptically speaking, here one must climb the devil’s staircase to see how and where the ball falls, he says tongue in cheek.

    Ghil et al 2008
    Abstract. We consider a delay differential equation (DDE) model for El Niño Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. We perform stability analyses of the model in the three-dimensional space of its physically relevant parameters. Our results illustrate the role of these three parameters: strength of seasonal forcing b, atmosphere-ocean coupling κ, and propagation period τ of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are separated by a sharp neutral curve in the (b, τ) plane at constant κ. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling κ increases. In the unstable regime, spontaneous transitions occur in the mean “temperature” (i.e., thermocline depth), period, and extreme annual values, for purely periodic, seasonal forcing. The model reproduces the Devil’s bleachers characterizing other ENSO models, such as nonlinear, coupled systems of partial differential equations; some of the features of this behavior have been documented in general circulation models, as well as in observations. We expect, therefore, similar behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.

    Click to access npg-15-417-2008.pdf
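
    For readers curious what a model of this type does, below is a minimal fixed-step integration of a delayed-oscillator ENSO toy model of the kind the abstract describes (delayed negative feedback plus seasonal forcing). The specific equation form and parameter values here are illustrative assumptions, not the configuration analyzed in the paper:

```python
import math

# Toy delayed-oscillator ENSO model (illustrative form, not from the paper):
#   dh/dt = -a * tanh(kappa * h(t - tau)) + b * cos(2*pi*t)
# where h stands in for a thermocline-depth anomaly and t is in years.
a, kappa, tau, b = 1.0, 10.0, 0.5, 1.0
dt = 0.001
n_steps = int(50 / dt)            # integrate 50 "years"
delay_steps = int(tau / dt)

h = [0.1] * (delay_steps + 1)     # constant history on [-tau, 0]
for i in range(n_steps):
    t = i * dt
    dh = -a * math.tanh(kappa * h[-delay_steps - 1]) + b * math.cos(2 * math.pi * t)
    h.append(h[-1] + dt * dh)

series = h[delay_steps:]          # values for t = 0 ... 50
per_year = int(1 / dt)
for yr in range(45, 50):          # annual extremes of the last few model years
    block = series[yr * per_year: (yr + 1) * per_year]
    print(f"year {yr}: max h = {max(block):+.3f}, min h = {min(block):+.3f}")
```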

  49. Eric (skeptic)
    Posted Jun 20, 2008 at 4:39 PM | Permalink

    Re 40: Like you said except downstream, not upstream. By not allowing flooding upstream, they make the water higher downstream.

  50. David Smith
    Posted Jun 20, 2008 at 7:11 PM | Permalink

    I read the tropical cyclone section and found it is the usual promotional piece by the usual people (Holland in this case). That makes it quite difficult to take the rest of the brochure seriously.

    But, I’m interested in the statements about extreme rainfall trends and will go back to the cited sources to see what they say.

  51. Posted Jun 20, 2008 at 7:16 PM | Permalink

    David, did you read through the 2nd draft review comments? Compare the document with what Chris Landsea objected to. The report is heavy on Holland, Webster, and Emanuel, but that is expected — they are the experts whose judgment is relied upon in these situations.

  52. MJW
    Posted Jun 20, 2008 at 9:24 PM | Permalink

    I was reading the review comments, and was particularly struck by one of the responses to a comment by Landsea questioning the certainty of the report’s link between anthropogenic warming and tropical cyclone changes.

    As an example, the results presented here suggest that (although not yet detected in observations) anthropogenic greenhouse gas forcing may have already caused hurricane core precipitation rates to increase by ~6% due to the 0.5 deg C long-term warming of tropical Atlantic and Gulf of Mexico surface waters, and attendant increased water vapor. While this may seem “tiny” to the reviewer, consider the plight of some New Orleans and Mississippi residents who were trapped between rising flood waters and the ceilings in their homes during Hurricane Katrina flooding. It is conceivable that in some cases, relatively small (~6%) increments of near-storm precipitation might have meant the difference between survival and drowning – a stark reminder of threshold effects.

    The children!!! Think of the children, you heartless SOB!!!

  53. Posted Jun 20, 2008 at 10:22 PM | Permalink

    #52, This is from Kevin Trenberth’s paper in which he came up with the 6% number for Katrina and peddled it in the media quite heavily. I will have to sift through the trashcan to come up with the particulars.

  54. MJW
    Posted Jun 20, 2008 at 11:03 PM | Permalink

    Thanks, ryanm. I was mostly struck by the emotional tone of the response. A heartrending tale of some poor soul going to a watery grave, while Dr. Landsea coldbloodedly minimizes the “tiny” changes to tropical cyclones. Tiny changes that JUST KILLED A HUMAN BEING!!!

  55. Posted Jun 20, 2008 at 11:27 PM | Permalink

    #54, I agree completely, and Landsea, who contributed greatly to the forecasting and warning of Katrina during the days prior to landfall, requires no such reminder of the risks due to hurricane flooding. This is a shameful approach that thankfully is relegated to a very small minority of the tropical cyclone research community.

    The contributions by anthropogenic vs. natural oscillations to changes in tropical Atlantic
    SSTs have been discussed by, e.g., Santer et al (2006), Mann and Emanuel (2006),
    Trenberth and Shea (2006) and Holland and Webster (2007). They all find a substantial
    anthropogenic influence that is now larger than that from natural oscillations (and this is
    consistent with the more general IPCC findings). Vimont and Kossin (2007) also
    acknowledge the potential influence of anthropogenic warming on the AMM.

    This statement is in response to another Landsea objection concerning the Atlantic Multidecadal Oscillation (AMO) and why the report goes for broke on the anthropogenic influences and discounts the natural variations. Sadly so few researchers can receive funding anymore to look at natural variations. The literature will simply be overwhelmed by global warming papers which will allow their authors to head influential Working Groups and produce more pretty pictured brochures.

  56. David Smith
    Posted Jun 21, 2008 at 8:08 AM | Permalink

    Ryan, I had planned to spend my Saturday morning reading the non-hurricane portions of the NOAA brochure. I’m interested in North American rainfall trends, particularly in intense precipitation.

    But, after reading Holland’s hurricane section and seeing its poor quality, it’s reasonable to conclude that the rest of the brochure is likely to be ideo-junk, too.

    So, I’ll look at more of the original papers on rainfall trends but not this NOAA thing. What a waste of public money.

    On the topic of rainfall and Katrina, Doppler estimates of rainfall in the named areas were six to twelve inches (isolated areas). A six percent increase in rainfall adds 0.35 to 0.70 additional inches of water. For perspective, compare that to the storm surge along the Mississippi coast of 25 feet (300 inches) and a New Orleans flooded (to the ceiling) house level of 80 inches. (The region is flat and the rainfall came quickly, so there’s no “valley accumulation” effect).

    People died in Katrina because of governmental (local, state, federal) ineptitude and individual failures to act responsibly. That’s where the focus belongs and not on conjecture about the impact of a half-inch of water.
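
    The scale comparison above is easy to reproduce; the inputs below are simply the figures quoted in the comment:

```python
# Rough scale comparison using the figures quoted above.
rain_low, rain_high = 6.0, 12.0      # Doppler-estimated storm-total rain, inches
increase = 0.06                      # hypothesized ~6% precipitation increment

extra_low, extra_high = rain_low * increase, rain_high * increase
print(f"extra rain from a 6% increase: {extra_low:.2f} to {extra_high:.2f} inches")

surge_inches = 25 * 12               # ~25 ft Mississippi-coast storm surge
house_flood_inches = 80              # ceiling-height flooding cited for New Orleans
print(f"storm surge: {surge_inches} in; house flooding: {house_flood_inches} in")
print(f"increment as a share of the house flooding: {extra_high / house_flood_inches:.1%}")
```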

  57. Craig Loehle
    Posted Jun 21, 2008 at 8:46 AM | Permalink

    I am stunned that there were only 4 reviewers of the CCSP report in the comments linked to above.

  58. Posted Jun 21, 2008 at 9:55 AM | Permalink

    #56, David, when I am not squirreling away time on the blogs, I research extratropical storms and especially those that rapidly develop called “bombs”. I was a little struck by some of the prognostications in that portion of the report which are based upon some fairly non-convincing modeling. Thus, since the hurricane stuff has been beaten to death, yet keeps getting back up, I also look forward to reading about other human-caused things like rain, wind, and temperature.

  59. David Smith
    Posted Jun 21, 2008 at 12:07 PM | Permalink

    Re #58 I’ll check the internet to see what’s publicly available on rainfall trends. I hope something is out there based on readily-available raw rainfall data.

  60. UK John
    Posted Jun 21, 2008 at 1:36 PM | Permalink

    This reminds me of the 2007 floods in the UK, which were blamed by all on AGW, but then later a cold, hard look by scientists found that it was just a bit wetter than normal.

    http://www.nerc.ac.uk/press/releases/2008/12-floods.asp

  61. streamtracker
    Posted Jun 21, 2008 at 4:36 PM | Permalink

    Ryanm says: “However, before attributing all observed phenomena to unnatural climate changes, we must not forget that natural climate variations exist and generate extremes all the time including plenty of weather systems.”

    Ok, but do those cycles explain these types of multi-decade trends?

    [two graphs, not reproduced]

    Just two of the many trends reported that can not be easily explained by ENSO or PDO.

    • An Inquirer
      Posted Sep 3, 2008 at 6:53 AM | Permalink

      Re: streamtracker (#61),
      Streamtracker, I read this thread only today, and I came across your graphs. Several months ago, I searched for comprehensive studies on the length of growing seasons in the United States. I did not find what I expected. As I grew up on a Minnesota farm in the 50s and 60s, we typically had a killing frost in mid September. Now, that frost typically comes in late September and sometimes October. I anticipated that there would be studies that verified my anecdotal observation in a comprehensive and defining manner. I did not find such studies. In fact, the studies that I did find were local studies, and many of them did not find the trend I anticipated. Therefore, I am most curious as to the source of your graph, what locations it measures, how it integrates different locations, and whether there is any impact from the frost-resistant hybrids that have been developed over the past few decades.

  62. Posted Jun 21, 2008 at 5:04 PM | Permalink

    #61, gee whiz, you nailed us on those two graphs. “Easily explained” is the key phrase there. I guess they are “more easily explained” by AGW than by something natural. But since you provide no context, a la a drive-by blogger, I guess we will go with your “easy explanation”. Way to go, Chalko.

  63. Posted Jun 21, 2008 at 5:21 PM | Permalink

    Re: #61

    Sources for those charts are necessary so they can be checked. Otherwise, they serve no useful purpose.

  64. Posted Jun 21, 2008 at 5:29 PM | Permalink

    Dear Counters:

    John A, what are you attempting to illustrate with that graph? If it’s a lack of consensus among the models, think about what you’re showing – models of a climate oscillation with understood effects but less-understood causes. Trying to extrapolate from the lack of consensus among El Nino forecasts to lack of consensus for 100 year climate models is extreme, to say the least; they’re completely different, and the “model consensus” comes from the fact that they all tend to converge on the same trend, not necessarily precisely the same values.

    BTW, since John neglected to cite the graph, you can find it on IRI’s page here, along with some interesting (although apparently extraneous considering John’s omission of them) tidbits: “The following graph [John’s] and table show forecasts made by dynamical and statistical models for SST in the Nino 3.4 region for nine overlapping 3-month periods. Note that the expected skills of the models, based on historical performance, are not equal to one another. The skills also generally decrease as the lead time increases. Thirdly, forecasts made at some times of the year generally have higher skill than forecasts made at other times of the year–namely, they are better when made between June and December than when they are made between February and May. Differences among the forecasts of the models reflect both differences in model design, and actual uncertainty in the forecast of the possible future SST scenario.”

    Well “Counters”, the reasons for my complete lack of genuflection before these model results are all related to the fact that for the past year I have been studying (and continue to study) mathematical modelling, starting with matrix algebra and moving on to the eigenvalue roots of first-, second- and higher-order differential equations together with their associated eigenvectors, and so on.

    So I totally LOL’d when I saw the characteristic results of extrapolation from a model or multiple models where the extrapolation shows different periodicities and a wide spread of possible results rendering the exercise fun but meaningless.

    You should read Pat Frank’s exposition on climate models and the limits of precision in a recent edition of Skeptic magazine.

    You see, increased knowledge of the underlying math causes me utter bewilderment that people give these model results the time of day.

    I still can’t believe that people are being paid good money to produce such trivially useless results that any math undergrad would laugh at.

  65. Alan S. Blue
    Posted Jun 21, 2008 at 5:50 PM | Permalink

    If you’re looking for confusing multi-decadal trends, go to the pre-1800 records. There you know humans aren’t producing nearly enough carbon dioxide to cause a whit of difference. (If the system is sensitive enough to be influenced by the non-industrialized output of less than a billion humans, then that same sensitivity today would be making a far larger impact.)

    So, explain the Little Ice Age. Explain the Medieval Warm Period. And there’s a long, long list of climatic changes with no particularly compelling explanations. Notice that according to several of the reconstructions of the LIA, we’re still recovering – leading to an underlying trend of global warming that isn’t anthropogenic. (Maybe one degree C/century.)

    The argument isn’t “ENSO answers everything!”, but closer to “Some models appear to dramatically underestimate ENSO’s impact”. And “If ENSO can produce a swing from +2.5C/c to near 0C/c, how big is the underlying trend exactly?”

  66. Posted Jun 21, 2008 at 6:36 PM | Permalink

    streamtracker 61, Actually, yes they do. Not just ENSO and PDO but NAO, AMO, IO and possibly others. This paper by A.A. Tsonis is an interesting read.

  67. David Smith
    Posted Jun 21, 2008 at 9:30 PM | Permalink

    I’m wondering if there is a time of observation bias (TOB) in the USHCN precipitation records.

    If so, and if it is of significant size and has not been removed, then the apparent trend in US heavy-rainfall days might be overstated.

    The temperature TOB is well-known and significant. If I recall correctly, temperature observations taken near the usual time of minimum temperature (sunrise) “split” the morning-low readings between two adjacent days, which on average understates average temperature. Daily observation times in the US coop network shifted towards morning in the 20’th century, creating a cool bias in the raw data.

    Rainfall, especially in warm weather, tends to be highest in the afternoon and lowest between midnight and sunrise. If there has been a movement towards reading rain gauges at sunrise, like the temperature gauges, then the record has shifted towards “splitting the minimum” and away from “splitting the maximum”. Since there is less splitting of the maximum, maximum rainfall events (eg, afternoon thunderstorms) are now being preserved in the record as a single rainfall event rather than being split over two adjacent days.

    This precipitation TOB, if it is real and of significant size and is not removed, could make it appear that heavy rain days are increasing when in fact there is no such trend.

    Perhaps this is well-known and has been adjusted for in the records, or perhaps it has been found to be insignificant or offset by other factors. I don’t know. I am looking for a paper that examines this question. If anyone knows of one, please post a link. Thanks.

  68. David Smith
    Posted Jun 21, 2008 at 10:12 PM | Permalink

    Below is a chart showing how summertime rain in the south and central US varies with time of day (and with day of week, which is interesting but not relevant to the immediate topic).

    Suppose that someone had been taking daily readings at 6PM (6PM is “18” on the x-axis and marked by a yellow line) and that the hot colors represent the heavy rainfall of an afternoon thunderstorm. The 6PM reading, taken while the afternoon rainstorm is in progress, would tend to split the thunderstorm into two parts, with one part recorded for today’s record and the remainder placed into tomorrow’s record. That would tend to understate afternoon rain events, especially if thunderstorms are something of a random event.

    Now switch the TOB to 6 AM (the green line at 30 on the x-axis). This tends to confine all of the rain measured during a thunderstorm to one daily report, rather than two. This switch would tend to increase the reporting of daily heavy-rainfall events.
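
    A small synthetic-data sketch of the splitting effect described in the last two comments; every number here is invented purely for illustration and says nothing about the size of any bias in the actual USHCN record:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, threshold = 3000, 1.0        # days simulated; "heavy rain" day threshold, inches

# Hourly rainfall: on ~30% of days a 3-hour storm occurs, centered in the afternoon
hourly = np.zeros(n_days * 24)
for day in range(n_days):
    if rng.random() < 0.30:
        start = int(np.clip(rng.normal(16, 2), 0, 21))   # storm start hour (local time)
        amount = rng.gamma(shape=2.0, scale=0.6)         # storm total, inches
        hourly[day * 24 + start: day * 24 + start + 3] += amount / 3.0

def heavy_days(hourly, obs_hour):
    """Count observation-days with total >= threshold when the gauge is read at obs_hour."""
    shifted = np.roll(hourly, -obs_hour)                 # start each obs-day at obs_hour
    daily = shifted.reshape(-1, 24).sum(axis=1)
    return int((daily >= threshold).sum())

print("heavy-rain days, 6 PM reading:", heavy_days(hourly, 18))
print("heavy-rain days, 6 AM reading:", heavy_days(hourly, 6))
```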

  69. Kenneth Fritsch
    Posted Jun 22, 2008 at 7:02 AM | Permalink

    David, day of week??? Would not a dependence there indicate an anthropogenic effect — unless the Rain Gods work a five-day week.

  70. steven mosher
    Posted Jun 22, 2008 at 7:11 AM | Permalink

    re 67.

    look at a B91

    http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=28573

  71. Geoff Sherrington
    Posted Jun 22, 2008 at 7:17 AM | Permalink

    Re # 64 John A

    Any idea why consecutive 3-month running means are used in the spaghetti graphs? To smooth, one presumes. Do we have any plotted just monthly?

    As I wrote before, with graphs like those, only two logical possibilities exist:

    (a) one can be correct; or

    (b) none can be correct.
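
    As an aside on the mechanics: the 3-month running mean in those plume plots is just a centered average of consecutive monthly values (DJF, JFM, FMA, and so on). A minimal sketch with made-up monthly anomalies:

```python
import numpy as np

# Made-up monthly Nino 3.4 SST anomalies (deg C), purely for illustration
monthly = np.array([-1.6, -1.5, -1.2, -0.9, -0.6, -0.3, -0.1, 0.0, 0.1, 0.2, 0.2, 0.3])

# Centered 3-month running mean: each value averages three consecutive months
running = np.convolve(monthly, np.ones(3) / 3.0, mode="valid")

for i, v in enumerate(running):
    print(f"months {i + 1}-{i + 3}: 3-month mean = {v:+.2f}")
```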

  72. steven mosher
    Posted Jun 22, 2008 at 7:22 AM | Permalink

    re. 67.

    here are the instructions.

    http://www.crh.noaa.gov/lbf/?n=b91_instructions

  73. David Smith
    Posted Jun 22, 2008 at 8:38 AM | Permalink

    Re #69 Kenneth, I think the author is trying to say that a drop in reported rainfall on weekends is due to a drop in pollution. I haven’t seen how the author derived his/her plot so I have no clue as to whether the reported day-of-week pattern is real or just some artifact (like a drop in coop observations on weekends).

    I just borrowed the plot because it helped illustrate the time-of-day point.

    It’d be interesting to find the original paper and explore it for the weekend issue. My sense is that the apparent weekend pattern is an artifact of some sort. If it’s real then it’s a rather potent indicator of human impact on climate.

  74. Posted Jun 22, 2008 at 8:49 AM | Permalink

    Re #71

    Geoff,

    …unless one of those can be shown beforehand to be demonstrably superior by virtue of its uniquely insightful mathematical technique rooted in unimpeachable physical theory, then you’d have to say that the mere fact that one of them will be close to the actual result is more to do with luck than anything else.

    Hence or otherwise, those ensembles are the normal happenstance of mathematical modelling using matrices of 1st and 2nd degree differential equations when plotted outside their domain of measurements.

    And they mean nothing.

  75. David Smith
    Posted Jun 22, 2008 at 8:58 AM | Permalink

    Re #72 Thanks for the info. What I read is this

    At climatological stations, however, precipitation should be measured at the same time the temperature readings are made (preferably after 5 p.m.).

    which is what I think would apply to the USHCN stations. If that has been the practice, and if observation times have shifted over the decades, then it indeed seems like there’s a potential time of observation bias in the precipitation data. And, if the shift is away from late-afternoon readings then there is less splitting of summertime thunderstorm precipitation between two adjacent days.

    Karl (1998) indicates that the increase in heavy-precipitation events is greatest in the warm months but statistically insignificant in winter.

    There are also odd regional patterns, like the US Midwest, which seems to have avoided the heavy-rain increase yet is surrounded by regions showing increases. It makes me wonder whether that is due to nature or to some aspect of data collection and analysis.

  76. Posted Jun 22, 2008 at 8:13 PM | Permalink

    Oh dear, from the Guardian UK. James Hansen wants Big Oil sued. NASA Scientist wants oil firm chiefs sued.

    James Hansen, one of the world’s leading climate scientists, will today call for the chief executives of large fossil fuel companies to be put on trial for high crimes against humanity and nature, accusing them of actively spreading doubt about global warming in the same way that tobacco companies blurred the links between smoking and cancer.

    This is all part of the launch of http://www.350.org, the website dedicated to reducing CO2 levels to 350 ppm.

    Where did this 350 number come from?

    Dr. James Hansen, of NASA, the United States’ space agency, has been researching global warming longer than just about anyone else. He was the first to publicly testify before the U.S. Congress, in June of 1988, that global warming was real. He and his colleagues have used both real-world observation, computer simulation, and mountains of data about ancient climates to calculate what constitutes dangerous quantities of carbon in the atmosphere. The Bush Administration has tried to keep Hansen and his team from speaking publicly, but their analysis has been widely praised by other scientists, and by experts like Nobel Prize winner Al Gore. The full text of James Hansen’s paper about 350 can be found here.

  77. Andrew
    Posted Jun 23, 2008 at 2:59 AM | Permalink

    76 (Ryan Maue): Lessee, 5.35·ln(350/280) ≈ 1.2 W/m2, and using his personal sensitivity estimate:
    http://www.worldclimatereport.com/index.php/2005/04/28/james-hansen-increasingly-insensitive/
    that makes about 0.8 degrees over the preindustrial level the “safe” level (but possibly as high as 1.4 degrees), which is about where we are now (surely just a little more warming wouldn’t be so bad?). Joe Romm(el) favors 450 ppm, which means he is okay with more than that. I guess extremists aren’t about to bicker over such trivial things (what’s a hundred ppm between friends?).
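    For anyone who wants to check that arithmetic, here is a minimal sketch. The 5.35·ln(C/C0) expression is the standard simplified CO2 forcing formula; the climate sensitivity value below is an assumption chosen only to reproduce the ~0.8 degree figure quoted above (roughly 2.5 degrees per doubling of CO2), not a sourced Hansen number.

```python
# Back-of-envelope check of the numbers in comment #77.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m2) relative to a 280 ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

forcing_350 = co2_forcing(350.0)         # ~1.19 W/m2
sensitivity = 0.67                       # deg C per W/m2 (assumed; about 2.5 C per doubling)
warming_350 = sensitivity * forcing_350  # ~0.8 C over the preindustrial level

print(f"Forcing at 350 ppm: {forcing_350:.2f} W/m2")
print(f"Implied equilibrium warming: {warming_350:.2f} C")
```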

  78. Geoff Sherrington
    Posted Jun 23, 2008 at 5:27 AM | Permalink

    Re # 68 David Smith

    Observationally correct for rainfall, but at least at the end of the month all of the rain will have been collected and a monthly average can be obtained without using daily data. Or, you can use a separate collector and collect rain for a month.

    This overcomes a problem that TOBS has with temperatures, because the old thermometers were reset daily (or so) and there is no way to composite a month of temperature readings without using the daily data. The MMTS apparatus, properly designed, with the right time constant, the right placement and the right calibration, should overcome the need for TOBS temperature adjustments. What a shame it has been handled so badly.

  79. Posted Jun 23, 2008 at 8:00 AM | Permalink

    Yeah, this report is crud.

    I was willing to be open-minded about it, but the hurricanes section pretty much tells me it’s crud.

    They even rely heavily on that awful Holland study, despite all the hurricane studies that have come out since!

  80. steven mosher
    Posted Jun 23, 2008 at 8:31 AM | Permalink

    re 76. I call his Fortran code a high crime. Beyond that, spreading fear has always been a more dangerous crime than spreading doubt.

  81. David Smith
    Posted Jun 23, 2008 at 8:50 AM | Permalink

    Re #78 Agreed, Geoff. The monthly rainfall numbers are unaffected by TOB but the daily numbers are affected. I think that any study which looks at distribution patterns of daily numbers should take TOB into account.

  82. SteveSadlov
    Posted Jun 23, 2008 at 2:45 PM | Permalink

    NOAA does not understand that during the last Ice Age, most of the ice-free portion of North America was arid? As a taxpayer, I find that intolerable.

  83. SteveSadlov
    Posted Jun 23, 2008 at 3:00 PM | Permalink

    RE: #61 – Are you sure those are not charts with numbers of encounters with rods and grays?

  84. Sam Urbinto
    Posted Jun 24, 2008 at 9:41 AM | Permalink

    73 David Smith

    Regarding the weekend/weekday/city question on rain: http://earthobservatory.nasa.gov/Study/UrbanRain/

    74 John A, 71 Geoff, others

    Wow, a range of results that look like what they’re created to look like! 🙂

    Everything’s got this annoying 0 +/- 1 or 2.5 or .5 or whatever going on.

    Must be AGW.

  85. counters
    Posted Jun 24, 2008 at 9:45 AM | Permalink

    So I totally LOL’d when I saw the characteristic results of extrapolation from a model or multiple models where the extrapolation shows different periodicities and a wide spread of possible results rendering the exercise fun but meaningless.

    I’m glad that you, like me, are interested in and studying the complex math and facets that go into modeling, but I have to stop you here. The different periodicities of the ENSO cycles from these models are not emergent properties; they’re intrinsic to each individual program and reflect the lack of consensus on the precise properties of the ENSO cycle.

    Your argument just isn’t sound. I’ll agree that there are issues with precision in the models – that’s a given based on the nature of the system they’re modeling. However, your outright dismissal of all the models is a bit egregious and unwarranted, particularly given the mischaracterization of their uses that you insinuate.

  86. Sam Urbinto
    Posted Jun 24, 2008 at 10:51 AM | Permalink

    Yeah, they’re great for making squiggly lines mimicking reality with huge ranges enveloping almost any possibility.

  87. Posted Jun 25, 2008 at 2:56 PM | Permalink

    Climate models get it wrong again: startling discovery. Ozone actually helps destroy methane out over the Tropical Atlantic. Read all about it on the new Escience.com science news aggregator. Pretty good site. Ozone destroys methane

  88. David Smith
    Posted Jun 25, 2008 at 8:36 PM | Permalink

    Time of Observation Bias (“TOB”) is a well-known phenomenon of the daily temperature record. Is there a similar phenomenon in the daily precipitation record?

    This is a worthwhile question because one of the extreme-weather concerns is the reported increase in heavy-rain days (Karl 1998), especially in the warmer time of the year. Is this reported increase real or are the reports affected by TOB?

    Those are big questions. As a small step I’d like to look at a data sample to see if precipitation TOB might exist.

    My data sample is small (Asheville, NC, 1998-99, hourly precipitation). I chose this because Asheville is home to NCDC and because those two years were available on the internet as a free sample 🙂

    My conjecture is that, if the time of observation has shifted from afternoon to morning over the last 50 years, then perhaps we’ve moved from splitting thunderstorms (typically driven by afternoon heat) between two days to capturing all of that thunderstorm on just one day, all thanks to the shift in observation time.

    I combined the Asheville summer of ’98 (May thru October) and the summer of ’99 (May thru October) into one database of hourly summer rainfall. I then looked at the data using a 6 AM time of observation and a 6 PM time of observation. Here is a column plot comparing those two groups:

    Pardon my poor graphic.

    What it shows is that, by shifting the observation to 6 AM (red line), the days of heavy precipitation seem to increase while the days of light and moderate precipitation seem to decrease. (The total should be nearly a wash because the total precipitation across the summers is the same regardless of time of observation.)

    Here is the same data presented as differences between the two observation times (ranked from smallest to largest rain day):

    What this small sample suggests is that there may indeed be a precipitation TOB. An important question is whether this type of TOB might have affected some of the heavy-precipitation studies. I don’t know the answer to that but I do wonder about it, because my TOB example has some of the same characteristics as the studies’ results.
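    A minimal sketch of the bucketing exercise described above, assuming the Asheville hourly precipitation has already been parsed into a pandas Series of inches indexed by timestamp (the name hourly_precip is a placeholder); the light/moderate/heavy bin edges below are illustrative rather than the ones used in the comment:

```python
# Sum the same hourly rain into daily "observation day" buckets for a given
# observation hour, then count light / moderate / heavy rain days.
import pandas as pd

def daily_at_obs_hour(hourly: pd.Series, obs_hour: int) -> pd.Series:
    """Daily totals for 24-hour periods bounded by the given observation hour."""
    shifted = hourly.copy()
    # Shifting the timestamps back by obs_hour makes each observation day line up
    # with a calendar date, so a plain daily resample does the bucketing.
    shifted.index = shifted.index - pd.Timedelta(hours=obs_hour)
    return shifted.resample("D").sum()

def binned_day_counts(hourly: pd.Series, obs_hour: int) -> pd.Series:
    """Counts of rain days in light/moderate/heavy bins (bin edges are assumed)."""
    daily = daily_at_obs_hour(hourly, obs_hour)
    bins = [0.0, 0.25, 1.0, float("inf")]   # inches/day; illustrative edges
    labels = ["light", "moderate", "heavy"]
    return pd.cut(daily[daily > 0], bins=bins, labels=labels).value_counts()

# Total rainfall is identical either way; only its split among days changes:
# print(binned_day_counts(hourly_precip, obs_hour=6))   # 6 AM observer
# print(binned_day_counts(hourly_precip, obs_hour=18))  # 6 PM observer
```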

  89. Mark T.
    Posted Jun 26, 2008 at 9:53 AM | Permalink

    Time of Observation Bias (“TOB”) is a well-known phenomenon of the daily temperature record. Is there a similar phenomenon in the daily precipitation record?

    Uh, I would think no. Rainfall should be cumulative, which would imply that a TOB “bias” would show up only as a lead/lag in the record.

    Mark

  90. David Smith
    Posted Jun 26, 2008 at 10:07 AM | Permalink

    Thanks, Mark. No argument about rainfall being cumulative – it is.

    The bias potential is in regard to placing hourly observations into daily buckets. The sum of all the rainfall in all buckets is the same, but the distribution of rainfall among the buckets changes. The “increase in intense rainfall” reports (“more heavy buckets”) are based on the distribution of rainfall among the buckets, not on total rainfall.

  91. Posted Jun 29, 2008 at 9:18 PM | Permalink

    Whew, I wondered who to blame for the rain today, glad Newsweek could figure it out for me. Global warming causes the weather.

  92. Kenneth Fritsch
    Posted Jun 29, 2008 at 10:52 PM | Permalink

    Re: #91

    Unfortunately, the use of generalizations and anecdotal evidence is not confined to the journalistic sloppiness exhibited in this article about extreme weather events and climate change. I find this type of writing pervasive in the US mainstream media.

  93. Curt
    Posted Jun 29, 2008 at 11:21 PM | Permalink

    A question for the peanut gallery here based on assertions in the article cited by Ryan in #91 (6/29 9:18pm): I remember reading a long time ago (before climate-change issues politicized everything) that predictions of what constituted, say, a 500-year flood were typically based on assumptions of Gaussian distributions of weather events — that since most areas don’t have 500+ years of decent weather statistics, calculations of these types of events were extrapolations from shorter records of less severe events.

    The charge was that the distribution of weather events like floods had substantially “fatter tails” than what was assumed, so the argument that the upper Mississippi has had two “500-year floods” in fifteen years doesn’t really hold up. I haven’t been able to find anything on this recently. Can anyone shed some light on this or provide references? (I know that many a Wall Street firm, like Long Term Capital Management, has gone down in flames on this type of assumption.)

  94. Kenneth Fritsch
    Posted Jun 30, 2008 at 9:56 AM | Permalink

    Re: #93

    Just off the top of my head I would add that anthropogenic effects such as urban development, levee/dam building and changes in agricultural methods could be confounded with the anthropogenic effects of GHGs on climate. The question becomes how one compares flooding from historical times to the present and feels confident that all these confounding factors have been accounted for.

    I have always had the feeling that fat tails are easier to explain for man-made events such as stock prices and the LTCM experience than they are for more natural events such as climate. Climate could well have longer-term cyclical events, with just the right conditions occurring at the same time, that are viewed as fat tails in the distribution.

    In the article the comment was made that the flooding (should not the measure be rainfall/precipitation rather than flooding, as measured by runoff into rivers and flood stages?) may have been caused by a changing jet stream, and the article then appeared to jump to the conclusion that that change was due to AGW-driven climate change. If the jet stream changed in an unusual fashion for an extended period of time, then that might partially explain seeing successive record flooding, if not precipitation, over a short period. The next logical connection would be the relationship of changing temperatures to the jet stream.

    A careful read of the paper that started this thread and of this article shows, in my view, that what is being concluded is not well substantiated, or substantiated at all, by the paper/article content.

  95. David Smith
    Posted Jul 1, 2008 at 10:57 PM | Permalink

    I’m struck by the similarity between the Asheville precipitation TOB illustration in #88 (second figure) and the Figure 5 plots from Karl’s 1998 paper. The light and moderate events decrease while the heavy events increase.

    It’s unclear to me what physical AGW mechanism might account for such a precipitation pattern shift – if precipitation has increased in the US due to AGW-driven increases in precipitable water then it seems to me like the increase should occur across the board, on light rain days as well as thunderstorm days.

  96. David Smith
    Posted Jul 6, 2008 at 1:21 PM | Permalink

    I’m playing with US precipitation data and various ways to examine it. Extreme rain is increasingly noted as an AGW fingerprint and those claims warrant some examination.

    While I’m becoming familiar with the data I’m generating an odd assortment of plots, some of which I’ll note here.

    Below is a time series of “extreme precipitation” for Iowa Falls, IA, a USHCN station in the flood-battered American Midwest. It’s close to Cedar Falls, IA, which recently made the news with large scale flooding.

    “Extreme precipitation” is defined by me as a day in the top 10% of all Iowa Falls precipitation days from 1949-2005. In this case the cutoff is 0.97 inches/day of precipitation. This definition is arbitrary but seems reasonable.

    I plotted (1) the number of extreme precipitation days per year and (2) the average precipitation amount of those extreme days during that year. This should show if the number of extreme days is rising and also if the extreme amounts are increasing. There is no single perfect measure of trends in rainfall extremes but this seems to give at least a useful impression of what’s happening.

    The plots are here:

    This station shows no trend in the number of extremely wet days per year (blue line). There does seem to be some upturn in the average size of the extreme events (red line), but the increase is only 0.02 inches per decade.

    It also indicates that the starting date can play a role in whether a trend appears to exist. In this case I chose 1949 because that’s where the database starts.
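    A minimal sketch of the bookkeeping described above, assuming a pandas Series of daily precipitation in inches indexed by date for the station (the name iowa_falls_daily is a placeholder), and assuming the top 10% is taken over days with measurable rain:

```python
# Per-year count and mean size of "extreme" days, defined by the station's own
# upper quantile of daily precipitation.
import pandas as pd

def extreme_day_summary(daily: pd.Series, quantile=0.90) -> pd.DataFrame:
    """Annual number and average amount of days above the chosen quantile cutoff."""
    wet = daily[daily > 0]              # rank only days with measurable rain
    cutoff = wet.quantile(quantile)     # about 0.97 in/day per the comment above
    extremes = daily[daily >= cutoff]
    by_year = extremes.groupby(extremes.index.year)
    return pd.DataFrame({"extreme_days": by_year.size(),
                         "mean_extreme_in": by_year.mean()})

# Example usage:
# summary = extreme_day_summary(iowa_falls_daily)
# summary.plot(subplots=True)
```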

  97. Craig Loehle
    Posted Jul 6, 2008 at 1:57 PM | Permalink

    The CCSP report seems to imply that any weather trend becoming more extreme is bad. If rain falls as 1/8 inch drizzle, it will all be caught by the forest or grassland canopy and will not reach the ground, and thus will not help plant growth at all. For plants, a heavy rain is better.

  98. David Smith
    Posted Jul 6, 2008 at 2:05 PM | Permalink

    Another Iowa Falls plot, below, shows the annual days of exactly 0.01 inches of precipitation.

    This 0.01 inch value is important because it is the lowest value recorded and care must often be taken just to capture it.

    If these small amounts were missed in earlier times but are now captured, then a spurious trend may be introduced into, say, an analysis of trends in annual days of precipitation.

    It looks to me like something changed about 1970 which increased the number of 0.01 inch days recorded. My suspicion is that the change was equipment or procedural and not a natural event.

  99. David Smith
    Posted Jul 6, 2008 at 4:41 PM | Permalink

    Same as #98, except for rainy days in excess of 2 inches:

    This captures the top 3% of Iowa Falls’ extreme precipitation.

    There’s no trend in the average amount of these top-3% events. The incidence rate appears to be increasing by about one extreme rainy day per year over the course of a century, where the typical such extreme rain would be a 2.5 inch rainfall. That’s not exactly a catastrophe.

  100. David Smith
    Posted Jul 6, 2008 at 5:28 PM | Permalink

    Maybe the trend in Iowa Falls’ extreme rain isn’t in individual rainy days but rather in closely-spaced rainy days. After all, floods often come from rainy periods, not just one downpour.

    Below is a plot of the total rainfall for the prior seven days, calculated for each day from 1949 thru 2005 (some 20,000 days). I truncated the values below 4 inches so as to highlight the periods of greatest raininess.

    Since there are 20,000 datapoints the x-axis resolution is poor but the peaks are visible. What this shows is an era of extended raininess in the 1960s and maybe in the 1990s. Overall there is no impression of a trend toward extended heavy-rain periods in Iowa Falls.
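    The seven-day totals described above can be computed with a simple trailing window; a minimal sketch, again assuming a daily precipitation Series indexed by date (iowa_falls_daily is a placeholder name):

```python
# Trailing 7-day rainfall totals, masking out the quieter weeks so only the
# rainiest periods stand out (the 4-inch floor follows the comment above).
import pandas as pd

def rolling_week_totals(daily: pd.Series, min_inches=4.0) -> pd.Series:
    weekly = daily.rolling(window=7, min_periods=7).sum()
    return weekly.where(weekly >= min_inches)

# Example usage:
# rolling_week_totals(iowa_falls_daily).plot(style=".")
```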

  101. David Smith
    Posted Jul 27, 2008 at 8:33 AM | Permalink

    (This expands on an earlier post. My apology for the length.)

    Precipitation Time-of-Observation Bias

    Rain is a topic which lacks the drama of, say, melting ice. Nevertheless, changes in heavy rain frequency are important because they are cited as evidence of AGW and because, unlike some aspects of AGW, increased flooding would have unquestionable social impact.

    The following are comments on a small part of the rainfall issue – something I term “precipitation time-of-observation bias” (“P-TOB” for short).

    P-TOB is not mentioned in the climate science literature, so far as I can tell. This contrasts with P-TOB’s well-noted relative “T-TOB” (temperature time-of-observation bias). T-TOB plays an important role in temperature reconstruction. P-TOB, unlike its famous relative, goes unnoted.

    This lack of literature reference certainly makes me pause and wonder if my thinking is wrongheaded. Perhaps P-TOB is a basic reasoning error on my part and it simply does not exist. Or, maybe my literature search has missed the explanation of how P-TOB is removed or avoided in studies. Perhaps it is known by another name. Or, maybe on balance any P-TOB effect is too small to matter – I don’t know. Perhaps someone here knows and can share the answer.

    Here’s the basic idea: a lot of intense rain occurs as thundershowers, especially on summer afternoons. If the 24-hr rainfall observation occurs during an afternoon thundershower then that thundershower may be split in two and recorded as two moderate rains rather than one intense rain. Suppose, though, that the observation time is later changed to the unstormy early morning. Such a change would mean that afternoon thundershowers are no longer split and are instead recorded as one intense rain rather than two moderate rains.

    This shift in observation time (P-TOB) may make it appear that thundershowers have become more intense while in reality they have not changed.

    Here is an illustration which may help:

    If the daily rainfall total is taken at “A” (see the red line) then the total afternoon storm is recorded into one day of the record, as an intense event. However, if the observation is taken at “B” (the blue line) then the storm is split and recorded between two days, appearing as two moderate events.

    If the time of observation changes from B to A and that change is not recognized when records are examined then one may think that there has been a shift towards intense precipitation events. However, that apparent shift may actually be an illusion created by the change in observation time.

    That’s the basic P-TOB idea.

    To reinforce the rainy-afternoon idea here is an example of summertime hourly rainfall distribution. I used Asheville, NC (home of the NCDC) as the example because that hourly data is free and readily available on the internet:

    Asheville rainfall tends to peak in the afternoon, as the plot shows.

    Can the P-TOB concept be illustrated via data? Yes. For that effort I again used Asheville NC 1998-99, but this time I used all months (even though P-TOB may be mainly a summertime affair). I looked at the daily rainfall distributions for observation times of midnight, 6 AM, noon, 3 PM and 6 PM. Below is an example, comparing 6 AM and 6 PM observation times and showing the heavy-rain part of the distribution curves. I can provide other combinations if anyone so wishes:

    The 6 AM observation time for 1998-99 tends to capture the afternoon intense storms as single events while the 6 PM observation tends to split those intense storms into two less-intense storms.

    Suppose I place this example’s entire distribution into quantiles of 5%, for ease of viewing. Here is the result:

    In this example P-TOB shows a distinct pattern, with an apparent sharp increase in the most-intense precipitation (right side) offset by smaller net declines throughout the remainder of the distribution. The net, of course, is zero. If one was unaware of P-TOB then the spike on the right might be reported as a real increase in intense rain rather than as a data illusion.
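    A sketch of that quantile comparison, repeating the daily-bucket helper from the earlier sketch under #88 so the block stands alone; hourly_precip is again a placeholder for a pandas Series of hourly rainfall indexed by timestamp:

```python
# Bucket the same hourly rain at two observation times and difference the
# daily-total distributions quantile by quantile (5% steps).
import numpy as np
import pandas as pd

def daily_at_obs_hour(hourly: pd.Series, obs_hour: int) -> pd.Series:
    """Daily totals for 24-hour periods bounded by the given observation hour."""
    shifted = hourly.copy()
    shifted.index = shifted.index - pd.Timedelta(hours=obs_hour)
    return shifted.resample("D").sum()

def quantile_shift(hourly: pd.Series, early=6, late=18, step=0.05) -> pd.Series:
    """Early-minus-late difference at each quantile of the daily distribution."""
    qs = np.linspace(step, 1.0, int(round(1.0 / step)))
    early_q = daily_at_obs_hour(hourly, early).quantile(qs)
    late_q = daily_at_obs_hour(hourly, late).quantile(qs)
    return early_q - late_q

# A positive difference at the top quantiles, offset by small negative values
# elsewhere, is the P-TOB signature described in the text:
# quantile_shift(hourly_precip).plot(kind="bar")
```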

    Is this odd change in distribution seen in any of the studies? I checked Karl et al’s 1998 study

    Click to access i1520-0477-79-2-231.pdf

    on changes in intense precipitation in the US and noted this plot for the southeastern US (the location of Asheville). The column graph shows the change in precipitation distribution over the course of the 20th century, with the most-intense values on the right side:

    Is the similarity between the two a coincidence? It indeed could be. Or, maybe P-TOB affected Karl et al’s results. I don’t know. However I do know that I have difficulty conceiving of a physical reason for the distribution found in Karl et al.

    Below are a few other questions and graphs.

    ** Have precipitation times of observation changed? Yes, at least in North Carolina. Below is a plot of the changes in observation times of the twenty-five North Carolina USHCN stations which report daily values:

    As the plot indicates, there has been a notable shift from afternoon to morning precipitation observations in North Carolina since 1950 (I didn’t review pre-1950 records). If P-TOB exists then this shift in observation time would tend to result in the appearance of more-intense daily rainfall events in the records.

    ** What does the intensity time series look like for those North Carolina stations? The following plot takes the 25 stations as a group and sums their days of intense rain (defined as 2 inches or more in a day) per year:

    At face value the plot shows a trend towards more days of intense rain in North Carolina.

    Suppose we break the group in two and first look at just the stations which changed from afternoon to morning observations. Their result is here:

    It’s a clear pattern of increasing days of intense rain.

    Now how about those stations which had no change in observation time:

    It’s essentially unchanged – no increase in the number of days of intense rain at stations which had no change in observation time. That’s quite a contrast versus the prior plot.
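    A hedged sketch of the station grouping behind the last three plots, assuming a DataFrame of daily precipitation (inches, one column per station, indexed by date) and a dict flagging which stations moved from afternoon to morning observations; both would have to be assembled from the USHCN files listed in the footnotes, and the names in the usage example are placeholders:

```python
# Annual counts of days with 2+ inches of rain, summed separately over stations
# that changed observation time and stations that did not.
import pandas as pd

def heavy_days_by_group(daily: pd.DataFrame, changed_obs_time: dict,
                        threshold_in=2.0) -> pd.DataFrame:
    heavy = (daily >= threshold_in)
    annual = heavy.groupby(daily.index.year).sum()   # per-station counts per year
    changed = [s for s in annual.columns if changed_obs_time.get(s)]
    unchanged = [s for s in annual.columns if not changed_obs_time.get(s)]
    return pd.DataFrame({"changed_stations": annual[changed].sum(axis=1),
                         "unchanged_stations": annual[unchanged].sum(axis=1)})

# Example usage (placeholder names):
# heavy_days_by_group(nc_daily, nc_obs_time_changed).plot()
```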

    These illustrations are small samples which are examined in only one manner and may not be representative of the larger body of data. No conjecture about the actual extent of P-TOB, assuming it exists, is possible without a fuller examination. At this point my only goal is to float the concept of P-TOB and its possible consequences, and to provide a modest midsummer diversion while we watch the ice melt, or not!

    Footnotes:

    The free Asheville 1998-99 hourly data can be found here:

    http://cdo.ncdc.noaa.gov/cdo/3240nam.txt

    Station rainfall history can be found here:

    http://cdiac.ornl.gov/epubs/ndp/ushcn/usa.html

    Images of raw station data sheets can be found here:

    http://www7.ncdc.noaa.gov/IPS/coop/coop.html

  102. Kenneth Fritsch
    Posted Jul 27, 2008 at 9:26 AM | Permalink

    David Smith, your analytical efforts are much appreciated by this poster/reader. I also think they encourage the interested reader of climate science papers, or science papers generally, be they professionals or laypersons, to always dig a little deeper into these matters.

  103. David Smith
    Posted Jul 27, 2008 at 10:38 AM | Permalink

    Re #102 Kenneth, these types of exercises are particularly fun because the NCDC does a good job of making the data available and the data is relatively unaffected by rain gauge proximity to humans (unlike thermometers). There are gobs of precipitation data which can be sliced and diced in many ways.

  104. Craig Loehle
    Posted Jul 27, 2008 at 12:20 PM | Permalink

    David Smith: very nice. This is worth a publication. Even in the North the summer afternoon storms tend to be the biggest.

  105. Jesper
    Posted Jul 27, 2008 at 12:35 PM | Permalink

    Nice work – might be a good test to try the same analysis for winter when precip comes from frontal systems, and the diurnal signal is likely much weaker.

  106. David Smith
    Posted Jul 27, 2008 at 2:49 PM | Permalink

    Thanks. I plan to look at winter time series and also at the US Southern Plains, where summer thunderstorms are often nighttime rather than afternoon events. At this point this is simply a plausible conjecture which needs to be fleshed out.

  107. David Smith
    Posted Aug 15, 2008 at 10:24 PM | Permalink

    Urbana, Illinois is both a nice university town and a source of hundred-year rainfall data. I took a look at Urbana’s history of heavy rain days (days with 2+ inches of rainfall) to see if they are increasing. Here is the time series from 1906-2005:

    Looks like Urbana is experiencing a long-term upward trend in heavy rain.

    But, wait a minute – what about time of observation, which may affect the recording of heavier rain events?
    In checking the record I noted that Urbana switched from late-day (7 PM) observations to midnight observations as of the end of 1955. This is denoted by the green line.

    If I split the record in two at 1955, what do I get? Well, here is 1906-1955:

    and here is 1956-2005:

    When the time series is split in two, so as to keep each series with a constant time of observation, there is little or no increase in heavy rain frequency.

    This result is consistent with the P-TOB idea (see above).
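    A minimal sketch of that split-series check, assuming an annual Series of 2+ inch days per year for Urbana indexed by year (the name heavy_days is a placeholder); the 1955 split follows the observation-time change noted above:

```python
# OLS trend of annual heavy-rain counts, for the full record and for the two
# constant-observation-time segments.
import numpy as np

def trend_per_century(annual_counts) -> float:
    """Least-squares slope of the annual counts, in days per century."""
    years = np.asarray(annual_counts.index, dtype=float)
    slope, _intercept = np.polyfit(years, annual_counts.values, 1)
    return 100.0 * slope

# Example usage:
# print(trend_per_century(heavy_days))             # full record
# print(trend_per_century(heavy_days.loc[:1955]))  # constant 7 PM observations
# print(trend_per_century(heavy_days.loc[1956:]))  # constant midnight observations
```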

  108. David Smith
    Posted Aug 16, 2008 at 10:30 PM | Permalink

    Danville, Illinois is located only about thirty miles from Urbana. Urbana, as noted in #107, seemed to witness increases in heavy rains over the last hundred years, but Urbana also changed observation times. What about nearby Danville?

    Unlike Urbana, Danville has stayed with one observation time (about 6 PM) over the last eighty years. So, Danville has no time-of-observation effect. What does the Danville heavy-rain time series look like?

    No trend.

  109. Kenneth Fritsch
    Posted Aug 17, 2008 at 11:48 AM | Permalink

    In “Global Climate Change Impacts in the United States” linked here

    Click to access usp-prd-all.pdf

    we have a graph (depicted below) that similarly uses an apparent regime change to point to a trend. In this case the plot is of hours per day over 100 degrees F in Phoenix, AZ. The trend line is deceiving when one considers the flat trends from 1948 to 1976 and then again from 1977 to 2000.

    1948 to 2000: R^2 = 0.27; Slope = 5.52 Hours/Day/Century.

    1948 to 1977: R^2 = 0.00; Slope = 0.16 Hours/Day/Century.

    1977 to 2000: R^2 = 0.00; Slope = -1.16 Hours/Day/Century.

    In your case, David, you have made a connection to what may well have caused the regime change while for the Phoenix case an explanation cannot be provided. I see a number of these cases in the climate change literature and normally no explanation is offered — or even a notation that the condition exists.
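    The whole-period versus sub-period comparison above can be reproduced with ordinary least squares; a sketch, assuming an annual Series of hours per day over 100F at Phoenix indexed by year (hours_over_100f is a placeholder name):

```python
# R^2 and slope (hours/day per century) for a straight-line fit over a period.
import numpy as np

def fit_stats(series, start, end):
    seg = series.loc[start:end]
    x = np.asarray(seg.index, dtype=float)
    y = np.asarray(seg.values, dtype=float)
    slope, _intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2, 100.0 * slope

# Example usage, mirroring the three periods quoted above:
# for period in [(1948, 2000), (1948, 1977), (1977, 2000)]:
#     print(period, fit_stats(hours_over_100f, *period))
```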

  110. David Smith
    Posted Aug 17, 2008 at 2:30 PM | Permalink

    Interesting time series, Kenneth. The apparent source of the chart is a 2002 article, which has this abstract:

    Abstract This paper examines the impacts, feedbacks, and mitigation of the urban heat island in Phoenix, Arizona (USA). At Sky Harbor Airport, urbanization has increased the nighttime minimum temperature by 5°C and the average daily temperatures by 3.1°C. Urban warming has increased the number of misery hours per day for humans, which may have important social consequences. Other impacts include (1) increased energy consumption for heating and cooling of buildings, (2) increased heat stress (but decreased cold stress) for plants, (3) reduced quality of cotton fiber and reduced dairy production on the urban fringe, and (4) a broadening of the seasonal thermal window for arthropods. Climate feedback loops associated with evapotranspiration, energy production and consumption associated with increased air conditioning demand, and land conversion are discussed. Urban planning and design policy could be redesigned to mitigate urban warming, and several cities in the region are incorporating concerns regarding urban warming into planning codes and practices. The issue is timely and important, because most of the world’s human population growth over the next 30 years will occur in cities in warm climates.

    It looks to me like the authors attribute the pattern to urbanization, not greenhouse climate change.

    The 70s jump is apparent in other Sky Harbor and Mesa temperature data but not in the data from nearby cooperative stations. I bet 5 quatloos and a quid that the 70s jump is related to construction or other human activity.

  111. David Smith
    Posted Aug 17, 2008 at 3:03 PM | Permalink

    Correction to my #110: page 48 of the Global Climate Change report discusses the combination of UHI plus AGW. So, if the purpose of its Phoenix time series is to show the effect of UHI, then the time series is within the context of the article. I went with the visual impression and the brochure context and assumed it was meant to demonstrate the combination of UHI plus AGW.

  112. Kenneth Fritsch
    Posted Aug 18, 2008 at 9:15 AM | Permalink

    Re: #111

    Your observation is in line with my view that presenting data and graphs as the authors of the Climate Report have done, without reasonable explanations and details, can lead to confusion or wrong conclusions. I think that results from the authors being less in the scientific mode and more in a marketing one for climate policies. Even reading on background can leave these issues up in the air, as is the case, for me at least, with this example.

  113. Jorge
    Posted Aug 18, 2008 at 1:04 PM | Permalink

    Re: #108

    David, I think this is a remarkable series of posts on a topic that would never have occurred to me. I see your results as very convincing, and they certainly cast some doubt on Karl’s findings about increased heavy-precipitation days. Unless there is some evidence, which I don’t see, that this possible change in observation time was taken into account in Karl’s study, I would have to discount it as potentially flawed.

    It was not clear to me how one could locate the stations and station data that were the main focus of the study but this clearly needs to be followed up.

    It would be truly ironic to find yet another claim about climate change that may have more to do with observational changes than anything else. Thanks again for your diligence in looking into this.

  114. David Smith
    Posted Sep 2, 2008 at 7:02 PM | Permalink

    Rockford, Illinois finally reached 90F for the first time this year, the latest such date in its 115-year history. The report also noted that many of Rockford’s longest stretches of sub-90F days have happened in the last 15 years.

    So, I decided to look at nearby temperature records, choosing three sites in northeast Illinois which have at least a century of records and few missing values. I plotted their combined days on which they reached or exceeded 100F (about 38C). The plot is below:

    It looks like extreme heat has declined in that part of the state, especially compared to the 1930s. I’ll expand this check to see if that pattern holds statewide.
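    A short sketch of the combined count, assuming a DataFrame of daily maximum temperatures in degrees F with one column per station, indexed by date (the name in the usage example is a placeholder):

```python
# Total station-days at or above a temperature threshold, per year.
import pandas as pd

def combined_hot_days(tmax: pd.DataFrame, threshold_f=100.0) -> pd.Series:
    hot = (tmax >= threshold_f)
    return hot.groupby(tmax.index.year).sum().sum(axis=1)

# Example usage:
# combined_hot_days(northeast_il_tmax).plot(kind="bar")
```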

  115. David Smith
    Posted Sep 2, 2008 at 7:29 PM | Permalink

    Re #114 The Rockford file is here.

  116. David Smith
    Posted Sep 2, 2008 at 8:45 PM | Permalink

    A plot of 90F and above days at Aurora, near Rockford, is below:

    At this rate Aurora will run out of summer weather within a few centuries.

  117. Posted Dec 10, 2008 at 7:26 PM | Permalink

    We’re getting light to moderate snow in Houston this evening. That’s quite unusual for 29.6N. I guess this, too, is AGW.

  118. Posted Dec 10, 2008 at 9:21 PM | Permalink

    Awesome, David. The Houston Chronicle has a story that Drudge nicely headlined for me: “Snow Surprise in Houston”. Get ready for the -20 to -40F temperatures coming down from Canada. It would be even colder, but the high-pressure cell actually originates over the North Pacific, not the Arctic or Siberia where the -60 to -70F stuff is.

    Here is a link to a 4x updating image of the coldest temperature that will be experienced during the next 6-days at each grid point in NCEP GFS 0.5 degree forecast: North America Lowest Temperatures — 6 days. Or a global view: Global Low Temperatures.

    Likely freeze coming up in LA, Las Vegas, Phoenix, and maybe San Diego, all of which moves eastward next week… The bottom is ready to drop out…

  119. Posted Dec 10, 2008 at 9:55 PM | Permalink

    Here’s a photo of my snow-coated lemon bush taken an hour ago. A few lemons are visible near the ground:

    This is highly unusual for Houston, especially early in winter. I do wonder what this winter holds for North America.

  120. Mark T
    Posted Dec 10, 2008 at 11:23 PM | Permalink

    It’s snowing in Houston? Wow. We’re starting to get the good stuff here in CO Springs, but it is expected at 6500+ feet of altitude this far north, hehe.

    Mark

  121. Phil.
    Posted Dec 11, 2008 at 9:00 AM | Permalink

    Yesterday the temperature was in the 60s in NJ.

    • UK John
      Posted Dec 11, 2008 at 1:39 PM | Permalink

      Re: Phil. (#122), Phil is right! The temperature depends on the weather.

      Re: David Smith (#120), That observation needs adjustment, it’s at night, add 5 deg at least.

  122. Posted Dec 11, 2008 at 10:08 AM | Permalink

    Heavy snow expected in Jackson, Mississippi, with 3-8 inches forecast. Also currently snowing in New Orleans, which is under a winter storm warning.

    Here is an image from the webcam in Hahnville LA, which is just west of NOLA:

  123. Mark T.
    Posted Dec 11, 2008 at 10:57 AM | Permalink

    And Copper Mountain hasn’t had any snow in 2 days. Life just is not fair to a skier.

    Mark

  124. Jonathan Schafer
    Posted Dec 11, 2008 at 1:20 PM | Permalink

    We missed out on the snow here in Dallas, but not the cold. Thankfully it’s warming up today and will get warmer through the weekend. Unfortunately Sunday’s high is expected to be around 72 degrees and windy, and as I will be running the White Rock marathon, that could get a little toasty.

  125. Posted Jan 17, 2009 at 8:52 PM | Permalink

    US state low temperature records are in the news. Here’s a plot of the decade in which the current records were set:

    Blue is the record low temperature while red is the record high.

    There is no apparent trend over the period. The 1930s, obviously, were a decade of extremes, both warm and cold.

    Perhaps it’s a bit of a surprise that the 2000s, for all the reported warmth and worry about extreme weather, have set no records so far.

    • Kenneth Fritsch
      Posted Jan 18, 2009 at 12:33 PM | Permalink

      Re: David Smith (#127),

      Not so fast, David. You have shown that the cold and hot state temperature records correlate over the time period used, with an adjusted R^2 = 0.53 and a slope of 0.371 +/- 0.19.

      I leave the remainder of this exercise for someone to conjecture that, since GW would logically seem to cause extreme hot temperature records to be broken, we can expect that it will also cause extreme cold records to be broken, and we will have the worst of all worlds.

      As a corollary, we could conjecture that a cooling climate would cause the cold records to be broken, which would in turn cause hot records to be broken, again giving us the worst of all worlds.

      Therefore, one could propose that a nontrending climate, i.e. the status quo, is the ideal.

      All to demonstrate that I have had way too much time on my hands of late.

    • John Norris
      Posted Jan 20, 2009 at 9:59 PM | Permalink

      Re: David Smith (#127),

      Perhaps it’s a bit of a surprise that the 2000s, for all the reported warmth and worry about extreme weather, have set no records so far.

      I am gobsmacked! Whatever that means. What is your source?

  126. frost
    Posted Jan 18, 2009 at 10:59 AM | Permalink

    In that it is like the 40s: a decade having the lowest number of total records which came after the decade with the highest number of total records.

  127. Posted Jan 18, 2009 at 9:09 PM | Permalink

    Re #129 Kenneth, thanks for pointing out the hot/cold correlation. My conjecture is that the common thread is dryness, which is associated with large temperature swings. I believe the 1930s were a time of drought over much of the US.

    Also, of course, single hot or cold spells (July 1936 and January 1996, for examples) can distort the records.

    I watched “Day After Tomorrow” the day before yesterday. Whether AGW leads to more hot or more cold is open for debate, but I think beyond a doubt AGW leads to bad movies.

    • Kenneth Fritsch
      Posted Jan 19, 2009 at 9:45 AM | Permalink

      Re: David Smith (#130),

      Also, of course, single hot or cold spells (July 1936 and January 1996, for examples) can distort the records.

      My view of the data would say that those two periods had a great deal of leverage on the correlation, although I did not bother to analyze it, since my point was to be facetious.

      I watched “Day After Tomorrow” the day before yesterday. Whether AGW leads to more hot or more cold is open for debate, but I think beyond a doubt AGW leads to bad movies.

      I watched some scenes from “Day After Tomorrow” two days before the day after tomorrow and I was set to wondering if those who are the most sincere advocates of immediate AGW mitigation must wince when they view how overdone this movie is. I have never been able to sit through it start to finish, but if the producers ever need a better rationalization for making it, I will gladly provide the hot/cold correlation – with all due credit to you.

    • Mike B
      Posted Jan 19, 2009 at 11:19 AM | Permalink

      Re: David Smith (#130),

      The saddest part about “The Day After Tomorrow” is that there is a great action novel by Allan Folsom with the same name that has nothing to do with AGW.

  128. Posted Jan 20, 2009 at 12:31 AM | Permalink

    Here is more on rainfall in the paleoclimate of the Great Lakes region. I put this in another thread but it looks like it fits here too.

    Great table as well.

  129. Posted Jan 25, 2009 at 12:19 PM | Permalink

    The graph in #127 may be overly influenced by single extreme heat waves or cold snaps. So, here’s a look at broader populations.

    The first is a look at the decades in which summer high temperature records were set. This covers the contiguous US for the three typically hottest months (June/July/August). A single nationwide event would affect the records for one month but not for all three, so this plot should be less-influenced by single extreme events.

    The appearance is similar to the record high plot of #127.

    Here is a similar plot except that it is for record lows in December, January and February:

    There appears to be a modest downward trend in extreme cold events.

    (Note: The final bar in each chart covers 2000-2003 (records posted as of May 2004) and is prorated so as to make an apples-to-apples visual display.)
    Here’s the combination of the two:

    Conclusion – the 1930s in the US were rough.

    Note: The trendlines for all three graphs are essentially flat (no trend) if the prorated early 2000s are excluded from the trend calculations.

  130. Posted Jan 26, 2009 at 3:02 PM | Permalink

    The chart du jour is a time series of the annual number of “warm nights” near Ken Fritsch’s Chicago. Warm nights are defined as those with minimum temperatures of 20C (68F) or higher.

    The two USHCN stations are in northern Illinois not too far from Chicago. They had the least missing data of the station choices but do have other concerns – Aurora is located at a water treatment plant while the Ottawa MMTS is nearly in a bank of trees.

    The trend in Ottawa over the last sixty years is flat to perhaps slightly down. Aurora was nearly flat, too, until about 2000, when the number of warm nights increased sharply. Hard to say if that is microclimate-driven by treatment plant activity or part of a weather-driven trend – I’ll check a few more stations. Regardless, there seems to have been little to no increase in warm nights near Chicago, at least until 2000.

One Trackback

  1. […] look at all the US data then. Last year in this thread on Climate audit David Smith […]