Excellent post by Pielke Jr here
111 Comments
Gavin's responses (both there and at RealClimate last week) are most interesting.
The comments at the link were certainly entertaining.
As usual when Gavin is painted into a corner, rather than provide additional data, or graciously admit he was wrong, we get this:
Translation: “I can’t refute Roger’s science, so I’ll take my ball and go back home to RealClimate.”
Gavin sez
Uhh…..yeah.
IPCC 1990 – “Greenhouse: It will happen in 1997!”
I really do believe this “aerosol negative forcing” is a fudge to make the figures fit, and to allow the IPCC to continue with its outlandish claims of sensitivity 2.5-3.0C, when the data will only support 1.0-1.5.
If it cools in the next few years, are they going to crank up their aerosol estimates, or admit that it's the Sun, stupid?
Rich.
LOL
This is the figure Gavin cites (Ax.3), and as he stated it indeed shows similar values without much divergence through 2010. Anyone know where one can find the figure that Roger cites ("Figure A3.1 in that same report")? I can't seem to find an A3.1 in the 1992 supplement.
Thanks
For your enjoyment:
Just these graphs
More discussion here: Comparison with GISS and Hadcrut.
#10
The problem with starting records relating to global temperature in 1979 (satellite ice data) or 1980 (graph in #10) is that it is the end of a period of unusual cooling. Because of that anomaly, it makes the post-1980 slope greater than it should be.
If one is searching for an AGW ‘slope’, since the pre-1956 warming is “natural,” (according to some) it might in some ways be better to stitch 1980 right after 1949, since there is no satisfactory explanation for the mid-20th Century cooling.
#11
Hi Mike,
Do you have a newbie-friendly graph or pointer to a dataset at hand that shows this?
-S
Mike– I agree there is a problem with starting records in 1980. Notice the data in my graph starts in 1984. Gavin shows data in 1980. :)
I created mine to see how well Hansen's scenarios forecast. My initial thought was to start in 1988, after Hansen published. But I changed my mind after I read Gavin's suggestion to use 1984. (He's made this suggestion at least twice.)
In any case, Hansen was aware of the El Chichón volcanic eruption when he ran scenarios A, B & C. Had I shown data from 1980 (which you advise against), you would have seen that his model accounted for that in the pre-1984 forcing.
In case you are wondering why I might compare empirical data to Hansen’s prediction after 1984, other than Gavin’s specific recommendation, reasons include:
* Hansen began computations in 1983, and so 1984 is the first year in which a forecast verification is possible. This gives us the longest record for comparison.
* Starting earlier tests the hindcast. Like it or not, to some extent the model parameterizations are tuned to give decent results for hindcasts. Models that don't hindcast simply can't get published.
* Starting too soon after 1959 will be affected by the "cold start" problem for computations. (Gavin uses the term "or so" in his post discussing the cold start problem and suggests that the cold start problem is of lesser importance after a decade or so of model running.)
Anyway, you can feel free to do forecast testing starting with any date after 1984 you like. At least for now, they won’t look any better, and of course, the trend will contain more scatter. The Land-Ocean has been falling behind Hansen’s ABC GCM predictions. Unless it catches up, the comparison isn’t going to look that good.
The only way to make those look good is to compare to land-only data. That's a bit of an orange-to-lemons comparison. After all, while the met station data is taken on the surface of the earth, it excludes the 70% of the surface over water. And the GCMs compute the GMST weighting by surface area, so the surface of the ocean contributes to the A, B, C scenario GMST predictions in exactly the proportion that it contributes to the earth's surface.
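Lucia's forecast-vs-observation comparison reduces, mechanically, to fitting least-squares trends to the observed and scenario series over the same window and comparing the slopes. A minimal sketch with invented numbers (not real GISS or Scenario B data):

```python
import numpy as np

def trend_per_decade(years, anomalies):
    # Ordinary least-squares slope, converted from deg/year to deg/decade.
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0

# Illustrative, invented series starting in 1984 -- not real data.
years = np.arange(1984, 2008)
obs = 0.015 * (years - 1984) + 0.10         # pretend observations, ~0.15 C/decade
scenario_b = 0.028 * (years - 1984) + 0.10  # pretend Scenario B, ~0.28 C/decade

print(round(trend_per_decade(years, obs), 2))         # 0.15
print(round(trend_per_decade(years, scenario_b), 2))  # 0.28
```

With real series the start year matters exactly as discussed above: the fitted slope is sensitive to whether the window opens on a cold or warm excursion.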
pjaco (8)-
Thanks for posting up the Figure. You can see pretty clearly in it that by 2010 the values diverge.
The Figure showing CO2 emissions can be found on p. 81 (Figure A3.1).
It will also be useful to look at Figure A.9 of the First Assessment Report, on p. 336. The BAU curve clearly shows an estimated temperature change by 2010 of over 0.6 degrees, and emissions were actually higher than the BAU scenario.
All of these assume a 2.5 degree climate sensitivity. Using a 2.8 sensitivity (as was done in 2001) would raise these curves a bit further, and 3.0, as is assumed today, even more so.
Hope this helps.
RPJr says:
I’m confused. Why would you compare temp divergence to CO2 divergence by 2010?
Obviously CO2 emissions and temp are not linearly related, so the range for emissions is going to be much larger than a corresponding range for temp. As you can see, the "divergence" in temps for 2010 in Ax.3 is tiny, and the highest temp (IS92e) is below 0.5C:
Also, per your post(s), are you sticking to the 92 Supplemental, the FAR, or switching back and forth between the two for your 1990 values? In your blog you say “IPCC issued its first temperature prediction in 1990 (I actually use the prediction from the supplement to the 1990 report issued in 1992)” yet you suggest that I go back to the FAR for BAU giving +.6C.
The chart you use appears to show a greater temp value for the 1990 IPCC (which you say you have forgone in favor of the Supplemental ’92 info) by 2007 than even the highest temp by 2010 given by the Supplemental 92 info (Ax.3).
Any clarification would be greatly appreciated. Cheers.
#10 Lucia, Hansen's predictions for Scenario C are interesting, being (if my memory serves me) based on no increases in CO2. That is, the observations are even lower than predictions based on no increase in CO2. Another 'disturbing' fact for AGW.
pjaco-
Thanks.
1. Gavin argued that the choice of scenarios does not matter. I pointed to A.3 to show that there is actually a large divergence in emissions profiles very early on. In the figure that you show, the choice of scenario clearly does matter somewhat by 2010.
2. I used the IPCC 1992 supplemental to allow for some consistency between 1992 and 1995, which used the same family of scenarios. If you'd like to use the SA90 scenario of 1990 as a baseline, the results will be higher than 0.6 in 2007. By email I received some criticism for being "too fair" to the IPCC by using 1992 rather than the original 1990!
3. As I told Gavin (five times), while I can see where he might arrive at a value as low as 0.45, the value I came up with is a little higher. The difference is probably smaller than the uncertainty in the model predictions and makes no difference in the interpretation and conclusions, but I'd be happy to grant you a value as low as 0.45 in 2007 if that is really what you're looking for.
4. When I next update this effort, for the 1990 IPCC prediction I will simply use SA90 and the original FAR 1990 (rather than 1992), which has the 2007 temperature prediction above 0.6. That should make everyone happy ;-)
#11
I have taken the original IPCC temperature graph and drawn a trend line from about 1900 to about 1949 when the warming is said to be “natural” then extrapolated the line to 2000, the end of the graph. The recent warming had just caught up to the “natural” trend line!
If one takes the figures provided by Roger for 2007, even GISS falls below the extrapolated trend line.
My point is that, until there is a scientifically persuasive explanation for the mid-20th Century decline in temperatures, the late 20th-early 21st Century rise does not appear to significantly differ from extrapolated “natural.”
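Mike's exercise is easy to reproduce in a few lines: fit a line to the "natural" warming period and extrapolate it forward. A sketch with invented numbers (the 0.05 C/decade "natural" rate here is made up, not read off the IPCC graph):

```python
import numpy as np

# Fit the assumed "natural" 1900-1949 trend, then extrapolate it to 2000.
years = np.arange(1900, 1950)
temps = -0.3 + 0.005 * (years - 1900)   # invented series, 0.05 C/decade

slope, intercept = np.polyfit(years, temps, 1)
extrapolated_2000 = slope * 2000 + intercept
print(round(extrapolated_2000, 2))  # 0.2
```

The comparison then asks whether the observed late-century value exceeds `extrapolated_2000` by more than the scatter about the fitted line.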
Given the headline-grabbing early-2007 predictions by Jones (using Hadley) and Hansen (using GISS) that 2007 would likely surpass 1998, we can safely conclude that neither man would have made such grandstanding claims unless his GCMs told him it was a safe bet. The fact that each man's carefully scrutinized model analysis was terribly wrong is only the latest example of why these GCM-based scenarios are without merit.
Combine such recent fiascos with the longer-term scenarios hammered in this thread, and only those with a strong core AGW faith would take such pronouncements seriously; the rest of us remain "show me results" dubious of these manufactured and clearly speculative scenarios.
A similar comment was posted at Accuweather and I thought it may be worth your while to re-check it.
Re ICE DATA : hope this is OK to post here if not place in appropriate place thanks VG.
There appears to be a difference between IPCC and uiuc Arctic ice data.
uiuc shows a 1982 minimum of 5 Mkm2 and IPCC shows 7.5 Mkm2.
http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-chapter4.pdf figure 4.9
“The uiuc ice map (and graph) Sept. 1, 1982 shows less ice than Sept 1, 1996.”
“Yet the IPCC chart shows the 1996 minimum 30% lower than 1982.”
pjaco-
Here you go, updated Figure, no more 1992 report:
http://sciencepolicy.colorado.edu/prometheus/archives/climate_change/001321updated_chart_ipcc_.html
RPJr:
Thanks for all of the quick responses.
I don’t know. 3.5 (eyeballing) gigatons CO2 doesn’t seem to be a particularly “large amount of divergence” given the rather small amount of temp difference it would imply. Am I overlooking something?
Cheers
I’ll check it out when I get home.
Thanks for your responses.
Unfortunately, my fellow economists often do the same thing Roger accuses IPCC of — half way through 2007, they will issue “forecasts” of inflation for the average 2007 price level over the average 2006 price level, when the inflation in question is already 50% (or more, in fact) in the bag.
#10 #18 Mike Smith:
Whether or not you find it a satisfactory explanation for mid 20th century cooling, I would offer the following. The solar cycle lengths of 10.7 and 11.6 years for cycles 19 and 20 (roughly 1953-1975) were the longest since the 11.9 of 1890-1902. This, like the current cycle which will probably weigh in at 12.5-13.0 years, may be deemed responsible for some global cooling.
Steve M has in his in-tray an article/paper from me on this subject, which models temperature through a combination of solar cycle length and CO2 concentration and will give more details. I am indebted to David Archibald for bringing this to my attention, though I come to different conclusions from him in terms of sensitivity to solar cycle length.
Hope this helps, or is at least food for thought,
Rich.
On another forum a few weeks ago, someone provided this article from Jim Hansen which validated his 1987 prediction.
Jim Hansen on Michael Crichton
I'm unable to reproduce the image, but Scenario B, which is supposed to represent moderate growth in CO2, is the best match to the observation line. I may be wrong, but it seems I have read somewhere that CO2 rose faster than expected. The observation line is probably from GISS, which is the one farthest from the satellite record.
Sylvain #26,
I made a crude attempt to update that figure here.
The data for 2007 seems closer to Scenario C (GHG emissions stop increasing in 2000), than to Scenario B (business as usual, with an occasional volcanic eruption.)
Someday, I’ll try for a better graphic.
Let me give this a try.
Ah-h-h, there we go.
And note that in order to match Scenario B, it’s not enough to just have years of record warmth, but we need to smash and trash the 1998/2005/2007 (equal within margins of error) records by 2010.
Here is a times series of the tropical lower troposphere (“TLT”) with the effects of ENSO variation removed (more or less). Note that these non-ENSO driven temperatures have been flat to declining for 8 or so years. A generally warm-phase ENSO has sort of disguised this backdrop decline in temperatures.
Here is the same plot but with the volcano-affected periods erased. The visual impression is one of flat temperatures until the early/mid 1990s.
Finally, here is a time series where I replaced the volcanic periods with an estimate of what the temperatures would have been had the volcanoes not occurred, based on the ENSO index values for those two periods.
These are interesting in part because TLT is correlated with global lower troposphere temperature and hint at what the global record would have looked like without the high El Nino and volcanic activity of the satellite era.
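For readers wanting to try this themselves, one common way to "remove" ENSO is to regress temperature on a lagged ENSO index and keep the residual. A crude sketch with synthetic data (David's actual method may well differ):

```python
import numpy as np

def remove_enso(temp, enso_index, lag):
    """Regress temperature on a lagged ENSO index and return the residual:
    a crude version of an 'ENSO variation removed' series."""
    shifted = np.roll(enso_index, lag)
    shifted[:lag] = enso_index[0]   # pad the start instead of wrapping around
    slope, intercept = np.polyfit(shifted, temp, 1)
    return temp - (slope * shifted + intercept)

# Synthetic check: a series that is purely a lagged, scaled ENSO signal
# should come back as (near-)zero residuals.
n = 120
enso = np.sin(np.arange(n) / 6.0)        # invented monthly ENSO-like index
lagged = np.roll(enso, 5)
lagged[:5] = enso[0]
temp = 0.12 * lagged + 0.05              # invented temperature response
residual = remove_enso(temp, enso, lag=5)
print(round(float(np.abs(residual).max()), 6))  # 0.0
```

On real data the residual is what is left after the ENSO-correlated part is subtracted, which is the "backdrop" series David describes.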
John M says:
Your graph is different from this: http://www.climateaudit.org/?p=796
Did you verify that the actual data and model predictions used the same anomaly baseline?
Re: 30
David
Very interesting. Unless I’m misreading things, all graphs, no matter what, trend downwards after the 1998 peak. If so, that’s pretty compelling.
25 Rich
Is it available online anywhere?
(jae uses esnips)
Re: #31
They look the same to these old eyeballs. What are you seeing that's different?
In the chart that John M posted (January 14th, 2008 at 8:39 pm)
That’s always been another question. Where did he get this “Estimated Temperatures During Altithermal and Eemian Times” (.5 – 1C)?
Probably Hansen informed him that the Holocene Optimum was colder, or "warmer only in summer," than the 20th century. Previously, the Eemian was thought to be substantially warmer than present, by more than 2 degrees, but hey, this is climate science. If the MWP is colder than now (see Hockey Stick), and if the HO is colder in the tropics and warmer only in summer in the NH, why wouldn't the Eemian be similar to present conditions?
With regard to the Hansen charts, it is also worth noting that the GISS series displays the highest post-1997 trends of the rival global temperature data sets. Just one anomaly needs to be noted: the ranking of 2005 as warmer than 1998.
That is another problem with these types of analyses: ex post forecast assessments are made by comparing prior forecasts of, say, Hansen with ex post observations, which are proprietary data produced by the same person. This is not trivial, given the amount of "adjustment" that goes into producing this data.
It is a bit like saying I can forecast the lottery numbers. Here is my forecast; now I will go pick the lottery numbers and you can see how I did. Absurd.
I must confess I find it disconcerting when the variation between “observed” temperatures is of similar magnitude to the change we are supposed to be analysing. In an area where the science is so “settled” can we really do no better than this degree of uncertainty? (Which I know comes back to something that Steve and co have been saying for ages, that if they were serious about wanting the truth on climate change they’d be making more investment in actually observing it properly.)
I’d also note it’d be useful on Roger’s graphs (which are marvellous) if forecasts had some obvious indicator to show which parts were forecast and which were hindsight – maybe dash the pre-forecast section, or at least mark the forecast date. Yes, one can work it out from the legend but it feels wrong to me to have forecast and fitted sections afforded the same visible status in such graphs.
I think Pielke Jr is rather generous when he says in his linked article:
“The IPCC actually has a pretty good track record in its predictions”.
The predictions are simply straight-line extrapolations of an ongoing trend that started around 1978 (and seems to have finished somewhere between 1998 and 2002). For the Earth's climate, a trend lasting a couple of decades is nothing unusual. Last year, Hansen claimed his earlier predictions (of the 1980s) were relatively accurate. It shows nothing more than the fact that he (and the IPCC) were 'lucky' that the trend continued, proving nothing regarding the cause of the trend.
As it happens, it is worthwhile remembering that despite all the studies and conclusions around AGW, the period through which reality has matched AGW theory lasted only 20 years (1978 to 1998; or 24 years, if you assume the trend extended to 2002).
“Global Warming” implies a finite, relatively small set of falsifiable hypotheses. “Climate Change” does not.
Once the green politicians realize that surface use effects based ACC (Anthropogenic Climate Change) gives them a much broader mandate for social engineering than AGW, Hansen, Thompson, Mann, et al will be left holding the carbon tax in their cap and capture bag. CO2 based AGW will prove to be a one-generation thing.
Re 18 – Mike, try it again with your “natural” trend from ca 1908 to 1946. Murray
So true, MB #40, but possibly with the cautionary tale of carbon AGW in mind, policy makers will insist on accurate science. Some good must emerge from this mess.
=========================
Re Mike B (40)
A good point there.
It has been a long-time view of mine that when temperature stabilises or starts to dip (without a reduction in CO2 emissions), the AGW club will not acknowledge any error. The 'politicians' will blame the experts (i.e. the scientists); that idea rings a bell! The AGW scientists (or should that be 'scientists'?) will claim that the effect of aerosols emitted by China and India has overtaken the CO2 effect, and that we are heading for global cooling again. So, yes, the concept of ACC (anthropogenic climate change) is probably valid.
I believe it will be possible to tease out the distinction between albedo and the CO2 effect, thus stymieing that strategy, but my optimism is only based on the thin reed of truth.
=====================
On the pessimistic side, it amazes me, a child of the Twentieth Century, that there are forces more powerful than truth.
=============================
I agree with Pielke Jr that these scenarios and temperature trends will be used by all sides of the AGW debate to "prove" a point, and that these projections/predictions can become so abused because the people and groups making them do not spell out the ground rules whereby the predictions can be evaluated. I have several problems beyond this point with typical climate predictions:
1. The use of scenarios necessarily contains two essential parts: (a) the forecast of the variables that are then used, presumably in the climate models, to predict future climate, which is (b) the second part of the scenarios. My interest would be in separating these parts, since economic model predictions of the variable inputs into the climate models, such as CO2 emissions, are of a different nature than using a given set of variables to predict future climate. In fact, we have more experience with the economic models than we do with climate models, and the former have not been shown to perform very well at long-term predictions. Why do we not evaluate climate models out-of-sample simply by fixing the model at the start of the out-of-sample period and using the inputs of the variables as they actually occurred over the testing period?
2. When the various scenarios of Hansen or the IPCC are evaluated, I have not seen much effort applied to comparing the actual variables with the scenario variables. Of course, if the actual variables as they occurred were input, as I suggest above, we would have a better picture of the climate model, not just in the form of trends but as year-to-year variations and perhaps regional variations as well.
3. I do not know whether there are sufficient climate model predictions floating around out there, but if there were, one would also have to take into account the "showcasing" of only the successful models while ignoring those which were not.
4. We then have the beta problem that Lucia has referred to here, in that the out-of-sample results are insufficient to put much certainty on the results of a statistical test. We have a situation of past temperature increases that are simply extrapolated at or near the rate of the most recent past trends, which is not unlike what we see in economic and stock-market projections, which look good until the inevitable downtrend. That is why I cannot get excited about Hansen's Scenarios B and C, which do not really show much divergence until late into this decade.
5. If one wants to look at prediction models in general, then the ones used to establish the climate model inputs for Scenarios A, B and C should be scrutinized every bit as much as the models used to predict climate.
As Pielke, I think, is suggesting, we would do better to spend our time determining how best to evaluate model predictions than in attempting to put significance and spin on the paltry out-of-sample results to date.
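Kenneth's point 1 can be sketched as a procedure: tune ("hindcast") a model on data up to a cutoff, freeze it, then drive it with the forcings that actually occurred afterward and score only the out-of-sample span. A toy illustration, where the "model" is a trivial linear response and every number is invented:

```python
import numpy as np

def fit_toy_model(forcing, temp):
    # In-sample tuning: here just a linear fit of temperature to forcing.
    return np.polyfit(forcing, temp, 1)

def run_toy_model(params, forcing):
    slope, intercept = params
    return slope * forcing + intercept

forcing = np.linspace(1.0, 2.0, 40)   # pretend forcing series, 40 years
temp = 0.5 * forcing - 0.3            # pretend anomaly series
cutoff = 25                           # freeze the model here

params = fit_toy_model(forcing[:cutoff], temp[:cutoff])      # hindcast tuning
out_of_sample = run_toy_model(params, forcing[cutoff:])      # driven by actual forcings
print(round(float(np.abs(out_of_sample - temp[cutoff:]).max()), 6))  # 0.0
```

The point of the separation is that the out-of-sample error then measures the climate model alone, not the emissions forecast bundled into the scenario.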
If the GMTA trend actually reflects a growing overall temperature (which seems to be the case, based upon the satellite atmosphere and sea readings corroborating the GMTA trend), since all the AGHGs are showing the same curves, the truth is more likely that the gases follow the temperature rather than the other way around.
I still maintain that land-use changes and particulates (pollution) in the air, on the ground, and in the oceans are more important than trace gases and the water vapor feedback associated with the trace gases (AGHG, GHG and non-GHG alike).
Climate change.
RE: “all graphs, no matter what, trend downwards after the 1998 peak.”
Bu … bu … but … "Greenhouse: It Will Happen In 1997!" Where are all those horse races on what used to be Arctic tundra? Where are the widespread underwear-as-outerwear fashions in California (FYI – 40s F today, here … LOL)? By now, we should have been halfway to a Venusian existence. Where's that tipping point? Where's that runaway GHE?
Kenneth wrote, in 46:
The first issue in determining how to best evaluate model predictions is the fact that model predictions are all over the map.
Back on 12/7/07, RealClimate rebutted a Douglass et al. 2007 paper that showed how actual tropospheric warming was much less than what the models predict. And how did RC accomplish this rebuttal? By pointing out that model results vary from predicting a 0.6 degC/decade trend all the way down to a trivial 0.01 or 0.02 degC/decade trend, with predictions of actual cooling above 200mb. See the graph in the post here:
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/langswitch_lang/sw#more-509
Given that much variation in output, the one evaluation of the models that we can safely make, right off the bat, is to say that most of them are wrong.
#33 Keating
No, my article is not yet online. I admire CA very much and so I want to publish it here, but wanted to avoid the maelstrom of Unthreaded. Therefore I have been in touch with Steve about it; after I’ve finished my current redraft I’ll negotiate again, and possibly the current changes in blog management will affect this. I have also now noticed the global warming blog at solarcycle24.com, so it’s just possible I might publish there.
Anyway, thanks very much for your interest, Pat. Do you have any influence over Steve? :)
Rich.
re 49 (Michael)
Ahh but given enough models and enough variation, one of the models will always be close to correct. Then they can toast the winner and say how uncanny it is that they predicted the outcome.
Shouldn’t the authors of every model produce a falsifiable set of predictions?
Something like:
A model needs to have strict predictions with uncertainties. That’s the only way that we (the society which uses the model) can evaluate it. Regardless of all the complexities of weather, the modelers can make firm predictions like that. And if done honestly, they should be correct when the predictions are measured. If they need to make the range large to account for the “known-unknowns” then that needs to be understood. If, on the other hand, they are confident that they can nail down more precise changes, let them be clearly tested. You can make falsifiable predictions over a ten (or even five) year period.
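The bare minimum such a falsifiable prediction requires is a stated central value, a stated uncertainty, and a rule for when it fails. A sketch with hypothetical numbers (the 0.20 ± 0.05 C/decade "prediction" is invented for illustration, not taken from any model):

```python
def prediction_verified(observed_trend, predicted_trend, uncertainty):
    """True when the observed trend lies inside the stated interval
    predicted_trend +/- uncertainty; False means the prediction failed."""
    return abs(observed_trend - predicted_trend) <= uncertainty

# Hypothetical numbers only: a stated prediction of 0.20 +/- 0.05 C/decade.
print(prediction_verified(0.06, 0.20, 0.05))  # False
print(prediction_verified(0.18, 0.20, 0.05))  # True
```

The tension the comment identifies is visible in the `uncertainty` argument: widen it enough and nothing can ever falsify the model, which is exactly the "known-unknowns" trade-off described above.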
Mike Smith,
There you go again! Nature isn't linear. Just because it falls within an extrapolated "natural" trend doesn't mean it is natural, and just because it goes beyond it doesn't mean it isn't.
But we are ruining the planet!!!!
Raven #31
I'm not sure which graph you're looking at. My intent was simply to conduct the trivial exercise of updating Hansen's own graph with two more years of GISS data. Willis and RPJr have both taken a stab at doing a more thorough analysis of Hansen's scenarios/projections/predictions/forecasts/prognostications/extrapolations/etc., and have gotten arguments and debates about what the meaning of is is. Lucia is undertaking a similar effort, but I haven't been over to her blog to take a look lately.
I just wanted to see how a plot that had the AGW seal of approval stood up after an additional two years of data.
Ivan #36
He didn’t inform me of nuttin’. It’s his graph. All I did was add two asterisks to represent 2006 and 2007 data.
And BTW, I checked to make sure the table I took 2006 and 2007 from, which is here, had data consistent with the prior years plotted in Hansen’s graph,
John M says:
Lucia pointed out that the different data sets have different baselines, which means simply overlaying two datasets is potentially misleading.
Raven,
Yes, I’m aware of that. That’s why I’m just sticking to his own graph and using his own data. That is, of course, the “best case scenario” to validate his 1988 projections, and even at that, the observed temperature anomaly is barely above the Scenario that assumes cold-turkey cessation in GHG increases in 2000.
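The baseline issue being discussed here is mechanical: before overlaying two anomaly series, re-express each relative to its mean over the same base period. A sketch with invented numbers:

```python
import numpy as np

def rebaseline(years, anomalies, base_start, base_end):
    """Re-express anomalies relative to their mean over a common base period,
    so two series with different baselines can be overlaid fairly."""
    years = np.asarray(years)
    anomalies = np.asarray(anomalies, dtype=float)
    mask = (years >= base_start) & (years <= base_end)
    return anomalies - anomalies[mask].mean()

# Invented series: identical shape, offset by 0.25 C from a different baseline.
years = np.arange(2000, 2005)
a = np.array([0.10, 0.20, 0.15, 0.25, 0.30])
b = a + 0.25
print(np.allclose(rebaseline(years, a, 2000, 2004),
                  rebaseline(years, b, 2000, 2004)))  # True
```

Without this step, the apparent gap between model and observation includes an arbitrary constant that has nothing to do with forecast skill.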
The 1990 RSS average lower troposphere temperature anomaly – +0.055C
The 2007 RSS average anomaly – +0.16C
Total increase in temperatures from 1990 to 2007 +0.1C
Hansen and the IPCC are off by a factor of 5.
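The "factor of 5" can be checked arithmetically from the numbers given, if one assumes a predicted rate of roughly 0.3 C/decade (my assumption; the comment does not state which predicted rate it has in mind):

```python
# Observed numbers are from the comment; the predicted rate is assumed.
observed_change = 0.16 - 0.055            # C, RSS 1990 -> 2007, per the comment
decades = (2007 - 1990) / 10.0
observed_rate = observed_change / decades
predicted_rate = 0.30                     # C/decade, assumed

print(round(observed_rate, 3))                   # 0.062
print(round(predicted_rate / observed_rate, 1))  # 4.9
```

Note the comparison is sensitive to the two endpoint years chosen; a trend fitted through all the intervening years would be the more robust version of the same calculation.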
Using a single number like global mean temperature to describe a system as complex as the earth’s atmosphere is a bit silly. All these predictions would become more believable and meaningful if the models published temperature maps of the earth. Though I doubt that the GCMs are so outlandish as to predict polar bears at the equator and crocodiles at the poles, a single global average can hide all sorts of errors that a map would expose. A test I’d like to see would be maps (from all the 22 IPCC models) produced today predicting 2009 through 2014 to be compared with the satellite temperatures. It’s got to be the satellite temperatures to take the data out of the hands of the people producing the models. Can’t have Enron auditing its own books.
50
None whatsoever. Using my name might hurt your cause…. :>)
Let me know if and when you put it online….
"It's got to be the satellite temperatures to take the data out of the hands of the people producing the models. Can't have Enron auditing its own books."
Why the assumption that satellite records are not problematic in their own right? Christy has had to revise his figures several times already, because there is no direct measurement of the troposphere temperatures. It has to be interpolated from instruments that were designed for other purposes.
Bugs says:
Satellites provide a consistent dataset that covers the entire globe. Satellites also provide data from a source that is not controlled by the people that created the climate models.
Wish it were so, but it t’ain’t….
The satellites have no means of making direct measurements of air temperatures. Instead, the satellites observe and measure radiance. The measurements of radiance must then be analyzed for EM frequencies, range to source, angle of observation through the atmosphere, and many other factors to infer probable ranges of air temperature. The products of the analyses are further analyzed and adjusted by comparison to other known indirect and direct measurements of air temperature, particularly upper-air observations by radiosondes. The instruments aboard the orbiting satellites are each slightly different in their calibrations for measuring radiance, so they too are subject to adjustments of their measurements to calibrate and account for instrument accuracy, instrument drift, and orbital drift. There are multiple analyses with multiple suites of corresponding adjustments which are intended to remove inaccuracies in the previous analyses. Ultimately, the satellite observations are a multi-layer cake of adjustments seeking to discover and mimic reality. The question is then: to what extent are the observations and adjustments successful at inferring reality from radiance measurements?
D. Patterson says:
I see what you're saying, but I still think the satellite data is the best option because:
1) What we care about is year upon year changes. The satellites could produce absolute temperatures that have no connection to reality but still provide a useful dataset provided they are internally consistent.
2) We have two competing groups that produce a temperature record from the same dataset. That competition keeps both groups on their toes and gives us much better confidence that the “adjustments” are based on sound science.
3) The data sources are not controlled by the people producing the models. The GISS temps are the result of extensive data manipulations that were chosen by the person who created the initial models and who would probably like to demonstrate that his earlier predictions were right.
Yes, the satellite observations presently appear to be one of the more trustworthy and useful data sources. Just don’t fall into the trap of believing they don’t have their own serious struggles with inferring consistency from indirect and inconsistent instruments and methods. As good as they are or may become in breadth of geospatial coverage, they will always be indirect methods which are subject to potentially critical errors in observation, interpretation, and analysis. I think the present satellite observations are relatively high quality, but understanding their indirect nature and limitations is essential to keep the handling of such systems and data at standards of high quality. For examples of such problems with satellite observations, see Steve Sadlov’s running commentary on the problems with satellite monitoring of the cryosphere in the Ice Island T-3 thread.
Michael Smith said…
And how did RC accomplish this rebuttal? By pointing out that model results vary from predicting a .6 degC/decade trend all the way down to a trivial .01 or.02 degC/decade trend, with predictions of actual cooling above 200mb.
The man (Gavin Schmidt) and his cohorts at RC are mad. It seems to me that Gavin et al. think they know numerical modeling inside out, but it is obvious from their rebuttal above that there was no goodness-of-fit test applied at all, just mumbo jumbo and rubbish which they try to masquerade as scientific fact.
Satellites are in a particularly harsh environment. As a result, the issue of keeping the instruments properly calibrated is problematic. Additionally there are issues with orbital changes, which can affect the data the satellites receive.
Satellite readings can be compared to balloon soundings, where those readings overlap.
Paul, in 59 wrote:
Actually, the models (at least some of them) do produce a temperature map of the earth, of sorts. Go to the article I mentioned in 49 — here is the link:
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/langswitch_lang/sw#more-509
That article shows two such temperature maps.
As far as comparing the models to the satellite data is concerned, such a comparison is precisely the object of the Douglass et al. 2007 study that RealClimate is attempting to refute in the article above. The Douglass paper shows that satellite-measured tropospheric warming is much smaller than what the models predict. RealClimate's answer to that is to show that the models vary so much that one cannot conclude there is actually a difference.
In other words, they are saying that we needn’t bother pointing out that we aren’t warming as much as the models predict, because they have predictions that include virtually no warming at all. The model predictions range down to .01 – .02 degC/decade, which is one or two tenths of a degree per century. Too bad the general public isn’t told that.
Paul, if you are interested, here is a link to the Douglass et al. study: http://www.uah.edu/News/pdf/climatemodel.pdf
And yet, if someone from the “denialist” side were to use this as a rebuttal, they’d be raked over the coals.
Next time someone says that the prediction of the models are correct, reply with a question about those models that show the small amounts of warming. Then stand back…
That strikes me as really dubious. Any model whose conclusions are so broad that it can’t be proven wrong is obviously dangerously unscientific, and of course useless. Truly disturbing.
#71
Correction. Someone has used this as a rebuttal, and they have been raked over the coals. That’s why two years ago I started keeping track of alarmist double standards in a database.
Re: #68
Change “can” to “must” in the above sentence. If you know the temperature profile, you can calculate the microwave radiance at the different frequencies monitored by the satellites. Unfortunately, because of bandwidth and noise, the inverse problem is ill-posed. You need to know more than just the satellite orbit and the microwave readings. Balloon soundings provide this additional data. However, you don’t have uniform coverage for the balloon data, particularly over the oceans, so the accuracy of the satellite temperature reconstruction could be questionable. I still think the satellite data is a better approximation of reality than the instrumental surface temperature data, though.
A significant error that RC made in its presentation of model errors in its trashing of Douglass et al. is to include in its estimates of model uncertainty those model runs with surface temperature trends that fall outside the range of surface observations. Douglass et al. constrain their examination to only those model runs with realistic (i.e. comparable) surface trends, and then look to see what happens at higher altitudes. I assume that if RC had imposed such a constraint they would have reached the same conclusions as Douglass et al., since this is what Douglass et al. report to have done. RC either mischaracterized their paper on purpose or simply didn’t read it closely.
Roger Pielke Jr. says:
Is there a place on the web where I could get more details on this rebuttal of the RC rebuttal?
Shouldn’t there be a topic here distinguishing between models and model runs? AFAIK each model run produces a somewhat different result, even with the same starting conditions and tuning. This stochastic nature of the runs seems to point to some random variable within the model. I guess this comes from there being no weather analog à la Starr and Lorenz.
#75 Interesting. Forgot to pick their cherries.
RE #77, this is not true, is it? Are not the models completely deterministic for a given set of initial and boundary conditions?
#77
I think you are on right track,
RC:
Depends what model you’re talking about. #77 S Hales needs to clarify. Presumably the reference was to the model runs Schmidt did in trying to refute Douglass et al., cited in #70.
Meaning there was a time when this was not standard practice?
Steve Geiger says:
My understanding is they include ‘weather noise’ and the modellers generate many runs to determine an average that excludes the weather noise. To make matters worse, modellers appear to exclude runs that produce ‘unexpected results’ as a result of the weather noise. For example, if a run produces cooling, then it is assumed to be invalid and excluded from the set.
Parsing further …
Would be interesting to know how they figure out what range of initial conditions would be required to “span the range” of possible weather states. What *exactly* does “spanning the range” mean? Randomized initial conditions obviously wouldn’t do it for you, because it’s the outer realm of initial conditions that you want to poke into. Do they try extreme combinations of ENSO(+/-) and PDO(+/-) for example? Are ocean subsurface temperatures considered (not so well-mixed layer)? For example: THC active vs. THC stalled? I imagine these extreme modal states will stretch the range of “realized unforced noise” by a considerable degree.
All in the theme of trying to figure out what RC means when they say internal climatic variability is “small” compared to external forcing – a question you can’t get answered over there. Is it because of the naive way they view their model output, presuming it is an actual climate system, forgetting that it is a toy? Just a question.
In trying to understand the models that the IPCC uses in their assessments and predictions – am I understanding this properly? The models assume a baseline on all values and, holding only those values constant, they then force into the models increased CO2, and out comes the result – temperature increases….
But the models are predicting up to 100 years into the future – do these models hold all of these values constant over this 100-year span while only CO2 remains the “un”-constant? When in the history of this planet has the atmosphere remained constant for up to 100 years?
It occurs to me that post-processing of GCM runs (i.e. selection of the fittest) could be considered an undisclosed aspect of “the model” itself. It seems strangely unbalanced to spend billions on physics experiments only to have one man (who is not a statistician, and apparently doesn’t have the budget to hire one) decide which runs are selected for analysis and presentation to the world. Weak link?
#85 Ceteris paribus. They’re not trying to predict future climate all-things-considered. They’re trying to assess likely responses, all-things-being-equal. This is a reasonable objective. You’ve got to explain the past before you have any hope of predicting the future.
Re: #87
But since the likelihood of “all-things-being-equal” is so low as to be effectively zero, what does this say about the likelihood of their assessments’ being correct?
They can’t explain the past if they keep changing it.
Raven, #88
What the heck is “weather noise”? The solution to a PDE from a fixed set of initial conditions is purely deterministic. The solution may not be predictable in advance due to the nonlinear nature of the system, but they should compute the same answer every time for a set of initial conditions. Do they just add random noise of some kind as the solution evolves?
It sure would be fun to find out what fraction of the runs are excluded. Does anyone know?
Re: #89
IMO this is a very important issue. The climate has been defined as an average of the conditions presented by the weather, but a better definition is that it is the statistical analysis of the attractor that the weather, as a chaotic phenomenon, follows. Or perhaps the basin of attraction it occupies.
The movements of air, water vapor, thermal energy, and chemical species assumed in the climate models are usually some average of the expectations regarding what the weather might do in the specific situation. The movements in the real world are the integral over time/space of what the weather actually does. It doesn’t always cancel out to any predictable value at the level the models work at. (I know this because I’ve seen references to the insertion of Monte Carlo Simulation at critical points within climate models. I don’t have references at the moment, I’ll try to find some if somebody else doesn’t get there first.) If there is a point in the model which is particularly sensitive to small differences in the effect of the weather, it makes sense to make several runs with slightly different values and see the spread in the results.
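The “weather noise” question raised in #89 can be illustrated with the Lorenz-63 system, a standard toy for chaotic dynamics (it is not any actual GCM): each run is perfectly deterministic and bit-for-bit repeatable, yet a perturbation in the eighth decimal of the initial state yields a completely different trajectory, which is why modellers average over many perturbed runs.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a toy stand-in
    for the chaotic dynamics inside a climate model."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

def run(initial_state, n_steps=2000):
    """Integrate forward deterministically from a given initial state."""
    state = np.array(initial_state, dtype=float)
    for _ in range(n_steps):
        state = lorenz_step(state)
    return state

base = run([1.0, 1.0, 1.0])
repeat = run([1.0, 1.0, 1.0])            # identical inputs: identical output
perturbed = run([1.0 + 1e-8, 1.0, 1.0])  # 'weather noise' from a tiny IC change
```

So there is no random number generator inside the equations: the spread across an ensemble comes purely from sensitivity to initial conditions, which answers the “do they just add random noise?” question in #89.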
Paul Linsay says:
That is the term realclimate uses.
Take a look at:
Click to access nature_first_results.pdf
For runs being excluded, take a look at the “cold equator” problem at climateprediction.net. This is something I noticed a while ago but never posted on. A not-insignificant proportion of their runs yield “cold equators” and are discarded as unrealistic. OK, what’s the effect of this?
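Whatever the right physical call on the “cold equator” runs, discarding one tail of an ensemble necessarily shifts the ensemble mean. A minimal sketch with made-up numbers (the trend distribution below is invented, not taken from climateprediction.net):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble of warming trends in degC/decade; the spread
# stands in for run-to-run 'weather noise', not any real model output.
trends = rng.normal(loc=0.2, scale=0.15, size=10_000)

# Discard runs that cool -- analogous to dropping 'cold equator' runs
# as unrealistic.
kept = trends[trends > 0.0]

# Truncating the low tail can only move the mean upward.
bias = kept.mean() - trends.mean()
```

Unless the discarded runs really are numerical artifacts rather than legitimate samples of the model’s behavior, this selection warms the reported average.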
Same question as #85 posted at RealClimate….
Answer:
[Response: Over the 20th Century, the models assume up to 14 or so different things changing – not just CO2 (but CH4, aerosols, ozone, volcanoes, land use, solar, etc etc.). CO2 is big part, but it is not the only thing going on. Similarly, future scenarios adjust all the GHGs and aerosols and ozone precursors etc. They don’t make assumptions about solar since there is no predictability for what that will be, and although some experiments throw in a few volcanoes now and again, that too is unpredictable. What would you have the modellers do instead? – gavin]
Re: #92
From the Unofficial BOINC Wiki entry on “Cold Equator Model”
At first glance I’d say it was a calibration problem.
#92 Stephen, regarding discarded model runs, this is an indication that the MODEL is unstable, so what is the rationale for this arbitrary act? If I were running a reservoir simulation and the engineer did this, he/she would be fired. The act is in fact the determining factor in the results, which are biased and trivial, as is the current “warming”.
#94
Exactly. Ceteris paribus in this context means you try to predict the deterministic responses to things you can predict, putting the stochastic/unpredictable aside. It is unproductive criticizing the climate forecasts for what they don’t include and can’t predict, when there is so much to criticize regarding what they do include and don’t predict. Follow?
gavin:
Constructing a confidence interval for a parameter.
Now we are talking about a prediction interval for a future observation. Clearly a longer interval.
Hmmm. Bringing it back to this figure, I guess. What does that brown region mean? Interesting. Windy at RC.
While on the other hand, any models that have the global-mean temperature RISE to absurd levels are retained, and become the backbone of the modern AGW belief.
AK, #95, Thanks. I was being facetious when I said a correct global mean temperature could hide polar bears at the equator and crocodiles at the poles, but maybe not…
It sounds like a model problem that incorrectly describes the response to too little low cloud. Maybe those positive feedbacks that produce the CO2 amplification?
A trend is a calculated result from a number of random events. It is not a random event in itself. This means his analogy is completely wrong. A better analogy would describe each year’s measurement as a die throw. Over the last 20 years we have seen a disproportionate number of throws coming up as 1-2. This suggests that the ‘dice’ are loaded.
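The loaded-dice framing is easy to make quantitative with a one-sided binomial tail. The counts below (14 of the last 20 “throws” landing 1-2) are purely hypothetical, chosen only to show the calculation:

```python
import math

n, k = 20, 14   # hypothetical: 14 of the last 20 throws came up 1-2
p = 1.0 / 3.0   # a fair die shows 1 or 2 with probability 1/3

# One-sided tail probability P(X >= k) for X ~ Binomial(n, p):
# how likely is a result at least this lopsided from fair dice?
p_tail = sum(
    math.comb(n, i) * p**i * (1.0 - p) ** (n - i)
    for i in range(k, n + 1)
)
```

A small tail probability is evidence against fairness; the analogous test for temperature data is of course complicated by autocorrelation between years, which the die analogy ignores.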
Let’s see if I have this right.
Model runs with results that are unreasonably low are discarded.
Model runs with results that are unreasonably high are used to base policy on?
I was loaded last Saturday night, but nobody tried to roll me.
MarkW: Are you happy or sad about that?
Anyway, as far as this discussion here: See my comments about the GMTAT on the BB.
RE 85. Google SRES IPCC.
Prepare to have the top of your head explode.
No, Mosh, read the TAR technical summary and compare it against the summary for policy makers.
My head exploded most likely 80 times. But I don’t have any margin of error on that, so it could have been 0 or 1,000,000.
re 106, as soon as my stitches heal
Gavin Schmidt emailed me with a reaction to my comment #75 above. He says that he has done this calculation in the comments, and in fairness to him here is that link:
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends#comment-75596
Here is part of my response:
“Again, it is not right vs. wrong. There is a probability that the obs come from a distribution represented by the models. Using the full suite this seems to be a reasonably high probability (which was not calculated by either RC or Douglass et al.). Restricting that test to those models with the best accuracy with respect to surface trends appears to decrease this probability.”
Of course I’m a political scientist, so what do I know?
More generally, and closer to my own area of expertise (politics and science), the subject of comparing models to obs in the tropics seems to be so thoroughly tribal (good guys vs. bad guys) that I doubt much will be said on this other than “large uncertainties” – meaning that it will be quite easy to confirm one’s priors by asserting the uncertainties, or by trying to restrict them selectively.
Of course, the tribal nature of arguing academics is not new or unique to climate change, it is just far more visible due to the nature of the issue and the visibility of blogs.
Andrew, #53,
I think you may have misread my post. I stated, “when the warming is SAID to be ‘natural’.” I was not offering an opinion on that contention (made by others) one way or another.
Re #108: Sigh……..
Re: 108
Here is Gavin’s response from the link Roger provided:
It appears to me that Gavin has missed the point of the Douglass et al. study.
The issue, as I read it, is not which models to exclude — and Douglass et al. does not exclude any of the 22. The issue is how to calculate the uncertainty of the models. Douglass is arguing that using the range of the data is misleading due to the existence of outliers. From page 7 of Douglass et al.:
So that is a fundamental premise of the Douglass et al. paper: that the uncertainty of the mean is a more reasonable estimate of model uncertainty than the range of model outputs.
The range of model outputs includes one model that predicts the following temperature trends:
Altitude (hPa) – Trend (degC/decade)
Surface: .028
1000: .024
925: .046
850: .073
700: .027
600: -.026
500: -.026
400: -.001
300: .020
250: .024
200: .032
150: -.001
100: -.136
I see no way to support the claim that “the science is settled” if model uncertainty encompasses predictions of trivial to no heating.
I’d be very interested to hear what our resident statisticians think about the best way to calculate the uncertainty of the models.
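As a toy illustration of what is at stake statistically, here are the two competing tests side by side, on invented numbers (12 hypothetical model trends and a made-up “observed” trend, not the actual 22-model ensemble from the paper):

```python
import numpy as np

# Hypothetical model-trend ensemble, degC/decade (illustrative only).
model_trends = np.array([
    0.12, 0.18, 0.20, 0.22, 0.24, 0.25,
    0.26, 0.27, 0.28, 0.30, 0.33, 0.38,
])
obs_trend = 0.14  # likewise invented

mean = model_trends.mean()
sem = model_trends.std(ddof=1) / np.sqrt(len(model_trends))

# Douglass-style test: compare the observation against the uncertainty
# of the ensemble MEAN (standard error).
within_sem = abs(obs_trend - mean) <= 2.0 * sem

# RC-style test: compare it against the full model RANGE.
within_range = model_trends.min() <= obs_trend <= model_trends.max()
```

With these numbers the observation sits comfortably inside the model range but well outside two standard errors of the mean, which is exactly the disagreement: the range test almost never rejects (one outlier run can rescue the ensemble), while the standard-error test narrows as more runs are added. Which is the right null hypothesis is the question for the statisticians.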