Excellent posts by Chad and Jeff Id. Please support them by commenting on this at their blogs.
Found this item at AP News My Way about 10 minutes ago. Anyone know anything about the analysts?
Re: Don S. (#2),
Good catch, and an interesting read. My favorite part is the end:
“The current El Nino is forecast to get stronger, probably pushing global temperatures even higher next year, scientists say. NASA climate scientist Gavin Schmidt predicts 2010 may break a record, so a cooling trend “will be never talked about again.”
The end is nigh!
Re: hengav (#3),
“The current El Nino is forecast to get stronger, probably pushing global temperatures even higher next year, scientists say.
What scientists say?
The last I read indicated that El Nino was weak and would not have a significant impact. Does the strength depend on who utters it?
The same applies to the warming/cooling trend. If you start at the LIA, you will get a statistically warming result for a long time yet, even though it may be cooling in reality.
Re: Don S. (#1),
I wonder what the stats professors would say if they knew all GISTEMP stations in the USA were urban or airports?
“all of the US temperatures – including those for Alaska and Hawaii – were collected from either an airport (the bulk of the data) or an urban location.”
And no. 2: look closely at the second article. It only says they found a “warming trend” “in the numbers.” NOTHING about the short term, recent trend. The two AP articles, read together, lead me to strongly suspect that all three found that “recent” temps are flatlined or slightly cooling. Analysis of the squirrely text is my evidence, omitting everything except for the comments about AP’s own experts. For starters,
The experts found no true temperature declines over time.
Note “true” and “over time.” In other words, they found variability and uncontroversial warming from 1880. But what about the “recent” temp trend?
Statisticians who analyzed the data found a distinct decades-long upward trend in the numbers, but could not find a significant drop in the past 10 years in either data set.
In other words, `Our statisticians found a drop in the past 10 years, but it was not significant.’ This truth does not help the premise of the article, which is why it is not stated plainly, and the article is a messy jumble of info rather than an explication of the nitty gritty of AP’s expert reports.
“To talk about global cooling at the end of the hottest decade the planet has experienced in many thousands of years is ridiculous,” said Ken Caldeira,
Except that it undermines the political contention that the prime mover, even the only mover, of temperature change is CO2 levels. That is why it is a hot issue.
I think the stats community should push the AP experts to release their findings; I believe they are not being represented with appropriate exactitude.
This link has more on the 3 professors and where the AP got the data:
Re: Hans Erren (#5),
NASA climate scientist Gavin Schmidt predicts 2010 may break a record, so a cooling trend “will be never talked about again.”
Well he has a lot of faith in the NASA model which is the highest. Maybe he will never talk about the model forecast again.
Re: Gerald Machnee (#8),
Maybe he will never talk about the model forecast again.
funny how “slim chance” and “fat chance” mean the same thing…
It is clear that if you look at different scales and different end points you can get different results.
10 years is short, but it has been a period with relatively little volcanic cooling, which may or may not be an element you take into account.
It is also important to know exactly what the question was, and the exact answer given. For example, “does the data show a SIGNIFICANT cooling trend over the last 10 years?” is not the same question as “is the data consistent with a cooling trend?” The data can be consistent with both a warming and a cooling trend. Even using monthly figures rather than daily can materially change the outcome, as you change the error limits. The devil is in the detail.
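That distinction can be made concrete with a toy calculation. The sketch below fits an ordinary least-squares trend to 120 months of synthetic anomalies (invented numbers, not the actual NOAA series) and prints its 95% confidence interval; with weather-scale noise the interval is wide relative to any decadal trend, so the same data can be “consistent with” both warming and cooling:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "10 years" of monthly anomalies: a tiny imposed cooling
# plus weather noise (illustrative only -- not the actual NOAA data).
months = np.arange(120)
anomalies = -0.0005 * months + rng.normal(0.0, 0.2, size=120)

# Ordinary least-squares trend and its 95% confidence interval.
x = months - months.mean()
y = anomalies - anomalies.mean()
slope = (x * y).sum() / (x ** 2).sum()
residuals = y - slope * x
se = np.sqrt((residuals ** 2).sum() / (len(x) - 2) / (x ** 2).sum())
lo, hi = slope - 1.96 * se, slope + 1.96 * se

print(f"trend: {slope:+.5f} deg/month, 95% CI: [{lo:+.5f}, {hi:+.5f}]")
```

The width of the interval, not the sign of the point estimate, is what decides whether a “drop” is significant.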
This is why the serious discussion starts when the folks write it all down, including the tests used and why they were selected. Interestingly enough, David W. Peterson has published on cherry picking, and clearly knows about the problems. Let’s hope these specialists publish a paper or post over at the Blackboard.
I hope you get the point: it is not that I claim they are wrong, just that there can be a big difference between the “executive summary,” which probably includes spin, and the details required to form an opinion on the work done.
Could not resist..
Cool idea. We should send those stats professors the Yamal series and test whether they can see its inner beauty.
The statisticians are looking at numbers without adequate domain knowledge to judge whether a change in the overall trend has occurred. It would be interesting to hear what these statisticians had to say if they had been informed of the concept of climate regime shifts. Many climate scientists see a warm climate regime from about 1910-1945, a cool climate regime from 1946-1975, a warm climate regime from 1976-2007 and a cool climate regime beginning in 2008. The 20th century had two warm regimes and only one cool. If the same pattern holds, the 21st century would have two cool climate regimes and only one warm.
The 2002 Bratcher and Giese paper predicted a change to a cool climate regime in “about four years.” It has yet to be confirmed, but the drastic change at the end of 2007 looks like their prediction came true. If so, and if the pattern holds, we can expect cooler temps for the next 30-35 years.
The claim made by Jim Hansen is that because man has elevated atmospheric CO2, the Earth’s energy budget is out of balance causing the Earth to have a fever. Further, even if surface temps do not go up each year, the “heat in the pipeline” (meaning ocean heat content) would rise year over year. Despite continually rising CO2, ocean heat content has leveled off and begun to cool. It would have been more helpful to have the statisticians informed of the actual theory being tested and have them test ocean heat content statistics rather than unreliable and time-lagged surface temps.
Thanks for the links Steve.
My first response to the news item was “A blind test, who do they think they’re kidding? LOL!” The only “blind” in the whole thing would be the author of the news item.
As Ron says, the only way to analyze a data set properly is to understand the full context of the data. What do they teach in journalism school, Climate Science Statistics? Another one for Ripley’s “Believe It or Not” (Boy, that dates me…)
Re: romanm (#12),
Maybe if they reversed and inverted the data….Or would that just be a confused test?
The confusion is in the mind of the writer. Who’s gonna believe that the statistician is SO dense that he hasn’t a clue where the data came from? Yah, I believe it! These statisticians would have to be numb. Yah, that’s the guy I want analyzing MY data.
Writer’s embellishment at its finest!
Re: romanm (#14),
The writer knows what he is doing. Reread the first article without resort to any information other than the AP experts’ comments. AP hired experts to give the lie to current news about recent temperatures. Problem: those experts came back, each or in the aggregate, with findings of cooling “in the past 10 years.” Cannot squelch the information; the experts might spill the beans. What to do? Treat the findings for the past 10 years with the briefest comment, answer a different question about long term temp increase as if that debunks the recent trend opinions, and clutter up the article with comments from people other than the experts retained.
Or that is the result, at least. Poor workmanship.
Is there some big surprise here? When I talk to others about AGW, I say that the earth has warmed over the last 100+ years. (0.6 deg C is the figure I use (IPCC), but I think this is an upper bound, given the UHI effect on temp sensors.) I then say the last ~10 years show no significant trend up or down compared to the previous 30 years.
I then proceed to talk about the problems associated with measuring/modeling anthropogenic temperature signals in all that noise. Then I move on to hockey sticks, and so on.
If I was handed a plot of the NOAA data, I would say about the same thing as the statisticians regarding the trends, without running any statistical analyses. Big deal. This is just big media trying to make themselves appear all “scientific”-like. I’m sure many people will be tricked. I’ll let Mr. Marley have the final word: “You can fool some people sometimes, but you can’t fool all the people all of the time.”
I have not had time to read the article, however, I have this idea.
To find a trend in ocean heating or cooling, one would have to take a vertical view of the temperature data. The deeper you dive, the longer the propagation delay before surface heat arrives at depth. In this way you can find the trend, with the effective integration period extending as the depth increases. Doing this calculation over a vast area could probably compensate for errors caused by shifting depths of individual currents.
Just my ten cents idea to those who can actually do these calculations.
They found no trend. I’ve been finding no trend for more than 10 years. It’s Hansen who finds trends.
I don’t think Borenstein understands what he wrote, like Beauregard Bear in Pogo.
(Yes, Roman, I’m dating myself, too.)
Roger Pielke Sr. comments on the AP article, reminding us about ocean heat content.
Sounds like revisionist history to me.
Are these the same statisticians that kept on saying the trend was up in the markets just prior to global financial crisis hit, and almost caused a total collapse of the US financial system?
Consider a one-day period: at 16:00 the daily temperature is statistically still rising (linear trend since midnight), even though it is already falling.
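The diurnal analogy is easy to check numerically. The sketch below uses an idealized sine-shaped daily cycle (peak assumed at 14:00; an invented curve, not real measurements): a least-squares line fitted from midnight through 16:00 still slopes upward even though the temperature has been falling for two hours:

```python
import numpy as np

# Idealized diurnal cycle: minimum at 02:00, maximum at 14:00
# (assumed shape for illustration, not real data).
hours = np.arange(0, 16.25, 0.25)                # midnight through 16:00
temps = 10 - 5 * np.cos(2 * np.pi * (hours - 2) / 24)

slope = np.polyfit(hours, temps, 1)[0]           # linear trend since midnight
falling_now = temps[-1] < temps[hours == 14][0]  # already past the 14:00 peak

print(f"trend since midnight: {slope:+.2f} deg/hour, currently falling: {falling_now}")
```

The fitted slope is positive while the instantaneous tendency is negative: the answer depends entirely on the window chosen.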
Re: Juraj V. (#25),
Another good point.
One quote pulled out of the middle of a longer sentence is the only quote from any of the three people who did the actual analysis.
Identifying a downward trend is a case of “people coming at the data with preconceived notions,” said Peterson, author of …”
Evidently whatever else they had to say couldn’t be quoted as it wouldn’t have enhanced what Borenstein wanted to present.
They were given partial data from 2009 to represent yearly data. Their opinion of that might have made for an interesting quote.
Re: Bob Koss (#26),
the only quote from any of the three people who did the actual analysis
Likely Peterson found no change, the other two found decrease over past 10 years. Whether this temp “drop” was “not…significant” I do not know, but I would like to know whether “not…significant” is the writer’s interpretation. These two words are not directly attributed to anyone, which I know means they are 95% likely the writer’s self-perceived emendation, or massage of the info gained. However, I wager 98% of the public believes that “not…significant” is attributable to the experts. The experts may wish to publicly clarify this matter. C’mon, guys, don’t you feel a little abused by this article?
From the AP article:
satellite data used by climate skeptics
satellite-measured temperatures preferred by skeptics
One prominent skeptic said that to find the cooling trend, the 30 years of satellite temperatures must be used.
Using the skeptics’ satellite data
the satellite data that skeptics use.
Funny, I thought the satellite data was for everyone to use. I must have missed the purchase of the satellite data by Skeptics, Inc.
Re: nanny_govt_sucks (#27),
Good point. Just because most peer-reviewed papers look at surface temps does not mean they are the best metric for monitoring climate. Satellites have fewer arbitrary adjustments and a more open access to data. The Argo network for ocean temps also has open access. Skeptics prefer these two metrics over surface temps, in part, because they are less open to mischief by the keepers of the records. I would have to go back and look again but I don’t think the article even mentioned ocean heat content which I believe should show the strongest signal of anthropogenic change.
Nice catch. I missed that.
I’m puzzled. Who cares if the trend is up or down. Whatever it is, it is. The question we should be trying to find an answer for is how much is man causing changes to the climate?
Re: PeterA (#28),
The reason people care is if the trend is going down, it may falsify the theory pushed by the IPCC that elevated CO2 is causing temps to go up. If you want to know how much man may be changing the climate, I suggest you look at the research of Roger Pielke. He looks at all the man caused changes to climate, not just CO2.
The climate is the continuation of the oceans by other means.
H/t Arnd Bernaerst
This was a straw man, since nobody claims that there is a downward trend over the 130-year surface series or even over the 30 year UAH-LT series. As noted above, there are legitimate questions about what the data actually measure, but given the data the trend is clearly upwards.
What they should have done is given the statisticians a null data set representing the trend predicted by climate models over the past 100 years and 30 years in response to the observed increase in GHGs, and asked if the observed temperature data sets were consistent with that. And if they had done that for surface, lower-trop and mid-trop levels for the NH, SH and tropics, then we’d really have a test worth talking about.
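A minimal sketch of that kind of test, with all numbers invented for illustration (the Gaussian ensemble below stands in for model-predicted trends, not any actual model output):

```python
import numpy as np

rng = np.random.default_rng(42)

# Null hypothesis: an ensemble of model-predicted 30-year trends
# (deg/decade).  These numbers are invented for illustration.
model_trends = rng.normal(loc=0.20, scale=0.05, size=1000)

def consistent_with_models(observed_trend, ensemble, alpha=0.05):
    """Two-sided check: is the observed trend inside the central
    (1 - alpha) span of the model ensemble?"""
    lo, hi = np.quantile(ensemble, [alpha / 2, 1 - alpha / 2])
    return lo <= observed_trend <= hi

print(consistent_with_models(0.17, model_trends))   # near the ensemble centre
print(consistent_with_models(0.02, model_trends))   # well below it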
Re: Ross McKitrick (#33),
I totally agree as long as nobody is silly enough to claim that “Since statistics involves mathematics rather than physical science, I hardly think “instrumentation” is relevant. ” Climate Progress
Re: Ross McKitrick (#34), Maybe the author of the AP article should have let his fingers do the walking (in Google) and read Chip Knappenberger’s A Cherry-Picker’s Guide to Temperature Trends.
Re: Ross McKitrick (#33),
This was a straw man, since nobody claims that there is a downward trend over the 130-year surface series or even over the 30 year UAH-LT series.
Totally agree. That was the point I was making, but it was removed. I don’t understand the point of the experiment, or why this thread was created in the first place. Given that it was, why isn’t the discussion focused on whether the experiment had any merit at all, based on Ross McKitrick’s valid argument? Can anyone please explain it to me? As I said, we know from observations that the long term trend is up and the short term trend is down. The experiment focused on the long term trend. So it proved what we already knew, nothing else. It’s a circular argument.
Ron, what I was noting was that each time the satellite data was mentioned in the story, it was connected with the term “skeptic”. It reminds me of the way that “Iraq” was connected with “9/11” in the run-up to the Iraq war. Also, there was no connection to the term “credulist” for the other temperature data.
Re: nanny_govt_sucks (#35),
I understood your comment. And it is a good point. My point is slightly different. As a skeptic, I am willing to admit I favor satellite data over the surface temp record. The surface temp record is shrouded in secrecy, the data and methods are not freely available, and the adjustments we know about are ill-conceived. As a skeptic, I am also willing to point out I favor ocean heat content over the satellite record because a radiative imbalance will show in the oceans with a much stronger signal. The AP article says it uses surface data because it is the most commonly referenced in the peer-reviewed literature. That’s true. It is commonly referenced because it is the longest record. But that does not mean it is the best metric for monitoring climate change. My point is the same as Ross McKitrick is making: this was not a well-designed test and does not prove anything.
Re: Ron Cram (#38),
As I understand it, the surface data may not be collected in a very scientific way for the purpose of determining the global mean temperature. For example, do we include all the measurements taken from the sea using the large number of probes placed around the world? If we don’t, then just using land based measurements is a waste of time for determining the global mean temperature. Satellite measurements are a much better way. The problem there is they only go back so far in time. It will be interesting to see what they will say in, say, 10 years.
Practically speaking, the GW scare started in 1988, after 30 years of cooling and only 10 years of (GISTEMP) warming. Now we have the opposite situation. So?
Has anyone actually seen the Excel spreadsheets they were given, to verify that it was actually the correct data?
Ron Cram #38
The GISTEMP program managed by James Hansen is no longer secret. It appears to be fully available. Skeptic E.M. Smith now has an operational copy of the programme and its temperature data. Smith has been producing some staggering results by analysing various subsets of the station data.
If you want to be truly blown away by the preliminary examination of global and regional temperature datasets from GISTEMP then go to Smith’s website “Musings from the Chiefio” and explore what is going on at present there.
When the news eventually breaks on the extent of Hansen’s manipulation of the global temperature data it will most likely be bigger than the whole of the (Mann Bradley & Hughes v McIntyre & McKitrick) Hockeystick debate.
Re: Rob R (#42),
Thanks Rob R – nice find. I had a look at “Musings from the Chiefio”. I’m shocked. If true it would make the hockey stick saga look like a picnic.
Re: Rob R (#42),
Amazing bit of reading. It certainly opens up a can of worms for station selection issues. I hope that Anthony is on to this and that he gets real friendly with this fellow. They would seem to make an excellent duo.
Thanks for this tip Rob R
Thanks for mentioning “Musings from the Chiefio.” I have glanced at a few articles and found them interesting and well written.
Steve, would you consider putting EM Smith on the blogroll?
The other real issue is that, contrary to what AGW theory and the GC models predicted, the temperature has not gone up.
CO2 levels have risen steadily over the last ten years, but they just admitted that temperature hasn’t.
That’s some hypothesis they just disproved
I have reviewed some of Seth Borenstein’s work, and conclude much of the cause behind this problematically constructed piece is that he is having a crisis of faith. He is a sincere believer. Exhibit A, this recent lede:
The nation’s top climate scientists are giving “An Inconvenient Truth,” Al Gore’s documentary on global warming, five stars for accuracy.
Showing my increasing confidence in understanding climate statistics, if I was one of the experts given the test, I would ask “Are these numbers independent of each other?” The primary reason is explained by Ron Cram (#10).
An update based on comments from Dr. Christy.
Bit OT but Briffa has issued a further response to the Yamal silliness. Mc used a biased sample. Who would have thunk it?
Re: bigcitylib (#51),
If you think I would read anything you post and take it seriously, you are out of your “open mind”
I linked to the actual reply at the old “Keith Briffa responds” thread.
Thanks to BCL for the heads-up. No thanks for the spin & failure to link to the source.
Briffa is way OT here.
FWIW, the “methods, code, and data” for everything I’m doing is available for public review. I put the most important bits in the postings, only holding back those bits that IMO would not interest anyone. Every so often I package those up and make a boring “documenting what I did” posting if it looks like a good idea.
I have made a tarball of the runnable GIStemp (RedHat 7.2 development system, but it ought to run on most Unix / Linux boxes) and have a place that has given me the “OK” to put it up for FTP download, but just have not had the time. I’ve been rather busy discovering, well, “little things”, like that California has exactly 4 thermometers in use since 2007. One is in San Francisco. The other three are on the beach in southern California (Santa Maria, San Diego, L.A.).
The validation of this finding was done (see comments) with a completely independent trial on entirely different hardware, software, and methods by a third party. In the article (and related articles on Brazil, Argentina, China, Canada, Australia, and more as I work through the globe) you will find the 3rd party code for doing it on a PC with Windows and my Linux / Unix commands.
Realize that this particular investigation is NOT GIStemp. It is an investigation into the GHCN data set pre-GIStemp. It is strongly related to GIStemp, in that it shows that a failure to do proper maintenance programming on GIStemp has caused the GHCN thermometer deletions to gut the list of stations used by GIStemp. (USHCN shifted to USHCN.v2 in 2007 and GIStemp did not, so it only loads the old USHCN file and, as a result, cuts off US history in 2007. GHCN drops most US stations then, too, and leaves the USA with all of 136 thermometers for the whole place, down from 1850 at peak.)
BTW, anyone who can do FTP and use the Linux / Unix “grep” and “wc” commands can duplicate this on any Linux / Unix machine. Heck, you could even do it with a text editor and patient counting of the 2009 records by hand.
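For anyone who prefers a scriptable check, the same grep-and-count can be sketched in a few lines of Python. The fixed-width layout assumed here (an 11-character station ID followed by a 4-digit year) and the sample records are invented for illustration; check the column positions against the actual GHCN format documentation before running this on a real file:

```python
# Count how many distinct stations report data for a given year in a
# GHCN-style fixed-width file.  The column layout (11-char station ID,
# then a 4-digit year) is an assumption for illustration only.
# Equivalent in spirit to: grep '^...........2009' file | wc -l

def stations_reporting(lines, year, id_width=11):
    year_str = str(year)
    ids = {line[:id_width]
           for line in lines
           if line[id_width:id_width + 4] == year_str}
    return len(ids)

# Tiny synthetic sample (invented IDs, not real GHCN records).
sample = [
    "425000111112007 123 145 ...",
    "425000111112009 130 141 ...",
    "425000222222009 101 118 ...",
    "425000333332006  99 104 ...",
]

print(stations_reporting(sample, 2009))  # -> 2
```

Using a set of IDs rather than a raw line count avoids double-counting stations with duplicate records for the same year.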
FWIW, I sometimes lurk at this site, but time limits my participation in comments.
E.M.Smith, The Chiefio
Sidebar Postscript: If anyone would like help installing a copy of GIStemp I would be more than happy to help. If out of the “middle of California” area any site visit would require gas money and a free lunch ;-)