## Unthreaded – #39

1. jim edwards
Posted May 11, 2009 at 11:19 PM | Permalink | Reply

The June DISCOVER magazine explains how a proper understanding of due diligence and statistics proves the existence of AGW, courtesy of Stephen Schneider of the Woods Institute at Stanford and the IPCC.

Corey S. Powell [Discover's Editor-in-Chief]: How do you deal with skeptics, both in Congress and in the public, who always seem to have a contrary statistic?

Schneider: First, with regard to your due diligence as a publisher, why hasn't DISCOVER published a compelling account of the other side? Because there isn't any, that's a pretty good reason.

There are a lot of things in that speculative and competing explanations category, but there is no preponderance, and that is what is compelling to me.

For example, take the evidence that Robin cited.

If you were a cynic and you asked about the probability of the ice sheet in the north going up, it's 50 percent. Going down? It's 50 percent. And the South Pole going up? 50 percent. Going down? 50 percent. Probability they are both going up together? Twenty-five percent.

What's the probability of the stratosphere cooling while the earth gets warmer? Again, assuming we know nothing, 50 percent. Tropospheric warming? Fifty. The probability that one will go up while the other will go down? Twenty-five percent.

Same thing for other patterns, like the way high-latitude continents are warming more than low-latitude ones are. With any single line of evidence, you can say, "Oh, well, there's still a 25 percent chance it's random," but what happens when you put all these events together? The probability of all these events' lining up the same way is pretty darn low unless we are dealing with global warming.

Apparently, all I need to do to get a Nobel Prize is come up with a model that makes ten observations about the state of the environment, then explain that the probability of what I observed happening randomly is (0.5)^10, about 1 in 1,000, so my model must be true. Statistics and climate science are easier than I thought.
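
The arithmetic being parodied here is easy to make explicit. A minimal sketch of the naive independence calculation (a reproduction of the reasoning under discussion, not an endorsement of it):

```python
# Naive "independent coin flip" probability for n lines of evidence:
# each observation is assigned p = 0.5 of lining up with the warming
# hypothesis by chance alone, and the results are multiplied as if
# the observations were independent.
def naive_joint_probability(n_observations: int, p: float = 0.5) -> float:
    """Probability that all n observations line up by chance,
    assuming (wrongly, if they are correlated) full independence."""
    return p ** n_observations

# Two paired observations (e.g. both poles warming together):
print(naive_joint_probability(2))   # 0.25
# Ten observations, as in the comment above: 1/1024
print(naive_joint_probability(10))  # 0.0009765625
```

The whole argument rests on the independence assumption baked into that single multiplication, which several comments below take issue with.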

The article continues with an exchange of misdirection about how solid the base of support for alarmism is within the scientific community:

Ken Caldeira [Stanford U. + Carnegie Institution of Washington]: Climate Science has reached the point that plate tectonics reached 30 years ago. It is the basic view of the vast majority of working scientists that human-induced climate change is real. There is a real diversity of informed opinion on how important climate change is going to be to various things that affect humans, and there is a diversity of opinion on how to address this problem, but the debate over human-induced climate change is over.

Audience member : I work in a hard-rock mining industry, and the majority of my colleagues tell me I’m crazy when I talk about climate change. Where are some good sources of information that rationally discuss all of these different naysayers’ theories ?

Caldeira : One useful Web site is realclimate.org.

Got to love those miners…

[Discover, June 2009, article: The Big Heat, at page 43.]

• Kenneth Fritsch
Posted May 12, 2009 at 10:28 AM | Permalink | Reply

Re: jim edwards (#1),

What amazes me is how vague and imprecise some scientists become when attempting to infer something about AGW. What most laypersons are concerned with in this matter is how AGW will affect them, and these terse comments from scientists never seem to get to that point. Even Jim Hansen's references to climate tipping points never seem to be backed up with much more than conjecture.

What gets missed in these scientist/advocate supposedly logical arguments against the skeptics is any reference to quantitative changes. I never could see how evidence for NH warming or melting polar ice and glaciers says much, in and of itself, about AGW or its future extent. There are various levels of skepticism of AGW, just as there are various levels of confidence in the opposition camp, that would logically be based on what the view holder sees as the extent of AGW and its potential adverse and beneficial effects.

While some of these scientists obviously want to raise their hands to show their position on AGW and a consensus, when they use short snippets like the one referenced here to prove their case for some vague version of AGW, I see those raised hands doing a lot of waving.

• jim edwards
Posted May 12, 2009 at 11:37 AM | Permalink | Reply

It's a six-page article, and I typed in less than half a page of it for fair use / discussion purposes only.

The four scientists [Schneider, Caldeira, Robin Bell, and Bill Easterling] talk about a gamut of issues in a fairly frank way.

Caldeira makes a good point that we shouldn’t acidify the oceans, but then doesn’t clearly explain that this is an effect separate from AGW.

Easterling seems to imply warming is a net negative for agriculture; Schneider says we don't know.

Schneider also discounts end-of-the-world and C.E.I. scenarios when developing risk assessments for policy makers. He weakly implies that the climate debate is dumbed down because it’s dominated by extreme arguments. … In his final comment, however, he draws a logical connection between “contrarians” and the American Tobacco Institute.

Easterling quite honestly states about the empirical evidence so far,

“…it’s all circumstantial. What made all this come into sharp focus for me was not what we see but what we are able to simulate on a computer. … When we entered into the computer all the various things that forced the climate to change, we were able to faithfully reproduce the temperature record of the past 100 years globally. When you take out the component of human generated carbon dioxide, the models don’t work at all.”

After expressing uncertainty as to whether AGW will have negative effects, Schneider admits that we'll need to spend $500 billion to start, with trillions to follow, if we want to address AGW.

Caldeira follows up that that's not so much, when you compare it to the 22% (my figure, from memory) of GDP we spend on medical care. He assumes that the cost to counter the "environmental risk" is a loss of 2% annual economic growth, then argues we should make the leap to avert the risk that problems will occur.
He makes a bizarre argument: since climate skeptics have told him they wouldn't oppose the use of carbon-neutral technology if it were the existing infrastructure, the cost to transform our economy isn't a barrier to green policies. [i.e., his opponents would use it at zero cost, so they must not care what it costs!] He says it's a problem of cooperation, so we need strong leadership to make it happen.

Caldeira also seems to be in fantasyland when it comes to his understanding of how different societies make decisions. He accepts as a matter of faith that if we decide to spend trillions to avert possible risks, Europe, China, and India will quickly decide to do the same.

Schneider seems more realistic on this point. He assumes it will be a problem getting China, India, Indonesia, Brazil, Mexico, et al. to follow our lead. He implies that Congress would have to pony up another $500 billion this year to get the poorer countries to follow.

The article ends with a candid statement by Easterling that:

“Even science is not value neutral. We make decisions and we have values in our judgments, but at the end of the day we have a code of ethics that says we look at the data and, using our prior knowledge, we make our best judgment. That’s all it is.”

• Andrew
Posted May 12, 2009 at 11:49 AM | Permalink

Re: jim edwards (#18), Easterling said:

“Even science is not value neutral. We make decisions and we have values in our judgments, but at the end of the day we have a code of ethics that says we look at the data and, using our prior knowledge, we make our best judgment. That’s all it is.”

Am I the only one who sees a contradiction here?

Of course, if you are impressed by curve fitting…

• Peter D. Tillman
Posted Jul 1, 2009 at 10:40 AM | Permalink | Reply

Re: jim edwards (#1),

Haven’t seen a link to this, apologies if it’s a duplicate:
http://discovermagazine.com/2009/jun/30-state-of-the-climate-and-science

Cheers — Pete Tillman

2. stan
Posted May 12, 2009 at 7:59 AM | Permalink | Reply

"To capture the public imagination, we have to offer up some scary scenarios, make simplified dramatic statements and little mention of any doubts one might have. Each of us has to decide the right balance between being effective, and being honest."

Dr Stephen Schneider
(interview for "Discover" magazine, Oct 1989)

3. Mark T
Posted May 12, 2009 at 8:03 AM | Permalink | Reply

Wow. I’m stunned. This guy’s a PhD? In what, disinformation?

Mark

• bender
Posted May 12, 2009 at 8:13 AM | Permalink | Reply

Re: Mark T (#3),
What’s to shock? This is exactly what “advocacy” means. Non-neutrality.

4. Steve McIntyre
Posted May 12, 2009 at 8:25 AM | Permalink | Reply

Let’s not discuss the Schneider 1989 quote which has been discussed endlessly.

Let me draw readers' attention to Schneider's use of the term "due diligence" – a term that I've used frequently. Is anyone aware of use of this term by climate scientists prior to Climate Audit? (Just asking.)

• Ron Cram
Posted May 12, 2009 at 9:06 AM | Permalink | Reply

Great question. "Due diligence" is a common term in business, especially in mergers and acquisitions. If climate scientists used the term prior to you, I've not seen it. Soon after I got interested in global warming, I found this site, but I have done a lot of reading since then (of material written earlier) and do not ever recall seeing the term elsewhere. Perhaps more troubling, I do not think I have ever seen a synonym of the term used either.

Off-topic: I just commented on the Skeptical Science blog regarding John Cook's post on sea level rise. I think this is a very interesting topic, especially when the alarmists are ratcheting up the rhetoric (thank you for your comment, Jim Edwards) at a time when the evidence is going against them.

• Steve McIntyre
Posted May 12, 2009 at 9:50 AM | Permalink | Reply

Re: Ron Cram (#7),

Here’s a surprising use of the term “audit” by frequent blogosphere commenter Ike Solem as supposedly being part of the mandate of the university research community:

That will get the universities back to their real jobs, which include producing skilled people (who then go to work in private industry), serving as an independent auditor of scientific claims (like cold fusion, etc.), doing research into basic science (which may or may not be of any real use), and so on.

• tetris
Posted May 12, 2009 at 10:55 AM | Permalink

Re: Steve McIntyre (#9),
Steve,

My experience working with university scientists to transfer their science and technology out of the lab into venture-capital-financed companies tells me that, as a rule, they have absolutely no clue as to the meaning of "due diligence" as you and I would understand it. In fact, many of them are profoundly shocked and offended when the hard questions keep on coming [a bit like Phil Jones, perhaps?].

Ironically, this attitude to due diligence runs counter to one of the basic tenets of the scientific method, which does not require you to come up with a better/different explanation for you to argue that XYZ is wrong.

5. Navy bob
Posted May 12, 2009 at 8:43 AM | Permalink | Reply

re jae #792
Notice in the Telegraph article how the reporter adds his own commentary and conclusion to assure the reader that climate change is really, truly happening, with absolutely no attribution to the scientists mentioned in the story or to any other authority: "The summers 9,500 years ago were warmer than today, though there has been a rapid recent rise as a result of climate change that means modern climate is rapidly catching up." It's a gross violation of proper journalistic practice, but as folks often say here, "Hey, it's, etc." Oddly enough, though, he also relies on his own opinion that the new find of old spruce means it was warmer 9,500 years ago than today, again with absolutely no attribution to any expert source. Journalism's political advocacy is still light years ahead of climate science's.

6. Bob Koss
Posted May 12, 2009 at 9:13 AM | Permalink | Reply

Steve,

If you ever get the time: the blogroll link to Julien Emile-Geay is no longer active and hasn't been for months.

If you want a replacement, I would suggest http://www.drroyspencer.com/ as being informative.

7. Hank
Posted May 12, 2009 at 10:15 AM | Permalink | Reply

This use of the term “due diligence” is amusing. It’s a very peculiar use AND he presumes to speak for the editors of Discover Magazine in his answer. I think this is definitely a case of “monkey see monkey do” on the part of Schneider.

• jim edwards
Posted May 12, 2009 at 10:32 AM | Permalink | Reply

Re: Hank (#10),

The fact that people who don’t agree with the entire official narrative can’t get published is proof, somehow, that the journals are doing their “due diligence”.

He seems to be stuck on this idea that you don’t get to try to prove somebody else’s reconstruction is wrong unless you have a replacement to offer that you can show to be better.

"I don't know what's happening, but I recognize that as B.S." is not acceptable for publication. You have to come up with a competing model with a greater than 50% [a preponderance] probability to be worthy of publication, apparently.

8. Steve McIntyre
Posted May 12, 2009 at 10:57 AM | Permalink | Reply

Don’t forget that Schneider was the editor for Wahl and Ammann 2007 – the tortured history of which has been discussed in detail both here and at Bishop Hill.

I’ve posted up my correspondence with Schneider which, to me, demonstrates a little (unintended) irony in his use of the term “due diligence”.

9. Andrew
Posted May 12, 2009 at 11:12 AM | Permalink | Reply

85 in Florida today. Must be AGW. I mean it is HOT.

Don’t even get me started on the Schneider quote. It’s astounding. There isn’t any? Really? Some due diligence there…

10. Patrick M.
Posted May 12, 2009 at 11:27 AM | Permalink | Reply

Naysayers are fundamental to the success of Science. A true scientist will welcome skeptics.

11. Andrew
Posted May 12, 2009 at 11:34 AM | Permalink | Reply

It seems another analysis of the USHCN has just been done:
Roger makes a strong case that they need to seriously improve their analysis before publishing. Should make interesting fodder for discussion here. Meanwhile I have a little project I’m working on that I may come back to show you guys. Cheers.

12. Steve McIntyre
Posted May 12, 2009 at 11:53 AM | Permalink | Reply

Folks, go lightly with policy discussions. As readers well know, I don’t want such discussions at the blog, unless there is an explicit waiver (which is not the case here) for reasons provided elsewhere.

• Andrew
Posted May 12, 2009 at 12:04 PM | Permalink | Reply

Re: Steve McIntyre (#20), Agreed. The group talked a lot about policy, but its better if we focus on the science they discussed. Their comments in that regard should be more than enough ice for our shasta.

13. Mark T
Posted May 12, 2009 at 1:00 PM | Permalink | Reply

I’m more concerned with his “statistical” justification for calling it a match. Utter nonsense is a better term for it. Assuming you know nothing, every one of his “50 percent” numbers is, in reality, “I don’t know, and therefore cannot assign a value to it.” He SHOULD know this, but obviously does not. Of course, these guys also think that Mann’s works have statistical merit, so I shouldn’t be surprised.

A similar example, btw, is Drake’s equation. Even if accurate, it is impossible to assign probabilities to any of the terms, and hence, the “answer” is anywhere from zero to everywhere (life exists). Same situation here. Not knowing anything does not mean you can arbitrarily assign 50% to a probability.
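
Mark T's Drake-equation point can be made concrete with simple interval arithmetic: when each factor is only known to lie in a wide interval, the product spans so many orders of magnitude that the "answer" constrains nothing. The factor ranges below are illustrative guesses, not established values:

```python
# Toy interval propagation through the Drake-equation product.
# Each factor is given only a (low, high) range; the resulting range
# for N runs from effectively zero to effectively everywhere.
factors = {
    "R*  (star formation rate / yr)": (1.0, 10.0),
    "fp  (fraction of stars with planets)": (0.1, 1.0),
    "ne  (habitable planets per system)": (0.1, 5.0),
    "fl  (fraction developing life)": (1e-6, 1.0),
    "fi  (fraction developing intelligence)": (1e-6, 1.0),
    "fc  (fraction that communicate)": (0.01, 1.0),
    "L   (communicating lifetime, yr)": (100.0, 1e9),
}

low = high = 1.0
for lo, hi in factors.values():
    low *= lo    # most pessimistic product
    high *= hi   # most optimistic product

print(f"N between {low:.2e} and {high:.2e}")
```

With these inputs the bounds differ by more than twenty orders of magnitude, which is the sense in which "not knowing" a factor cannot simply be replaced by a number like 50%.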

If this is his justification, do his colleagues agree? Why?

Mark

14. PhilH
Posted May 12, 2009 at 1:41 PM | Permalink | Reply

“When we entered into the computer all the various things that forced the climate to change, we were able to faithfully reproduce the temperature record of the past 100 years globally.”

Is this true?

• Curt
Posted May 12, 2009 at 2:23 PM | Permalink | Reply

Re: PhilH (#27), For a model's forecasts to have a chance of being taken seriously, it must be shown to be able to "hindcast" well. This is a "necessary but not sufficient" condition, according to at least one prominent IPCC scientist (Trenberth, I believe).

So all of the prominent computer models have been “optimized” to be able to hindcast the conditions (at least the Global Mean Atmospheric Temperature) well. They do this with many radically differing parameters, but all assign a prominent role to CO2 infrared absorption. The big question is whether this optimization was accomplished through fundamentally getting the physics correct, or just by “overtuning”.

Of course, with the fit established in a model with a strong CO2 effect, removing that effect would destroy the fit. Of course, this would be true regardless of whether the fit were valid in the first place.

These claims remind many of von Neumann’s famous quip about parametric models:

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk!”
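
Von Neumann's quip, and Curt's "overtuning" worry, can be illustrated with a toy fit: a polynomial with one free parameter per data point hindcasts any record perfectly, which by itself says nothing about whether the underlying physics is right. The "observations" below are made up for illustration:

```python
# With as many free parameters as data points, a polynomial passes
# through any record exactly, regardless of whether the "model"
# captures real physics. Lagrange interpolation gives that polynomial
# directly (degree n-1 through n points).
def lagrange_fit(xs, ys):
    """Return a function interpolating (xs, ys) exactly."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

years = [0, 25, 50, 75, 100]
anoms = [-0.2, 0.1, -0.1, 0.3, 0.6]   # arbitrary made-up "anomalies"
model = lagrange_fit(years, anoms)

# A perfect "hindcast" at every observed point...
print(all(abs(model(x) - y) < 1e-9 for x, y in zip(years, anoms)))  # True
# ...which says nothing about skill away from the fitted points:
print(round(model(110), 2))
```

The perfect fit is guaranteed by construction; the extrapolated value is not constrained at all, which is exactly the distinction between fitting and physics.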

• bender
Posted May 12, 2009 at 2:35 PM | Permalink | Reply

Re: Curt (#29),

The big question is whether this optimization was accomplished through fundamentally getting the physics correct, or just by “overtuning”.

This question was answered by Kiehl (2007), who showed that many parameterizations are possible (equally likely?), all of them capable of reproducing the surface anomaly record. IOW the physics do not constrain the model parameters all that much. Overtuning is definitely the way they get these models to fit the data. IOW there is no question. It has been answered.

• bender
Posted May 12, 2009 at 2:36 PM | Permalink

Re: bender (#32),
And remember: these models do not even get the GMT correct. All are biased, some badly. Therefore the physics CAN’T be correct.

• Craig Loehle
Posted May 12, 2009 at 2:42 PM | Permalink

Re: bender (#33), To elaborate, what they point you to is the "trend" over time in temperature that "matches" (cough cough) the actual temperature (which is itself sparsely measured and corrupted by UHI), but the GMT they actually compute can be off by many degrees, though when that happens the longwave radiation computations MUST be wrong, since that is a 4th-power function of temperature!
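
The 4th-power point is easy to quantify with the Stefan-Boltzmann law. A rough sketch, using the standard ~255 K effective emission temperature and an illustrative (assumed) 3 K model bias:

```python
# Stefan-Boltzmann: outgoing longwave flux scales as T^4, so even a
# modest bias in global mean temperature implies a non-trivial flux
# error. The 3 K bias below is illustrative, not any model's actual bias.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr(t_kelvin: float) -> float:
    """Blackbody outgoing longwave flux at temperature t_kelvin."""
    return SIGMA * t_kelvin ** 4

t_true, t_model = 255.0, 258.0          # hypothetical 3 K warm bias
err = olr(t_model) - olr(t_true)
print(f"{olr(t_true):.1f} vs {olr(t_model):.1f} W/m^2 "
      f"(error ~{err:.1f} W/m^2)")
```

A 3 K bias works out to an OLR error on the order of 10 W/m^2, far larger than the forcings the models are being asked to resolve.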

• Andrew
Posted May 12, 2009 at 2:40 PM | Permalink

Re: bender (#32), In other words, as I (and many others) have long said, the "attribution studies" are a weak hypothesis test at best, basically meaningless. For an interesting alternative "fit", you guys might be interested in this: my little curve-fitting exercise with a low-sensitivity EBM called "Grumpy".

• Andrew
Posted May 12, 2009 at 2:26 PM | Permalink | Reply

Re: PhilH (#27), Change it to "all the various things that we assume forced the climate to change" and the statement becomes accurate. The inputs are SWAGs. But even so, it is not so true:

“Impressive”, no?

• Craig Loehle
Posted May 12, 2009 at 2:28 PM | Permalink | Reply

Re: PhilH (#27), It is only "true" when you allow the modelers to also choose their input aerosols and clouds, and they each make different choices on these (no consensus there…). It is also easy to show that alternate models can reproduce the 20th-century temperature history (see Spencer's latest work, and also Klyashtorin, L.B. and A.A. Lyubushin (2003), On the coherence between dynamics of the world fuel consumption and global temperature anomaly, Energy & Environment, 14, 773-782).

15. Craig Loehle
Posted May 12, 2009 at 2:20 PM | Permalink | Reply

There are two logical disconnects in Schneider’s Discover interview:
1) When he calculates his odds of all these things happening, this merely shows that yes, it is warming, with absolutely nothing to say about causation (human vs natural).
2) If all the scientists in the world answer the question “are humans influencing climate” with a “yes” this does not mean either a) that the future impacts will be alarming or b) that we can prevent it without going back to the stone age.
It used to be that scholars getting a classical education studied logic and rhetoric, but not anymore. (No, I'm not that old myself, but I studied these things for "fun".)

16. MJW
Posted May 12, 2009 at 3:11 PM | Permalink | Reply

According to Roger Pielke Sr., “The only basic physics in the models are the pressure gradient force, advection and the acceleration due to gravity. These are the only physics in which there are no tunable coefficients.”

17. Posted May 12, 2009 at 3:15 PM | Permalink | Reply

Here is Ferdinand Engelbeen's alternative low-sensitivity fit using the Oxford EBM:
http://www.ferdinand-engelbeen.be/klimaat/oxford.html

Oxford EBM model with reduced sensitivities:
1.5 °C for 2xCO2, 0.25 x aerosols, 0.5 x volcanic, 1 x solar.
Results: correlation = 0.884, R2 = 0.792
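
For readers wondering how fit statistics like the ones quoted above are typically computed for a single model-vs-observations series: one common convention takes R² as the squared Pearson correlation (residual-based definitions differ, which may be why the quoted numbers don't quite square). A sketch with made-up stand-in data, not the actual Oxford EBM output:

```python
# Pearson correlation between two series, from its definition:
# covariance divided by the product of the standard deviations.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up illustrative anomaly series (deg C), not real data:
obs   = [0.0, 0.1, 0.05, 0.2, 0.35, 0.3, 0.5]
model = [0.02, 0.08, 0.1, 0.18, 0.3, 0.34, 0.46]

r = pearson_r(obs, model)
print(f"correlation = {r:.3f}, R2 = {r * r:.3f}")
```

As the thread's curve-fitting discussion notes, a high correlation like this is cheap to obtain and is not, by itself, evidence that the right forcings were used.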

• Andrew
Posted May 12, 2009 at 3:22 PM | Permalink | Reply

Re: Hans Erren (#37), I don't want to sound dismissive, but something is clearly wrong with the volcano response, and I am puzzled as to how he varied the sensitivities: the model doesn't recover from Pinatubo until the middle of the 1998 El Nino. I think my analysis is at least pretty replicable, but I have no clue what he did.

• Andrew
Posted May 12, 2009 at 3:23 PM | Permalink | Reply

Re: Andrew (#38), Also, Leif might object to the presence of solar forcings, which doubtless assume large increases before 1950, for which there doesn't seem to be any basis.

• Ron Cram
Posted May 12, 2009 at 3:51 PM | Permalink | Reply

Re: Hans Erren (#37),

Hans, interesting. I cannot remember the number Petr Chylek came up with for aerosol forcing. I only remember that his number was much lower than the IPCC numbers. With a lower aerosol forcing, you can have a lower CO2 sensitivity.

Re: MJW (#36),
Great quote! I always enjoy reading Pielke’s work.

18. Ivan
Posted May 12, 2009 at 4:13 PM | Permalink | Reply

It looks like Roy Spencer has joined the crowd of deniers of the human cause of the CO2 rise. http://www.drroyspencer.com/2009/05/global-warming-causing-carbon-dioxide-increases-a-simple-model/ or

http://wattsupwiththat.com/

19. Mark T
Posted May 12, 2009 at 4:29 PM | Permalink | Reply

Spencer’s long been painted with that title.

Mark

20. Greg Goodknight
Posted May 12, 2009 at 5:34 PM | Permalink | Reply

Looks like snip ScienceDaily has discovered proof that the sun and cosmic rays have nothing to do with the past warming. A computer simulation of reality has proved the effect is too small to make a difference. Sure glad reality can be simulated; it sure makes science easier.

“Changes In The Sun Are Not Causing Global Warming, New Study Shows”. 12 May 2009
http://www.sciencedaily.com/releases/2009/05/090511122425.htm

• Andrew
Posted May 12, 2009 at 5:50 PM | Permalink | Reply

Re: Greg Goodknight (#43), One wonders how they can explain the presence of a solar cycle in the data that can’t be explained with TSI if amplifying mechanisms can’t exist. Hm, maybe the models can prove that the cycle doesn’t exist because they don’t show it? Yeah, that would work. Observations be darned! All power to the sov-er computers!

• Posted May 12, 2009 at 7:21 PM | Permalink | Reply

They’re shooting themselves in the foot, then. If solar and cosmic ray mechanisms did not modulate temp significantly, and CO2 supposedly wasn’t a factor, what does that leave? And why isn’t what’s left still a factor?

21. Posted May 12, 2009 at 8:15 PM | Permalink | Reply

Andrew–
Grumpy looks similar to "Lumpy", discussed here.

• Andrew
Posted May 12, 2009 at 9:28 PM | Permalink | Reply

Re: lucia (#44), Yes, very similar-the name was inspired by Lumpy!

22. kim
Posted May 12, 2009 at 9:01 PM | Permalink | Reply

There is blockbuster news out of Washington, today. The White House’s Office of Management and Budget has sent a memo to the EPA alleging that the Precautionary Principle has been stretched beyond the science, that EPA’s ruling that CO2 endangers public health has been insufficiently documented, and that EPA regulations of CO2 would cause serious damage to the economy. Probably as a result of the memo, EPA’s director, Lisa Jackson, told Congress today that the endangerment finding won’t necessarily lead to regulation.
========================================

23. Scott Brim
Posted May 12, 2009 at 9:33 PM | Permalink | Reply

A REQUEST FOR LUCIA:

Lucia, concerning your Blackboard analysis of the latest episode of the Endless Schmidt-Monckton Kerfuffle, could I ask a favor of you?

Could you please enlighten me as to your understanding of the origin and intended function of the following graphic that Gavin Schmidt posted as part of his side of the kerfuffle?

More specifically, what is your understanding as to how the "IPCC Ensemble and 95% range" portion of the graph was generated?

Presumably, this graphic didn't have to end at 2009; i.e., it could have been extended as far into the future as might be necessary to illustrate what the IPCC temperature projection is for the next twenty-five to forty years (or further), including the 95% range band.

Is my thinking on that score correct?

Steve: I suggest that you ask Lucia at her excellent blog.

• bender
Posted May 13, 2009 at 7:53 AM | Permalink | Reply

Re: Scott Brim (#47),
Better yet, ask Gavin himself. I’m sure he’ll help to the maximum of his ability.

• bender
Posted May 13, 2009 at 7:57 AM | Permalink | Reply

Re: Scott Brim (#47),
It looks to me like those confidence intervals may have been post hoc widened. Compare them to the IPCC's.

• Scott Brim
Posted May 13, 2009 at 9:25 AM | Permalink | Reply

Steve: I suggest that you ask Lucia at her excellent blog.

OK, it's now been done here ….

I noted in my comment on her forum that it had been suggested on CA that perhaps I should be asking Gavin Schmidt himself these same questions, but I offered the observation that doing so would undoubtedly prove a useless exercise.

bender: It looks to me like those confidence intervals may have been post hoc widened. Compare them to the IPCC's.

Andrew: The problem is he is comparing the spread in anomalies in models, whereas lucia has been doing trends.

A quick look at IPCC AR4 didn't yield an obviously comparable graphic to what Schmidt had generated, but maybe I simply didn't look closely enough.

The reason I was curious was that the plot of the IPCC Ensemble and 95% interval seems to show an apparent visual correlation with the plots of HadCRUT3 and GISTEMP.

So… what precisely is he attempting here in terms of a communications objective, and where did the information concerning the IPCC Ensemble and 95% interval shown on his graphic come from?

• Steve McIntyre
Posted May 13, 2009 at 10:04 AM | Permalink | Reply

Re: Scott Brim (#58),

I oppose obscurantism in showing how results are derived, regardless of whether the author is Mann or Monckton. I think that Monckton should provide clear derivations and relevant source code for his results, just as I think that Mann should. If Lucia can’t figure out what he did, then I doubt that I’d be able to either. I support critical analyses from any quarter and if Gavin Schmidt or one of his associates wishes to post a critical analysis of Monckton here for CA readers, then they are welcome to do so, but my own priorities are trying to figure out what IPCC authors did.

• Scott Brim
Posted May 13, 2009 at 10:46 AM | Permalink

Lucia has written a highly enlightening discussion concerning my questions, which I think is worthwhile to reproduce in full:

Lucia, May 13th, 2009 at 9:10 am

Scott,

Gavin created that graph. I don't know if he downloaded data from the climate explorer, or whether he processed data from PCMDI. He describes what he did here:

(Lucia quoting Gavin Schmidt) "This can be done a number of ways, firstly, plotting the observational data and the models used by IPCC with a common baseline of 1980-1999 temperatures (as done in the 2007 report) (Note that the model output is for the annual mean, monthly variance would be larger):"

I have posted similar graphs. Here's one I had handy in a recent spreadsheet. Like Gavin's, it uses the AR4 baseline.

No precisely similar graphic appears in the AR4. Their projections in Figure 10.4 show 1-standard-deviation bounds based on the model mean temperature anomalies. The graph above and the one Gavin shows show larger uncertainty bounds based on the spread of "all weather in all models". For whatever reason, the authors of the AR4 chose a graphic that suggests less uncertainty in their "prediction/projections"; now that the temperatures have been flat, Gavin prefers to show these larger ones.

(Lucia quoting Scott Brim) "Presumably, this graphic didn't have to end at 2009; i.e., it could have been extended as far into the future as might be necessary to illustrate what the IPCC temperature projection is for the next twenty-five to forty years (or further) including the 95% range band."

Yep. But if we showed this, it would be useful to ask this: the authors of the AR4 did not select the "all weather in all models" spread as their uncertainty bands. So, why should we believe that's the uncertainty they believe exists? (I tend to think they did not consider that the correct uncertainty, for reasons having to do with the unrealistic nature of "weather" in climate models. Those uncertainty bands are ginormous.)

(Lucia quoting Scott Brim) "Now, it has been suggested on CA that perhaps I should be asking Gavin Schmidt himself these same questions, but that undoubtedly would prove a useless exercise."

I suspect Gavin would love you to ask this question. He appears to like the notion that we should use the largest possible uncertainty bands when deciding whether or not models are off track.
I can make those graphs for you.

This is my initial response to what Lucia said, one which further illuminates the reasons why I'm asking these questions concerning Schmidt's graphic:

Scott Brim, May 13th, 2009 at 10:11 am

Thanks, Lucia. Your response is highly enlightening.

One could label the latest Schmidt-Monckton Kerfuffle as "The Battle of the Bands"; i.e., what kind of music do the bands play, how popular are the bands, where do the bands live, whose band belongs to whom, and how loud (oops, "wide") are the bands?

One reason I am so curious concerning the genesis of Schmidt's graphic is that the plot of "IPCC Ensemble and 95% interval" seems to show an apparent visual correlation with the plots of HadCRUT3 and GISTEMP.

The other reason I am so curious is the obvious point that if the uncertainty bands are wide enough, almost anything that happens in apparent temperature patterns over the next ten to twenty years, up or down, could be held to be "consistent with the predictions of the climate models."

• bender
Posted May 13, 2009 at 8:56 PM | Permalink

Re: Scott Brim (#62),

Lucia: For whatever reason, the authors of the AR4 chose a graphic that suggests less uncertainty in their "prediction/projections"; now that the temperatures have been flat, Gavin prefers to show these larger ones.

As I stated in #53.

24. Geoff Sherrington
Posted May 13, 2009 at 12:36 AM | Permalink | Reply

From S Schneider above,

If you were a cynic and you asked about the probability of the ice sheet in the north going up, it's 50 percent. Going down? It's 50 percent. And the South Pole going up? 50 percent. Going down? 50 percent. Probability they are both going up together? Twenty-five percent.

I am a cynic, and I say that this is wrong math. It is wrong because the N pole and S pole areas are not independent, like successive tosses of a coin. A strong increase in TSI, for example, could make them both rise together far more often than the (wrong) 25% probability suggests.
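
Geoff's objection is easy to check numerically: give both "poles" a shared driver and the joint probability of warming departs from the naive 25%. The coupling model below is purely illustrative, not a claim about actual polar dynamics:

```python
# Monte Carlo check: two "poles" each respond partly to a shared
# driver (e.g. TSI) and partly to their own noise. With no coupling
# the joint warming probability is ~0.25; with strong coupling it
# climbs well above 0.25, breaking the coin-toss multiplication.
import random

def joint_warming_fraction(coupling: float, trials: int = 100_000,
                           seed: int = 42) -> float:
    """Fraction of trials in which both poles warm, where each pole's
    anomaly = coupling * shared_driver + (1 - coupling) * own_noise."""
    rng = random.Random(seed)
    both = 0
    for _ in range(trials):
        driver = rng.gauss(0, 1)
        north = coupling * driver + (1 - coupling) * rng.gauss(0, 1)
        south = coupling * driver + (1 - coupling) * rng.gauss(0, 1)
        if north > 0 and south > 0:
            both += 1
    return both / trials

print(joint_warming_fraction(0.0))  # independent case: close to 0.25
print(joint_warming_fraction(0.9))  # strongly coupled: well above 0.25
```

With coupling near 1 the joint probability approaches 0.5 rather than 0.25, which is exactly why the multiplication in the interview is not valid for correlated lines of evidence.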

• D. Patterson
Posted May 13, 2009 at 2:42 AM | Permalink | Reply

Someone needs to introduce Powell to the teeter totter.

• Mark T
Posted May 13, 2009 at 9:28 AM | Permalink | Reply

Re: Geoff Sherrington (#48), Without a priori knowledge of the distribution, you cannot even make a guess. It’s just a bad use of statistics to convince people who don’t know otherwise that there’s some logic in the decision.

Mark

• Andrew
Posted May 13, 2009 at 9:38 AM | Permalink | Reply

Re: Mark T (#59), Yes, it is, as I said above, Walter Wagner style probability analysis. Which I thought was rather clever and funny.

25. Posted May 13, 2009 at 3:49 AM | Permalink | Reply

I need a little help, here.

I recall a post that appeared here on CA a few months ago, which took a look at the claim of an 800-year lag between temperature rise and CO2 level rise, and found it wanting.

But I am unable to find the article either by browsing the archives or using the search function; does anybody here have the link?

• D. Patterson
Posted May 13, 2009 at 6:18 AM | Permalink | Reply

Re: FabioC. (#50),

There were a number of threads which discussed the lag in 2007, and one or more brief mentions in 2008. Search CA using “lag” and “800” in the search box in the upper right corner of this web page.

26. D. Patterson
Posted May 13, 2009 at 8:46 AM | Permalink | Reply

When Nature Becomes a Denier

The Catlin Arctic Survey was featured in the news broadcast of ABC News Good Morning America this morning. GMA traveled with the resupply flight, but weather denied the GMA reporter and the resupply flight the opportunity to reach the survey team; instead, the video report was made from Resolute Bay. The news report made no mention of the extensive recovery of first-year sea ice in the Arctic. Instead, the report concentrated on the previous loss of ice in the Arctic, the uniqueness of the survey team’s use of radar to measure the ice thickness, and the rigors of the Arctic weather. With Nature denying the team enough progress toward its North Pole objective before the summer melt season, the Catlin Arctic Survey team is now preparing to abort the expedition later in the week, according to the latest communique on their Website.

The survey’s April ice report in PDF format emphasizes “less than expected” ice thickness, and omits or minimizes any information contrary to the theme of diminished Arctic sea ice.

ABC News promised to continue their news coverage of the Catlin Arctic Survey, its research results, and their significance to Climate Change.

• Andrew
Posted May 13, 2009 at 9:17 AM | Permalink | Reply

Re: D. Patterson (#55), How can a few months of measurement in sparse locations possibly have “significance to Climate Change”? Just askin’. Since we know eight or ten years ain’t enough.

• D. Patterson
Posted May 13, 2009 at 9:22 AM | Permalink | Reply

Re: Andrew (#56),

The answer depends upon whether your mission is scientific or….

27. Manfred
Posted May 13, 2009 at 1:58 PM | Permalink | Reply

How low is the probability that DISCOVER publishes Schneider’s embarrassing statistics while not publishing a compelling account of the other side – given that they do due diligence?

• jim edwards
Posted May 13, 2009 at 3:01 PM | Permalink | Reply

Re: Manfred (#63),

Duh !

25 percent

• Andrew
Posted May 13, 2009 at 5:06 PM | Permalink | Reply

Re: Manfred (#63), Re: jim edwards (#64), Apart from the hilarity of the continuing Wagner-style probability analysis (which I will repeat over and over until it catches on, dang it!), the key to the question is “given they would do due diligence” – which means, extending the Walter Wagneresque math, you need to multiply by 50% again. So 12.5%.

More seriously, but still somewhat facetiously, 5% – oh wait, it’s climate science, so 10% (ha ha, CI joke).

28. bender
Posted May 13, 2009 at 8:57 PM | Permalink | Reply

Rescue the models! Even if it means admitting some uncertainty!

29. Posted May 14, 2009 at 1:03 AM | Permalink | Reply

Dear Steve, you can almost certainly be credited for teaching the climate scientists and journalists how to say “due diligence”. However, so far, you haven’t taught them to do it.

• Steve McIntyre
Posted May 14, 2009 at 6:09 AM | Permalink | Reply

Re: Luboš Motl (#68),
Luboš, the term came up in the review of our submission to IJC on Santer, which we just received and the response to which I’m mulling over. We did not get the actual reviews – only the editor’s paraphrase, as the reviewers refused to permit the editor to provide us with the actual reviews, leaving the editor with the task of paraphrasing them. In places, he seems to have cut-and-pasted anyway, and the term “due diligence” appears as follows. (We had referred to a Statistical Appendix in a CCSP report as being to Chapter 5; the reviewer observed (correctly) that the Statistical Appendix was to the report as a whole. As to back-up, we attached turnkey source code generating all statistics and figures used in the submission, so there was no problem in verifying or referencing results.) Nonetheless, the reviewer (as paraphrased by the editor) proceeds:

6…It is also stated that Chapter 5 had an appendix; there was no such appendix. The whole report had an appendix. This is but one glaring example of referencing on the fly with at best half truths and leaves the knowledgeable reader with the impression that the author has referenced the material without actually reading it first. This may be a mis-impression. In which case the author needs to work very hard on greater clarity in their referencing style.

7. The RSS dataset version is different and is why the analysis of that series differs. The readme file on the RSS site documents this and is easily available. This and a lack of referencing conspire to rightly or wrongly give an impression of lack of due diligence in understanding the data before its use.

• Craig Loehle
Posted May 14, 2009 at 6:36 AM | Permalink | Reply

Re: Steve McIntyre (#69), This is the first time I have seen a slight mis-referencing (of the appendix here) being called a big deal. Wow. And the reviewers refused to allow their reviews to be passed on?

• Pat Frank
Posted May 14, 2009 at 11:18 AM | Permalink | Reply

Re: Steve McIntyre (#69), “as the reviewers refused to permit the editor to provide us with the actual reviews,…

In nearly 30 years of publishing in refereed journals, I have never, ever heard of such a thing. I don’t see how a referee can even make such a demand, much less why an editor would feel compelled to honor it.

As soon as one agrees to review a manuscript, one has accepted the terms of journal protocol. That means submitting a timely review which is passed along to the communicating author. No ifs, ands, or buts.

Imagine the level of justice in a court, in which the judge allows that the defense gets only a paraphrase of the evidence against the accused, while the prosecution gets the detailed defense brief. The stink of a star chamber rises.

Climate science clearly inhabits a world of its own, where such corrupted processes are accepted without official qualm or widespread protest.

• Steve McIntyre
Posted May 14, 2009 at 1:29 PM | Permalink

Re: Pat Frank (#71),

The editor explained that reviewers for IJC have the option of choosing “confidential comments to the editor” or “comments to the author” and in our case, they chose the former.

• RomanM
Posted May 14, 2009 at 1:54 PM | Permalink

The arrogance of so many in the climate science community never fails to amaze me. My suspicion is that the reviewers must have chosen to dismiss the situation out of hand on an antagonistic ad hominem basis. This suspicion is supported by the wording in statements such as

This is but one glaring example of referencing on the fly with at best half truths and leaves the knowledgeable reader with the impression that the author has referenced the material without actually reading it first.

This and a lack of referencing conspire to rightly or wrongly give an impression of lack of due diligence in understanding the data before its use.

Not submitting comments to the author has the advantage that the “knowledgeable reader” need not address the issues under consideration.

It amazes me, but does not surprise me.

• Peter D. Tillman
Posted Jul 1, 2009 at 10:55 AM | Permalink

Re: RomanM (#74),

“My objections to the global warming propaganda are not so much over the technical facts, about which I do not know much, but it’s rather against the way those people behave and the kind of intolerance to criticism that a lot of them have…” — Freeman Dyson, http://www.e360.yale.edu/content/feature.msp?id=2151
June 4, 2009 interview by Yale’s “Environment 360”, and a reply to the NYT article below.

Dyson blames Jim Hansen and Al Gore for the pair’s “lousy science”, which is “distracting public attention” from “more serious and more immediate dangers to the planet.” — http://www.nytimes.com/2009/03/29/magazine/29Dyson-t.html?sq=Freeman&pagewanted=all
NY Times, “The Civil Heretic”, long profile of Dyson

Pete Tillman

• Posted May 14, 2009 at 2:08 PM | Permalink

In some countries, authors can request to see all referee comments under the Data Protection Act 1998.

• Pat Frank
Posted May 14, 2009 at 4:53 PM | Permalink

Re: Steve McIntyre (#72), Typically, the way it works is that the confidential comments to the editor are in addition to the review itself. For example, in one paper I reviewed, the authors chose to use such large data points that their poor fit was obscured. I added a confidential comment to the editor that this sort of thing should be forbidden by policy. But the entire review itself went to the authors.

Other than something extraordinary like that, I never use the confidential comment option. Reviews always go full-fledged to the author.

So, this policy of the IJC, of confidential comments to the editor, as opposed to comments to the author, appears to have been misused. It typically is not meant to hide the critical review from the author. It is only meant to allow additional private comments to the editor. These comments may be critical, but are not central to the critical review of the author’s work.

Hiding a review from the authors flies in the face of the point of a review. Reviews are supposed not only to illuminate flaws in analysis, but also to help researchers in their thinking. Reviews are supposed to be constructive. Hiding a review frustrates the very intent of the review itself.

How can one revise a paper, or rebut a point, if the exact nature of the review is willfully hidden? It’s star chamber injustice migrated into science.

• Andrew
Posted May 14, 2009 at 6:00 PM | Permalink

Re: Pat Frank (#78),

in one paper I reviewed, the authors chose to use such large data points that their poor fit was obscured.

This might be an interesting and informative story. Can you elaborate?

• Pat Frank
Posted May 14, 2009 at 6:24 PM | Permalink

Re: Andrew (#79), Hi Andrew, that’s about all there is to say. The work had other flaws that were worse and I ended up recommending rejection after 3 full go-rounds with the authors. But more than that, I really mustn’t say. The review process is confidential, after all.

• Andrew
Posted May 14, 2009 at 9:15 PM | Permalink

Re: Pat Frank (#80), I figured :blush: — after I typed that, I thought “Hm, that’s probably confidential”. Oh well. Still, it’s interesting as a vague anecdote. Bogus work is not merely restricted to climate, I guess (although free passes from journal editors may be).

• bender
Posted May 14, 2009 at 10:18 PM | Permalink

Re: Steve McIntyre (#72),
Having this option is not at all unusual. What is highly unusual is a reviewer choosing to supply 100% of their commentary in the form of confidential comments to the editor. I’ve never heard of that.

• Posted May 14, 2009 at 2:21 PM | Permalink | Reply

Wow. You need a club membership or a hockey stick to get past the door guards.

The previous paper’s methods are already published; this may turn into one of the best examples of maintaining the appearance of consensus through back-door politics. IJC had better be careful.

Which undergrad student in climatology doesn’t understand RSS?

• Dave Andrews
Posted May 15, 2009 at 1:45 PM | Permalink | Reply

Seems to me the use of “due diligence” is an intended slight.

But I thought we were now supposedly in an age of greater transparency. If, as others have said, it is unheard of for 100% of reviews to be withheld, why has the editor of IJC gone along with it? Why has the editor not protested vigorously to the reviewer and, if the reviewer is adamant, approached someone else to review instead?

What is the point of having reviewers for papers if the original authors cannot see and respond to the review?

30. AnonyMoose
Posted May 14, 2009 at 1:37 PM | Permalink | Reply

One attempt to measure part of the deep ocean conveyor seems to have failed. How many theories and models will have difficulty if the deep ocean conveyor doesn’t exist?
http://deepseanews.com/2009/05/deep-ocean-conveyor-belt-reconsidered/

31. Mark T
Posted May 14, 2009 at 2:18 PM | Permalink | Reply

I reviewed a paper for the Franklin Institute this past summer, and I seem to recall having an option to let the author see my full comments. As I recall, I said yes, though nothing I said was really critical other than one major point. I don’t know why anyone would refuse, particularly since reviewers are anonymous anyway. I guarantee the author of the paper I reviewed has never heard of me; he was from another country, publishing in an entirely different field from my own (using a technique I’m familiar with, however).

Mark

32. Gerald Browning
Posted May 14, 2009 at 10:06 PM | Permalink | Reply

Steve McIntyre,

Welcome to the world of peer reviewed manuscripts by politically motivated “scientists”.
When the “scientists” cannot refute the results in a manuscript they do not want to see published, they start nitpicking at details. This reflects poorly not only on the IJC reviewers, but also on the competence of the Editor (see my earlier comments on this topic).

Jerry

33. Posted May 14, 2009 at 11:27 PM | Permalink | Reply

“This is but one glaring example of referencing on the fly with at best half truths and leaves the knowledgeable reader…”

This is actually a really important issue, because it is a means of enforcing the orthodox ‘interpretation’ of a paper. Even if you accurately reference a work, a reviewer can reject it. This has happened to me: a paper reported that species ranges were responding to climate change, and therefore showed risk to species from global warming. I referenced the work, noting that their own data showed a range of responses – range expansion and no change at all – and that the increase in risk to species was therefore uncertain. I was told in no uncertain terms that such an interpretation of the data was ‘preposterous’, and that everyone knows global warming increases species risk.

You can’t win on interpretations. I would be interested to hear what they said about the concrete fact that Santer truncated a series that would otherwise have not been statistically significant.

• Steve McIntyre
Posted May 15, 2009 at 7:42 AM | Permalink | Reply

nowhere (in the material paraphrased by the editor) did either reviewer acknowledge or note in passing that we had demonstrated that Santer’s truncation of the data affected the validity of any of their claims.

• Posted May 15, 2009 at 6:37 PM | Permalink | Reply

Re: Steve McIntyre (#87), “nowhere (in the material paraphrased by the editor) did either reviewer acknowledge or note in passing that we had demonstrated that Santer’s truncation of the data affected the validity of any of their claims.”

Which is why, in this situation, I think one has a better chance by sticking to the unassailable facts and pruning out any points that might be used as diversions, however valid they seem. That’s been my approach of late.

• Steve McIntyre
Posted May 15, 2009 at 6:48 PM | Permalink

Re: David Stockwell (#95), I’m still trying to decide what to do with this project. I think that I’d prefer not to discuss the review comments at this time, but will revisit this in the not too distant future.

• bender
Posted May 15, 2009 at 9:49 AM | Permalink | Reply

I was told in no uncertain terms that … and everyone knows global warming increases species risk.

I would be interested to see the actual quote.

34. Craig Loehle
Posted May 15, 2009 at 6:40 AM | Permalink | Reply

I find it ironic that one of their complaints is that Steve M used the wrong version of the RSS data. Shouldn’t the authors have documented exactly which data set version they used, and even made it available? Just saying…

• Steve McIntyre
Posted May 15, 2009 at 7:31 AM | Permalink | Reply

We didn’t use the “wrong version of the RSS data”. I get very annoyed with the failure of Team scientists to provide precise data citations, as references to dead-tree publications often do not pin down version questions, and it always takes a while to figure out what they did. In our Supplementary Information, we provided a turnkey script that showed the exact provenance of the data. However, re-reading the article, while we provided an indirect data citation in the SI and demonstrated a reconciliation to Santer et al 2008, we did not provide a dead-tree citation to RSS as Mears et al… (or UAH) in the article itself. I don’t have any problem adding the relevant references; the reviewer’s tone seems a bit hysterical.

• Craig Loehle
Posted May 15, 2009 at 9:12 AM | Permalink | Reply

Re: Steve McIntyre (#86), Sorry, I misunderstood. I also see that the reviewers did not bother to look at your R script. I have had similar experiences, where I sent complete computations which were ignored.

• Mark T
Posted May 15, 2009 at 9:50 AM | Permalink | Reply

I don’t have any problem adding the relevant references; the reviewer’s tone seems a bit hysterical.

If the only criticisms they can come up with are of the hysterical, or trivial, nature, then you are on good ground in the long run. If there were legitimate criticisms, they wouldn’t have to reach for the minutiae.

Mark

35. SidViscous
Posted May 15, 2009 at 12:17 PM | Permalink | Reply

Interesting article about deep sea currents. Another variable in the models called into question.

http://www.sciencedaily.com/releases/2009/05/090513130942.htm

36. Gerald Browning
Posted May 15, 2009 at 1:39 PM | Permalink | Reply

Mark T (#90),

You assume the Editor is competent and unbiased. If that were the case, the Editor would not have chosen such incompetent reviewers; or, if the Editor made an innocent mistake in choosing the reviewers (unlikely), he would have thrown out the politically motivated reviews.

Jerry

• Mark T
Posted May 15, 2009 at 2:54 PM | Permalink | Reply

You assume the Editor is competent and unbiased.

Not at all. I only assume that legitimate criticisms would have been lodged by reviewers had they actually existed (or the reviewers were capable of noticing them). Regardless of the editor’s competence and/or bias, trivial review points will always get through. The editor’s bias only becomes a factor if he chooses to reject based on the minutiae, which was really an aside to my original point.

Mark

37. Andrew
Posted May 15, 2009 at 8:10 PM | Permalink | Reply

CNN Money interviews John Christy:
http://money.cnn.com/2009/05/14/magazines/fortune/globalwarming.fortune/?postversion=2009051412

38. Craig Loehle
Posted May 16, 2009 at 8:48 AM | Permalink | Reply

My paper on why tree rings are not valid treemometers is finally out in print (vs. online last Sept.):

39. Gerald Browning
Posted May 16, 2009 at 5:56 PM | Permalink | Reply

Mark T. (#94),

All too often, an Editor avoids his responsibility to properly judge the quality of the reviewers’ comments and just counts the votes. This is a sign of a weak or incompetent Editor. As I have pointed out before, a secretary can count the votes. The Editor is supposed to pick competent, unbiased reviewers (not the case here, and the question arises as to why) and to throw out politically motivated reviews (again not the case here). So we have another example of the peer review process gone very wrong.

Here is a simple example of the problem. My coauthor and I submitted a manuscript to a journal. One of the reviewers wrote a one-line hysterical statement (because he had published an ad hoc method that our manuscript called into question). The Editor rejected the manuscript based entirely on that review. I was later informed by the Editor (who now wanted to work with us) that the reviewer was in the same organization as the Editor, and the Editor was afraid that he might lose his support if he allowed our manuscript to be published. Conflict of interest, anybody?

Here is the other side of the coin. My mentor and his students submitted a manuscript on turbulence that derived a minimal scale estimate for the incompressible NS equations with a fixed viscosity coefficient. This manuscript caused an uproar in the turbulence modeling community because it clearly pointed out deficiencies in ad hoc treatments of turbulence. The reviews were quite negative, and my mentor mentioned to the Editor that if any of the reviewers could find an error in the mathematics, he would withdraw the manuscript. My mentor also suggested some alternative highly competent, well-known, and unbiased reviewers. The manuscript was accepted. This is a case where the Editor was self-confident enough (and highly respected) to arrive at the correct outcome.

Jerry

• Mark T
Posted May 17, 2009 at 1:57 PM | Permalink | Reply

Re: Gerald Browning (#99), I don’t disagree, but this isn’t my point. Steve is realistically on good ground w.r.t. his paper because there clearly aren’t legitimate criticisms (none worth rejection in the real world). Since there are (apparently) only BS criticisms, they should be easy enough to address in a manner that leaves the editor no choice.

Mark

40. Craig Loehle
Posted May 17, 2009 at 10:48 AM | Permalink | Reply

Sometimes, the claim that software is proprietary can cover up glaring glitches:
http://www.schneier.com/blog/archives/2009/05/software_proble.html
looks like Steve isn’t the only auditor in the world, but more still needed…

• Ron Cram
Posted May 18, 2009 at 1:41 PM | Permalink | Reply

Your paper on ocean cooling is being discussed at SkepticalScience blog. I thought you might want to comment.

• Andrew
Posted May 18, 2009 at 1:54 PM | Permalink | Reply

Re: Ron Cram (#104), Love the discussion framing headline.

• Ron Cram
Posted May 18, 2009 at 2:12 PM | Permalink

Re: Andrew (#105),

I hope you join the discussion!

• Andrew
Posted May 18, 2009 at 2:16 PM | Permalink

Re: Ron Cram (#106), Maybe at some point, but I’m kinda busy with some stuff right now. Still, it would be interesting!

41. Andrew
Posted May 18, 2009 at 9:29 AM | Permalink | Reply
42. Andrew
Posted May 18, 2009 at 11:53 AM | Permalink | Reply

What can records tell us about trends? Not much.

The red curve is the HadCRUT annual global temperature anomaly (note that all values have been shifted upward by a constant to get a more symmetrical window). The top black curve shows the highest value to date at each point in the data. The bottom black curve shows the lowest value over the period from the date in question to the present. As you can see, an absence of new records being set does not necessarily imply a lack of trend. But they can coincide.
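The two black curves described above are simple running records. A minimal sketch of how they might be computed, using a toy series in place of the actual HadCRUT data (which is not reproduced here):

```python
def record_highs_to_date(series):
    """Highest value seen up to and including each point (the top black curve)."""
    highs, best = [], float("-inf")
    for x in series:
        best = max(best, x)
        highs.append(best)
    return highs

def record_lows_from_date(series):
    """Lowest value from each point to the end of the series (the bottom black curve)."""
    lows, worst = [], float("inf")
    for x in reversed(series):
        worst = min(worst, x)
        lows.append(worst)
    return list(reversed(lows))

toy = [0.1, 0.3, 0.2, 0.5, 0.4, 0.4]
print(record_highs_to_date(toy))   # [0.1, 0.3, 0.3, 0.5, 0.5, 0.5]
print(record_lows_from_date(toy))  # [0.1, 0.2, 0.2, 0.4, 0.4, 0.4]
```

Note that both curves are flat wherever no new record is set, which is why a stretch without records can coincide with either a trend or its absence.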

And now I go back to running various analyses of the data until I find another interesting result.

43. curious
Posted May 18, 2009 at 5:39 PM | Permalink | Reply

To D Patterson – Hi D – follow up from the other thread – if you want uncut coverage of Barton’s “5 min” of questions it is here:

5:40 to 12:25 covers it. Regards C

44. barry
Posted May 19, 2009 at 10:08 AM | Permalink | Reply

Regarding Watts’ surface stations project and his recent publication on siting problems, I asked him if there would be an open analysis of the data, as there was here at climateaudit earlier in the project (which was, for some reason, discontinued). He hasn’t answered. I will repeat my request here, for obvious reasons, hoping to gain some support.

Will we be able to observe the progress of data analysis as we did at climateaudit through 2007 for good stations? I’d like to stress again how important it is to do this openly and publicly rather than presenting the results as a fait accompli. Not just because it was fascinating to watch the stats and conversation unfold, but primarily to demonstrate transparency in the scientific process (particularly when we castigate others for keeping data and methods secret). You may elect to publish the results all at once, with data and methodology included, but it can only give confidence to the work if the world can see it being done step by step.

I can’t think of a good reason not to do it that way, as exemplified in the climateaudit analysis – surely one of the most focussed thread evolutions on just about any subject – no snark, just honest analysis and effective exchanges dedicated to revealing the truth. This shows the best of us.

I was a lurker to that excellent discussion and feel quite strongly about my request. This is an important contribution and shouldn’t be marred by the kind of secretiveness shrouding the work of Team et al. Rather, we should (again) set an example – particularly on this signal effort.

Stephen McIntyre, 70% of USHCN sites have been surveyed. Will you resurrect the public analysis of data? (Or is that already underway?)

My thanks and congratulations on this fascinating site.

• Gunnar
Posted May 20, 2009 at 9:21 AM | Permalink | Reply

Re: barry (#109), To what end?

The surface network siting issues may be interesting to some, but it’s not really relevant to climate study. Since 1979, the satellite data is comprehensive. From a climate POV, the surface data only serves to calibrate the satellite data, and that takes only a few good stations. Of course, the surface network data prior to ’79 needs to be good.

Otherwise, the surface network serves the purpose for which it was designed: namely, to tell people the weather where they live, which includes cities, around AC units, etc.

• Posted May 20, 2009 at 11:28 AM | Permalink | Reply

Re: Gunnar (#115),

snip – too petty.

• Gerald Machnee
Posted May 22, 2009 at 3:07 PM | Permalink | Reply

Re: Gunnar (#115),

Re: barry (#109), To what end?

The surface network siting issues may be interesting to some, but it’s not really relevant to climate study. Since 1979, the satellite data is comprehensive. From a climate POV, the surface data only serves to calibrate the satellite data, and that takes only a few good stations. Of course, the surface network data prior to ’79 needs to be good.

I disagree. The surface data is still being used to show how we are “warming”. Calibration is not its sole use. Satellite data may be comprehensive, but there are still problems with it. The siting problems noted by Watts and his crew are being noticed at higher levels. The question is what they are going to do with it. Surface data is important – there is a reason that Steve McIntyre is being refused access to the information.

• Gunnar
Posted May 23, 2009 at 11:23 AM | Permalink

Re: Gerald Machnee (#126), Gerald, you and I have covered this ground before. We agree that it’s being used for a political purpose, not scientific.

A careful, objective look at all methods would show that all are problematic. For example, we don’t actually measure daytime highs in the sun. The reason is that, with thermometers, it’s almost impossible to do so. Thermometers are so affected by direct sunlight that they’re not used that way.

We have a standard that says we always measure it in the shade, but air is such a poor conductor of heat that it’s significantly cooler in the shade. What one would need is an IR instrument in the shade measuring air in the sun. Wait, that’s what the satellite instruments do.

• Gerald Machnee
Posted May 24, 2009 at 7:03 PM | Permalink

Re: Gunnar (#130),

What one would need is an IR instrument in the shade measuring air in the sun. Wait, that’s what the satellite instruments do.

And you are telling us that we have accurate surface temperatures around the globe measured by satellite(s)?

• Gunnar
Posted May 24, 2009 at 7:29 PM | Permalink

Re: Gerald Machnee (#133), since you and I have been over this ground before, I would be pretty bored rehashing it. In short, LT != surface and like I said, “A careful, objective look at all methods would show that all are problematic.”

• bender
Posted May 20, 2009 at 9:47 AM | Permalink | Reply

Re: barry (#109),
How can anyone say beforehand whether there will be an open public analysis? It’s not like there is a committee at CA or WUWT that conspires to do things that way. The analysis will be a public process only to the extent that analysts choose to participate and make their logic and results public in real time. If you want to encourage that, you have to offer up some quatloos. Bake up some brownies. Bait the analysts with something tempting. That’s what Ken Fritsch does. It works.

• Steve McIntyre
Posted May 20, 2009 at 11:04 AM | Permalink | Reply

Re: barry (#109),

I think that there has been extensive analysis and discussion of the results. At the time of the bulk of this discussion, about 30-35% of the survey was done – this was enough for me to form views on the matter and I’d be amazed if the additional information changed these views:

1) the survey showed that there is a significantly higher trend in CRN5 stations (let alone Phoenix Airport and such CRN5+ stations) than in CRN1-2 stations.

2) there was a reasonably decent population of CRN1-2 stations in the United States

3) there is a very large difference between the CRU/NOAA trends in the U.S. and the GISS trend in the US (as I recall, almost 0.5 deg difference since the 1920s), with CRU/NOAA making no attempt at UHI adjustment (applying the Jones et al 1990 theory that UHI was no more than 0.1 deg/century or so).

4) the GISS adjustment for UHI in the US was very different from the GISS adjustment in the ROW. In the US, the USHCN network, despite its warts, provided a large enough sub-network of rural sites and the GISS nightlights adjustment, despite its warts and despite the warts of the USHCN network, worked well enough that the GISS history for the US was pretty close to a properly QCed history from the CRN1-2 stations.

I think that the study provides strong evidence against the Jones et al 1990 claim that UHI cannot be more than 0.1 deg C/century or so. It gives an objective reason to discount the CRU and NOAA temperature histories in the US. The latter is perhaps timely in that the recent EPA finding relies on the NOAA history, without mentioning the GISS version.

People have argued that this vindicates worldwide GISS methodology. It doesn’t. The GISS adjustment elsewhere in the world is not based on a defined rural network but on inaccurate population statistics and ends up being nothing more than a random permutation. GISS trends in the US are a lot lower than CRU and NOAA, but in the ROW are the same or higher. My own druthers would have been to get corresponding information on ROW sites.

For my purposes, I’m content with the above level of understanding of the situation. At some point, I’ll update things, but it takes me a while to refresh my understanding of a file.

• Mark T
Posted May 20, 2009 at 11:21 AM | Permalink | Reply

this was enough for me to form views on the matter and I’d be amazed if the additional information changed these views

I haven’t heard anything to the contrary.

Re: bender (#116),

How can anyone say beforehand whether there will be an open public analysis?

Indeed. Some of the more interesting (to me) topics here have only a few dozen posts, often consisting mostly of posts from UC, Jean S, or one of a handful of others providing updates to the work they’re doing. Such topics can likely be dissected via email exchanges or other personal correspondence, rather than through open public analysis.

Mark

45. Andrew
Posted May 19, 2009 at 7:05 PM | Permalink | Reply

Interesting and useful site for sea level stuff:
http://tidesandcurrents.noaa.gov/sltrends/sltrends.shtml

46. Alex Harvey
Posted May 19, 2009 at 7:33 PM | Permalink | Reply

If anyone out there can answer a question it would be much appreciated.

I have been looking at the new Douglass and Christy 2009 just published (Limits on CO2 Climate Forcing from Recent Temperature Data of Earth, Energy and Environment, Vol 20 no1&2).

There Douglass and Christy write:

2. The atmospheric CO2 is well mixed and shows a variation with latitude which is less than 4% from pole to pole [Earth System Research Laboratory. 2008]. Thus one would expect that the latitude variation of ΔT from CO2 forcing to be also small. It is noted that low variability of trends with latitude is a result in some coupled atmosphere-ocean models. For example, the zonal-mean profiles of atmospheric temperature changes in models subject to “20CEN” forcing (includes CO2 forcing) over 1979-1999 are discussed in Chap 5 of the U.S. Climate Change Science Program [Karl et al. 2006]. The PCM model in Fig 5.7 shows little pole to pole variation in trends below altitudes corresponding to atmospheric pressures of 500hPa.

Thus, changes in ΔT that are oscillatory, negative or that vary strongly with latitude are inconsistent with CO2 forcing as indicated above.

The Karl et al. 2006 chapter that Douglass and Christy refer to is here: http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-chap5.pdf

Now Chris Colose has challenged me at his blog:

Show me where a model gives a roughly uniform trend for 90 N to 90 S. It doesn’t work like that.

Who is right here?

• Andrew
Posted May 19, 2009 at 8:17 PM | Permalink | Reply

Re: Alex Harvey (#111), I’m squinting at some of the figures and I see hot orange/red splotches near 75 North below 500 hPa in most of them. As far as PCM goes, however, it looks about right. Still, an important point that is missed here is that Christy and Douglass make no assumptions about feedback, and the probable cause of the latitude variations is, as I understand it, ice/albedo feedbacks.

• DeWitt Payne
Posted May 19, 2009 at 11:34 PM | Permalink | Reply

Just because the CO2 concentration is more or less constant does not mean the radiative forcing is constant with latitude or longitude for that matter. See these graphs from Hansen et al 2005. Delta T is yet another matter which is far more complicated because there is horizontal as well as vertical heat transfer in the atmosphere.

• DeWitt Payne
Posted May 19, 2009 at 11:35 PM | Permalink | Reply

That’s radiative forcing from 2xCO2.

47. Ivan
Posted May 20, 2009 at 12:23 PM | Permalink | Reply

Steve and others, I would like to refer you to the following post on Anthony Watts’s blog:

http://wattsupwiththat.com/2009/04/11/making-holocene-spaghetti-sauce-by-proxy/

Pretty interesting in my opinion. If the guy’s methodology and data are correct, the paleoclimatological disputes over the MWP and Holocene Optimum are pretty much “settled” – and a lot of the “divergence problem” with tree ring data as well.

48. Andrew
Posted May 20, 2009 at 7:32 PM | Permalink | Reply

I don’t think anyone has pointed out yet (correct me if I’m wrong) that a pre-print of Steve and Ross’s recent critique of Santer et al. 2008 is available on arXiv:
http://arxiv.org/ftp/arxiv/papers/0905/0905.0445.pdf

49. Andrew
Posted May 21, 2009 at 4:16 PM | Permalink | Reply

Roger Pielke Sr’s presentation to the Marshall Institute:
http://www.climatesci.org/publications/presentations/PPT-111.pdf

50. Andrew
Posted May 22, 2009 at 8:36 AM | Permalink | Reply

Pat Michaels on temperature record biases:
http://www.cato.org/pubs/regulation/regv31n3/v31n3-2.pdf

51. Craig Loehle
Posted May 22, 2009 at 10:00 AM | Permalink | Reply

Potentially interesting article:
MJ Costello
Title: Motivating Online Publication of Data
Full source: Bioscience, 2009, Vol 59, Iss 5, pp 418-427

52. Curtis
Posted May 22, 2009 at 11:32 AM | Permalink | Reply

Steve: snip – the purpose of this site is to critically examine matters. Serious people worried about climate change present serious arguments and are entitled to be taken seriously. It’s not about policy. I really don’t want to get into publicizing “skeptical” opinion pieces that do not rise above being opinion pieces. Sorry about that.

53. barry
Posted May 23, 2009 at 2:44 AM | Permalink | Reply

Thanks for the reply, Stephen.

It seems to me that an important test of the surface stations project, and of the conclusions offered by Watts, is to construct a temp series for the sites nominated by the Watts team as sound, set it against the ‘adjusted’ temp record for all USHCN sites, and compare the differences. Just over a year ago, this was being done with, as you say, about a third of all sites having been assessed. I’m no statistics guru, but wouldn’t it be worthwhile analysing the data now that there is better spatial coverage (ie, with the new good sites included)? Last year there were only a few good stations usable for the comparison, and when I queried the results I was advised that the coverage was not good enough and that it was premature to make a call.

For the others commenting, there was, as Stephen says, an extensive analysis (publicly, on climateaudit) of the station data 2007 – 2008. I’m not sure why this was abandoned. I’m only asking that it be resumed – publicly, as it was then. I have every confidence that if there was a thread inviting JohnV and the rest back to the analysis, it would be taken up again. The discussion was already three threads deep when it halted, a monumental effort. It would be a pity if we didn’t see that work extended, and if we didn’t openly apply the same rigour to our own analyses that we do those we suspect. I have quietly observed and supported the surfacestations project and hope that it will be as thorough and transparent as possible.

Steve: I didn’t “abandon” the analysis done then. I cover a lot of topics here and covered other topics. You say: “when I queried the results I was advised that the coverage was not good enough and that it was premature to make a call.” I don’t recall ever making such a statement and I think that my comments have consistently taken the position in my earlier response to you – that there was enough in hand to support the conclusions listed in my earlier post.

• BarryW
Posted May 23, 2009 at 8:06 AM | Permalink | Reply

Re: barry (#127),

Anthony’s effort is related to microsite issues. Even the stations that pass that filter may be contaminated by UHI, or may have had site moves or equipment changes affecting the continuity of the data. These issues are not addressed by this effort. His work IMO is important but not comprehensive enough to resolve all of the issues.

54. Andrew
Posted May 23, 2009 at 9:25 AM | Permalink | Reply

According to RC “any kind of warming” is associated with tropospheric amplification:
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
At first glance this appears to ruin all the fun of looking at observed amplification, or lack thereof. I mean, we can’t use it for attribution, so what’s the point? However, there may be something useful after all. According to RC, the amplification is expected and observed during short term fluctuations like ENSO. We can therefore look at the short term amplification and see what trends in the surface temp appear to be implied if we assume the satellite data is correct (one could do the reverse and, more dubiously IMAO, assume the surface is superior to the satellite). The fact that we make certain assumptions must be emphasized: that the satellite data are correct, that the amplification variation with timescale isn’t real, and that causes of warming do not vary the amplification level (basically these are the assumptions RC makes, so fair game, again IMAO).

Red: HadCRUT surface temp; Blue: UAH divided by the apparent coefficient of a linear fit of the residuals of UAH to HadCRUT; Green: HadCRUT with trend corrected for the same factor.

Spreadsheet available on request.

Supporting material:

Amplification should be seen in all kinds of warming:
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
Surface Trends have serious issues:
http://www.climatesci.org/publications/pdf/R-321.pdf
UAH:
http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt
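The scaling Andrew describes can be sketched in a few lines. This is a hypothetical illustration only: the series `surf` and `trop` are synthetic stand-ins with invented numbers, not actual HadCRUT or UAH data, and the helper names are my own.

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def detrend(t, y):
    """Subtract the OLS linear fit of y against t, leaving the fluctuations."""
    b = ols_slope(t, y)
    mt, my = sum(t) / len(t), sum(y) / len(y)
    return [yi - (my + b * (ti - mt)) for ti, yi in zip(t, y)]

# Synthetic stand-ins: a surface series and a tropospheric series whose
# short-term wiggles are amplified 1.4x relative to the surface.
t = list(range(120))                                 # months
surf = [0.010 * ti + 0.10 * (-1) ** ti for ti in t]  # trend + wiggle
trop = [0.012 * ti + 0.14 * (-1) ** ti for ti in t]  # trend + amplified wiggle

# Estimate the short-term amplification from the detrended (fluctuation)
# parts, then map the tropospheric trend onto the surface scale.
amp = ols_slope(detrend(t, surf), detrend(t, trop))  # ~1.4
implied_surface_trend = ols_slope(t, trop) / amp     # trop trend / amplification
```

The point of the sketch is only the order of operations: the amplification coefficient comes from the detrended residuals (the short-term fluctuations), and the long-term satellite trend is then divided by that coefficient, matching the description of the blue curve above.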

55. Michael Smith
Posted May 24, 2009 at 5:33 AM | Permalink | Reply

In 125, Steve wrote:

Serious people worried about climate change present serious arguments and are entitled to be taken seriously.

Why does a scientist or researcher who presents only his conclusions — who refuses to release the data or the methodology or the computer codes used to reach those conclusions — and who stonewalls your every attempt to examine his data, methodology or computer codes — why does such a person qualify as a “serious” person who deserves to be “taken seriously”?

And what, exactly, does “taken seriously” mean in such a situation? Does it mean that the burden of supporting his claims disappears — and instead becomes your burden to refute his claims — refute them without access to the information necessary to analyze the validity of those claims?

• Steve McIntyre
Posted May 24, 2009 at 6:24 AM | Permalink | Reply

I haven’t studied every article in the world; it’s impossible. I have my areas of specialty. There are lots of people writing in such areas that are worried about AGW and I’m in no position to say that they’re wrong without being familiar with their work.

Even in areas where I am familiar, on a policy level, I place blame primarily on the funding agencies, journals and IPCC for lack of coherent data and methodology requirements, as much as the individual prima donnas. In their shoes, I would feel embarrassed by the prima donnas creating controversy on unwinnable and irrelevant stonewalling and do everything in my power to limit the possibility of such behavior raising doubts of the sort that you express here.

Personally, I agree that the only way for IPCC to discipline such authors is to refuse to cite their work, but it doesnt mean that work done prior to the adoption of such a policy is worthless either.

I don’t have time to debate this BTW.

56. Gene Nemetz
Posted May 24, 2009 at 7:38 PM | Permalink | Reply

Hello Steve M,

The Waxman-Markey Bill passed 33-25. It seems odd to me that no one is talking about this. But then maybe no one knows about it. It requires goals that are unrealistic but are being called “science-based targets”. “This legislation would cut global warming pollution by 17% compared to 2005 levels in 2020, by 42% in 2030, and by 83% in 2050.”

57. Gene Nemetz
Posted May 24, 2009 at 7:38 PM | Permalink | Reply

From the government website :

The Energy and Commerce Committee approved H.R. 2454, “The American Clean Energy and Security Act,”

58. Bob Koss
Posted May 25, 2009 at 1:41 PM | Permalink | Reply

Roger Pielke Sr. responds to the Stephen Schneider interview on global warming debates.

More importantly, I am disappointed that Steve Schneider personally attacked the websites that are listed. I have quite a bit of respect for Dr. Schneider’s past work [e.g. his book Genesis Strategy is an excellent example of why we need a resource-based, bottom-up assessment of vulnerability, as has been discussed in our peer reviewed papers (e.g. see) and books (e.g. see)].

However, his casual denigration of each of the websites – Watts Up With That, Climate Skeptic, Climate Audit and Climate Science (each of whose contributions to the discussion of climate science are informative and very valuable) – represents a failure to engage in constructive scientific debate.

This cavalier dismissal of these websites illustrates that instead of evaluating the soundness of their scientific evidence, the authors of these websites, who provide a much needed broader viewpoint on climate science, are insulted. This is not the proper way to discuss scientific issues.

I would be glad to debate Dr. Schneider (or any of the other individuals who are listed).

It would be nice to have a debate. But I suspect Schneider will find a way to avoid it. It’s easier for him to present only his view and simply denigrate those with a different perspective.

59. TAG
Posted May 25, 2009 at 3:22 PM | Permalink | Reply

From the Times at:

http://www.timesonline.co.uk/tol/comment/columnists/guest_contributors/article6360606.ece

The dangerous link between science and hype

…Science proceeds through discovery and debate, and hypotheses do not become accepted by flooding the media with press releases…

From what I have read in this blog, this form of science is also extant in Climate Science.

60. Gene Nemetz
Posted May 25, 2009 at 4:50 PM | Permalink | Reply

The Waxman-Markey Bill, “This bill, when enacted into law this year…”. 932 page PDF :

http://energycommerce.house.gov/Press_111/20090515/hr2454.pdf

Carbon Capture and Sequestration…
Electric vehicle infrastructure…
Building retrofit program…
PART A—GLOBAL WARMING POLLUTION REDUCTION GOALS AND TARGETS…
Greenhouse gas registry…
International offset credits…
Requirements for international deforestation reduction program…
Climate change rebates…
CLIMATE CHANGE WORKER ADJUSTMENT ASSISTANCE…
INTERNATIONAL CLIMATE CHANGE ADAPTATION PROGRAM…

61. thefordprefect
Posted May 25, 2009 at 6:52 PM | Permalink | Reply

Something that I find odd. Basically:
CO2 is increasing, but every August (NH) there is a very rapid dip. What can cause this very rapid 3/4-globe event?
This plot shows 7 locations across the globe. The NH dip is sharper and more pronounced as you go further north. The only antiphase signal occurs at the South Pole station.

CDIAC data.
Note the sharpness of this summer dip.
The slopes into and out of this dip are very similar: into the dip is -3.6 ppm/month and out of the dip is 3.1 ppm/month.

What can change the CO2 level by this amplitude of about 17 ppm out of 360 ppm?

Wouldn’t vegetation be a slow progressive change as spring moves northward?
CO2 dissolved in the ocean would again be a slow progression.
Looking at the ocean ice cover at Point Barrow shows no visual correlation to the dip.
September is the traditional month for the sea ice minimum.
Algal blooms do not seem to occur at that date.
It does not follow temperature or climate (Barrow shows little change in the timing of the minima over the record length).
Also, this dip is more or less simultaneous from La Jolla to Barrow.

This plot shows manually obtained minima dates from Barrow (hourly data) and La Jolla (daily data).

The Barrow hourly data:

Any suggestions as to the cause of this global dip?
Mike
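One way to put a number on the dip Mike describes is to remove the long-term trend from a monthly series and average the residuals by calendar month. The sketch below does exactly that, but on synthetic data with invented numbers – not the actual CDIAC station files – so it only illustrates the method, not the 17 ppm figure.

```python
import math

def seasonal_cycle(monthly, months_per_year=12):
    """Mean detrended residual for each calendar month."""
    n = len(monthly)
    t = list(range(n))
    mt, my = sum(t) / n, sum(monthly) / n
    b = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, monthly))
         / sum((ti - mt) ** 2 for ti in t))          # OLS trend, per month
    resid = [yi - (my + b * (ti - mt)) for ti, yi in zip(t, monthly)]
    # Average the residuals month-by-month across all years.
    return [sum(resid[m::months_per_year]) / len(resid[m::months_per_year])
            for m in range(months_per_year)]

# Synthetic "northern station": ~2 ppm/yr growth plus an 8 ppm-amplitude
# seasonal swing (all numbers invented for illustration).
data = [350 + 2 * i / 12 - 8 * math.sin(2 * math.pi * (i % 12 - 4) / 12)
        for i in range(20 * 12)]
cycle = seasonal_cycle(data)
amplitude = max(cycle) - min(cycle)   # peak-to-trough swing, in ppm
```

Applied to real station series, the same month-by-month averaging would show the dip month and its amplitude directly, and running it per station would show the north-to-south sharpening Mike's plot suggests.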

• Geoff Sherrington
Posted May 26, 2009 at 6:14 AM | Permalink | Reply

There is an NOAA site

where you can mix your own cocktails of wriggles in the CO2 year. In general, those at high altitude show a wriggle. Those at low altitude, before processing, show a bit of a mess. Try Fairchild, Wisconsin. Even with sea level sites they will use aircraft to try to get gas from 3,000 ft or more.

Now you will tell me that atmospheric CO2 can pass from Barrow Alaska, through this mess of mixing at low altitudes, through the ferocious mixing blizzards of the Antarctic and preserve the wriggles made by seasonal growth in NH forests at the South Pole? Dreamin’, son.

• thefordprefect
Posted May 26, 2009 at 9:25 AM | Permalink | Reply

Those at low altitude, before processing, show a bit of a mess. Try Fairchild, Wisconsin. Even with sea level sites they will use aircraft to try to get gas from 3,000 ft or more.

Now you will tell me that atmospheric CO2 can pass from Barrow Alaska, through this mess of mixing at low altitudes, through the ferocious mixing blizzards of the Antarctic and preserve the wriggles made by seasonal growth in NH forests at the South Pole? Dreamin’, son.

I am not telling you anything. All I have done with the first graph is plot the monthly figures from various locations. The wiggles seem to propagate from NH to SH and only south pole station shows a 6 month phase difference.

I cannot explain it. Hence my post.

The next plot plots the hourly, uncorrected data from Point Barrow (low level, I assume), and I have manually selected the minimum point from the plot (see example). The vertical lines are readings of -999 (errors); other flagged errors have not been removed.
These minima have then been plotted against time. The La Jolla data is only available as “daily” and consequently not as consistent.

The plot shows a similar date for each minimum at Barrow and a similar date for La Jolla after 1988. The earlier La Jolla data shows a delay.

I think most agree that the temperature has risen and Arctic ice receded over the period of the plots (1975 to 2008), but there has been little change in the date of the dip.

Again I cannot explain this. I would simply like to know its cause. As far as I am aware it does not even have any connection with AGW!
mike

• Gunnar
Posted May 26, 2009 at 9:51 AM | Permalink | Reply

Re: thefordprefect (#140), I would question why you think “CO2 dissolved in the ocean would again be a slow progression.”

Whenever I open up a soda pop, the outgassing is pretty fast. I think Segalstad’s work confirms that it’s quite fast.

• thefordprefect
Posted May 26, 2009 at 10:23 AM | Permalink | Reply

Re: Gunnar (#147), Only that the ocean is heating up progressively from the south. There is no sudden leap in temperature(?) at the same time each year.

Same applies to the flora – growth begins in the south and travels north until it’s all sucking CO2.

And of course the question remains as to why the same-shaped dip occurs at a very similar time all over the NH.

• Gunnar
Posted May 26, 2009 at 1:05 PM | Permalink

There is no sudden leap in temperature(?) at the same time each year.

It’s a dip, and cold water absorbs more CO2 than warm water. On its face, it seems to make sense. Ice cover reduces dramatically in the summer and is at a minimum around August. Water is dissolving CO2, causing the dip. A good engineer isn’t fooled by graph-induced dramatics. IOW, a change of 17 ppm doesn’t seem like a large change, since it’s a change of 0.0017%. If my math is right (a wise disclaimer among so many mathematicians), one extra CO2 molecule out of 58,823 air molecules dissolves in the newly exposed cold water. Not a big deal.

why the same shape dip occurs at a very similar time all over the NH

Because it’s like when the plug is pulled from a bath tub. The level doesn’t just fall on one end of the tub.

Because the contents of the bottle/can are at twice atmospheric pressure .. Such events don’t normally occur in the ocean

The point is that the time constant is not long. The differential equations would be the same. If only seconds are required when the pressure difference is twice, then this small change over a month is reasonable. Besides, I read a study a couple of years ago measuring the difference. They found it was quite variable, but with large deltas. IOW, it was not in equilibrium. CO2 is always being outgassed from the equatorial belt, and always being absorbed at high latitudes.
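For what it’s worth, the ppm arithmetic in Gunnar’s comment checks out (a back-of-envelope check only; the 17 ppm figure is taken from the thread, not remeasured):

```python
# A 17 ppm seasonal swing expressed as a fraction of air molecules.
swing_ppm = 17
fraction = swing_ppm / 1e6     # 17 ppm as a pure fraction
percent = fraction * 100       # 0.0017 %
one_in = 1e6 / swing_ppm       # roughly one molecule in ~58,800
```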

• Posted May 26, 2009 at 6:29 PM | Permalink

Re: Gunnar (#155),

“Because the contents of the bottle/can are at twice atmospheric pressure .. Such events don’t normally occur in the ocean”
The point is that the time constant is not long. The differential equations would be the same. If only seconds are required when the pressure difference is twice,

The pressure ratio isn’t 2x, it’s 2 atm/385 ppm ≃ 5,200x. That would be fine if it were a gas that obeyed Henry’s law; however, CO2 in water doesn’t, so it’s rather more complicated.
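The ratio Phil quotes is easy to verify. The pressures below are the round numbers from the thread (2 atm in the bottle, 385 ppm in air), not measured values:

```python
# Partial-pressure ratio driving outgassing when a soda bottle is opened:
# CO2 above the liquid versus CO2 in ambient air.
p_bottle = 2.0            # atm, approximate CO2 partial pressure in the bottle
p_air = 385e-6            # atm, 385 ppm of 1 atm
ratio = p_bottle / p_air  # ~5,200x, not 2x
```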

• David Cauthen
Posted May 26, 2009 at 12:14 PM | Permalink | Reply

Re: thefordprefect (#140),
For a nice visual

62. thefordprefect
Posted May 25, 2009 at 6:55 PM | Permalink | Reply

Oh! to be able to correct posts!
Barrow hourly data (last 3 years)

Mike

63. Geoff Sherrington
Posted May 26, 2009 at 5:57 AM | Permalink | Reply

About a year ago I mentioned Alice Springs, Australia, in the middle of the desert. A CA reader said that it would make a good test bed because the desert limited relative humidity, which fed back to air temperature and limited it.

If you are that person, might you please put up your hand, because I have some more thoughts and data.

64. Craig Loehle
Posted May 26, 2009 at 6:32 AM | Permalink | Reply

It has been noted that in the Mad Cow disease scare in England, the press and govt were conjuring a million dead and suicide pills being given out, when in the end only 163 were infected (which of course is not nothing…). No one ever said “oops, I was exaggerating”. Interesting also that the control strategy, based on models, was to slaughter cows, rather than vaccinate and isolate. The cost was unnecessarily in the billions compared to the cheaper option.

• TAG
Posted May 26, 2009 at 6:52 AM | Permalink | Reply

Re: Craig Loehle (#145), I think that this is conflating mad cow with foot and mouth. Foot and mouth was the mass slaughter example. Mad cow was not to eat T-bone and to ban all meat imports for “scientific” (protectionist — no never!!) reasons. Both were examples of why one should not believe computer models, but they were separate instances.

65. curious
Posted May 26, 2009 at 11:01 AM | Permalink | Reply

Hi Ford – what is the kit used to measure CO2? Is it the same at all stations? How is the data collected and by whom? Could it be a system thing rather than a physical issue? And, without doing any sums or googling, if the NH is the major source of human-activity-related CO2 and August is the NH summer, then perhaps we are putting out less – and at the same time the SH oceans soak up more? (notwithstanding Geoff’s thoughts on global mixing!)

66. Gerald Machnee
Posted May 26, 2009 at 11:27 AM | Permalink | Reply

Re the variation in CO2. Those graphs tell us that the science is NOT in. We still have a lot to learn. It may suggest that there is more rapid absorption and release than is thought by some, whether from the water, vegetation, or other sources. This also calls into question the estimates by some scientists that CO2 (man-made or otherwise) stays in the atmosphere for several hundred years.

67. Posted May 26, 2009 at 11:38 AM | Permalink | Reply

Gunnar:
May 26th, 2009 at 9:51 am
Re: thefordprefect (#140), I would question why you think “CO2 dissolved in the ocean would again be a slow progression.”
Whenever I open up a soda pop, the outgassing is pretty fast. I think Segalstad’s work confirms that it’s quite fast.

Because the contents of the bottle/can are at twice atmospheric pressure, when you open the can you suddenly reduce the pressure (and greatly reduce the partial pressure of CO2). Consequently the contents are now very far from equilibrium and there is rapid outgassing. Such events don’t normally occur in the ocean

68. David Cauthen
Posted May 26, 2009 at 12:16 PM | Permalink | Reply
69. mondo
Posted May 26, 2009 at 12:34 PM | Permalink | Reply

Re fordprefect posts regarding CO2.

Google shows that the ratio of land to ocean in the northern hemisphere is 1 to 1.5, while the ratio in the southern hemisphere is 1 to 4 (http://www.eoearth.org/article/Ocean). If seasonal warming of the oceans were to lead to outgassing of CO2, summer in the southern hemisphere would perhaps release about 2.7 times (4 divided by 1.5) as much CO2 on a seasonal basis as the northern hemisphere.

But wait. That would mean that the wilder oscillations would be in the southern hemisphere CO2 levels, whereas the fordprefect graphs show the opposite. Must be some other explanation then.
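Mondo’s hemisphere arithmetic, spelled out. The ratios are the ones quoted from the linked source; the proportionality to ocean-per-unit-land is mondo’s working assumption, not an established result:

```python
# Land:ocean area ratios by hemisphere, as quoted in the comment above.
nh_ocean_per_land = 1.5   # NH land:ocean = 1:1.5
sh_ocean_per_land = 4.0   # SH land:ocean = 1:4
# If seasonal outgassing scaled with ocean area per unit land, the SH
# would release about this factor more than the NH:
relative_outgassing = sh_ocean_per_land / nh_ocean_per_land   # ~2.7
```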

70. See - owe to Rich
Posted May 26, 2009 at 3:32 PM | Permalink | Reply

On the CO2 dip question, my suggestion is that global temperatures, and especially sea temperatures, fall in the boreal summer because the Earth is at aphelion in early July. A natural lag of these temperatures from that date, and rapid absorption of CO2 by cooler oceans, explains most of the data. But perhaps not all.

Rich.

71. thefordprefect
Posted May 26, 2009 at 4:36 PM | Permalink | Reply

Thanks for the responses. I’m not sure that an answer has been found yet!
Additions to the plot: CH4 at Barrow, plus other sites for CO2, including Plateau Assy, Kazakhstan (about as central to land as possible) and Midway Island.

Some Hodrick-Prescott filtering added to the noisy daily data.
It looks to me like about a 1.5-month progression from Point Barrow to Christmas Island.

I believe that the ice minimum is around mid-September, a couple of weeks after the CO2 dip minimum.

Mike

• Geoff Sherrington
Posted May 26, 2009 at 9:50 PM | Permalink | Reply

Sarc is off. I agree with your push for the prominence of these observations, because they are yet another demonstration that the science is not settled. I agree with the observations when they show real data – not interpolated/smoothed/prefit-to-wriggles – and when the analysis is as accurate and precise as claimed. (This knocks out most of the observations.) In search of a mechanism, I’d look at instrument errors, and I’d note that the wriggles happen most at high altitude, where a near-constant, but not constant, pervasive, reactive component worldwide is sunlight.

Who’d like to support the contention that CO2 in air can be measured with an accuracy of 100 ppbv? More dreamin’, I suspect.

• thefordprefect
Posted May 27, 2009 at 1:43 PM | Permalink | Reply

Re: Geoff Sherrington (#162),
The wiggles I plotted are all from normal land height. For Point Barrow this would be about sea level.
Flask measurements on aeroplanes would be as accurate as ground level (500 ppbv errors between measurements are rejected).
Instrument errors? Some measurements are done by opening an evacuated flask at the right location/time. Other, more frequent measurements are made by “other means”. I do not think measurements would cyclically be in error at all locations.


72. curious
Posted May 26, 2009 at 6:42 PM | Permalink | Reply

Sorry Phil – is that right? 2/0.000385 = 5200 is the ratio of the contents to 1 atm? = 5200 * 14 psi = 72800 psi? So pinching a coke bottle/can before I open it with my 1 sq in thumb I’m putting out 36 tons? I’ve missed something here – not least spinach!

• Posted May 26, 2009 at 6:59 PM | Permalink | Reply

Re: curious (#159),

Partial pressure of CO2 above the liquid in the bottle = 2 atm
Partial pressure of CO2 when exposed to air = 385ppm ≃ 0.000385 atm.

73. curious
Posted May 26, 2009 at 7:22 PM | Permalink | Reply

Thanks – seems more reasonable!

74. D. Patterson
Posted May 27, 2009 at 1:13 AM | Permalink | Reply

In the latest news from a symposium in London, Dr. Chu, U.S. Secretary of Energy, says the Obama Administration wants to paint the world’s roofs white or a light color to reflect sunlight instead of absorbing it. Interesting how Chu wants to geoengineer and increase the albedo of the entire planet without so much as a professional license or an environmental study and permit, just as the planet is likely entering a little ice age episode akin to the Dalton and Maunder minima.

• Craig Loehle
Posted May 27, 2009 at 8:23 AM | Permalink | Reply

Re: D. Patterson (#163), My dark roof helps WARM my cold house for 7 months of the year here in Chicago. Don’t you dare touch it. My heating bill is worse than my AC bill.

• Posted May 27, 2009 at 4:25 PM | Permalink | Reply

Re: Craig Loehle (#170), I’ve got no AC bill, doesn’t get warm enough long enough where I am to warrant turning it on. Open windows and a fan works just fine.

However, I think it would be cool to have a high-tech roof that would lighten and darken on its own to help regulate the house temperature by absorbing or reflecting solar energy.

• TAG
Posted May 27, 2009 at 7:07 PM | Permalink

My roof does change color with the seasons. It is white with snow cover in the winter and dark from the shingles in the summer. I suppose that that is the wrong way round.

Apparently snow with its trapped air is a good insulator, so perhaps there is some benefit.

• Posted May 28, 2009 at 9:43 AM | Permalink

Re: TAG (#180),

Heh, yeah, in some places that happens. Not where I live though. On average my roof might have a light covering of snow maybe 2 or 3 days a year. And even then it melts off pretty quickly (or slides off when I walk out the front door). Even this last December when we got a whopping foot of snow over several days, the snow on my roof slid off several times from the heat coming from inside the house. So even then the roof didn’t stay white very long. A metal roof probably keeps snow less than an asphalt one, due to slipperiness. Not sure about tile…

• Craig Loehle
Posted May 27, 2009 at 7:19 PM | Permalink

Re: Jeff Alberts (#180), It would be “cool” but cool gadgets are often very expensive.

By the way, I’ve located a picture of Bender…
http://totallylookslike.com/2009/05/27/bender-from-futurama-totally-looks-like-startling-comics-49/

see what you think.

• Posted May 28, 2009 at 9:44 AM | Permalink

Yeah, like solar panels and wind turbines

Bender always gets the babes. Harumph!!

• Kusigrosz
Posted May 28, 2009 at 8:19 AM | Permalink

Re: Jeff Alberts (#178), Lenticular-printed roof tiles?

75. Alex Harvey
Posted May 27, 2009 at 1:46 AM | Permalink | Reply

D. Patterson, you’ve got to admit it’s an interesting idea though. What if rooftops the world over were set up such that they could be controlled to vary the amount of light reflected? Imagine this could be controlled centrally; what sort of control of the earth’s temperature would it give us?

• D. Patterson
Posted May 27, 2009 at 3:09 AM | Permalink | Reply

You obviously didn’t get the joke…..

Calculate the areal coverages.

76. curious
Posted May 27, 2009 at 2:42 AM | Permalink | Reply

Could make quite a difference to the building’s energy requirements!

77. Alex Harvey
Posted May 27, 2009 at 4:40 AM | Permalink | Reply

We could have reflective ships and… and football fields that change colour… and kaleidoscopic forests and… and…

• Gunnar
Posted May 27, 2009 at 8:21 AM | Permalink | Reply

Re: Alex Harvey (#167), Right, I’ll do my part for the environment by wearing a reflective hat.

Once we take care of AGW, the problem of Man’s ego will be harder to solve.

78. cba
Posted May 27, 2009 at 6:05 AM | Permalink | Reply

It does bring up the question of urban heat island impacts. What if the concentrated urban albedo increase were to cause a reduction in cloud cover overall? Clouds move, so if cloud formation/existence were negatively impacted in those areas, the overall effect could be much greater than just the completely insignificant urban areas. Nothing like the joys of unintended consequences. Maybe they could spray paint the clouds with higher reflectivity paint – or better yet paint the oceans; I’ll donate the first paint brush.

79. Mark T
Posted May 27, 2009 at 9:39 AM | Permalink | Reply

Mine, too, but more because I only have to run the AC for 6-8 weeks, and only because we have a two-story home (I’d do a swamp cooler if I had a single floor).

Mark

80. D. Patterson
Posted May 27, 2009 at 10:43 AM | Permalink | Reply

Makes you wonder which scientific bodies and reports Dr. Chu, the white knight, will say he relied upon to support this white paint measure. Think of the environmental impacts.

Third World usage of white lead paints.

Increased demand on already stressed potable water supplies to keep the white roofing white,

versus the benefits? Think percentages of the Earth’s surface.

81. Gunnar
Posted May 27, 2009 at 12:44 PM | Permalink | Reply

He [Chu] said lightening roofs and roads in urban environments would offset the global warming effects of all the cars in the world for 11 years.

Actually, this inadvertently admits that cars have a completely negligible effect.

82. Posted May 27, 2009 at 3:59 PM | Permalink | Reply

Wouldn’t the reflected energy from those high albedo roofs still be retained in the atmosphere? I don’t see how this would change anything.

83. oms
Posted May 27, 2009 at 4:09 PM | Permalink | Reply

Jeff: All other things being equal, the incoming shortwave radiation from the sun would just be reflected back off into space (and since the atmosphere let it in, it will let it back out). That’s the super-simple version, anyway.

• Posted May 27, 2009 at 4:22 PM | Permalink | Reply

Re: oms (#177), So the only difference is between long and shortwave radiation? Re-radiation isn’t the same as reflection?

• oms
Posted May 27, 2009 at 5:38 PM | Permalink | Reply

Re: Jeff Alberts (#179),
That’s the idea anyway. Send the visible spectrum back rather than absorbing it and re-emitting at the temperature of the roof as IR.
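The reflected-shortwave point can be put in rough numbers. Below is a back-of-envelope sketch of the global-mean forcing from whitening roofs; every area fraction, albedo change, and transmittance value is a hypothetical placeholder chosen only for illustration, not a measured figure.

```python
# Back-of-envelope: global-mean shortwave forcing change from whiter roofs.
# All area fractions and albedo numbers below are hypothetical placeholders.
S0 = 1361.0          # solar constant, W/m^2
insolation = S0 / 4  # global-mean top-of-atmosphere insolation

urban_fraction = 0.01  # assumed fraction of Earth's surface that is urban
roof_fraction = 0.25   # assumed fraction of urban area that is rooftop
albedo_change = 0.4    # assumed roof albedo increase (e.g. 0.15 -> 0.55)
transmittance = 0.8    # rough two-way atmospheric shortwave transmission

# Negative forcing: more sunlight reflected straight back to space
delta_F = -insolation * urban_fraction * roof_fraction * albedo_change * transmittance
print(round(delta_F, 3))  # W/m^2; compare roughly +1.7 W/m^2 for CO2 since preindustrial
```

With these placeholder numbers the effect is a few tenths of a W/m², which is why the "think percentages of the Earth's surface" objection above matters.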

84. Posted May 28, 2009 at 12:08 AM | Permalink | Reply

Tom Crowley has an article in the Guardian.

85. Bob Koss
Posted May 28, 2009 at 11:16 AM | Permalink | Reply

This is just too delicious not to put up for posterity purposes.

GISS is finally using the real data sets. The true anomaly 1881-2008 of 00 is shown in the upper right-hand corner. Well not really. They made some type of programming change and introduced a bug. It is a real-time demonstration of the strength of their V&V policy.

This page was used to generate the anomaly map. I included the ocean data, adjusted the mean period to Annual(Dec-Nov) and the time interval 1881 to 2008. Otherwise the default values were used.

Strangely, trying again and changing the map type from anomalies to trends appears to return a more correct value of 0.71C. So the error appears to be in what they use for the default map type. Too busy to test it out, I guess. The page was updated 2009-05-13, so it may have been this way for some time.

The only photo-shopping done was to reduce the image size.

• Steve McIntyre
Posted May 28, 2009 at 1:31 PM | Permalink | Reply

Re: Bob Koss (#186),

Notice the cold spot in Steig’s hot West Antarctica.

• Scott Brim
Posted May 28, 2009 at 2:34 PM | Permalink | Reply

Notice the cold spot in Steig’s hot West Antarctica.

Probably the result of an insufficiently robust RegEM technique.

86. Steve McIntyre
Posted May 28, 2009 at 3:09 PM | Permalink | Reply

I just got the following block message at realclimate:

• Ryan O
Posted May 28, 2009 at 3:13 PM | Permalink | Reply

Re: Steve McIntyre (#189), Me, too. Wonder if they temporarily took the site down for maintenance, or if they just don’t like us anymore.

• Steve McIntyre
Posted May 28, 2009 at 3:18 PM | Permalink | Reply

Re: Ryan O (#190),

If it’s you as well, it must be a server malfunction. They’ve blocked me from websites in the past (Mann, Rutherford, Hughes) but not from realclimate, so it’s probably just a malfunction.

• stephen richards
Posted Jun 1, 2009 at 12:48 PM | Permalink | Reply

Re: Ryan O (#190),

Steve, you are missing out on a lot. Gavin is teaching PCs and the Hockey Stick in his latest novel, ‘Dummy’s Guide to the Hockey Stick Controversy’.
I think it’s more about the Antarctic problem than the HS. It’s a shame; you could have learned so much.

• Posted May 28, 2009 at 6:23 PM | Permalink | Reply

I get it too.

87. jeez
Posted May 28, 2009 at 3:37 PM | Permalink | Reply

Malfunction, unless they have a new alien technology for detecting potential unfriendlies.

88. Steve McIntyre
Posted May 28, 2009 at 4:22 PM | Permalink | Reply

I’m going to a 40th year reunion party tonight for my college (Trinity College) at the University of Toronto. Michael Ignatieff, the leader of the Canadian Liberal Party, was a classmate and sent his regrets.

89. Geoff Sherrington
Posted May 30, 2009 at 5:51 AM | Permalink | Reply

Permission please to raise a subject?

There is more and more use of correlation coefficients between weather sites near and far from each other. I’ve just done another, an overlap of 10 years of annual temperatures at Alice Springs in the centre of Australia, with a site shift of under 15 km. The overlap correlation coefficients are 0.856 for Tmax (mean 28.4 deg C) and 0.845 for Tmin (mean 12.4 deg C). Height above sea level is about the same, but there is a ridge of low hills between the two. The years were 1942 to 1952 and there was not much population to give UHI, except for troops passing through one station at the end of WWII. The standard deviation within a year varied from about 0.5 to 2.0.

A correlation coefficient of 0.85 is about what I’d expect for close separations in a period as short as 10 years. Yet, I see correlations better than this between stations hundreds of km apart, such as in some of the Antarctic data.

Is there a need to bring apples back in line with apples? The correlation coefficient will vary with the sampling interval (day-to-day differs from month-to-month, which differs from year-to-year). I get a bit confused if people are using annual comparisons to analyse daily variations, to use an extreme example. The length of the comparison period also enters the maths, as does the increased likelihood of extreme short events at one station, or at several from a set, as the period lengthens.
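The sampling-interval effect can be sketched with a toy simulation (all numbers hypothetical, not the Alice Springs data): two stations share a common yearly climate signal but have independent day-to-day noise, so the correlation computed from annual means comes out far higher than the daily one.

```python
import numpy as np

# Toy model: shared yearly signal + independent daily station noise.
rng = np.random.default_rng(0)
n_years, n_days = 10, 365

yearly_signal = np.repeat(rng.normal(0, 0.5, n_years), n_days)
station_a = yearly_signal + rng.normal(0, 1.5, n_years * n_days)
station_b = yearly_signal + rng.normal(0, 1.5, n_years * n_days)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

r_daily = corr(station_a, station_b)
# Averaging to annual means suppresses the independent daily noise
# much more than the shared yearly signal, so r rises sharply.
r_annual = corr(station_a.reshape(n_years, n_days).mean(axis=1),
                station_b.reshape(n_years, n_days).mean(axis=1))
print(round(r_daily, 2), round(r_annual, 2))
```

The same pair of stations can thus report a weak daily correlation and a near-perfect annual one, which is why comparing coefficients computed at different sampling intervals mixes apples and oranges.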

It follows that I lack some confidence in some adjustments that were “validated” by correlation coefficients versus distance. As I’ve noted before, there is a TSI effect as one moves from equator to pole. At the equator, a day is a day but at the poles the sunlit days and sunless nights can be long. How to correlate on a daily basis?

Question arising: In the Antarctic, was there any sort of TOBs adjustment made to ground station temperature data? If the same one as GISS uses in tropic/temperate regions was applied, it would seem to have problems.

• Ryan O
Posted May 30, 2009 at 9:54 AM | Permalink | Reply

Question arising: In the Antarctic, was there any sort of TOBs adjustment made to ground station temperature data? If the same one as GISS uses in tropic/temperate regions was applied, it would seem to have problems.

No, neither the raw AVHRR data nor the ground data are adjusted in this fashion. However, the cloudmasked AVHRR may be a different story, as Comiso’s method is not fully documented and I am not aware of any code that has been provided to replicate his results.

Your comments about correlation are valid. I cannot think of any physical reason for correlations to remain near 1.0 over hundreds of kilometers. I can think of many reasons why this would be a mathematical artifact of the processing, however.

• Geoff Sherrington
Posted May 30, 2009 at 10:55 PM | Permalink | Reply

Re: Ryan O (#197),

It’s partly a philosophical question, because strictly one can’t do TOBs at the precise Poles, where Greenwich Mean Time disappears into a black hole; and the validity of the TOBs method decreases as you approach the poles.

The whole subject of assigning time and location to observations from satellites is ripe for errors. I’ve been reading a 1997 book, “An Introduction to Satellite Imaging Interpretation” (Eric D. Conway), because it is open about the problems of AVHRR on Tiros N and NOAA A-D (1978 – 1981) and on Tiros N (NOAA 9-14). It’s short on data, it’s not very technical and it describes early methods, but some difficulties remain fixed in time. There is a chapter on Cloud Identification which gives me trepidation. There is even a section on UHI (5 degrees or more).

More work on correlations of surface temp with distance is in progress here. I just can’t believe some of the high values reported by some, but that’s a belief and so a no-no.

• Steve McIntyre
Posted May 30, 2009 at 11:54 AM | Permalink | Reply

Geoff, I did some plots a while ago showing decorrelation of station histories versus distance and it was more or less negative exponential. This was also true in Australia and is probably a fairly general property of these series.

As Ryan says, there aren’t any real correlations in Antarctica of the type that you describe. Steig’s PC decomposition of AVHRR creates false correlations by stuffing 5509 series into 3 PC bins (i.e. only 3 degrees of freedom) and thus series are made to appear similar that aren’t.
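The artifact Steve describes can be reproduced in a toy example (dimensions here are small stand-ins, not the actual 5509 AVHRR grid cells): take completely independent random series, reconstruct them from only their first 3 principal components, and large pairwise correlations appear that were never in the data.

```python
import numpy as np

# Toy illustration: a rank-3 (3-PC) reconstruction of many *independent*
# series manufactures strong pairwise correlations out of nothing.
rng = np.random.default_rng(1)
n_time, n_series = 300, 200

X = rng.normal(size=(n_time, n_series))  # uncorrelated series

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X3 = (U[:, :3] * s[:3]) @ Vt[:3]         # keep only 3 modes

def mean_abs_offdiag(M):
    """Mean absolute off-diagonal correlation between columns of M."""
    C = np.corrcoef(M.T)
    return np.abs(C[~np.eye(len(C), dtype=bool)]).mean()

print(round(mean_abs_offdiag(X), 2))   # small: sampling noise only
print(round(mean_abs_offdiag(X3), 2))  # much larger: artifact of truncation
```

After truncation every series is a linear combination of the same 3 basis vectors, so apparent inter-series correlation is forced upward regardless of the underlying physics.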

90. Ryan O
Posted May 30, 2009 at 8:08 PM | Permalink | Reply

If there was one “skeptic” you would want debating Schneider, Pielke would be it. BTW, I think it demonstrates how far out there much of the community has gone that Pielke is considered a “skeptic”.

91. oms
Posted May 30, 2009 at 9:04 PM | Permalink | Reply

Was there ever a postmortem about that NPR debate a few years back which featured Lindzen and Michael Crichton on one side and Gavin Schmidt and Richard Somerville on the other?

• Orson
Posted Jun 1, 2009 at 8:39 AM | Permalink | Reply

Re: oms (#203),

YES there was.

“In this debate, the proposition was: ‘Global Warming Is Not a Crisis.’ In a vote before the debate, about 30 percent of the audience agreed with the motion, while 57 percent were against and 13 percent undecided. The debate seemed to affect a number of people: Afterward, about 46 percent agreed with the motion, roughly 42 percent were opposed and about 12 percent were undecided.”
http://www.npr.org/templates/story/story.php?storyId=9082151

To my knowledge, no AGW-alarmist has won a debate with a critic. (Obviously, THIS cannot last forever – and assessments of debate outcomes are rarely so authoritative as this one.)

This summer, I am launching a new web site that aggregates all formal debates on global warming. This is a hole that needs filling.

92. thefordprefect
Posted May 30, 2009 at 10:36 PM | Permalink | Reply

Japanese satellite GOSAT first results at CO2 Measurements:
http://www.jaxa.jp/projects/sat/gosat/topics_e.html

93. D. Patterson
Posted May 31, 2009 at 12:50 AM | Permalink | Reply

There are at least some scientists who still understand that computer models are not equivalent to empirical evidence.

Robots with fins, tails demonstrate evolution by Michael Hill, Associated Press Writer…Vassar biology and cognitive science professor John Long…”The thing about robots is, robots can’t violate the laws of physics,” he said. “A computer program can.”

94. Andrew
Posted Jun 2, 2009 at 8:46 PM | Permalink | Reply

Are Arctic Temperature changes connected to the AMO? There is a new paper:

Chylek Petr, Chris K. Folland, Glen Lesins, Manvendra K. Dubeys, and Muyin Wang: 2009: “Arctic air temperature change amplification and the Atlantic Multidecadal Oscillation”. Geophysical Research Letters (in press).

I will make no comment (lest I say something which is wrong due to my profound ignorance of such things) and let you guys dissect it. Cheers.

95. Dave Dardinger
Posted Jun 3, 2009 at 11:25 AM | Permalink | Reply

Steve,

Two points. First an English usage point.

but I dare say that a much fewer number are familiar with Truncated Total Least Squares

You can have fewer of something or you can have a smaller number of something, but you can’t have both. So either say

but I dare say that many fewer are familiar with Truncated Total Least Squares

or say

but I dare say that a much smaller number are familiar with Truncated Total Least Squares

The second point is that you’ve posted many of these highly technical posts and I’ve yet to see a member of the team actually engage in a debate on the points. This implies either that they’re not capable of such debate (i.e. would be afraid of showing a lack of knowledge) or that they think what you say is trivially worthless. But very few people here are capable of going toe to toe with you, and that includes many with advanced degrees. So how can you be trivially wrong? This causes, and should cause, a layman observing the situation objectively to question the abilities of the team. It doesn’t prove anything, as I’m sure you’d say, but the question, once raised, has a long-term effect.

96. Dishman
Posted Jun 3, 2009 at 1:00 PM | Permalink | Reply

Warmers of my acquaintance strongly question the value and credibility of everything that has been done so far by CA, WUWT, the two Jeffs, RyanO, Lucia etc. concerning Steig’s paper — the primary objection being that none of the people participating in this exercise have any recognized climate science credentials.

The question of whether or not Steve McIntyre is a ham sandwich is irrelevant. Either his analysis is sound, or it isn’t.

If Steve were offering original data, that would be a different matter, but he’s not. He’s asking questions, and offering analysis (which can be independently verified).

If the “warmers’” only rebuttal to questions or analysis is to question the credibility of the source, that speaks volumes about the “warmers”. If they’re not prepared to face questions or conflicting analysis, are they actually confident in the quality of their workmanship?

• BarryW
Posted Jun 3, 2009 at 8:55 PM | Permalink | Reply

Re: Dishman (#214),

And they aren’t even questioning actual ‘climate’ science, but the statistical methods being used, which is not even the climate scientists’ field.

I was always told if it can’t be replicated it ain’t science.

97. Eric
Posted Jun 3, 2009 at 2:56 PM | Permalink | Reply

Re: Scott Brim (#7)

Scott is stating a fact not offering an opinion. Warmers discount, dismiss and even mock all of the very good & important work done by those mentioned because they are not part of the climate science cognoscenti.

This is a very dangerous state of affairs IMO given that climate science is such a relatively new and fundamentally multi-disciplinary area of study. Climate scientists should welcome input for critical adjacent areas. They need it and in the long run it only helps their credibility and the science.

If this keeps up I will not be surprised if they soon begin to seek out expert statistician reviewers prior to publication. This will be a good thing.

98. J Solters
Posted Jun 4, 2009 at 11:41 AM | Permalink | Reply

Can anyone point me to a chart/graph which shows actual monthly average global temperatures for the past 120 months ending in May 2009?

99. Andrew
Posted Jun 4, 2009 at 12:19 PM | Permalink | Reply

Climate Science has “short circuited” the scientific method:
http://climatesci.org/2009/06/04/short-circuiting-the-scientific-process-a-serious-problem-in-the-climate-science-community/

100. mondo
Posted Jun 4, 2009 at 3:25 PM | Permalink | Reply

Australia’s ABC interviewed Senator Steve Fielding on Lateline the other night. The transcript can be viewed here http://www.abc.net.au/lateline/content/2008/s2588747.htm and it may be possible to view the interview on the site as well.

Lateline presenter Tony Jones adopts a hostile stance towards Senator Fielding when discussing the Senator’s attendance at the Heartland Conference in Washington. He mentions, in glowing terms, Hadley CRU, the subject of a post by Steve Mc today.

Lateline has a feedback system. It might be worth sending them some corrective information.

101. Derek Walton
Posted Jun 5, 2009 at 1:18 AM | Permalink | Reply

In the most recent issue of Geoscientist, the magazine (not journal) of the Geological Society of London (link), there is a report of recent research published in the Journal of Environmental Quality (abstract) concerning how much of the methane produced by landfill sites is released into the atmosphere.

According to the article, the IPCC currently assumes that between 0 and 10% of the methane produced in landfill sites is oxidized before it is released into the atmosphere. The new research finds that this figure is far too low: across a range of samples, mean oxidation values ranged from 22% in clayey soil to 55% in sandy soil, with a standard error of 6%.

The upshot of this, of course, is that landfill sites may not release as much of the methane they produce into the atmosphere as assumed. The methane threat, as the author in Geoscientist writes, may be overblown.
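The arithmetic behind those oxidation fractions is simple: net emission is production times (1 − fraction oxidized). A minimal sketch using the fractions quoted above, with production normalized to a made-up 100 units:

```python
# Net landfill methane escaping, for a given oxidation fraction in the soil cap.
# Oxidation fractions are those quoted in the article; the 100-unit production
# figure is an arbitrary normalization for illustration.
produced = 100.0

for label, oxidized in [("IPCC upper bound", 0.10),
                        ("clayey soil (study mean)", 0.22),
                        ("sandy soil (study mean)", 0.55)]:
    escaped = produced * (1 - oxidized)
    print(f"{label}: {escaped:.0f} units escape")
```

Moving from a 10% to a 55% oxidation assumption roughly halves the estimated emission from the same amount of methane produced.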

• Andrew
Posted Jun 5, 2009 at 7:40 AM | Permalink | Reply

Re: Derek Walton (#225), Well, it does help explain the slowdown in the rise of methane concentrations…

102. Andrew
Posted Jun 5, 2009 at 11:34 AM | Permalink | Reply

Climate Progress boldly goes where no blog has gone before:
http://climateprogress.org/2009/06/04/noaa-puts-out-el-nino-watch/

Into the arena of dangerously specific predictions. What a buffoon!

103. Andrew
Posted Jun 6, 2009 at 8:05 PM | Permalink | Reply

Maybe someone here can help me out. I’ve recently become interested in comparing model output for variables other than temperature. I’m particularly interested in precipitation. What sort of datasets are available for precipitation observations and model output in that regard?

104. Andrew
Posted Jun 7, 2009 at 1:02 AM | Permalink | Reply

Can climate models be “evidence” in a court of law? It’s an interesting legal question and an interesting science question. Roy Spencer’s answer is “nope (but they probably will be anyway)”.

FWIW, I’m pretty sure this idea was tested in the case of California’s mileage standards: Christy argued, with models, that the effects of the policy would be negligible, and Hansen argued for catastrophe, and we all know the origin of that… Both were admissible according to that judge, so go figure.

105. Ivan
Posted Jun 7, 2009 at 2:53 PM | Permalink | Reply

It looks like Jeff Id at Watts Up With That has something very close to smoking-gun evidence that Steig’s positive Antarctic trend is due to the Peninsula.
http://wattsupwiththat.com/2009/06/07/steigs-antarctic-peninsula-pac-mann/#more-8267

106. Andrew
Posted Jun 8, 2009 at 11:04 AM | Permalink | Reply

Pielke Sr versus Mike McCracken:

• Andrew
Posted Jun 8, 2009 at 8:35 PM | Permalink | Reply

Re: Andrew (#233), Pielke Sr has alerted me to an overlooked climate metric: Northern Hemisphere snow cover. There are a lot of interesting graphs on this site:
http://climate.rutgers.edu/snowcover/

For instance:

A decline is evident, but mainly in a short period in the mid to late eighties. A seasonal analysis is also interesting.

Winter:

Spring

Summer

Fall

A decline in Summer is obvious, and there is some decline in Spring too, although it is different in some ways. Winter has no trend, and Fall I would say is the same. So: the cold months, nothing; the warm months, less snow. Interesting.

• bernie
Posted Jun 9, 2009 at 5:23 AM | Permalink | Reply

Re: Andrew (#239),
The spring snow cover data is interesting. Though somewhat OT, I was exploring the outbreak of Mountain Pine Beetles here and was looking at temperature data in two western states, Wyoming and Montana, primarily to explore the reasonableness of the assertion that AGW was a major contributor to the infestation. The GISS data suggests that there is no warming trend and this is buttressed by the data from the more extensive COOP stations.
I came across a very interesting UWYO climate site and another very good visual summary of the same data here that suggest that there is a distinct relative warming in March that is in line with the snow cover data though obviously more limited geographically. Assuming that there is no systematic flaw in the way the temperature data is collected/recorded and we are not looking at some very odd 20 year or so cycle, the data clearly indicates that there was no change in Annual Mean temperature when the decade of the 60s is compared to the decade of the 90s.
I have no idea why March trended warmer in the 90s (or even if it is still doing so in Wyoming) but it may be a more viable contributor to the current MPB infestation than a non-observed global warming in Wyoming. I am less clear as to the potential impact of the clearly off-setting cooler October and November.

Since this discussion occurred at a clearly very pro-environment site that believes in strong AGW, it is also noteworthy how little interest there appears to be in looking at the actual temperature data – even though the assumed warming figures strongly in their explanations of the phenomena at issue – in this case MPB.

• Andrew
Posted Jun 9, 2009 at 9:58 AM | Permalink

Re: bernie (#242), I wish people in the media who cover such stories would do fact checking like this. Checking for an actual warming trend in the local data would seem to me to be a first step before jumping to the conclusion that global warming is to blame. And yet no one seems to do this – there is no desire to, either. Thank you.

• bernie
Posted Jun 9, 2009 at 11:20 AM | Permalink

Re: Andrew (#243), I agree, but in this instance we are looking at a fairly sophisticated environmental pressure group with access to scientists. Indeed Dr Logan is one of the foremost authorities on Mountain Pine Beetles and has done a number of micro-climate studies on the little pests. So I think there is more going on here than simply a lack of diligence.
By the way, I think I saw some snow coverage data or snow accumulation data at the UWYO site.
What did you think of the map graphics?

• Andrew
Posted Jun 9, 2009 at 11:40 AM | Permalink

Re: bernie (#244), Interesting stuff.

107. Stu Miller
Posted Jun 8, 2009 at 12:01 PM | Permalink | Reply

All of you who despair of the problems of climate “science” today take heart. It seems the field of aeronautics in its earlier years faced a similar problem and survived. The first book on airplane design, written in 1915, was reviewed by the editor of the periodical “The Aeroplane” as follows:
“It seems well to make clear why these two writers should be taken seriously by trained and experienced engineers, especially in these days when aeronautical science is in its infancy, and when much harm has been done both to the development of aeroplanes and to the good repute of genuine aeroplane designers by people who pose as aeronautical experts on the strength of being able to turn out strings of incomprehensible calculations resulting from empirical formulae based on debatable figures acquired from inconclusive experiments carried out by persons of doubtful reliability on instruments of problematic accuracy.”
Sound familiar?

108. Stu Miller
Posted Jun 8, 2009 at 12:04 PM | Permalink | Reply

Sorry, link to the book here
http://www.archive.org/stream/aeroplanedesign00barnrich#page/n5/mode/2up

109. See - owe to Rich
Posted Jun 8, 2009 at 12:23 PM | Permalink | Reply

Hello,

Please can anyone tell me where the UAH satellite data are these days? They used to be at http://www.atmos.uah.edu/data/msu/t2lt/tltglhmam_5.2 but not any more. http://www.atmos.uah.edu gives a very cryptic message.

TIA,
Rich.

110. VG
Posted Jun 8, 2009 at 10:00 PM | Permalink | Reply

NANSEN has once again readjusted upwards
http://arctic-roos.org/observations/satellite-data/sea-ice/observation_images/ssmi1_ice_area.png
so in fact now half of 2008 and ALL of 2009 are within the normal SD.
As mentioned previously on the ice extent thread (now gone), the drift of this old satellite is down, and it will thus constantly be re-adjusted, most likely upwards, to reflect reality. Sorry AGW, your days are coming to an end LOL

• DeWitt Payne
Posted Jun 8, 2009 at 10:18 PM | Permalink | Reply

Re: VG (#240),

NANSEN and NSIDC at the moment are on a different planet from the rest of us. If you take their recent data on Arctic ice area and extent (from a satellite that is known to be failing) seriously, I have title to a bridge in NYC I’d like to sell you. Arctic ice extent and area have been below the 1979-2000 or even the 1979-2007 average all year.

111. Scott Brim
Posted Jun 9, 2009 at 3:37 PM | Permalink | Reply

The impact of these pine beetles on the forest is quite amazing. Near my hometown in western Montana, every other pine tree is dying or dead.

Montana friends who spend their summers as firefighters and who know the forest well tell me that there haven’t been enough periods of continuous severe cold winter weather to keep the beetles in check.

To be effective, there has to be at least one severe cold weather episode lasting six to eight weeks with no interruption, but these haven’t been happening in the last ten years or so.

Severe cold periods still occur, but they don’t last as long and are interrupted by warm spells which allow the beetles to recover.

I presume that temperature records are available which support the observation that the necessary six-to-eight-week continuous cold spells have been absent.

112. Paul Penrose
Posted Jun 9, 2009 at 5:46 PM | Permalink | Reply

Jack,
Please explain to me why it would make any difference where the data originated for a time series when applying statistical analysis to it. For example, explain how adjusting for serial autocorrelation is different for temperature data than stock market data. Or how endpoint selection in a smoothing filter is any different for tree ring data than malarial infection rates in humans, for example. By the way, both of these (serial autocorrelation and filter endpoint selection) are real issues found in climate study papers.
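One concrete instance of the serial-autocorrelation point: the standard Quenouille-style effective-sample-size correction for an AR(1)-like series uses the same formula whether the series is temperatures, tree rings, or stock prices. A minimal sketch on simulated data (the AR coefficient 0.7 is an arbitrary choice for illustration):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def effective_n(x):
    """Quenouille-style effective sample size: n * (1 - r1) / (1 + r1)."""
    r1 = lag1_autocorr(x)
    return len(x) * (1 - r1) / (1 + r1)

rng = np.random.default_rng(2)
# AR(1) series with coefficient 0.7 -- could equally be temperature
# anomalies or stock returns; the adjustment doesn't care.
x = np.empty(1000)
x[0] = rng.normal()
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + rng.normal()

print(len(x), round(effective_n(x)))  # n_eff comes out far below n
```

With r1 near 0.7, a thousand observations carry the statistical weight of only a couple of hundred independent ones, which is exactly why ignoring autocorrelation overstates trend significance regardless of what field the data came from.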

113. bernie
Posted Jun 9, 2009 at 5:57 PM | Permalink | Reply

Scott:
According to the research I looked at and what I learned from Dr. Logan and Oliver Ramsay, who had extensive person-to-beetle experience combating these pests, it actually takes very severe weather to knock them down (minus 40C). I suspect that you and your firefighter friends are probably correct about prolonged cold snaps.
However, it is unclear what actual factors promote the outbreaks, hence my thoughts about milder springs. There are other issues as well. Apparently a lack of moisture makes the trees vulnerable, since they cannot generate the sap needed to push the blighters out.
Certainly the temperature records should be available, though I could not find any that were publicly accessible. As you can see from the 30-year averages, however, there has been no movement in the winter months. Obviously an average does not address your point. My view is that it is a problem that deserves real research, and that resorting to the mantra of AGW does not help and is actually likely to be immensely counter-productive. You will note that they closed comments on the thread, which is a strange way to run a blog.

114. Scott Brim
Posted Jun 9, 2009 at 9:23 PM | Permalink | Reply

Bernie: However, it is unclear as to the actual factors that promote the outbreaks, hence my thoughts about milder springs. There are other issues as well. Apparently a lack of moisture makes the tree vulnerable since they cannot generate the sap needed to push the blighters out.

Yes, that’s right, I forgot to mention that drought conditions are also considered a factor. I’ve sometimes wondered how often these kinds of infestations came and went in the days before humans began actively managing the forest and before we began actively suppressing fires.

115. Sparkey
Posted Jun 10, 2009 at 7:19 AM | Permalink | Reply

Actually, he missed the point, which Sparkey elucidated much better than did I.

Thank you.
I guess in the process of being the son of one English Ph.D. and the husband of another, something must have rubbed off. I tried showers and earning two Engineering degrees, but evidently I can’t remove it.

116. Gunnar
Posted Jun 10, 2009 at 8:24 AM | Permalink | Reply

Steve, I heard this morning on the radio that winds have been decreasing steadily since 1973, caused by (you guessed it) global warming. As a sailor, I haven’t noticed this, and reports of professional sailboat races haven’t mentioned it either. The current Volvo Ocean Race would presumably notice something like this.

Maybe someone could do a statistical analysis of this? A google search found this http://www.msnbc.msn.com/id/12612965/.

But the radio report seemed more recent than May 2006, and specifically mentioned 1973.

117. Sparkey
Posted Jun 10, 2009 at 9:58 AM | Permalink | Reply

To cite one or the other as proof-positive of a position is an appeal to authority that is unjustified. It is evidence. Period.

rephelan,
My interpretation of your post is that you define “appeal to authority” far too narrowly. Any citation that bolsters one’s credibility is de facto an “appeal to authority”; evidence of one’s competence in a subject, be it university degrees or years of experience, is one way. Quoting others who support your position is another way of establishing credibility that is an “appeal to authority”. In the strict rhetorical sense, an “appeal to authority” can be any claim that enhances the credibility of the witness or his argument. (My “appeal to authority” on that is my wife, Ph.D. in English 1994, TAMU, and my father, Ph.D. in English 1959, Rice University.)

Evidence can be used as an “appeal to authority” AND an “appeal to authority” can be used as evidence to establish one’s credibility.

Therefore (as Severian wrote):

they are teleconnected.

QED

118. Andrew
Posted Jun 10, 2009 at 11:58 AM | Permalink | Reply

A new paper rebuts arguments that the Medieval Warm Period was heterogeneous and therefore irrelevant? You decide:
http://co2science.org/articles/V12/N23/EDIT.php

By means of various mathematical procedures and statistical tests, Esper and Frank were able to demonstrate that the records reproduced in the AR4 “do not exhibit systematic changes in coherence, and thus cannot be used as evidence for long-term homogeneity changes.” And even if they could be thus used, they say, “there is no increased spread of values during the MWP,” and “the standard error of the component data sets is actually largest during recent decades.” Consequently, the researchers concluded that their “quantification of proxy data coherence suggests that it was erroneous [for the IPCC] to conclude that the records displayed in AR4 are indicative of a heterogeneous climate during the MWP.”

119. Andrew
Posted Jun 10, 2009 at 12:26 PM | Permalink | Reply

Hm…will Gavin and Mann duke it out over their disagreement on a new study about winds dying out? Will RC discuss it? Should be interesting.

• curious
Posted Jun 10, 2009 at 4:47 PM | Permalink | Reply

Re: Andrew (#258), Thanks

The ambiguity of the results is due to changes in wind-measuring instruments over the years, according to Pryor. And while actual measurements found diminished winds, some climate computer models — which are not direct observations — did not, she said.

!! .. there’s more:

One of the problems Pryor acknowledges with her study is that over many years, changing conditions near wind-measuring devices can skew data. If trees grow or buildings are erected near wind gauges, that could reduce speed measurements.

etc…

• Andrew
Posted Jun 10, 2009 at 7:44 PM | Permalink | Reply

Re: curious (#262), Yes, it is a good article with many gems. Feels vindicating, almost.

And now for my next trick, I will offer the latest WCR story, on sulfate aerosols:
http://www.worldclimatereport.com/index.php/2009/06/10/sulfates-and-global-warming/
Will modelers tweak their sulfate forcings to account for recent cooling anyway? Time will tell…

• Ron Cram
Posted Jun 11, 2009 at 7:27 AM | Permalink

Re: Andrew (#264),

Petr Chylek of Los Alamos National Lab has published a series of papers with the opposite conclusion: that the cooling impact of aerosols has traditionally been given too much credit. I believe the first of these papers was published in 2007.

• Andrew
Posted Jun 11, 2009 at 8:46 AM | Permalink

Re: Ron Cram (#267), I’ve read some of Chylek’s papers. He makes very good points and I sometimes wonder how the advocates get away with ignoring his work.

Anybody else remember The International Conference on Global Warming and the Next Ice Age? I believe Chylek organized that. What ever happened to that?

120. Ivan
Posted Jun 10, 2009 at 4:31 PM | Permalink | Reply

There is a strange inconsistency between the official UAH data for May 2009 and the data published on Roy Spencer’s personal website. I just sent him the following letter:

Dear Dr Spencer,

I noticed that the version of the latest UAH data for May 2009 published on your website is very different from the official data published at http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

On your website the overall trend is 0.043, the official trend 0.05. On your website the NH anomaly is 0.43, the official anomaly 0.05. The tropics on your website are -0.168, official -0.15; the Southern Hemisphere on your website, just like the NH and overall trend, is 0.043, official data 0.05.

Which version of the data is correct, the official one or the one published on your website?

Any idea what’s going on?

• Andrew
Posted Jun 10, 2009 at 4:39 PM | Permalink | Reply

Re: Ivan (#260), I’ve noticed some differences before, and usually I chalked them up to rounding. The official data report only two decimal places; he always reports three. However, not only is it odd to round up when you are not even halfway there, the tropical anomaly is even in the wrong direction. Hm… someone has made an error.

• Andrew
Posted Jun 16, 2009 at 11:03 AM | Permalink | Reply

Re: Ivan (#261), The data seem to match those here:
http://vortex.nsstc.uah.edu/data/msu/t2lt/tltglhmam_5.2

It seems that the “official” file has some funky rounding.
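The rounding puzzle can be made concrete in a couple of lines (a toy check of ordinary rounding, not a claim about how the UAH file was actually produced):

```python
# Rounding the three-decimal values from the website to two decimals
# does not reproduce the numbers in the "official" file.
print(round(0.043, 2))   # 0.04, not the official 0.05
print(round(-0.168, 2))  # -0.17, not the official -0.15
```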

121. Andrew
Posted Jun 10, 2009 at 8:01 PM | Permalink | Reply

The BBC is skeptical of climate “projections”?

http://www.bbc.co.uk/blogs/climatechange/2009/06/the_unpredictable_weather.html

I’m sure that the fact that you can’t nail down the future “doesn’t matter” of course…

122. EddieO
Posted Jun 11, 2009 at 3:28 AM | Permalink | Reply

Can anyone tell me if the UK Met Office has a “Climate Model”? I sat in a meeting recently with a chap who claimed that they do and that it is particularly accurate at predicting the future, which would make it unique among climate models.

• Posted Jun 11, 2009 at 9:35 AM | Permalink | Reply

Re: EddieO (#266),

And would likely mean they’ve not been using it, considering how inaccurate they’ve been over recent years.

123. Andrew
Posted Jun 11, 2009 at 12:10 PM | Permalink | Reply

Is stationarity dead? RP Jr. writes about it:
I’m on the side of those who say “Stationarity wasn’t killed by AGW, it was always dumb,” but it is interesting to see a frank discussion of the subject, even if not in an entirely AGW context.

124. Mark T
Posted Jun 11, 2009 at 1:09 PM | Permalink | Reply

As if this is a new discovery. Engineers, particularly those that work with detection/estimation algorithms, have long sought ways to deal with a natural world that does not comply with an assumption of stationarity. Try to assume that communication systems always propagate through stationary channels and suddenly, when your cell phone won’t connect (or stay connected), you’ll understand what I mean.

Stationarity isn’t dead, nor has it always been dumb. It is simply a concept that rarely can be applied except in the theoretical world.

Mark

• Andrew
Posted Jun 11, 2009 at 3:01 PM | Permalink | Reply

Re: Mark T (#271), I don’t disagree, I’m just bad at articulating what I mean.

• Mark T
Posted Jun 11, 2009 at 3:39 PM | Permalink | Reply

Re: Andrew (#273), That’s OK. I know you’re thin-skinned w.r.t. bender, too.

He’s been quiet lately, btw. He’s sort of bursty in here.

Mark

125. Mark T
Posted Jun 11, 2009 at 1:34 PM | Permalink | Reply

Oh, and btw, my ultimate complaint with using PCA/RegEM is, and has always been, that these methods are meant to be applied to signals with stationary statistics (wide-sense stationarity in particular, i.e., stationary in the mean and variance). There are methods (online, block) for dealing with signals that are not stationary, but to date, nobody has used them (that I’ve read).

Mark

126. Andrew
Posted Jun 11, 2009 at 3:27 PM | Permalink | Reply

Great site for monitoring global precipitation:
http://disc2.nascom.nasa.gov/Giovanni/tovas/rain.GPCP.shtml

127. JS
Posted Jun 11, 2009 at 10:55 PM | Permalink | Reply

Assuming this is the appropriate place for otherwise OT discussion, I was wondering if anyone happens to read the Journal of Economic Perspectives? In their latest issue they have a survey on the economic effects of climate change by Richard S.J. Tol. (Working paper version for those who are not AEA members here).

Of particular interest Figure 1 but some quotes:
“A first area of agreement between these studies is that the welfare effect of a doubling of the atmospheric concentration of greenhouse gas emissions on the current economy is relatively small – a few percentage points of GDP.”

“A second finding is that some estimates… point to initial benefits of a modest increase in temperature, followed by losses as temperatures increase further.”

“Projections of future emissions and future climate change have become less severe over time – even though the public discourse has become shriller. The earlier studies focused on the negative effects of climate change, whereas later studies considered the balance of positives and negatives.”

“The emissions of greenhouse gases are predominantly from high-income countries while the negative effects of climate change are predominantly in low-income countries.”

Discuss.

128. Craig Loehle
Posted Jun 12, 2009 at 10:01 AM | Permalink | Reply

I call your attention to:
“How to Identify and Reduce MUD (Made-Up Data),” D. B. South, 2009, J. Forestry 107(4):214-215.

Some choice quotes on how to recognize MUD:
-you ask for a reference and none is provided (and you may be ridiculed for even asking);
-the person who produced the number is not willing to bet money on the number;
-you are told the data are unavailable for legal reasons or due to company policy;
-the number is based on extrapolation of a graph;

Sounds like a CA reader to me!!

129. Ivan
Posted Jun 13, 2009 at 9:28 AM | Permalink | Reply

Look especially at the second part of the article. Can someone provide Bohr’s paper that allegedly proved CO2 is not a greenhouse gas?

130. Ivan
Posted Jun 13, 2009 at 9:31 AM | Permalink | Reply

Niels Bohr reported his discovery that the absorption of specific wavelengths of light didn’t cause gas atoms/molecules to become hotter. Instead, the absorption of specific wavelengths of light caused the electrons in an atom/molecule to move to a higher energy state. After absorption of light of a specific wavelength an atom couldn’t absorb additional radiation of that wavelength without first emitting light of that wavelength. (Philosophical Magazine Series 6, Volume 26 July 1913, p. 1-25) Unlike the glass which reflects IR back where it comes from, CO2 molecules emit IR up and sideways as well as down. In the time interval between absorbing and reemitting radiation, CO2 molecules allow IR to pass them by. Glass continuously reflects IR. Those who claim that CO2 molecules in the atmosphere can cause heating by trapping IR have yet to provide any empirical scientific evidence to prove such a physical process exists.

• DeWitt Payne
Posted Jun 13, 2009 at 10:55 AM | Permalink | Reply

Re: Ivan (#279),

Bohr is of course correct for a free atom or molecule in a vacuum. For most of the atmosphere, though, equipartition of energy through collisional energy transfer applies and the average kinetic energy of the system does increase on absorption of a photon and decreases on emission. As far as empirical evidence, IR emission and absorption spectra for the atmosphere can be calculated that agree quite well with observed spectra. The spectra vary with temperature and pressure just as predicted by atmospheric radiation transfer theory.
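The collisional argument can be put in numbers with a back-of-envelope kinetic-theory estimate (all values below are assumed round figures, not measured ones): at surface pressure a molecule suffers billions of collisions per second, far more frequently than an excited CO2 molecule radiates.

```python
import math

# Assumed round numbers for near-surface air:
n = 2.5e25        # molecules per m^3 at sea level
d = 3.7e-10       # effective molecular diameter, m
v = 460.0         # mean relative speed, m/s, at roughly 290 K

sigma = math.pi * d**2          # hard-sphere collision cross-section
collision_rate = n * sigma * v  # collisions per second
t_collision = 1.0 / collision_rate

# The radiative lifetime of the CO2 15-micron bending mode is commonly
# quoted near a second; treat it as an order-of-magnitude assumption.
t_radiative = 1.0

print(t_collision)               # ~1e-10 s between collisions
print(t_radiative / t_collision) # collisions win by many orders of magnitude
```

So an excited molecule almost always hands its energy to neighbors by collision before it can re-emit, which is why absorbed IR shows up as bulk heating of the air.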

131. cba
Posted Jun 14, 2009 at 5:22 AM | Permalink | Reply

translation,

the atmospheric density is high enough that there is plenty of bouncing around for an atom or molecule during the microsecond or nanosecond interval between a molecule absorbing a photon and re-emitting one. The energy can be conveyed away to other molecules, and received energy can excite our molecule of interest. Hence, the randomness of the interactions creates an equipartition of energy. There have been decades of study, both theoretical and experimental, of these excitation lines under atmospheric conditions, and they have been cataloged into databases, line by line, and there are thousands and thousands of them. In fact, over a million are in HITRAN. One can ascertain what the absorption is for an atmospheric makeup with various amounts of the 39 cataloged molecules under various conditions of temperature and pressure.

Remember 1913 was about 20 years before the discovery of the neutron and before the real beginnings of quantum mechanics.

• DeWitt Payne
Posted Jun 14, 2009 at 11:37 AM | Permalink | Reply

Re: cba (#281),

Thanks for the translation. People sometimes ask me to explain atmospheric radiation transfer. I should point them to my post and your translation to show what a bad idea that would be.

132. DeWitt Payne
Posted Jun 14, 2009 at 11:35 AM | Permalink | Reply

Transferring from the Cloud Super-Parameterization thread:
Re: Ferdinand Engelbeen (#105),

The problem with your graph on CO2 and deltaD is that you plot CO2 ppmv while the influence of CO2 is actually ln(CO2(t)/CO2(0)). When you do that, the lag in the acceleration effect of CO2 becomes more obvious. I’ve plotted (I should reverse the x axis, but I’m too lazy) the log ratio of CO2, the deltaD and a very simplistically corrected deltaD assuming that 30% of the change in deltaD comes from CO2 and is proportional to the log ratio of CO2. Note that about 40% of the change in deltaD has occurred before the effect of increasing CO2 becomes significant. The effect of the CO2, besides increasing the difference in deltaD, is to somewhat smooth the Antarctic Cold Reversal that more or less corresponds to the Younger Dryas event in the Northern Hemisphere. As I’ve stated before, the problem most people have with lagged CO2 forcing is that they assume that the underlying forcing is constant, or at least monotonic, and that the effect of CO2 should be obvious. Neither assumption is correct.
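The reason ppmv and log-ratio curves differ in shape can be sketched with the standard simplified forcing expression (the 5.35 W/m^2 coefficient is the commonly cited Myhre et al. 1998 fit; this is an illustration, not the calculation behind the plot above):

```python
import math

def co2_forcing(c, c0=180.0):
    """Simplified CO2 radiative forcing, W/m^2 (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(c / c0)

# Glacial -> interglacial rise, roughly 180 -> 280 ppmv:
print(co2_forcing(280.0))  # ~2.4 W/m^2 over the whole transition

# The same 50 ppmv increment contributes less at higher concentrations,
# which is why a ppmv axis and a ln(CO2/CO2_0) axis give different shapes:
print(co2_forcing(230.0) - co2_forcing(180.0))  # first 50 ppmv
print(co2_forcing(280.0) - co2_forcing(230.0))  # second 50 ppmv, smaller
```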

• Posted Jun 15, 2009 at 4:07 AM | Permalink | Reply

DWP, thanks for the graph; I will make a new one which is more detailed and includes the relative influence of CO2 on temperature… You are right that the influence is logarithmic, but the graph you plotted does inflate the influence on temperature towards the Holocene. Most of the reaction of CO2 on temperature is within 30 years, which is within the resolution of the ice core CO2 measurements. Thus any increase of CO2 (lagged or not) must give an immediate acceleration of temperature without lag, and thus a change in slope. See a theoretical example of a 30% feedback of CO2 on a linear increase of temperature here:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/feedback.jpg

Of course, one should include the most important feedback, the ice sheet albedo, as that is the main driver for the whole transition. I haven’t seen any Dome C figures for ice sheets (yet), but I can use the Vostok ice sheet figures to see what that gives (including the Younger Dryas)…

• DeWitt Payne
Posted Jun 15, 2009 at 8:23 AM | Permalink | Reply

Thus any increase of CO2 (lagged or not) must give an immediate acceleration of temperature without lag and thus a change in slope.

But the question remains: how do you detect a change in slope when the underlying forcing (nofb in your chart) is unknown? My answer to that question is that you can’t. Any real forcing is going to look sigmoidal, not a steady linear ramp and plateau. Btw, I can come very close to reproducing the CO2 concentration data from Dome C using a linear transform of the deltaD data (deltaD*1.804+962) and a time constant of about 1500 years. See this plot. The next step will be to generate a graph similar to yours but with a sigmoidal forcing with and without CO2 feedback.

• DeWitt Payne
Posted Jun 15, 2009 at 11:36 AM | Permalink

Ok. Here’s the graph I referred to above. I generated a sigmoidal (1/(1+exp(x)) deltaD curve then calculated CO2 with no feedback using the coefficients I calculated previously including a 1600 year time constant for CO2 evolution. Then I iteratively applied the CO2 forcing using the 30% effect coefficients as above. Convergence was rapid. Five iterations was sufficient. Tell me again that the beginning of the acceleration from CO2 feedback is obvious.

• Posted Jun 15, 2009 at 12:31 PM | Permalink

DWP, I agree that it is impossible to see a difference if you start without a lag, but it is the lag that produces an acceleration in the trend after the lag time; there you can see the difference. If there is a huge feedback of CO2 on temperature, that should give an increase of the slope at the moment the CO2 increase starts, thus about 800 years after any temperature increase. This is a real lag for rising temperatures, and up to several thousands of years for the opposite trend, from an interglacial to a glacial condition. That can be seen at the end of the Eemian, where temperatures are already at a minimum (and ice sheets at a maximum) before CO2 levels start to decrease. The following drop of 40 ppmv CO2 doesn’t have a measurable impact on temperature: http://www.ferdinand-engelbeen.be/klimaat/eemian.html
The lag of CO2 is no artefact of the gas age – ice age timing, as the CH4 levels, also measured in the gas phase, far more closely follow the temperature trend.

Even if the underlying trend is not known, this should be seen in the trend (a 30% feedback is quite huge). In this case, the increase remains rather linear with plateaus (probably ice-sheet related). If the linearity were, in part, a result of the CO2 feedback, that would imply that the sum of the other forcings (or forcing x sensitivity) changes with just the opposite sign and amplitude of the CO2 feedback…

• DeWitt Payne
Posted Jun 15, 2009 at 5:31 PM | Permalink

The CO2 feedback was calculated from the lagged CO2, not equilibrium. A time constant of 1600 years was used to calculate CO2 from temperature (deltaD 18O actually). This leads to a CO2 curve that is about 800 years behind the temperature curve, similar to observed results.

This is somewhat inelegant, but it seems to have worked. I performed the calculation iteratively. I generated a deltaD curve using the equation (I really should learn TeX) deltaD(t) = 40/(1+exp((t-17000)/1000)) - 445. Then I calculated the CO2 curve by first calculating the equilibrium CO2: CO2e(t) = 1.803571*deltaD(t) + 982.1196 (the constants were derived from fitting Dome C data under the assumption that CO2 is proportional to deltaD). Then I calculated the lagged CO2: CO2l(t) = CO2l(t-1) + (CO2e(t)-CO2e(t-1))*(1-exp(-delta(t)/tau)), where delta(t) is the time step size, in this case 100 years, and tau is 1600 years. Then I calculated a new deltaD(t) column from the lagged CO2 data: deltaD1(t) = deltaD(t) + 43*ln(CO2l(t)/CO2l(initial)) (30% Dome C deltaD change again). Lather, rinse and repeat. After five iterations, the resulting deltaD curve is not a pure sigmoid, but the changes are subtle. It can be fit fairly well, but not perfectly, with a sigmoid curve deltaD = 60.137/(1+exp((t-16572.63)/1188.879)) - 445.23. Throw in some noise and you’re definitely not going to be able to tell it’s not a pure sigmoid, much less tell when CO2 starts to accelerate (amplify) the warming.
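For anyone who wants to reproduce the iterative scheme, here is a minimal Python sketch using the constants quoted above. One assumption is flagged: the lag here is implemented as a standard first-order relaxation toward the equilibrium value (the recursion as typed relaxes on increments of the equilibrium), so this is an illustration of the idea, not the exact spreadsheet calculation.

```python
import math

dt, tau = 100.0, 1600.0           # time step and lag constant, years
t = [i * dt for i in range(301)]  # 0 .. 30,000 on an arbitrary time axis

def sigmoid_deltaD(ti):
    """Pure-sigmoid deltaD curve, constants from the comment above."""
    return 40.0 / (1.0 + math.exp((ti - 17000.0) / 1000.0)) - 445.0

def co2_equilibrium(dD):
    """Equilibrium CO2 from deltaD, Dome C fit constants."""
    return 1.803571 * dD + 982.1196

def lagged(series):
    """First-order relaxation toward equilibrium (assumed lag form)."""
    out = [series[0]]
    a = 1.0 - math.exp(-dt / tau)
    for x in series[1:]:
        out.append(out[-1] + (x - out[-1]) * a)
    return out

dD = [sigmoid_deltaD(ti) for ti in t]
for _ in range(5):  # five iterations, as in the comment
    co2 = lagged([co2_equilibrium(x) for x in dD])
    # 30% feedback coefficient (43 per ln unit) from the comment:
    dD = [sigmoid_deltaD(ti) + 43.0 * math.log(c / co2[0])
          for ti, c in zip(t, co2)]

# The feedback deepens the total deltaD change beyond the no-feedback
# 40 per mil, but the curve remains close to sigmoidal in shape.
print(dD[0], dD[-1])
```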

• Posted Jun 16, 2009 at 7:29 AM | Permalink

Thanks DWP. Indeed it looks like the resolution of the (real) CO2 measurements is not fine enough to see the acceleration caused by the CO2 feedback… I have looked at other ice cores, but none of them has a better resolution for CO2 in the 20,000–10,000 BP period.

133. cba
Posted Jun 14, 2009 at 12:52 PM | Permalink | Reply

DeWitt,

Hopefully, between the two comments it will make at least some sense to most readers. Sometimes, knowing the material really well can be a disadvantage when conveying the information to others who are not familiar with it. Considering that the overall subject spans multiple disciplines, and even experts in some areas are completely ignorant in others, there’s virtually no such thing as over-clarifying an explanation.

134. Andrew
Posted Jun 14, 2009 at 2:12 PM | Permalink | Reply

Sea Ice Trends

135. curious
Posted Jun 15, 2009 at 1:15 AM | Permalink | Reply

Report by the IPCC on Alternative Metrics here:

http://www.ipcc.ch/pdf/supporting-material/expert-meeting-metrics-oslo.pdf

Exec. summary discusses uncertainties. Sorry if it has already been discussed/mentioned and I’ve missed it.

136. Derek Walton
Posted Jun 15, 2009 at 5:13 AM | Permalink | Reply

Steve et al. have discussed at length the different requirements of replication and data disclosure in mining and in climate research. This article provides a good two-page summary of the requirements for mineral exploration companies and might be of interest to readers who want an introduction to the subject.

137. cba
Posted Jun 15, 2009 at 6:45 AM | Permalink | Reply

Ferdinand Engelbeen: (287)

Crudely estimated, current land ice amounts to coverage of about 10.4% of all land. It contributes only about 6 W/m^2 in albedo (out of a total of around 105 W/m^2). Sea ice contributes another 2 W/m^2. That’s a total of 8 W/m^2, and it is probably somewhat of an overestimate. This compares to a CO2 doubling at around 3.5 W/m^2. If you take the amount needed for a doubling from 1750 levels, that’s around 2.5 W/m^2. Add these up and you get about 10.5 W/m^2 for no ice and CO2 at 2x the 1750 content.

The nominal Earth albedo we experience now is about 0.30. Of that, 0.08 is from the surface and 0.22 is from the atmosphere, mostly clouds which cover around 60-64% of the surface at any one time.

In 1998 it appears that the albedo dropped by 10% for a while and it has yet to fully recover. The drop was in cloud cover and due to internal oscillations of the overall system – ENSO – El Nino Southern Oscillation. A 10% drop in albedo corresponds to 10.5 W/m^2.

Glaciation has significantly more effect than what we experience now – but only when cloud cover permits. Assuming cloud cover can create a strong negative feedback, the presence of large ice sheets will short circuit that feedback as it doesn’t matter much whether there’s daytime clouds or not. It would be expected that there isn’t that much cloud cover either.

Albedo is the most important factor in Earth’s temperature. Ice sheets might become the most important factor in albedo some of the time – but not all the time.
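The back-of-envelope sums above can be checked in a few lines (assumed round values: solar constant 1361 W/m^2, mean top-of-atmosphere insolation one quarter of that):

```python
S = 1361.0 / 4.0   # mean top-of-atmosphere insolation, W/m^2
albedo = 0.30

reflected = S * albedo
print(reflected)    # ~102 W/m^2, close to the "around 105" quoted above

# Forcing sums quoted above (W/m^2):
land_ice, sea_ice = 6.0, 2.0   # albedo contributions
co2_to_2x_1750 = 2.5           # remaining to 2x the 1750 CO2 content
print(land_ice + sea_ice + co2_to_2x_1750)  # 10.5 W/m^2

# A 10% relative drop in albedo (0.30 -> 0.27):
print(S * 0.03)     # ~10.2 W/m^2, matching the ~10.5 figure quoted
```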

• Posted Jun 15, 2009 at 7:07 AM | Permalink | Reply

Re: cba (#289),

Thanks cba…

Cloud cover indeed is probably the most important unknown over any period of earth’s climate, including the most recent one. Models still have a lot of trouble in the tropics, but also in the Arctic to reflect reality: from http://www.cicero.uio.no/fulltext/index_e.aspx?id=3277 :

There is a significant deviation between the models when it comes to cloud cover, and even though the average between the models closely resembles the observed average on an annual basis, the seasonal variation is inaccurate: the models overestimate the cloud cover in the winter and underestimate it in the summer.

Thus the cooling in winter is stronger and summer melting is less than modelled, thanks to clouds…

Ice core d18O in N2O more or less reflects (global) ice sheet volumes of the glacial/interglacial periods and the variability within these periods (D.O. events), but for albedo one needs area (and the latitude spread…). Any idea how to translate that?

• Posted Jun 15, 2009 at 7:19 AM | Permalink | Reply

Re: cba (#289),

cba,

Second thought: with a strong negative cloud feedback, the cooling of the interglacial-glacial transition would be limited, as the decrease in albedo would increase insolation mainly towards the tropics, preventing further ice sheet formation. The opposite also is limiting, as once the bulk of the ice sheets disappeared, increased temperatures over a larger area gives more clouds, reducing the albedo…

138. TAG
Posted Jun 15, 2009 at 8:01 AM | Permalink | Reply

This is pertinent to the issue of peer review and its ability to deal with the prejudices and preconceptions of scientists.

From the article in New Scientist at:

http://www.newscientist.com/article/mg20227084.500-flat-universe-may-be-the-new-flat-earth.html

The calculation reveals how strongly astronomers’ prejudices can affect their conclusions

This quote is from the author (Joseph Silk of Oxford) of a paper accepted for publication in the Monthly Notices of the Royal Astronomical Society. It concerns the commonly accepted result that the universe is flat. Using Bayes’ theorem, the authors show that, using commonly accepted assumptions and WMAP data concerning the cosmic microwave background, the probability that the universe is flat is about 98%. However, using some more open assumptions, the probability declines to about 67%. Astronomers’ starting points can affect their conclusions.

This is a paper that would be of benefit for consideration in the next IPCC interim report, along with other rigorous studies into the capabilities of peer review.
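The prior-sensitivity point generalizes beyond cosmology. Here is a toy two-hypothesis Bayes calculation with made-up likelihoods (not the paper’s actual numbers) showing how the choice of prior moves the posterior:

```python
def posterior_flat(prior_flat, lik_flat=0.9, lik_curved=0.3):
    """P(flat | data) by Bayes' theorem for a two-hypothesis model.
    The likelihoods are hypothetical, chosen only for illustration."""
    num = prior_flat * lik_flat
    den = num + (1.0 - prior_flat) * lik_curved
    return num / den

# The same data, two different starting assumptions:
print(posterior_flat(0.9))  # strong prior for flatness -> near-certain posterior
print(posterior_flat(0.4))  # more open prior -> much weaker conclusion
```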

139. cba
Posted Jun 15, 2009 at 8:39 AM | Permalink | Reply

Ferdinand Engelbeen: (291)

increased insolation could only provide increased clouds where moisture is available for evaporation. It’s also not the only required factor, as there’s the question of how clouds form.

note that a decrease in albedo can only increase the insolation in the area of the decrease. Decreasing albedo at higher latitudes can only increase the insolation at the higher latitudes. If the low latitudes that receive the bulk of the insolation are primarily ocean, then there should be plenty of action. During the melt of the glaciers, there should be melt pools, evaporation, and the presence of water higher up in latitude, much more so than now. Once melt pools occur, surface albedo plummets, as those pools have far less albedo than snow and ice.

140. MJW
Posted Jun 15, 2009 at 2:52 PM | Permalink | Reply

Roger Pielke, Jr. is now blogging at a different website, here, so the link in the sidebar blog list should be updated.

141. Posted Jun 17, 2009 at 8:00 AM | Permalink | Reply

Ohh man. It’s so weird to see the new rise of Keynesian economics. We studied him quite a bit in college and the general takeaway was that he was a crazy, foul-mouthed eccentric. Strangely enough, his ideas seem to be working.

Steve: I presume that you’ve not read Keynes in the original. He was extremely accomplished in both theoretical and practical areas and had an excellent writing style. The conclusion at your college says more about your college than about Keynes. The economies were very different at the time. Keynes has been used to justify many things that may or may not be consistent with his writings; people can disagree as to what Keynes would have prescribed in present circumstances.

• bernie
Posted Jun 17, 2009 at 10:47 AM | Permalink | Reply

Re: Mouli Cohen (#5),
Steve is absolutely correct. Besides Keynes’ accomplishments as a mathematician, he saw clearly that much depended upon the psychology of individuals and crowds – particularly the “animal spirits” of entrepreneurs and the expectations of consumers. One might argue that much of his work focused on what could be done to modify the effective demand curve to offset a loss of optimism and risk-taking on the part of consumers and investors.
He was also prescient in his analysis of the Versailles Treaty and a re-animated and vengeful Germany.
If you want to see a fuller assessment I suggest Skidelsky’s very thorough biography.

• Steve McIntyre
Posted Jun 17, 2009 at 11:04 AM | Permalink | Reply

Re: bernie (#8),

Keynes wrote sharply about the perils of spurious correlation – even coauthoring when he was young with Yule who is still cited on this topic, including, not infrequently, at CA. A point that I’ve mentioned in the past and why I placed the Keynes article online some time ago.

142. Ivan
Posted Jun 17, 2009 at 5:06 PM | Permalink | Reply

Steve, the basis for assessments like the one you protest against is the following: Keynes was a neo-mercantilist crackpot who believed that building the pyramids was good counter-cyclical policy by the Pharaohs, advocated protectionism and the balance-of-trade doctrine, wanted to forbid usury (!), thought the interest rate was a psychological phenomenon, and believed that public works needn’t provide any useful effect, the ideal being shovel-ready projects like digging holes and filling them in, etc. In the foreword to the German edition of his book (1938) he said his concept of government interventionism could be implemented much more easily in an authoritarian than in a democratic regime. I suppose that nobody except similar etatist eccentrics takes him seriously today in economic theory (and politicians of course, because Keynes’ crackpot theory of fiscal stimulus and government spending as a cure for recession justifies them in doing what they want to do anyway).

As for what Keynes would say in the 1970s or today – nobody can know. Hayek wrote a devastating critique of Keynes’ monetary theory in the 1930s, only to have Keynes say, “I have changed my mind; that is not my theory anymore.” Hayek didn’t bother to refute the silly, mercantilist mistakes of the General Theory only because he thought Keynes would change his mind again, so the effort of criticizing him seemed futile (that was a mistake, but the job was done by Henry Hazlitt in 1959). Hayek, who knew Keynes very well, once said that Keynes would most probably have become the strongest fighter against inflation after WWII, as if he had never written the General Theory.

Steve, maybe you should study Keynes on your own, or at least without the aggressive tutelage of Keynesians.

• Steve McIntyre
Posted Jun 17, 2009 at 5:27 PM | Permalink | Reply

Re: Ivan (#303),
I said that I didn’t have time to debate this issue and I’m not going to. While I was offered a scholarship at MIT, I didn’t go. Most of my reading of Keynes was on my own. I think that I’m a pretty independent reader and not easily swayed by conventional opinions, so you don’t need to worry about that. My comments on Keynes were nuanced if any of you bothered to read what I actually said. I’m not all that sold on many aspects of Keynes’ theory. That doesn’t mean that he wasn’t an accomplished and insightful writer. Or that the failure to perceive that says more about the college than about Keynes.

Will people accept the fact that it’s possible that I have a sensible understanding of these issues and that I don’t have the time or interest to debate it?

143. srp
Posted Jun 17, 2009 at 5:50 PM | Permalink | Reply

Keynes’s General Theory, his magnum opus, is generally considered to be obscure and confusing. The younger generation at the time who were influenced by it admitted later that his obscurity helped spread his influence, as those who went to the trouble of figuring out what he might have meant felt like initiates in an exclusive club of smart people.

In subsequent decades there has been a cottage industry in macroeconomics and history of economic thought churning out exegeses on “what Keynes really meant.” This literature tries to suss out who are the true followers, the heretics, etc. Hilarity ensues. The one thing I would not say about Keynes is that he was a clear expositor of his views.

144. JS
Posted Jun 17, 2009 at 10:38 PM | Permalink | Reply

I find the concept, specifically with respect to Keynes (but also in the wider sphere of, say, climate science), that one individual is the holder of the truth and it is the job of devotees to understand the inviolate truth of the master, to be both amusing and sad. Keynes had some brilliant ideas, but he was not infallible. Einstein was also brilliant, but not infallible (“God does not play dice” being the quintessential example). Thus, whether any individual was right or wrong on any particular topic really shouldn’t be at issue when discussing other topics. What matters is what is right.

Perhaps it is the history of ideological battles in economics, but for whatever reason, there is a greater emphasis on “due diligence” and empirical verification in economics than I see in many other academic areas. A pure science that has always had access to copious quantities of good experimental data wouldn’t have developed such procedures, because they were assumed unnecessary. A more pure social science wouldn’t have developed them either, because it would still be mired in arguments about, say, phonics versus whole language. But a discipline that has made the transition from social science to more empirical science might just be able to teach the pure sciences something on that front. It is the history of economists’ and econometricians’ mistakes that makes them well placed to make methodological comments on climate science.

And thus I quote a small part of the Nobel citation for Samuelson. It matters not whether he was a Keynesian, a Neo-classicist or a Neo-Keynesian (he was all three), but that: “More than any other contemporary economist, Samuelson has helped to raise the general analytical and methodological level in economic science.”

I look forward to that also occurring in another area we all know and love.

145. David Smith
Posted Jun 18, 2009 at 5:12 AM | Permalink | Reply

Earth Observatory commentary on current volcano and SO2 ( link )

146. Mark T
Posted Jun 18, 2009 at 11:48 AM | Permalink | Reply

Of course, I’ve never thought to check his Wikipedia entry, either, which may explain my ignorance regarding his economic history. I’ve not really read much of what he has written, either, just the odd occasion when he gets quoted somewhere on the blogosphere.

Mark

147. Mark T
Posted Jun 18, 2009 at 1:52 PM | Permalink | Reply

Unfortunately, the claim that Sowell started out a Marxist is stated in the Wikipedia article, but not referenced. That he chose to follow Stigler to Chicago indicates he was already leaning towards classic liberalism by the time he was college-aged (after a stint in the military). I was also surprised that he got a GED rather than a high-school diploma. Kudos to his hard work!

Mark

148. BarryW
Posted Jun 18, 2009 at 2:58 PM | Permalink | Reply

It was as an undergrad:

149. nevket240
Posted Jun 18, 2009 at 7:59 PM | Permalink | Reply

Oceans Rising Faster Than UN Forecast, Scientists Say (Update2)

By Alex Morales

June 18 (Bloomberg) — Polar ice caps are melting faster and oceans are rising more than the United Nations projected just two years ago, 10 universities said in a report suggesting that climate change has been underestimated.

Global sea levels will climb a meter (39 inches) by 2100, 69 percent more than the most dire forecast made in 2007 by the UN’s climate panel, according to the study released today in Brussels. The forecast was based on new findings, including that Greenland’s ice sheet is losing 179 billion tons of ice a year.

“We have to act immediately and we have to act strongly,” Hans Joachim Schellnhuber, director of Germany’s Potsdam Institute for Climate Impact Research, told reporters in the Belgian capital. “Time is clearly running out.”

In six months, negotiators from 192 nations will meet in Copenhagen to broker a new treaty to fight global warming by limiting the release of greenhouse gases from burning fossil fuels and clearing forests.

“A lukewarm agreement” in the Danish capital “is not only inexcusable, it would be reckless,” Schellnhuber said.

Fossil-fuel combustion in the world’s power plants, vehicles and heaters alone released 31.5 billion metric tons of carbon dioxide, the main greenhouse gas, 1.8 percent more than in 2007, according to calculations from BP Plc data.

‘Rapid and Drastic’

The scientists today portrayed a more ominous scenario than outlined in 2007 by the UN Intergovernmental Panel on Climate Change, which likewise blamed humans for global warming. “Rapid and drastic” cuts in the output of heat-trapping gases are needed to avert “serious climate impacts,” the report said.

The report called for coordinated, “rapid and sustained” global efforts to contain rising temperatures. Danish Prime Minister Lars Loekke Rasmussen, also in Brussels, told reporters that nations have to reverse the rising trend in emissions of heat-trapping gases.

“We need targets,” Rasmussen said. “All of us are moving toward the same ambitious goals.”

Scientists from institutions including Yale University, the University of Oxford and the University of Cambridge compiled the 39-page report from research carried out since 2005, the cutoff date for consideration by the IPCC for its forecasts published in November 2007.

Sea Levels

Ocean levels have been rising by 3.1 millimeters a year since 2000, a rate that’s predicted to grow, according to the study. The projections of sea levels rising by a meter this century compare with the 18 to 59 centimeters (7 to 23 inches) forecast by the IPCC.

“There are indications that rates of sea-level rise are higher than projected, and impacts like Arctic melting are more rapid,” Martin Parry, who supervised part of the UN panel’s 2007 study, said in a telephone interview. He wasn’t involved in writing the new report.

Oceans are warming 50 percent faster than the IPCC predicted and Arctic sea ice is disappearing more rapidly in summer — exposing darker ocean that absorbs more heat, the study said.

The academics produced the study, “Climate Change – Global Risks, Challenges and Decisions,” by compiling research submitted to a conference in Copenhagen in March. They also drew from an October 2006 report into the economics of climate change by Nicholas Stern, then the U.K. government’s chief economist.

Doing-Nothing Cost

Stern’s study, which wasn’t included in the IPCC report, said that the cost of avoiding the worst impacts of climate change can be limited to 1 percent of economic output while doing nothing could lead to damage costing as much as 20 percent of the world’s gross domestic product.

“Greater near-term emissions lock us into greater climate change requiring greater costs from climate impacts and more investment in adaptation,” Stern wrote in today’s study. “Furthermore, they lead to a faster rate of climate change with greater challenges for adaptation.”

By 2050, when the global population will be an estimated 9 billion people, per-capita gas emissions will need to have fallen to about 2 tons a year, compared with levels as high as 20 tons a person currently in the U.S., the report proposed.

The University of Copenhagen coordinated the effort by the 10-school International Alliance of Research Universities. Other members include the University of California at Berkeley, Peking University, the Australian National University, ETH Zurich, the National University of Singapore and the University of Tokyo.


Last Updated: June 18, 2009 11:57 EDT

obviously in Pre-Copenhagen mode.
regards

• BarryW
Posted Jun 18, 2009 at 9:47 PM | Permalink | Reply

Re: nevket240 (#5),

OT but I can’t let this go by:

Sea Levels
Ocean levels have been rising by 3.1 millimeters a year since 2000, a rate that’s predicted to grow, according to the study. The projections of sea levels rising by a meter this century compare with the 18 to 59 centimeters (7 to 23 inches) forecast by the IPCC.

Increasing? Using the U of Colorado numbers, I get 2.9 mm per year from 2000 inclusive; from 2005, however, the rate is 1.17 mm/yr. Funny definition of an increasing rate.

To get to a meter by 2100 you would need a rate of about 11 mm/yr (91 years left to 2100), more than 3x the 3.1 mm/yr rate they quote from 2000.

Where is the supporting data for this?
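BarryW's back-of-envelope arithmetic can be made explicit in a few lines (the 2009 baseline year and the 1000 mm target are the assumptions here):

```python
# Rate of sea-level rise needed to reach 1 m (1000 mm) by 2100,
# starting from a 2009 baseline (assumed from the comment's date).
remaining_rise_mm = 1000.0
years_left = 2100 - 2009  # 91 years
required_rate_mm_per_yr = remaining_rise_mm / years_left
print(round(required_rate_mm_per_yr, 1))  # ~11.0 mm/yr, over 3x the quoted 3.1 mm/yr
```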

• Geoff Sherrington
Posted Jun 19, 2009 at 7:35 AM | Permalink | Reply

Re: BarryW (#13),

I’m trying to find out which datum point is used for sea-level change. The most logical suggestion so far has been the centre of the Earth for satellite reference, but the Earth is elastic enough under tides etc. to show changes in the position of its centre unrelated to sea-level changes, just temporary redistributions of surface geometry/geoid shapes. Anyone know the datum?

Nice typo. Just corrected “seal level”. Bloody penguins on the brain.

Is a global change in atmospheric relative humidity of enough weight to change sea level measurably?

150. Willis Eschenbach
Posted Jun 18, 2009 at 9:46 PM | Permalink | Reply

Well … as the local guy who writes about TOPEX satellite sea level, I guess now’s the time to compare and contrast this claim quoted above that sea level rise from melting ice will overwhelm us in our beds and sweep us out to sea …

Oceans Rising Faster Than UN Forecast, Scientists Say (Update2)

By Alex Morales

June 18 (Bloomberg) — Polar ice caps are melting faster and oceans are rising more than the United Nations projected just two years ago, 10 universities said in a report suggesting that climate change has been underestimated.

Global sea levels will climb a meter (39 inches) by 2100, 69 percent more than the most dire forecast made in 2007 by the UN’s climate panel, according to the study released today in Brussels. The forecast was based on new findings, including that Greenland’s ice sheet is losing 179 billion tons of ice a year.

“We have to act immediately and we have to act strongly,” Hans Joachim Schellnhuber, director of Germany’s Potsdam Institute for Climate Impact Research, told reporters in the Belgian capital. “Time is clearly running out.” …

with the reality:

Note the change in the 5-year trend (black line, right scale). Since 2003, the five-year trailing trend has fallen from ~0.4 m/century (16 inches/century) to ~0.2 m/century (8 inches/century).
Be afraid … be very afraid …
w.

Posted Jun 19, 2009 at 6:27 AM | Permalink | Reply

Willis Eschenbach:

Nice chart. I got lambasted elsewhere for suggesting there’s been a significant change in the sea-level trend. Isn’t that right-side Y-axis label incorrect, “Meters / yr”? I think your caption has it right: “meters / century.”

152. Gary Strand
Posted Jun 19, 2009 at 11:52 AM | Permalink | Reply

Instead of guessing how SLC is computed, why not go to the documentation and read?

• Andrew
Posted Jun 19, 2009 at 12:11 PM | Permalink | Reply

Re: Gary Strand (#31), I suspect people were looking for more than that. Here at CA we like to know all the minutiae, you see.

• Posted Jun 19, 2009 at 12:14 PM | Permalink | Reply

Interesting read. The impression I get from it is that after they’ve done half a dozen ‘corrections’, sprinkled in a few ‘averages’, then mixed well with some ‘smoothing’, they get a change measured in millimeters of something whose depth is measured in miles.

If one were to take away the tides, ocean heat-content changes, ice melt and freeze, the outpourings of rivers, seismological events and a myriad of other things, and produce a perfectly flat surface over a period of decades, one would still have difficulty measuring changes in ocean height to the millimeter. Add all these realities to the equation and what you have is nothing more than a best guess.

• Gary Strand
Posted Jun 19, 2009 at 12:54 PM | Permalink | Reply

Re: Eric J D (#33),

Interesting read. The impression I get from it is that after they’ve done half a dozen ‘corrections’, sprinkled in a few ‘averages’, then mixed well with some ‘smoothing’, they get a change measured in millimeters of something whose depth is measured in miles.

Given that with GPS we can measure distance in centimeters to/from objects thousands of kilometers away, dismissing SLC measurements because of the range of magnitudes doesn’t seem right to me.

• scott lurndal
Posted Jun 19, 2009 at 4:47 PM | Permalink

Re: Gary Strand (#34),
Civilian GPS has quite poor resolution in the Z axis, which is being measured here.

• Andrew
Posted Jun 19, 2009 at 5:32 PM | Permalink

Re: scott lurndal (#331), This ain’t civilian!

• Mark T
Posted Jun 20, 2009 at 9:49 AM | Permalink

Re: Gary Strand (#326), Being able to accurately calculate the position of a fixed object and extrapolating that to measuring the average height of a non-uniform, constantly flowing mass that covers 70% of the earth is not an analogy any reasonable person would make.

Re: scott lurndal (#331), Not really. There are ways around this, but they are difficult to deal with. Likely any government-sponsored scientific endeavor could utilize receivers that can decrypt the P-code, but it still takes a significant amount of processing time, and even then, “measuring the distance in centimeters” is a stretch at best. The processing time, btw, isn’t the amount of time it takes to process the data; it is the amount of integration of the incoming code that is required to get a high enough SINR for the desired accuracy. One source I’m reading says 10-50 cm of accuracy requires 5 minutes of integration, which is hardly useful as a comparison to measurements involving constantly moving objects.

Mark
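For readers unfamiliar with why the integration time matters: coherently summing n independent code periods improves SNR by roughly 10·log10(n) dB. A minimal sketch (the 1 ms GPS C/A-code period is standard; the 5-minute figure mirrors the source Mark cites, and the idealization ignores clock drift and platform motion):

```python
import math

# Coherent integration of n code periods improves SNR by ~10*log10(n) dB
# (idealized: ignores oscillator drift, receiver motion, and data-bit edges).
def integration_gain_db(n_periods: int) -> float:
    return 10.0 * math.log10(n_periods)

# 5 minutes of 1 ms C/A-code periods = 300,000 periods:
print(round(integration_gain_db(300_000), 1))  # ≈ 54.8 dB of processing gain
```

The punchline is the same as Mark's: the gain comes only from averaging over time, which is exactly what a constantly moving sea surface does not allow.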

• Geoff Sherrington
Posted Jun 20, 2009 at 8:07 PM | Permalink | Reply

After you read the documentation and find it creates more questions than answers, you can do what I did and seek out someone who has had a high-level career in the topic. It’s like learning brain surgery from “Readers Digest” and then going to a top surgeon.

I don’t require advice along the lines of ‘read another “Readers Digest”‘ thank you.

CA is very good at attracting real scientists with actual experience. These people are often generous with their time, answering questions that are suppressed or generalised by others.

Gary, if you know which primary datum point(s) is used for measuring sea level, you are free to tell me. In the final analysis, it should be possible to reconcile all observations to one point, and all the methods used should agree.

• Andrew
Posted Jun 20, 2009 at 9:10 PM | Permalink | Reply

Re: Geoff Sherrington (#341), Okay, this is a little grammar-nazi of me, but:
“which primary datum point(s)” is a horrifying morass of mixed plural and singular. True, English is deficient in its capacity to convey certain statements of this kind, but you start with the plural, proceed to the singular “datum”, and then take “point” and make it maybe/maybe-not plural.

I’m a pedant. Please pick a number and stick with it.

• Geoff Sherrington
Posted Jun 22, 2009 at 6:01 AM | Permalink

Re: Andrew (#342),

Some papers pivot on a datum point singularity(s) (like the geoid centre) while others refer to selected, scattered stations that are chosen to agree with each other to within pre-ordained bounds. Still others refer to multiple surface locations of approximately but adequately known locational position(s) where satellite reflections are quantified.

Mate, I’m confused too. If you select a multiplicity of terrestrial statistically-adjusted reference locations on the basis of approximate or hypothetical consanguinity, then you are estimating a precision rather than an accuracy. (Simple example: an island perimeter, like Honolulu, as the landmass elevates vertically uniformly, temporally from the contiguous adjacent msl, however defined). If you use satellite ranging, then you get into circular or elliptical arguments because the satellites sometimes use ground data to confirm position and consequently time. Remember that an atomic clock in a satellite might be extraordinarily precise, but the diurnal angular velocity of the (near-tropical in particular) earth surface varies hugely in comparison with the ability of an atomic clock. It’s like some Arab countries used to define time zones by stating that midday was when the sun was overhead. Try that one out on your GPS. Then try a stick in the sand.

While all this is going on, the plasticity of the Earth and its deformation through tidal and orbital effects substantially and rather seriously conceptually compromises the efficacy of establishment of a datum point or several datum points.

Finally, the Maldives are not drowning.

• Andrew
Posted Jun 22, 2009 at 6:10 AM | Permalink

an island perimeter, like Honolulu, as the landmass elevates vertically uniformly, temporally from the contiguous adjacent msl, however defined

Small point: the perimeter of an island is unknowable in principle:
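The point goes back to Richardson and Mandelbrot: the measured length of a fractal boundary diverges as the ruler shrinks. A toy illustration with the Koch curve (unit baseline assumed), where each refinement replaces every segment with four segments a third as long:

```python
# Measured length of the Koch curve after `depth` refinements of a unit segment.
# Each refinement multiplies the total length by 4/3, so length grows as
# (4/3)**depth without bound -- the "coastline paradox" in miniature.
def koch_length(depth: int) -> float:
    return (4.0 / 3.0) ** depth

for d in (0, 5, 10, 20):
    print(d, round(koch_length(d), 2))  # 1.0, 4.21, 17.76, 315.34 -> diverges
```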

• oms
Posted Jun 22, 2009 at 2:36 PM | Permalink

Re: Andrew (#358),
“Small point-the perimeter of an island is unknowable in principle”

I thought he was referring to the location of the perimeter (the zero-height contour with respect to some MSL) rather than its length.

• Andrew
Posted Jun 22, 2009 at 3:22 PM | Permalink

Re: oms (#362), I’ve never heard of perimeter used that way, but I guess I got confused!

• Geoff Sherrington
Posted Jun 22, 2009 at 7:48 PM | Permalink

Re: Andrew (#363),

OK, I posted a deliberately obfuscated piece. Apologies. Climate-science articles are trending this way, so I thought you might understand it more readily.

Simply put, more work is needed to find a datum point for sea levels. The science is not settled.

Re: Andrew (#358),

When others were smoking pot in the 60s I was reading fractals. I treasure a 1993 calendar from IBM with stunning images of fractal origin, with references to Benoit B Mandelbrot who wrote on coastline length (actually after Richardson). The calendar is a constant reminder that massive increases in computing power do not need to correlate with the beauty of man-made images, or with the skills of their makers.

Or with the accuracy of global warming models – how’s this for prescience from IBM 1993:

A genetic algorithm developed by Karl Sims was used to generate this image on a massively parallel supercomputer. In the genetic algorithm, expressions which generate surface textures from the LISP programming language are randomly generated and “mutated” by the computer, and “selected” and cross-bred by the user. This powerful random process rapidly creates striking images which are entirely procedural in nature; that is, they represent the unmodified output of the computer program. … The coloring, the pattern and the sandy surface texture all arose spontaneously in the process of “aesthetic selection”.
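The mutate-and-select loop described there is easy to sketch. This toy version is not Sims' system (his evolved LISP expressions with a human doing the selecting); it is a minimal (1+1) evolutionary algorithm over strings, with an automatic fitness function standing in for "aesthetic selection", and the target word is purely illustrative:

```python
import random

random.seed(0)  # deterministic run for illustration
TARGET = "FRACTAL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s: str) -> int:
    """Number of characters matching the target (stand-in for user selection)."""
    return sum(a == b for a, b in zip(s, TARGET))

# Start from a random string; mutate one position at a time and keep any
# child at least as fit as the parent -- a (1+1) evolutionary algorithm.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    i = random.randrange(len(TARGET))
    child = best[:i] + random.choice(ALPHABET) + best[i + 1:]
    if fitness(child) >= fitness(best):
        best = child
print(best)  # converges to FRACTAL
```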

• Andrew
Posted Jun 22, 2009 at 8:34 PM | Permalink

Re: Geoff Sherrington (#364), I wasn’t even alive in the sixties, but I also really appreciate the beautiful art and math of fractals. Interesting to hear what they were up to with big computers in those days…

• Posted Jun 22, 2009 at 11:37 PM | Permalink

When others were smoking pot in the 60s I was reading fractals. I treasure a 1993 calendar from IBM with stunning images of fractal origin, with references to Benoit B Mandelbrot who wrote on coastline length (actually after Richardson).

Interesting; apparently you beat Mandelbrot to it, since his first paper on fractals was published in 1974.

• John Baltutis
Posted Jun 23, 2009 at 1:39 AM | Permalink

Re: Phil. (#369),

From Fractal

Roots of mathematical interest on fractals can be traced back to the late 19th Century, the term however was coined by Benoît Mandelbrot in 1975

See especially the History section. Based on that, I conclude that in the ’60s one could have been reading about them, whether they had that name or not.

Now, is there anything of substance you wish to post?

• Mike B
Posted Jun 23, 2009 at 11:52 AM | Permalink

Cut him some slack… a lot of people confuse the ’60s and the ’70s, especially those that were smoking pot.

Oh wait a minute, he said he didn’t smoke any.

Nevermind.

• John Baltutis
Posted Jun 23, 2009 at 3:49 PM | Permalink

Re: Mike B (#373),
Cut who some slack? I was chastising Phil, who didn’t say anything WRT the wacky weed but claims one couldn’t ponder fractals before Mandelbrot coined the term. Mandelbrot coined the term, not the mathematics.

• Geoff Sherrington
Posted Jun 24, 2009 at 12:10 AM | Permalink

Thank you, John and again at #375. Though I cannot claim to have been a deep student, I am familiar with the broad history as another describes below. My interest was in the way that multiplying cells fit into certain 3D patterns.

The first significant work on “Julia Sets” was done by Gaston Julia in 1917 while he was in a hospital recovering from injuries received during the first world war. Much of the initial work on the properties of these sets was done by Julia and Pierre Fatou, a contemporary and rival of Julia, in the 1920s. One of their major achievements was to show that there are two quite different types of Julia sets.
In the 1970s, an applied mathematician, Benoit Mandelbrot, working at the IBM Research Laboratory, did some computer simulations for these sets on the reasonable assumption that, if you wanted to prove something, it might be helpful to know the answer ahead of time. Mandelbrot had been one of Julia’s students in 1945, and he was familiar with the papers of both Julia and Fatou.

Ok, let’s make it 1970s, Phil. I made an error. Maybe. I seem to recall related reading while in a certain job that fixes it in the late 60s. The exact term “fractal” was probably coined in the 70s.

If you like to combine maths with beauty (and not pay so much attention to the date) try
Prusinkiewicz, P. and Lindenmayer, A. (1990). The Algorithmic Beauty of Plants, Springer-Verlag, New York.

• Posted Jun 25, 2009 at 4:18 PM | Permalink

Ok, let’s make it 1970s, Phil. I made an error. Maybe. I seem to recall related reading while in a certain job that fixes it in the late 60s. The exact term “fractal” was probably coined in the 70s.
If you like to combine maths with beauty (and not pay so much attention to the date) try
Prusinkiewicz, P. and Lindenmayer, A. (1990). The Algorithmic Beauty of Plants, Springer-Verlag, New York.

Mandelbrot claimed to have originated the term in a ’74 paper. I agree with you about the beauty of some of the fractal images, I first met Mandelbrot in the mid 80s IIRC, and my first paper on the subject of the application of fractals was published in the late 80s.

• Geoff Sherrington
Posted Jun 28, 2009 at 12:23 AM | Permalink

Re: Phil. (#385),

You were indeed lucky to have met the Mandelbrot. I hope you were able to spend interesting time with him. I have not checked, but I suspect that “Scientific American” in its recreational mathematics section might have mentioned some of his work before it was formalised as the “fractal”. Not sure. I gave away my library a decade ago.

BTW, did you notice that unlike some others of whom we read, I was able to concede that I could be wrong and you were able to accept that. Thank you for the courtesy and the example.

• Mark T
Posted Jun 28, 2009 at 11:15 AM | Permalink

BTW, did you notice that unlike some others of whom we read, I was able to concede that I could be wrong and you were able to accept that.

He can’t.

Mark

• Posted Jun 28, 2009 at 3:47 PM | Permalink

You were indeed lucky to have met the Mandelbrot. I hope you were able to spend interesting time with him. I have not checked, but I suspect that “Scientific American” in its recreational mathematics section might have mentioned some of his work before it was formalised as the “fractal”. Not sure. I gave away my library a decade ago.

Yes, he’s an interesting guy. I’d been inspired by a meeting with Sreenivasen to try to apply fractal geometry to a problem I was working on, and made a point of seeking out Mandelbrot. He was interested in the application and even used some of my slides. Scientific American has certainly had a few articles on him; I recall one in ’85 and a cover in ’78, and there may well have been others.

BTW, did you notice that unlike some others of whom we read, I was able to concede that I could be wrong and you were able to accept that. Thank you for the courtesy and the example.

You’re welcome.

153. Gary Strand
Posted Jun 19, 2009 at 12:56 PM | Permalink | Reply

The Leuliette et al. paper seems to me, on quick perusal, to cover the minutiae.

• Andrew
Posted Jun 19, 2009 at 1:16 PM | Permalink | Reply

Re: Gary Strand (#35), Much better. But we’ll see what everyone else thinks. A tough crowd here.

• Harry Eagar
Posted Jun 19, 2009 at 3:37 PM | Permalink | Reply

On the other hand, your link shows this concerning the Jason measurements: ‘This bias has a global average value of 131.8 mm and a standard deviation of 12.7 mm.’

Wow!

For the island I live on, that’s around 4 centuries worth of sea level change, as we understand it locally. (My island is sinking.)

154. BarryW
Posted Jun 19, 2009 at 7:29 PM | Permalink | Reply

I thought part of the resolution with GPS depended on how many satellites you could see. What would that be for another satellite?

155. David Cauthen
Posted Jun 19, 2009 at 8:19 PM | Permalink | Reply

WAAS GPS is routinely accurate to 1.0 meter laterally and 1.5 meters vertically. That’s meters, not millimeters. It’s used primarily in aviation, allowing planes to land in inclement weather with ceilings as low as 200 feet AGL. AFAIK, it’s as good as GPS gets.

156. David Smith
Posted Jun 20, 2009 at 7:09 AM | Permalink | Reply

OT – Nice photo of the recent Sarychev volcano column, and its SOx, headed for the lower stratosphere. The photo was taken from the International Space Station. The circular hole in the clouds is the consequent downdraft of air around the column. Links here ,

157. Mark T
Posted Jun 20, 2009 at 10:00 AM | Permalink | Reply

Oops, I’m sorry, that should be “10-30 cm of accuracy.” There are dual-frequency receivers that can potentially achieve sub-cm accuracy, but they require even longer integration times.

Mark

158. Harry Eagar
Posted Jun 20, 2009 at 1:01 PM | Permalink | Reply

While reading Meyer’s report at climate-skeptic, I noticed this statement in the report:

‘we are dealing with a coupled non-linear chaotic system.’

Maybe not. I’d like to see the peer-reviewed evidence for that. Edward Lorenz, who developed the mathematical theory of chaos to deal with weather, said he did not know whether climate was chaotic or not. (In the Danz Lectures.)

It isn’t. It’s antichaotic. That is, if mathematical chaos means that small changes to inputs can have unpredictably large outputs, then Earth’s climate demonstrates over several billion years that even large changes in inputs produce predictably small changes in outputs.

• Michael Jankowski
Posted Jun 20, 2009 at 1:09 PM | Permalink | Reply

Re: Harry Eagar (#11), maybe he’s referring to the models, which can get quite chaotic. For a lot of folks, the models represent reality even better than reality itself.

• Mark T
Posted Jun 20, 2009 at 1:51 PM | Permalink | Reply

Re: Harry Eagar (#338), Good point.

Mark

• DeWitt Payne
Posted Jun 20, 2009 at 9:45 PM | Permalink | Reply

A chaotic system can still be bounded. If Milankovitch cycles do indeed trigger the transitions from glacial to interglacial over the last two million years, then there is clear proof that small changes in input do indeed produce large changes. Unfortunately, mathematical proof of chaotic behavior, a positive Lyapunov exponent, can only be determined if all the differential equations that describe the system are available. Good luck with that.
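DeWitt's point can be illustrated from the other direction: when the governing equation is known, the Lyapunov exponent is computable. A textbook example (not a claim about climate) is the logistic map at r = 4, whose exponent is ln 2 > 0, confirming chaos:

```python
import math

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x), estimated as
# the orbit average of log|f'(x)| = log|r*(1 - 2x)|. Positive => chaotic.
def lyapunov_logistic(r: float, x0: float = 0.4, n: int = 100_000) -> float:
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(f"{lyapunov_logistic(4.0):.2f}")  # close to ln 2 ~ 0.693, i.e. positive
```

For a real system with unknown equations, no such direct computation is available, which is exactly the difficulty noted above.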

159. David Cauthen
Posted Jun 21, 2009 at 7:17 AM | Permalink | Reply

From what I’ve learned about Topex/Poseidon, it uses radar altimetry to determine sea level. The altitude of the satellite is determined by GPS to an accuracy of +/- 2 cm, which gives you a 40 mm spread for its sea-level output. But you’re right, Gary: given enough time, GPS can be very accurate.

160. Andrew
Posted Jun 21, 2009 at 2:13 PM | Permalink | Reply

161. charlesH
Posted Jun 21, 2009 at 2:40 PM | Permalink | Reply

It is my understanding that CO2-forced warming is strongest in cold, dry areas (less water-vapor masking). Thus CO2 forcing will lessen the temperature difference between the Arctic and the tropics. Now, storm intensity is driven by temperature differences: the greater the difference, the more intense the storm.

CO2-induced warming will lessen storm intensity, not increase it.

Why isn’t the basic supposition that more CO2 will cause more extreme weather challenged? It seems completely false.

What am I missing? Anyone?

• Andrew
Posted Jun 21, 2009 at 3:17 PM | Permalink | Reply

Re: charlesH (#71), My understanding from standard meteorological texts is that this is true in the extratropics, but in the tropics the situation is much more complicated (thus the hurricane debates).

One other note: the reason for reduced temp gradient has nothing to do with CO2 itself. I think it is mostly the effect of the ice-albedo feedback-which applies to any effect.

• Hemst 101
Posted Jun 21, 2009 at 4:04 PM | Permalink | Reply

Re: Andrew (#72),

My geography of the US may be a little shaky, but what part of the US is in the tropics? Hawaii? So Charles has a point. Also, this is a “hidden” point of my #19: US warming causes cooling, which causes more severe storms, which proves warming, and around and around we go. This is science? By the way Steve, I did write Lucia an e-mail. Thanks for the idea.

• Andrew
Posted Jun 21, 2009 at 4:33 PM | Permalink

Re: Hemst 101 (#73), Sorry, I forgot that the context was US (however we do get hit with hurricanes) not the world. So the point is basically there.

• DeWitt Payne
Posted Jun 21, 2009 at 7:57 PM | Permalink | Reply

Re: charlesH (#347),

I think I’m going to have to bookmark a link or something to make replies easier since this seems to come up so often.

It is my understanding that CO2-forced warming is strongest in cold, dry areas (less water-vapor masking). Thus CO2 forcing will lessen the temperature difference between the Arctic and the tropics.

I have it from Gavin himself that this is wrong. Radiative forcing from doubling CO2 is lowest at the poles and highest in the tropics. The reason is that the atmosphere is warmer and less dense in the tropics, so the altitude at which the optical density of the center of the CO2 band at 15 micrometers becomes less than one is higher and colder, which turns out to be much more important than overlap with water vapor. See this graph from Hansen et al., 2005 for example. If you want to see for yourself, try doubling CO2 in the Archer MODTRAN calculator for Tropical and then again for Sub-Arctic Winter to see the difference in forcing and the temperature change required to restore the outgoing LW radiation intensity after CO2 doubling.

“Conventional wisdom on polar amplification is that it is due to feedbacks, not forcing.”
Gavin Schmidt, personal communication, 2008.
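For scale, the standard simplified expression for global-mean CO2 forcing (Myhre et al. 1998) is easy to evaluate; note it is a global average only, whereas DeWitt's point is precisely that the forcing varies with latitude:

```python
import math

# Simplified global-mean CO2 radiative forcing (Myhre et al. 1998):
#   dF = 5.35 * ln(C / C0)  [W/m^2]
# Global average only; the actual forcing is larger in the tropics
# than at the poles, as discussed above.
def co2_forcing_wm2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing_wm2(560.0), 2))  # doubling from 280 ppm: 3.71 W/m^2
```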

• oms
Posted Jun 22, 2009 at 10:22 PM | Permalink | Reply
Re: charlesH (#347),
It is my understanding that CO2-forced warming is strongest in cold, dry areas (less water-vapor masking). Thus CO2 forcing will lessen the temperature difference between the Arctic and the tropics.

I have it from Gavin himself that this is wrong. Radiative forcing from doubling CO2 is lowest at the poles and highest in the tropics.

“Conventional wisdom on polar amplification is that it is due to feedbacks, not forcing.”
Gavin Schmidt, personal communication, 2008.

It seems what Gavin said is consistent with what charlesH said. The CO2-forced warming is largest at the poles, but due to resultant effects (in this case, circulation changes), not due to radiative forcing applied directly at the poles.

• DeWitt Payne
Posted Jun 23, 2009 at 8:02 AM | Permalink

Re: oms (#368),

It seems what Gavin said is consistent with what charlesH said.

The poles warm and cool faster than the tropics (polar amplification) no matter the source of the warming. Tropical temperatures at the LGM were about 4 C lower than present. Central Greenland temperature from the GISP2 ice core was about 20 C lower and Antarctic temperature measured by the Vostok core was about 8 C lower. CharlesH’s statement about less water vapor masking can only be read as meaning direct radiative forcing from CO2 is stronger at the poles. That is incorrect.

Whether increased average temperature will result in more severe weather is still somewhat controversial, but I am quite sure that those who propose that it does are well aware of polar amplification. They do this for a living, after all.

• oms
Posted Jun 23, 2009 at 12:48 PM | Permalink

CharlesH’s statement about less water vapor masking can only be read as meaning direct radiative forcing from CO2 is stronger at the poles. That is incorrect.

I must have misunderstood his statement. By “water vapor masking” he is referring to IR absorption?

• DeWitt Payne
Posted Jun 23, 2009 at 7:14 PM | Permalink

Re: oms (#374),

he is referring to IR absorption?

That’s the way I read it. It’s a common error even by people who should really know better like Prof. Judith Curry. See for example here. The absolute forcing from CO2 is lower at high latitudes compared to the tropics and the rate of change of forcing with CO2 concentration is lower as well. That’s rate of change of forcing as usually calculated, i.e. before the troposphere equilibrates. Increasing temperature at high latitudes will increase radiative forcing at constant CO2, assuming that the specific humidity increases with temperature (water vapor feedback), a pretty good bet at high latitudes.

• DeWitt Payne
Posted Jun 24, 2009 at 3:26 PM | Permalink

Why is there polar amplification? I still think it’s related to the difference in specific humidity in the tropics compared to the poles, but I’ve come up with a better way of stating the hypothesis (I think): The climate system acts to maintain a constant difference in enthalpy between the tropics and the poles, not a constant difference in temperature. The rate of change of enthalpy with temperature is much higher in the tropics than the poles so a change in temperature in the tropics produces a larger change in temperature at the poles. Since the closing of the Isthmus of Panama ~2 Myr BP, ocean currents act to transfer heat from the south to the north explaining why Arctic temperatures change more than Antarctic temperatures.
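DeWitt's enthalpy hypothesis can be given rough numbers. Moist enthalpy per unit mass is approximately cp·T + Lv·q, and at saturation its sensitivity to temperature is several times larger at tropical temperatures than at polar ones. A sketch using the Magnus approximation for saturation vapor pressure (the constants and the 100% relative-humidity assumption are illustrative, not from the comment):

```python
import math

CP = 1004.0    # specific heat of dry air, J/(kg K)
LV = 2.5e6     # latent heat of vaporization, J/kg
P = 101325.0   # surface pressure, Pa

def e_sat(t_c: float) -> float:
    """Saturation vapor pressure (Pa), Magnus approximation."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def moist_enthalpy(t_c: float) -> float:
    """Approximate moist enthalpy (J/kg) of saturated air at t_c Celsius."""
    q = 0.622 * e_sat(t_c) / P  # saturation specific humidity (approx.)
    return CP * (t_c + 273.15) + LV * q

def dh_dT(t_c: float, dt: float = 0.5) -> float:
    """Finite-difference d(enthalpy)/dT, J/(kg K)."""
    return (moist_enthalpy(t_c + dt) - moist_enthalpy(t_c - dt)) / (2 * dt)

print(round(dh_dT(30.0)))   # tropics (~30 C): roughly 4x ...
print(round(dh_dT(-20.0)))  # ... the polar (~-20 C) value
```

If the tropics-to-pole enthalpy difference were held constant, this steep tropical slope is what would translate a small tropical temperature change into a larger polar one.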

162. Wrapupwarm
Posted Jun 22, 2009 at 1:18 AM | Permalink | Reply

An oversimplification maybe, but there are a number of ways in life to win an argument (or support a scientific hypothesis):

1 Use facts and information without distortion and present accordingly.

2 Use facts and information with distortion and present accordingly.

3 Make up facts and information and present accordingly.

4 A combination of all three above.

Items 2, 3 and 4 above will eventually lead to exposure. For Item 1 there is still no guarantee the argument will be supported.

In the debate about global warming, an argument which is always used is that most organisations, governments etc. support the view that it is man-made and that the science is settled.

In the UK, prior to the war in Iraq, over 60% of the population were in favour of the war due to the presence of WMD. If the question were asked now, that figure would drop to, say, 10%.

Conversely, at present the overwhelming “evidence” in the UK (and elsewhere) is that Climate Change (née Global Warming) is a certainty, no doubts: almost all politicians, media and august bodies agree.
However, at least 40 to 60% (it varies) of the population consider global warming to be naturally occurring.

Maybe it’s a question of we won’t get fooled again.

163. Geoff Sherrington
Posted Jun 22, 2009 at 5:07 AM | Permalink | Reply

Re: JamesG (#78),

Please don’t presume motives for writers here when they have not expressed motives. There might be a group simply wishing to see the application of correct science, with its related trappings like data integrity, quality control, appropriate statistics, non-biased interpretation, etc. Retreating to an intellectual comfort zone has not happened to me, and I do not imagine that I am alone.

• curious
Posted Jun 22, 2009 at 10:30 AM | Permalink | Reply

Re: Geoff Sherrington (#79), Agreed. Proper investigation follows the science rather than preconceptions, and it doesn’t take too much effort to guard against the latter. Even when deeply involved in something, one should always seek out the toughest critics and the strongest devil’s advocates to keep oneself “honest”.

164. Jim Turner
Posted Jun 22, 2009 at 10:03 AM | Permalink | Reply

Mann’s 1998 hockey stick was rather implausible when looked at objectively – it just didn’t fit with the enormous amount of anecdotal data regarding the MWP and LIA. Nonetheless, it hid behind some abstruse statistics, and it required the persistent and technically expert efforts of M&M to take apart the statistical guts of it and demonstrate the flaws in its workings.
This, by contrast, along with other recent publications (barrier reef coral deposition rates being another example), is so obviously flawed that a non-specialist critical analysis, within the capability of any moderately intelligent and educated person, is all that is required to expose the flaws.

snip – over-editorializing

165. Posted Jun 22, 2009 at 10:05 AM | Permalink | Reply

The issue is human nature. We believe what suits our point of view; facts and logic won’t necessarily dissuade us, and we often simply refuse to read things we instinctively oppose. For some people CO2 thermageddon is as dangerous a threat as WMDs in the hands of unstable people. Holdren is one of these people. Thus it suits those worriers to (quoting the Downing Street memo) “fix the facts around the policy”. They truly believe this is for the greater good. It also suits the mainstream media to play up those fears and play down the facts. Will we ever learn? No! The evidence is that we keep falling for the same lies over and over again.

The concept of “saving the planet” sways a lot of people who would otherwise not give a damn one way or the other. Not that anyone is actually saving the planet.

David Attenborough, on the other hand, seems to have become an AGW advocate on the back of propaganda from Hadley scientists, but deep down it now appears he was predisposed to believe it because he is apparently a population control nut.

Which is kind of strange, since Attenborough has relied on the masses watching and enjoying his films and documentaries.

166. Ron
Posted Jun 22, 2009 at 7:50 PM | Permalink | Reply

Re: JamesG (#353),
James, excellent insights. Your third paragraph elaborates on the old observation that the best test of a bad manager was that when he couldn’t solve a problem, he quickly created a new one for all of us to work on. Maybe a basic business management course should be part of every science degree program.
Ron

167. Geoff Sherrington
Posted Jun 22, 2009 at 10:10 PM | Permalink | Reply

Permission is sought to revisit an old topic, Time of Observation Adjustment, TOBS. An explanation was given on a 2004 web page by John Daly

http://www.john-daly.com/tob/TOBSUMC.HTM

I will assume that you have read what is written. The only point that I seek to make is that the explanation appears to omit a crucial piece of logic, namely, that the thermometers are reset when read. Thus, in the short example given in the URL, readings taken at 9 am would be correct and require no adjustment.

I’m not on 100% certain ground because the columns are not labelled well, so I’d appreciate a view of this or another data set where a TOBS adjustment was indeed required. Remember that a recording max/min thermometer is designed to record the correct max and the correct min in a nominated, repeated 24-hour period.

The urge to correct readings to a globally uniform midnight reference seems to be the drive behind the adjustment logic.

If the original records were unadjusted, there might be an occasional event when a high or low was recorded on the wrong day, but the means of a year of Tmax and Tmin should be correct with maybe a very small error, possibly more correct than the adjusted version.

Please shoot me down.

• Mike B
Posted Jun 23, 2009 at 11:42 AM | Permalink | Reply

Geoff, I just bumped an old thread on TOBS; you might want to have a look at it.

Time of observation bias in min/max thermometers is a real phenomenon. It results when the observation time is close to the typical daily high temperature (late afternoon), which tends to double-count high temperatures, or the daily low temperature (early morning), which tends to double-count low temperatures.

The problem is that to get a good adjustment for the bias, you need good station metadata, primarily the time of day the station is read. This particular metadata is not very reliable, and as a result, NOAA actually infills (both in time and in space) the time of observation.

• Geoff Sherrington
Posted Jun 24, 2009 at 12:12 AM | Permalink | Reply

Re: Mike B (#372),

Mike, agreed, but I’m thinking that the effect has more influence on the time axis than the temperature axis, and that it’s a small effect. I’m about to read Karl again to ensure that his examples use reset thermometers. Any worked examples welcomed. Refs?

• Mike B
Posted Jun 24, 2009 at 10:54 AM | Permalink

I worked up a bunch of examples from airport thermometers that give hourly readings. You can simulate what you’d get from a reset thermometer by using 25 consecutive readings (not 24).

The significant effects I found were (from largest to smallest):

Time-of-day effect: Biggest effect is reading it near the daily high ~5pm biases the reading high.
Time-of-year (seasonal effect): Effects are largest when the differences between afternoon highs and morning lows are largest; typically January and June.
Latitude effect: Time of observation bias is small near the equator, because there is often little difference between high and low.

I have the stuff in an excel file somewhere. I can send it to you if you like. Poster JerryB provided a link to the hourly airport temperatures. You might try searching CA for “JerryB Airport Temperatures”.

But here’s the real trick: if the time of observation of all stations had remained consistent for all time, there would be no reason for a TOBS correction! The NOAA TOBS correction that accounts for nearly half of the warming since WWII is based on what appears to be the assumption that there was a massive, gradual shift from afternoon to morning readings following WWII. As I stated earlier, I seriously doubt the metadata supports so dramatic an adjustment.
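[Ed. note: Mike B’s 25-reading simulation can be sketched in a few lines of Python. Everything below is synthetic – a made-up diurnal cycle standing in for the hourly airport data, not JerryB’s actual file – but it reproduces the effect he describes: an observer who resets near the afternoon peak double-counts warm days, while a morning observer does not.]

```python
import math

# Toy hourly temperature series: a sinusoidal diurnal cycle peaking at
# 3 pm, with alternating warm (mean 20 C) and cool (mean 10 C) days so
# that day-to-day variability exists.  All numbers are invented.
def hourly_temp(h):
    day_mean = 20.0 if (h // 24) % 2 == 0 else 10.0
    return day_mean + 10.0 * math.sin(2 * math.pi * ((h % 24) - 9) / 24)

def mean_tmax(obs_hour, n_days=30):
    # An observer who reads and resets the max thermometer at obs_hour
    # each day records the max over the 25 consecutive hourly readings
    # from the previous reset to the current one.  Both endpoints count,
    # hence 25 rather than 24 -- the shared endpoint is exactly the
    # source of the double-counting near the daily peak.
    tmaxes = []
    for day in range(1, n_days):
        start = (day - 1) * 24 + obs_hour
        tmaxes.append(max(hourly_temp(h) for h in range(start, start + 25)))
    return sum(tmaxes) / len(tmaxes)

# A 9 am observer assigns each afternoon peak to the next calendar day,
# but the mean of the daily maxes is unbiased; a 5 pm observer carries a
# warm afternoon's residual heat into the next day's window and reads warm.
print(round(mean_tmax(9), 2))   # → 25.17, the true mean of the daily maxes
print(round(mean_tmax(17), 2))  # → 29.31, biased warm by double-counted peaks
```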

• D. Patterson
Posted Sep 14, 2009 at 2:12 AM | Permalink | Reply

The only point that I seek to make is that the explanation appears to omit a crucial piece of logic, namely, that the thermometers are reset when read.

Please note, a substantial number of temperature observations do not use mercury thermometers requiring a reset when read. Instead, the temperature observations are often made using thermographs, for one example.

• Geoff Sherrington
Posted Sep 14, 2009 at 5:26 AM | Permalink | Reply

The discussion back then was more or less about max-min mercury thermometers. In some examples I have seen, the logic of TOBS does not refer to the step of resetting the pegs at reading time. This makes the logic hard to follow. Presently, I tend to think that TOBS adjustments introduce more unreality than they solve. This is because the actual record on many days is adjusted because of “suspect” days nearby.

Yes, I’m well aware of drum recording devices and have used them. It puzzles me that there is little evidence of sudden changes in the station records on the dates when thermometers stopped and more continuous recording started. Why go from one design to another unless you expect a change, an improvement? At many sites there is indirect evidence of mathematical splicing of records to taper the change and avoid a step change. It is becoming more difficult to get actual, unadjusted records to see if this is the case.

• D. Patterson
Posted Sep 14, 2009 at 1:49 PM | Permalink

Agreed. This is why it is disturbing to see so much discussion about the properties and limitations of mercury thermometers and associated observation methods as if the other thermal instruments and their associated procedures simply did not exist. Although mercury thermometers may have been more prevalent in HCN observations than in airways observations, it can only be inaccurate and misrepresentative to neglect the other instrumentation when discussing TOBS and other issues with observational accuracy. The adjustments to the records where changes in instrumentation have taken place warrant very close scrutiny for misapplication of adjustments to the original observations.

168. Andrew
Posted Jun 24, 2009 at 7:24 AM | Permalink | Reply

Anthony has become a thorn in NOAA’s side:
http://wattsupwiththat.com/2009/06/24/ncdc-writes-ghost-talking-points-rebuttal-to-surfacestations-project/

169. oms
Posted Jun 24, 2009 at 4:35 PM | Permalink | Reply

DeWitt Payne, have you seen this publication which may be of some interest to you?

• DeWitt Payne
Posted Jun 24, 2009 at 5:52 PM | Permalink | Reply

Re: oms (#382),

Interesting article. I need to read it more carefully, but I was intrigued by the following quote:

Contrary to what is sometimes surmised, there is no universal principle that constrains free tropospheric relative humidity changes to be negligible or even to be of the same sign as near surface relative humidity changes.

So does that mean that upper tropospheric lapse rates don’t have to decrease with increasing temperature, or decrease as fast, and the big red spot goes away?

• oms
Posted Jun 24, 2009 at 11:46 PM | Permalink | Reply

So does that mean that upper tropospheric lapse rates don’t have to decrease with increasing temperature, or decrease as fast, and the big red spot goes away?

The tropospheric lapse rate (i.e., the static stability) seems to go up for both much colder and warmer climates (Fig. 11) for different reasons (meridional temperature gradient vs. greater latent heat flux in the drier and wetter regimes, respectively). At least in the simulations, it does not suddenly change sign after you reach a certain amount of “excess” heating.

170. Andrew
Posted Jun 27, 2009 at 10:47 AM | Permalink | Reply

David Stockwell breaks down HadCrut into EOFs:

171. Al Gored
Posted Jun 27, 2009 at 2:30 PM | Permalink | Reply

There is an updated version of mi, in R.

Su Y.-S., Gelman A., Hill J., Yajima M.,
“Multiple imputation with diagnostics (mi) in R”,
Journal of Statistical Software,
to appear.

Our mi package in R has several features that allow the user to get inside the imputation process and evaluate the reasonableness of the resulting models and imputations. These features include: flexible choice of predictors, models, and transformations for chained imputation models; binned residual plots for checking the fit of the conditional distributions used for imputation; and plots for comparing the distributions of observed and imputed data in one and two dimensions. In addition, we use Bayesian models and weakly informative prior distributions to construct more stable estimates of imputation models. Our goal is to have a demonstration package that (a) avoids many of the practical problems that arise with existing multivariate imputation programs, and (b) demonstrates state-of-the-art diagnostics that can be applied more generally and can be incorporated into the software of others.

(To get the R package, open your R console and run install.packages(“mi”), or use the Packages > Install packages menu.)

172. Andrew
Posted Jun 27, 2009 at 7:14 PM | Permalink | Reply

Check out John Christy’s EPA Submission:
Lots on S08

173. scp
Posted Jun 27, 2009 at 8:37 PM | Permalink | Reply

Haven’t watched it yet, but this video might be interesting to CA readers
“How Do We Know? Physics, Forcings, and Fingerprints”
http://www.researchchannel.org/prog/displayevent.aspx?rID=28217&fID=345

• scp
Posted Jun 28, 2009 at 11:26 AM | Permalink | Reply

Re: scp (#389), watching the video now. Michael Mann, the Hockey Stick and Kim Cobb all make appearances. In a 12/2008 video, temperature graphs end before Y2K.

• Andrew
Posted Jun 28, 2009 at 2:02 PM | Permalink | Reply

Re: scp (#393), Well, the science was settled, so no new data is needed (end snark)

174. Mark T
Posted Jun 28, 2009 at 11:15 AM | Permalink | Reply

Do the same, that is.

Mark

175. Geoff Sherrington
Posted Jun 29, 2009 at 5:07 AM | Permalink | Reply

Now and then I post a new topic from left field, with apologies to ongoing discussions that I might interrupt.

which is a paper from a forthcoming conference in December. This particular paper from a group of Scandinavians examines some effects of Global Warming talk on children aged 9-13.

There are other papers previewed at http://climatecongress.ku.dk/abstractbook/
that also give me shivers down the spine.

It was my hope that this type of work ended in 1945.

176. Andrew
Posted Jun 29, 2009 at 11:55 AM | Permalink | Reply

El Nino is official:
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf
prepare for a warm-up, in active Atlantic Hurricane season, etc.

• John M
Posted Jun 29, 2009 at 6:29 PM | Permalink | Reply

Re: Andrew (#397),

Are you saying we’re officially in an El Nino “event”?

Hmmm, that’s funny, I seem to recall a lot of AGWers getting all technical about that when we had La Nina “conditions”.

And since when does an El Nino lead to a more active Atlantic Hurricane season?

• Andrew
Posted Jun 29, 2009 at 7:43 PM | Permalink | Reply

Re: John M (#398), I said in-active (hyphen left out, my bad). I have no idea what those AGWers were talking about though.

177. Bob Koss
Posted Jun 30, 2009 at 5:01 AM | Permalink | Reply

The puzzle of Punta Arenas.

I set the base period to 2008-2008 to compare the land stations existing in 1889 with those still in existence in 2008. This allows direct comparison between the two years without having to consider missing months or stations moving in or out during a multi-year base period. Here are the anomaly maps generated by GISS from this page.
250 km smoothing radius.

1200 km smoothing radius.

The anomaly at 250 km is -0.32C and 1200 km is -0.48C. An amazingly large difference.
Further investigation led to looking at Punta Arenas, located at 53S, 70.85W at the tip of S. America. In 1889 it was the most southern station in the GISS inventory, with no other station within 1200 km in any direction.

The anomaly for Punta Arenas is -0.2750. The station is 10 km due east of the center of grid-cell -53S -71W. I expected the station anomalies in the 250 km map to be arranged in a 3×3 grid, with -53S -71W as the center cell containing the value -0.2750 and the surrounding eight cells having declining values with distance. Here is what I found in the gridded data file that is created with each map.

250 km smoothing. All the plotted anomalies.
Lon. Lat. Anomaly
-71.00 -55.00 -0.2760
-69.00 -55.00 -0.2760
-67.00 -55.00 -0.2760

-71.00 -53.00 -0.1344 station location
-69.00 -53.00 -0.1344
-67.00 -53.00 -0.1344

-71.00 -51.00 -0.0280
-69.00 -51.00 -0.0280
-67.00 -51.00 -0.0280
____________________________

1200 km smoothing. Only the plotted anomalies at station latitude.
Lon. Lat. Anomaly
-89.00 -53.00 -0.2750
-87.00 -53.00 -0.2750
-85.00 -53.00 -0.1902
-83.00 -53.00 -0.1620
-81.00 -53.00 -0.2003
-79.00 -53.00 -0.2386
-77.00 -53.00 -0.2555
-75.00 -53.00 -0.3062
-73.00 -53.00 -0.3062
-71.00 -53.00 -0.3449 station location
-69.00 -53.00 -0.3449
-67.00 -53.00 -0.3969
-65.00 -53.00 -0.4142
-63.00 -53.00 -0.4619
-61.00 -53.00 -0.5097
-59.00 -53.00 -0.5611
-57.00 -53.00 -0.7156
-55.00 -53.00 -0.7156

How can any grid-cell anomalies exceed the station anomaly of -0.2750? Why is the actual station anomaly shifted out of the station grid-cell?

At 1200 km smoothing, the Punta Arenas cells have about 50% of the weight of the US 48 states. The same goes for Chatham Island at 43.95S, 176.57W.
Am I missing something here? To my mind, the way this is done is very unphysical.
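[Ed. note: Bob’s question can be demonstrated in a few lines. The sketch below assumes a GISS-style normalized linear-in-distance weighting (w = 1 − d/1200 km) – an assumption about the published scheme, not the actual GISTEMP code. Under any such normalized weighting, a single isolated station yields exactly the station anomaly in every cell it reaches, so gridded values exceeding −0.2750 cannot come from simple weighted averaging of Punta Arenas alone.]

```python
import math

def distance_km(a, b):
    # Great-circle distance between (lat, lon) pairs via the haversine formula.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def smoothed_anomaly(cells, stations, radius_km=1200):
    # Linear-in-distance weights: w = 1 - d/radius inside the radius,
    # zero outside; cell value is the weight-normalized station average.
    out = {}
    for cell, pos in cells.items():
        num = den = 0.0
        for anom, st_pos in stations:
            d = distance_km(pos, st_pos)
            if d < radius_km:
                w = 1 - d / radius_km
                num += w * anom
                den += w
        out[cell] = num / den if den else None
    return out

# One lone station (Punta Arenas, anomaly -0.275): every reachable cell
# comes out at exactly -0.275 regardless of distance, because a
# normalized average of a single value is that value.
stations = [(-0.275, (-53.0, -70.85))]
cells = {(-53, -71): (-53.0, -71.0), (-53, -61): (-53.0, -61.0),
         (-55, -71): (-55.0, -71.0)}
print(smoothed_anomaly(cells, stations))
```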

• Geoff Sherrington
Posted Jun 30, 2009 at 5:43 AM | Permalink | Reply

Re: Bob Koss (#400),

Bob,

If reasonable extrapolation principles are followed, both Punta Arenas and Chatham Island should be discarded from the data set before analysis.

By comparison with grade interpolation in mineral deposits, if a sampling point is so isolated that it cannot be shown to have connectivity with the main data, it is rejected as an outlier. (Maybe, as a separate project, it will be investigated further if it is indicative of the possibility of a separate new occurrence of ore. But it will not be included in the analysis of points that define the ore deposit.)

• Bob Koss
Posted Jun 30, 2009 at 6:21 PM | Permalink | Reply

Re: Geoff Sherrington (#401), I understand your point, but also understand they are trying to cover as much of the globe as possible with temperature data, so I don’t really object to them doing it. What I do dislike is isolated data points being weighted the same as clusters of points. They have an outsized influence on the early record, where more of them would be found. You can see the influence in the global anomaly maps above: the 1200 km anomaly is 150% of the 250 km anomaly.

What I don’t understand is their method of smoothing. It creates such unphysical grid-cell values. They present a mishmash of up/down anomalies with no physical relationship to the originally measured location. It makes no sense to me.

You and I look at the same thermometer. We agree it shows a 7C temperature. I then announce the temperature 1200 km west of our location is 7C. 600 km west is 8C. Where we are standing is 6C. 600 km east is 5C. 1200 km east is 4C. Are you going to believe me? I think not. This seems to be similar to what they are doing with their smoothing.

The best I could say is that my confidence in the temperature being 7C decreases with distance from our location. If I wanted to attempt to plot that confidence as anomalies from average temperature, the anomalies would have to move toward zero with increasing distance from the thermometer location and eventually become unknowable.

• Geoff Sherrington
Posted Jun 30, 2009 at 11:26 PM | Permalink

Re: Bob Koss (#403),

I did not discuss the invented data around Punta Arenas because (a) I do not know how it was derived and (b) I do not know how it COULD be derived. The inconsistency was noted then put aside as yet another example of amateurish approaches that would benefit from professional mathematical input.

178. Andrew
Posted Jun 30, 2009 at 11:47 AM | Permalink | Reply

Roger Pielke Senior calls a Realclimate post “Misinformation”:
http://climatesci.org/2009/06/30/real-climates-misinformation/

179. Craig Loehle
Posted Jul 1, 2009 at 7:11 AM | Permalink | Reply

Marvelous paper on the flaws of peer review. He documents toxicology papers that dose animals at the LD50 level and then find metabolic/genetic changes. Duh. You can dose with salt at that level and find damage.
Frank Dost. Peer-review at a crossroads–a case study. Environ Sci Pollut Res (2008) 15:443–447
DOI 10.1007/s11356-008-0032-1

180. DaveJR
Posted Jul 1, 2009 at 2:47 PM | Permalink | Reply

Currently headlining in New Scientist “Sea level rise: It’s worse than we thought” ;).

• Geoff Sherrington
Posted Jul 1, 2009 at 10:59 PM | Permalink | Reply

Re: DaveJR (#408),

http://www.ocean-sci.net/5/193/2009/os-5-193-2009.pdf
“A new assessment of the error budget of global mean sea level rate
estimated by satellite altimetry over 1993–2008″

The conclusion is that mean sea level is unchanged in the last few years. It’s quite a careful paper. I did not know before that corrections to satellite altimetry rely on climate models to provide real time estimates of factors like humidity variation with altitude. Circles within circles within circles.

So, someone is right and someone is wrong about mean sea level change.

Philosophically, we seem to be entering a pre-Copenhagen phase where the papers have been written, the case put together and no changes can be accommodated. The book is closed.

It’s darned inconvenient when an author presents a new view. The preferred option is to bury it, judging from some other examples.

• Posted Jul 2, 2009 at 8:40 AM | Permalink | Reply

Geoff, check this comment over at RC:

Hank Roberts Says:
30 June 2009 at 13:58

http://news.stanford.edu/news/2009/february18/aaas-field-global-warming-ipcc-021809.html

“Working Group 2 Report” for the IPCC fifth assessment, which will be published in 2014.

“In the fourth assessment, we looked at a very conservative range of climate outcomes,” Field said. “The fifth assessment should include futures with a lot more warming.”[bold by me]

It looks like you are correct. The conclusions of a yet-to-be-conducted literature review have been obtained prior to the literature even being published.

Must be Climate Science.

181. Posted Jul 2, 2009 at 6:31 AM | Permalink | Reply

(1) It is not clear to me that full-bore GCMs are used to carry the what-if scenarios into the future. I think sometimes the extrapolated/EWAG forcings are specified for use in special-purpose, stripped-down versions (or application modes) of general-purpose GCMs. Hopefully, someone can provide info to clarify the situation.

(2) Note that the GCMs ( or whatever the calculational method ) remain not-validated. The results of the method do not correspond to the data.

(3) It is my understanding that 30 years is the minimum time required to establish Climate. This being the case, it seems that 90 years is the minimum time period over which a change in the rate of change of Climate can be determined. The first and second 30-year periods set the base rate of change of Climate; then the second and third 30-year periods allow a check of whether the rate of change has increased or decreased.

Note that if (3) is correct, everything and anything using less than 90 years of data / calculation is cherry picking. Oh, and that would be 90 years of past data; no fair extrapolating beyond the end of the measured data.

All corrections appreciated.

• Gary Strand
Posted Jul 2, 2009 at 7:12 AM | Permalink | Reply

Re: Dan Hughes (#13),

(2) Note that the GCMs ( or whatever the calculational method ) remain not-validated. The results of the method do not correspond to the data.

Quick question – how would you (meaning Dan) validate a climate model?

• Andrew
Posted Jul 2, 2009 at 9:20 AM | Permalink | Reply

Re: Dan Hughes (#410), We do have enough data to test the idea of acceleration, IMAO: there isn’t any significant difference between the rates of change from 1911-1941 and 1978-2008:

So I would say at this point that we have our 90+ years and they don’t support the “acceleration” hypothesis just yet.

HOWEVER, this assumes that the surface data from HadCrut (or any others) is correct. I think there is strong evidence of a warm bias.
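[Ed. note: Andrew’s comparison can be reproduced mechanically by fitting an ordinary least-squares slope to each 31-year window of annual anomalies. The sketch below uses purely synthetic anomalies (invented for illustration, not actual HadCRUT values) constructed to warm at the same rate in both windows, mimicking his 1911-1941 vs 1978-2008 result.]

```python
def trend_per_decade(years, temps):
    # Ordinary least-squares slope of temperature on year, in C/decade.
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    slope = (sum((y - my) * (t - mt) for y, t in zip(years, temps))
             / sum((y - my) ** 2 for y in years))
    return 10 * slope

# Synthetic anomalies rising 0.015 C/yr in both windows: identical
# trends, so no "acceleration" between the early and late periods.
early = [(1911 + i, 0.015 * i) for i in range(31)]
late = [(1978 + i, 0.3 + 0.015 * i) for i in range(31)]
for window in (early, late):
    yrs, temps = zip(*window)
    print(round(trend_per_decade(yrs, temps), 3))  # → 0.15 in both cases
```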

• Posted Jul 2, 2009 at 11:44 AM | Permalink | Reply

Re: Andrew (#414),

Very neat, Andrew.

• BarryW
Posted Jul 2, 2009 at 12:08 PM | Permalink | Reply

Re: Andrew (#414),

I’ve been playing around with the HadCrut data trends. I took all trends over the data using different time periods and noticed a sixty year cycle in the trends with peaks at about 1880, 1940 and 2000. If there is a sixty year cycle then what you are measuring in your graph is the slope from the trough to the peak of a cycle. Measure from about 1940 to now and I think that is a more realistic value. If I take 1940 to 2000, I get about 0.065 degC/decade.

• Andrew
Posted Jul 2, 2009 at 2:05 PM | Permalink

Re: BarryW (#416), Interesting… You know, when I look at that I must confess “AMO” pops into my head… I’m not sure what that may mean, if anything; just my pattern-matching skills as a Homo sapiens kicking in.

Re: Gerald Machnee (#417), I think you want this:

• BarryW
Posted Jul 2, 2009 at 3:29 PM | Permalink

Re: Andrew (#419),

I was thinking it might be PDO, but I don’t know enough to even speculate. I think it’s hard to tell if it’s really cyclic (but it sure looks like it). If it is there also appears to be a trend associated with it. Each peak of the trend graph is a little higher than the last. The kicker is that the 20-30yr trends are dropping and should continue to drop to about 2030. They may even go negative. What I haven’t seen is this cycle in the models. They get the present upslope but they don’t mimic the cycle.

• Andrew
Posted Jul 2, 2009 at 5:15 PM | Permalink

Re: BarryW (#420), I have learned to be very cautious about claiming that I see patterns related to these various, er, indices. Many (bender in particular) do not think they have much physical meaning. But that doesn’t mean my pattern-matching instincts turn off; I just remind myself that they tend to give false positives: patterns that aren’t there.

• BarryW
Posted Jul 2, 2009 at 7:36 PM | Permalink

Re: Andrew (#421),

I agree but OTOH, looking at just part of a cyclic pattern will also give you the wrong answer. Without a larger data set you can’t show that you’re not just reading something into it. The fact that the trends fluctuate that much is what I find interesting. I think it belies the statement that 30 years is a cutoff point for climate vs weather, at least as far as temperatures go.

182. Gerald Machnee
Posted Jul 2, 2009 at 12:23 PM | Permalink | Reply

Re #414 and #416.
I am not sure if it applies to your discussion, but a researcher at a university in Alaska did a study on the slopes of temperature in the Arctic or in Alaska. The charts looked similar. I do not have the paper handy at the moment.

183. Posted Jul 2, 2009 at 1:36 PM | Permalink | Reply

hot air = small sheep

184. Peter D. Tillman
Posted Jul 3, 2009 at 9:13 AM | Permalink | Reply

http://online.wsj.com/article/SB124657655235589119.html

“The EPA Silences a Climate Skeptic”

…the climate-change crew, led by anonymous EPA officials, is doing what it does best: trashing Mr. Carlin as a “denier.” He is, we are told, “only” an economist (he in fact holds a degree in physics from Caltech). It wasn’t his “job” to look at this issue (he in fact works in an office tasked with “informing important policy decisions with sound economics and other sciences”). His study was full of sham science. (The majority of it in fact references peer-reviewed studies.)

Sigh.
Peter D. Tillman

185. Andrew
Posted Jul 3, 2009 at 12:09 PM | Permalink | Reply

A surprisingly good interview with Gavin by Popular Mechanics – whom he advises (GASP!) – here:
http://www.foxnews.com/story/0,2933,529953,00.html

Sure, he still manages to get some stuff wrong (actually, Gavin, that melting isn’t “steady” anymore!), but one almost sees a little lukewarmer in him. I really liked this: “But, he says, many news stories prematurely attribute local or regional phenomena to climate change. This can lead to the dissemination of vague, out-of-context or flat-wrong information to the public.” Although, well, what do you call Picturing the Science, then?

186. realist
Posted Jul 6, 2009 at 11:41 AM | Permalink | Reply

According to RealClimate the new sunspots are caused by global warming

187. Allan M R MacRae
Posted Jul 6, 2009 at 7:27 PM | Permalink | Reply

Allan M R MacRae (18:24:23) : Your comment is awaiting moderation

ON HOW CLIMATE MODELS OVERSTATE GLOBAL WARMING

Edited from a previous post on wattsup:

Allan M R MacRae (12:54:27) :
“There are actual measurements by Hoyt and others that show NO trends in atmospheric aerosols, but volcanic events are clearly evident.”
But increased atmospheric CO2 is NOT a significant driver of global warming – that much is obvious by now.

What has that to do with aerosols?

**************************

The Sensitivity of global temperature to increased atmospheric CO2 is so small as to be inconsequential – much less than 1 degree C for a doubling of atmospheric CO2.

Climate models assume a much higher Sensitivity, by assuming that CO2 feedbacks are positive, when in fact there is strong evidence that these feedbacks are negative.

Climate model hindcasting fails unless false aerosol data is used to “cook” the model results.

Connecting the dots:

The false aerosol data allows climate model hindcasting to appear credible while assuming a false high Sensitivity of global temperature to atmospheric CO2.

The false high Sensitivity is then used to forecast catastrophic humanmade global warming (the results of the “cooked” climate models).

What happens if the false aerosol data is not used?

No false aerosol data > no credible model hindcasting > no false high climate Sensitivity to CO2 > no model forecasting of catastrophic humanmade global warming.

Regards, Allan

Supporting P.S.:

Earth is cooling, not warming. Pass it on…

• See - owe to Rich
Posted Jul 7, 2009 at 3:00 PM | Permalink | Reply

Re: Allan M R MacRae (#426), I agree with you about the mythical aerosols to help the hindcasting. But I don’t agree that that implies sensitivity less than 1C. In my CO2-solar model I don’t use aerosols but get a sensitivity around 1.4C, which I consider to be just low enough not to be alarming.

Rich.

188. Poptech
Posted Jul 6, 2009 at 9:29 PM | Permalink | Reply

Am I really reading excuses for not having backup data and software quality control? The whole situation is worse than I thought. Counters, matching past climate is an advanced exercise in curve fitting, nothing more, and it does not prove or validate a model’s accuracy. You complain about the word “knobs”, but that is used to explain it to laymen. Either the models have variables that are adjusted at the discretion (no matter the alleged scientific reasoning) of the scientist performing (or requesting) the model run, or they do not. Please don’t try to claim that close-enough matches to other models make everything OK. If you have multiple people who are all wrong yet they all agree on the wrong answer, does that make them all right? Really? And you are a scientist?

“These codes are what they are – the result of 30 years and more effort by dozens of different scientists (note, not professional software engineers), around a dozen different software platforms and a transition from punch-cards of Fortran 66, to Fortran 95 on massively parallel systems.” – Gavin Schmidt, RealClimate.org

Counters, you will stop hearing complaints (maybe) when 100% of all climate model code and scripts (all of it) is fully released, independently audited, and subjected to software quality control procedures. Even that is not going to settle the parametrization problems or the myriad of things the models don’t come close (and close is not good enough) to getting right.

The only thing that seems “controlled” is getting the model ready to provide the IPCC’s conclusions.
[ed: author renamed to Poptech]

189. Chris Schoneveld
Posted Jul 7, 2009 at 12:53 AM | Permalink | Reply

Why this focus on software, as if better software would resolve the greenhouse conundrum? Reading Pielke, Lindzen and other climate scientists on the other side of the fence, the issue of global warming is not just inadequate software but the more fundamental question of whether positive or negative feedbacks dominate the climate system. The best software in the world cannot change the rubbish-in/rubbish-out principle.

190. cba
Posted Jul 7, 2009 at 6:50 AM | Permalink | Reply

Chris,

It’s likely worse than that. While GIGO does apply, it may be that the system is truly chaotic and it may not be possible to create a predictive system that doesn’t go off into the weeds after a few cycles – even assuming perfect knowledge of the true input parameters.

• Ausie Dan
Posted Jul 8, 2009 at 1:10 AM | Permalink | Reply

Re: cba (#430), I had assumed that weather, like “the economy” is caotic. Refer Edgar E. Peters “Fractal Market Analysis” re Sunspots p101 and 847 Nile River records analyses by Hurst (1951). Does anybody disagree? Dan

191. Andrew
Posted Jul 7, 2009 at 12:40 PM | Permalink | Reply

Roger Pielke Senior discusses an interesting paper about reanalysis data sets:
http://climatesci.org/2009/07/07/new-paper-global-and-regional-comparison-of-daily-2m-and-1000-hpa-maximum-and-minimum-temperatures-in-three-global-re-analyses-by-pitman-and-perkins-2009/

(By the way, this is the same Andrew who just complained that someone is pretending to be me.)

192. James Lane
Posted Jul 8, 2009 at 12:33 AM | Permalink | Reply

Australian weather mystery explained:

http://www.euroa-gazette.com.au/articles.aspx?d=13712

193. Ausie Dan
Posted Jul 8, 2009 at 1:12 AM | Permalink | Reply

re 435 – I meant Nile river records spanning over 847 years. Dan

194. Ausie Dan
Posted Jul 8, 2009 at 1:25 AM | Permalink | Reply

re 435 – read chaotic – sorry Dan

195. Edouard
Posted Jul 8, 2009 at 3:26 AM | Permalink | Reply

On the German blog "Klimalounge" I have asked Mr Rahmstorf how warming could have accelerated when it has not got warmer over the last 8 or 10 years.

snip

196. Nylo
Posted Jul 8, 2009 at 5:56 AM | Permalink | Reply

OT, but I have been looking (quite unsuccessfully) for a post from Steve McIntyre which I am sure I have read in the past on this site, relating to the bristlecone pine trees, where he was showing some correspondence he had with dendrochronology experts discussing whether or not it was right to use these trees as temperature proxies. Can anybody help me find that post, please?

Thanks a lot.

Steve: There are many references to bristlecones here. I suggest that you look at the references in MM2005 (EE), our submission to the NAS panel, and the comment in the NAS report. Aside from bristlecones themselves, there is an entirely separate issue with Graybill's bristlecone chronologies – see the discussion of Ababneh and Almagre. As to private correspondence (as opposed to things in the public record), I can't recall anything relevant.

197. cba
Posted Jul 8, 2009 at 6:21 AM | Permalink | Reply

Re: See-owe to Rich (#433),
You have an interesting little model, but it suffers from the same problem as essentially all sensitivity-measurement attempts: the inability to account for cloud variation. Most of Earth's albedo is due to clouds; something like 0.22 out of the total of roughly 0.3 is cloud, at least while we're not in a major glaciation. Average cloud conditions reduce incoming solar energy by about 75 W/m^2. In 1998, a rather hot year, we experienced a drop in total albedo of around 10%, much of which, but not all, recovered within a year or two. Note that a 10% variation in cloud cover would generate an instant variation of around 7.5 W/m^2, while a 10% change in total albedo would cause a variation of around 10.5 W/m^2. Compared to a CO2 doubling of a little over 3.5 W/m^2 (which hasn't happened yet), this is a tremendous unknown. It's a bit like trying to measure the weight of a flea without knowing the weight of the dog the flea is sitting on. 1998 shows that the Earth's albedo is not the constant it is usually assumed to be; it has only been measured for around 30 years, and even that record has gaps. Clouds are also associated with all sorts of other factors (aerosols and particulates, cosmic rays, solar cycles, volcanoes, human air pollution), along with the randomness of the weather. They also come in a variety of types, and with a variety of timings, which give rise to different effects that only average out to our mean cloud effect.

It's pretty suggestive that a closed-loop system is in operation here, maintaining a temperature setpoint. However, it's a noisy system: at least some inputs vary by 10% or more, while the setpoint temperature varies by well under 1%.

It's also suggestive that the residual you leave for CO2, after the direct solar impact, is far less than what cloud-albedo changes could have caused over that time frame. As for the 1998 albedo variation and the comparison of T versus albedo that I tried, it yields a sensitivity closer to 0.13 K per W/m^2, or around half a degree C for a CO2 doubling. This is the only T variation that accounts for albedo variation, and it is one of the strongest variations available, since it includes a 10 W/m^2 difference between the minimum and maximum values, swamping even the long-term CO2 variation. The most interesting result is that the net change in temperature comes out less than a simple Stefan's-law calculation would provide, indicating significant negative feedback actually occurring in the atmosphere, and hence falsifying the hypothesis that there is net positive feedback in response to any increase in absorbed power, such as a CO2 increase.
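The arithmetic behind these numbers is easy to check. Below is a minimal back-of-the-envelope sketch in Python; the globally averaged solar input of ~342 W/m^2 (a quarter of the solar constant) and the 3.7 W/m^2 doubling figure are my assumptions for the illustration, not values taken from the comment.

```python
# Back-of-the-envelope check of the forcing and sensitivity numbers quoted above.
SOLAR_IN = 342.0        # globally averaged incoming solar, W/m^2 (assumed: ~1368/4)
CLOUD_EFFECT = 75.0     # average reduction of incoming solar due to clouds, W/m^2
TOTAL_ALBEDO = 0.30     # total planetary albedo
CO2_DOUBLING = 3.7      # commonly used forcing for doubled CO2, W/m^2

cloud_10pct = 0.10 * CLOUD_EFFECT              # ~7.5 W/m^2
albedo_10pct = 0.10 * TOTAL_ALBEDO * SOLAR_IN  # ~10.3 W/m^2, close to the ~10.5 quoted

# No-feedback ("Stefan's law") sensitivity: dT/dF = 1 / (4 * sigma * T^3)
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0           # Earth's effective radiating temperature, K
no_feedback = 1.0 / (4.0 * SIGMA * T_EFF**3)   # ~0.27 K per W/m^2

print(f"10% cloud variation:  {cloud_10pct:.1f} W/m^2")
print(f"10% albedo variation: {albedo_10pct:.1f} W/m^2")
print(f"no-feedback sensitivity: {no_feedback:.2f} K per W/m^2")
print(f"CO2 doubling at 0.13 K/(W/m^2): {0.13 * CO2_DOUBLING:.2f} K")
```

At 0.13 K per W/m^2, a 3.7 W/m^2 doubling gives roughly half a degree, matching the figure above; the no-feedback value of roughly 0.27 K per W/m^2 is what rounds to the "around 0.3" quoted elsewhere in the thread.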

• See - owe to Rich
Posted Jul 8, 2009 at 3:18 PM | Permalink | Reply

Re: cba (#440), thank you for taking the time to look at my article. As for the clouds, my belief is that the solar cycles, which do not appear to vary enough in Total Solar Irradiance to account for the temperature changes, modulate the clouds. In that case I am already taking them into account. But I don't have any proof of that theory.

I found your last paragraph interesting, and I wonder if you can provide some references to more material on your work on albedo.

Re GTFrank (#21^2), my model is what you want (though David Archibald would no doubt provide you with an even cooler model). If you look at these graphs you will see that I expect cooling of around 0.2C between solar Cycle 23 and 24. Thus instead of HadCRUT3 0.38 for 1996-2006 we should see about 0.18 for 2009-2019. However, 2009 itself, predicted by Hadley to be 0.44, is currently on about 0.38, which is well above my average. So the next few years are a testing time for my model.

Rich.

198. GTFrank
Posted Jul 8, 2009 at 10:58 AM | Permalink | Reply

Climate, like the economy, historically peaks and declines. Just when a trend gets going, it seems to run its course and reverse for a while. I'm anxious to see the model that projects the next big climate (temperature) downturn (if this is not the beginning of it). Projecting the current trend into the future has not worked well for climate or for the economy.

199. Paul Cummings
Posted Jul 8, 2009 at 5:35 PM | Permalink | Reply

Guys, if I can interrupt this thread for a minute: does anyone have a link to carbon emissions year by year, going back, country by country? It's just this "80% of 1990" figure I can't find. I mean, when was this? Last year, next decade, 1911?
Please don't put yourselves out; it's just something that's been bugging me for a while.
TIA

200. David Jay
Posted Jul 8, 2009 at 7:47 PM | Permalink | Reply

Oops, link is here:

201. David Jay
Posted Jul 8, 2009 at 7:48 PM | Permalink | Reply

Okay, I give up, I can’t get the link to work. Here’s the URL.

http://motls.blogspot.com/2009/07/uah-june-2009-anomaly-near-zero.html

202. cba
Posted Jul 9, 2009 at 7:10 AM | Permalink | Reply

Rich (#442)

There's nothing published on it that I'm aware of. I took the albedo reconstruction from Palle & Goode (2007?), reading their chart to get the albedo values or delta values. I believe it was the RSS MSU satellite data that I used to get the delta for T over the same time frame. I then spent a few minutes trying to get an optimum correlation between them: I believe I used a simple running average of around 6 months and a lag for T of around a year. The result was a correlation of 0.7 and an average sensitivity of around 0.2 K per W/m^2. A simple Stefan's-law result for sensitivity is around 0.3 K per W/m^2, indicating that the net result in the atmosphere is less than a straight radiative solution would give, which means there is negative feedback reducing the required increase in T compared to a no-feedback situation. Since the T data are daily or monthly and the albedo data are annual, a little smoothing was applied to the T data but none to the albedo data. With more time (and better expertise) I think the value would come out about the same but with a much lower standard deviation; the current one is high mostly, I'm sure, because of the crudeness of the effort. Other details mentioned in the previous post are described in Kiehl & Trenberth (1997?). Between those sources, the raw details should be present.
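For readers who want to see the shape of this exercise, here is a minimal sketch in Python. It uses synthetic series in place of the real data (the Palle & Goode albedo record and the RSS MSU temperatures are not reproduced here), so the printed numbers are illustrative only; the ~6-month smoothing window, ~1-year lag, and 0.2 K per W/m^2 value are taken from the description above, everything else is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series standing in for the real data (assumption: the
# actual albedo-derived forcing and temperature series are not shown here).
months = np.arange(240)
forcing = 5.0 * np.sin(2 * np.pi * months / 60) + rng.normal(0, 0.5, months.size)  # W/m^2
true_sens = 0.2          # K per W/m^2, the sensitivity value reported above
lag = 12                 # temperature lags the forcing by about a year

temp = np.empty_like(forcing)
temp[lag:] = true_sens * forcing[:-lag]   # lagged response
temp[:lag] = true_sens * forcing[0]       # crude startup fill
temp += rng.normal(0, 0.05, temp.size)    # observational noise

def running_mean(x, w=6):
    """Simple running mean, standing in for the ~6-month smoothing described."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Shift temperature back by the assumed lag, then smooth both series.
f_s = running_mean(forcing[:-lag])
t_s = running_mean(temp[lag:])

r = np.corrcoef(f_s, t_s)[0, 1]
slope = np.polyfit(f_s, t_s, 1)[0]   # recovered sensitivity, K per W/m^2
print(f"correlation = {r:.2f}, sensitivity = {slope:.2f} K per W/m^2")
```

With clean synthetic data the regression recovers the built-in sensitivity almost exactly; the real exercise, with a short, gappy albedo record, is what produces the large standard deviation mentioned above.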

203. Clark
Posted Jul 9, 2009 at 9:46 PM | Permalink | Reply

You know, those bouncing red and black lines almost look like “a devil’s cauldron of melted icecaps, bubbling permafrost, and combustible forests from which there will be no turning back”

204. martha durham
Posted Jul 9, 2009 at 9:48 PM | Permalink | Reply

David Stockwell – It is so great to see you here. I would like to ask a question. If CO2 is being continually pumped, in ever increasing amounts, into the atmosphere, why would we see a decrease in temperature if CO2 were a leading variable? My second question: are variables such as cosmic rays, the sun's magnetism, cloud cover, hydrological issues, or even the fact that water vapor is a greenhouse gas dealt with in these models? If not, why not? If the major climate processes listed above were excluded, why were they excluded?

Thanks.

B

• Posted Jul 9, 2009 at 10:09 PM | Permalink | Reply

Re: martha durham (#13), A bit OT but if I can paraphrase your questions:
1. Do I think CO2 increases temperature? I don’t know, but probably less than the IPCC lower limit would suggest.
2. Does including more factors increase accuracy? No, largely because of inaccuracy in knowledge of the effects of other factors.

205. Gene Nemetz
Posted Jul 11, 2009 at 9:37 PM | Permalink | Reply

Waxman-Markey on mothballs :

After two days of hearings, Democratic leaders agreed to mothball the measure until September.

206. Posted Jul 12, 2009 at 2:21 PM | Permalink | Reply

I looked into this paper and commented to Jeff Id here

Four things bothered me when I heard about the paper
(1) the start date – why not earlier?
(2) the use of tree rings
(3) the use of ice core measurements right up to the present
(4) absence of any other metrics

Now I appreciate that ice cores are used to compare past items, but when they are compared to items in the present, I'm not convinced that the effects of 70-odd years' firnification and compression are adequately accounted for. I've not seen recent measurements calibrated against known past measurements, like CO2 content in the ice against CO2 measured at MLO. Sort of like UHI and bad station siting in reverse. And this was used to produce the Ice Hockey Stick of CO2 levels that Al Gore used in AIT, if I remember right. Jaworowski and Segalstad, ice experts, were livid about measurements that didn't allow properly for things like clathrates exploding and escaping when the ice samples were drilled and raised. Ferdinand Engelbeen disputes all this, but I'm not convinced, and neither is my current understanding good enough to crack the issue. I'm just highly mistrustful.

Perhaps someone here can shed light on these issues. The same issues also apply to the recent Eichler paper.

• Posted Jul 12, 2009 at 3:57 PM | Permalink | Reply

Lucy, there is an overlap of CO2 measurements of about 20 years (1960-1980) between the ice core data of Law Dome and the South Pole. The South Pole data are within the measurement error of the three ice cores (1.2 ppmv, 1 sigma). Here is the plot:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/law_dome_overlap.jpg

And you know my take on Segalstad/Jaworowski. I am still waiting for their reaction…
But this is not the right place to discuss that!

• kuhnkat
Posted Jul 12, 2009 at 10:04 PM | Permalink | Reply

Ferdinand,

As ice would be expected to function similarly, comparing similar experiments in ice core research simply means you can reproduce them. It does NOT prove any relationship to things outside the experiment.

It would be like saying you have robust results for the ability of tree rings to track temperature in bristlecone pines in one area because you obtained the same result from bristlecones in another area.

My OPINION is that ice core reading has a LOT of research ahead to PROVE what you are trying to tell us. I HEART Lucy.

• Posted Jul 13, 2009 at 3:36 AM | Permalink

Re: kuhnkat (#42),

I am not willing to start a discussion here, but there is a fundamental difference between ice core CO2, which is directly measured in the enclosed air with practically (*) the same composition as when it was enclosed, and proxies. Proxies like tree rings, which have some relation to the variables one is interested in (temperature and precipitation), are far more prone to misinterpretation.

A lot of the objections from Segalstad/Jaworowski against ice core readings were answered as early as 1996, by the drilling of 3 ice cores (with different drilling techniques, wet and dry) at Law Dome by Etheridge et al.:
http://www.agu.org/pubs/crossref/1996/95JD03410.shtml
Etheridge not only measured ice core CO2, but also in-situ CO2 in the firn, from the top down to the bubble-closing depth. At the bubble-closing depth, CO2 in the ice core bubbles and CO2 in the still-open bubbles were at the same level, although measured via completely different routes. That means there is no large fractionation or decrease of CO2 in ice cores. Neither is there a measurable migration of CO2 over the 800,000 years of Dome C and other ice cores. See my take on Jaworowski at:
http://www.ferdinand-engelbeen.be/klimaat/jaworowski.html

(*) There is a minute fractionation between CO2 (and other) molecules of different isotopic composition, the heavier ones being slightly enriched during migration through the firn; this is accounted for. And there is a small fractionation of the smallest molecules (O2, Ar) during closing time.

207. Craig Loehle
Posted Jul 15, 2009 at 10:58 AM | Permalink | Reply

Monkey business, sad to say, is not confined to climate science. In the study here:
http://junkfoodscience.blogspot.com/2009/07/calorie-restrictive-eating-for-longer.html
there was no difference in survival between monkeys on normal and calorie-restricted diets, but when the researchers removed causes of death they believed to be age related, there was. Note that monkeys on calorie-restricted diets could have died from many of the events claimed not to be age-"related", such as infections. Press releases and news coverage said calorie restriction was "proved" to extend life. Oh boy.

208. Posted Jul 15, 2009 at 4:05 PM | Permalink | Reply

People have a hard time with the concept of global warming without concrete proof, in other words, without seeing how it affects them. Most people have just gone through a very cold and bitter winter, followed by a normal summer. How do you convince people that there really is warming going on? It's difficult without some kind of disaster caused by warming.

209. NickB
Posted Jul 16, 2009 at 3:11 AM | Permalink | Reply

Can anyone shed some light on this for me? I recently stumbled upon this site:

http://clearclimatecode.org/

If you download and read the PDF presentation (the final bullet under 3.1), it would appear to me (very much a layman) that running GISTEMP under Python produces a markedly different (lower) temperature plot from that produced under Fortran.

Pardon me for being a bit dim on this subject, but how so? Which is more trustworthy?

210. Nick Stokes
Posted Jul 16, 2009 at 4:26 AM | Permalink | Reply

NickB
It's an interesting site, including its goals. It seems to include a complete Fortran version of the GISTEMP code, as well as a Python emulation. I think the discrepancy you see just means that the Python emulation wasn't right at the time of writing (September last year). The project is ongoing.

The difference between the curves seems to be a constant offset. The red curve may be just using a different base period.
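The base-period explanation is easy to demonstrate: the same temperature series expressed as anomalies against two different base periods differs by a constant everywhere. A small sketch (the temperatures are made up; 1951-1980 is GISTEMP's actual base period, and 1961-1990 is an assumed alternative):

```python
# Hypothetical annual mean temperatures (deg C), 1880-2008, with a mild trend.
years = list(range(1880, 2009))
temps = [14.0 + 0.005 * (y - 1880) for y in years]

def anomalies(temps, years, base_start, base_end):
    """Express a series relative to the mean of a chosen base period."""
    base = [t for t, y in zip(temps, years) if base_start <= y <= base_end]
    mean = sum(base) / len(base)
    return [t - mean for t in temps]

a_giss = anomalies(temps, years, 1951, 1980)   # GISTEMP's base period
a_alt = anomalies(temps, years, 1961, 1990)    # a different base period

offsets = [g - h for g, h in zip(a_giss, a_alt)]
# Every element is the same constant: the difference of the two base means.
print(f"constant offset: {offsets[0]:.3f} C, spread: {max(offsets) - min(offsets):.1e}")
```

The shapes of the two anomaly curves are identical; only the vertical position shifts, which is exactly what a constant offset between two plotted GISTEMP curves would look like.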

211. Laws of Nature
Posted Jul 16, 2009 at 11:57 PM | Permalink | Reply

Dear Steve and all of you,

I just posted this comment on an article at RealClimate, which most likely will never be released... could you give me your comments on this idea? (I hope Unthreaded is the right place for this post)

Hi there,
If Swanson's hypothesized 1997/8 step change is real, figure 1 shows that the model trends are way off (they only fit the measured temperature in the short interval around the jump).
The "non-cherry-picked" trend is about half of the 2 K/century.
Ironically, the flat red line might also be wrong, since it seems we are getting another El Nino jump this year.
That would mean 2 out of 3 wrong, where the one that is correct is the measurement and the other two are the predictions...
All the best regards,
LoN

212. Posted Jul 22, 2009 at 7:21 AM | Permalink | Reply

The consensus is in.

The Precautionary Principle is mandatory.

213. Posted Jul 22, 2009 at 8:42 AM | Permalink | Reply

Interesting: mathematics applied to the validation problem. Verification has already, for the most part, been cast as a math problem, with great success.

Fighting the Curse of Dimensionality: A method for model validation and uncertainty propagation for complex simulation models

• Posted Jul 22, 2009 at 9:01 AM | Permalink | Reply

Here’s the abstract:

Fighting the Curse of Dimensionality: A method for model validation and uncertainty propagation for complex simulation models

Abstract

This dissertation develops a method for analyzing a parameterized simulation model
in conjunction with experimental data obtained from the physical system the model
is thought to describe. Two questions are considered: Is the model compatible with
the data so as to indicate its validity? Given the experimental data, what does the
model predict about a given system property of interest when the uncertainty in the
data is propagated through the model?

Each of these questions is formulated as a constrained optimization problem.
Experimental data and their associated uncertainties are used to develop inequality
constraints on the parameter vector of the model. Similarly, prior information on
plausible values of the model parameters is incorporated as additional constraints.
Using constraints to describe the data readily enables the integration of diverse, heterogeneous data which may have arisen from multiple sources by the combination of
constraints that describe each piece of data. This aspect has led us to adopt the name
Data Collaboration for the collection of ideas described in this dissertation.

The optimization framework implicitly considers the ensemble of parameter values
that are compatible with the given data. This enables the implications of the model
to be explored without explicit consideration of parameter values. In particular, an
intermediate step of parameter estimation is not required.

The chief difficulty in the proposed approach is that constrained optimization
problems are highly difficult to solve in the general case. Hence a technique is developed
to over- and under-estimate the optimal value of an optimization. To develop
these estimates, the objective and constraint functions are approximated. Consequently
some rigor is sacrificed.

The investigation of three real-world examples shows the approach is potentially
applicable to complex simulation models featuring a high-dimensional parameter
space. In the first example a methane combustion model with more than 100 uncertain
parameters is invalidated. The procedure identifies two major data outliers,
which were corrected upon reexamination of the raw experimental data. The model
passes the validation test with these corrected data. Models for two cellular signaling
phenomena are also studied. These respectively involve 9 and 27 uncertain
parameters.
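To make the abstract's idea concrete, here is a toy sketch of the setup under a strong simplifying assumption of my own: a model that is linear in its parameters, so both the validation (feasibility) question and the prediction question become linear programs. The matrix, data, and bounds are all invented for illustration; the dissertation's actual method handles nonlinear models via the over- and under-estimation it describes.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear surrogate model: the prediction for experiment i is A[i] @ theta.
A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, -1.0]])
d = np.array([3.0, 3.0, 0.1])   # measured values (invented)
u = np.array([0.5, 0.5, 0.5])   # measurement uncertainties (invented)

# Data constraints d - u <= A @ theta <= d + u, stacked as A_ub @ theta <= b_ub.
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([d + u, u - d])
bounds = [(0.0, 2.0), (0.0, 2.0)]   # prior information on plausible parameters

# Property of interest: c @ theta. Minimizing and maximizing it over the
# feasible parameter set gives a prediction interval without ever picking
# a single "best" theta (no intermediate parameter-estimation step).
c = np.array([1.0, 1.0])
lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

if not lo.success:
    print("no parameter vector satisfies the data: model invalidated")
else:
    print(f"property of interest lies in [{lo.fun:.3f}, {-hi.fun:.3f}]")
```

An infeasible program plays the role of invalidation; here the program is feasible and the property is pinned to the interval [5/3, 7/3] by the combined constraints.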

214. See - owe to Rich
Posted Jul 22, 2009 at 3:14 PM | Permalink | Reply

HadCRUT3 for June is out, and above 0.5C anomaly for the first time since early 2007. Global warming is returning with ENSO apparently…or it could be a random blip. The average for the year is now over 0.4C, so could the Met Office prediction of 0.44C for the year turn out to be close?

Rich.

215. Jaye Bass
Posted Jul 22, 2009 at 4:22 PM | Permalink | Reply

A little factoid…

Outspending Exxon-Mobil by a Factor of 1,103 [Edward John Craig]

Jo Nova has a new report that puts a price tag on Big Green’s massive international funding — which includes \$32 billion by the U.S. government alone for “climate research” (compared with perennial bogeyman Exxon-Mobil’s \$23 million).

Published by SPPI, the report can be found here. Jo provides the following summary here:

The US government has provided over \$79 billion since 1989 for policies related to climate change, including science and technology research, foreign aid, and tax breaks. Despite the billions, “audits” of the science are left to unpaid volunteers. A dedicated but largely uncoordinated grassroots movement of scientists has sprung up around the globe to test the integrity of the theory and compete with a well-funded, highly organized climate monopoly. They have exposed major errors. Carbon trading worldwide reached \$126 billion in 2008. Banks are calling for more carbon trading. And experts are predicting the carbon market will reach \$2 to \$10 trillion, making carbon the largest single commodity traded.

Meanwhile, in a distracting sideshow, Exxon-Mobil Corp is repeatedly attacked for paying a grand total of \$23 million to skeptics – less than a thousandth of what the US government has put in, and less than one five-thousandth of the value of carbon trading in just the single year of 2008. The large expenditure in search of a connection between carbon and climate creates enormous momentum and a powerful set of vested interests. By pouring so much money into one theory, have we inadvertently created a self-fulfilling prophecy instead of an unbiased investigation?

216. Craig Loehle
Posted Jul 23, 2009 at 11:09 AM | Permalink | Reply

This story has it all: cherry-picked dates for analysis, FOI that is ignored, data eventually published that does not match the press release, agenda-driven “science”: http://reason.com/news/show/134481.html

enjoy!

217. TAG
Posted Jul 23, 2009 at 11:43 AM | Permalink | Reply

I wonder why someone else has not mentioned this. The August issue of Scientific American has a column in which Climate Audit is mentioned and Anthony Watts and “He Who Cannot Be Named” are named.

The column discusses the fact that support for AGW measures is falling in the US. It identifies as one factor the finding of “small” errors by Internet bloggers, which raises doubts in the uninformed public.

However, the column itself shows one reason why the US public has doubts: the effect of any error is downplayed. PR people emphasize transparency in dealing with a public issue: acknowledge any difficulty publicly and address it. This seems to be very difficult for AGW researchers. They cannot acknowledge that adequate data is difficult to obtain and that interpretations are fraught with difficulty. The world is facing a potentially very serious issue, and it is difficult for researchers to obtain clear-cut results; nevertheless the issue must be addressed with the best results that can be obtained. If AGW researchers could bring themselves to admit this, they would obtain much greater support. If the results are clear and the science is settled, then how is it possible for contrary results to be obtained by groups of skeptics with very little or no funding? If the world is faced with a problem that can have very serious consequences, and results are hard to find but action must be taken to deal with very serious possibilities, then contrary results can be expected and adjusted for.

Just as seriously, the column downplays any contrary result. One gets the feeling that one is being sold something, and that the salesman will say whatever needs to be said to close the sale. I always have the feeling that I am watching a vaporware sales presentation when I hear an AGW advocate speak. The AGW issue is potentially very serious. If they would just stick to that problem, not bring in their other pet environmental issues, and acknowledge that certain results cannot be expected, then support for their position would increase.

218. VG
Posted Jul 23, 2009 at 4:01 PM | Permalink | Reply

Whoa boy!
from solar cycle 24:
“A small new sunspot is trying to form in the middle of the solar disk. It belongs to Cycle 23. Just when you think that Cycle is over, another sunspot appears”. D Archibald = 100% correct

• See - owe to Rich
Posted Jul 25, 2009 at 2:50 PM | Permalink | Reply

Re: VG (#472), solar cycles always overlap, with periods near minimum when spots from both cycles can appear (even simultaneously, though obviously in different latitudes owing to the “butterfly diagram”). Therefore this Cycle 23 phenomenon doesn’t prove “Archibald right” – it is still possible that we are now past solar minimum (and possible that we are not).

Incidentally, NOAA didn’t give that nascent spot a Sunspot Number, and it faded quickly.

Rich.

219. Andrew
Posted Jul 23, 2009 at 11:42 PM | Permalink | Reply

A potential bombshell paper by Lindzen and Choi in GRL:
http://wattsupwiththat.com/2009/07/23/new-paper-from-lindzen/

220. Craig Loehle
Posted Jul 24, 2009 at 3:36 PM | Permalink | Reply

Does anyone know where in the IPCC report they derive their projections of CO2? I’ve been searching but can’t find it. craigloehl at aol.com
thanks.

• DeWitt Payne
Posted Jul 25, 2009 at 9:56 AM | Permalink | Reply

I believe that would be the Special Report on Emissions Scenarios (SRES). I’m pretty sure they used the same scenarios for the TAR and AR4.

• Bob Koss
Posted Jul 25, 2009 at 2:42 PM | Permalink | Reply

Re: Craig Loehle (#474),
I found this note referencing the TAR on page 69 of the ar4-wgI technical summary.

11 Approximate CO2 equivalent concentrations corresponding to the computed radiative forcing due to anthropogenic greenhouse gases and aerosols in 2100 (see p. 823 of the TAR) for the SRES B1, A1T, B2, A1B, A2 and A1FI illustrative marker scenarios are about 600, 700, 800, 850, 1,250 and 1,550 ppm respectively.
Constant emission at year 2000 levels would lead to a concentration for CO2 alone of about 520 ppm by 2100.

221. Ron Cram
Posted Jul 25, 2009 at 10:35 AM | Permalink | Reply

I just read something that really surprised me. Perhaps it should not have surprised me, but it did. I was reading an article by Rasmussen Reports on voters thinking about global warming. Here is a quote:

Republicans by nearly three-to-one say global warming is caused by planetary trends, while Democrats believe human activity is to blame by the same margin. Voters not affiliated with either party are almost evenly divided on the question.

Scientific conclusions are not reached by voting. How is it that scientific questions become so partisan? What is it about our society (our educational system or our institutions?) that is so defective that a scientific question becomes politicized? Perhaps I am just tired this morning, but reading the above paragraph was really depressing.

• jc-at-play
Posted Jul 27, 2009 at 6:28 PM | Permalink | Reply

Re: Ron Cram (#476),

How is it scientific questions become so partisan? What is it about our society [...] that is so defective that a scientific question becomes politicized?

To me, this poll suggests that American voters perceive climate science itself to be heavily politicized. It’s an open question whether that exhibits good or bad judgment on their part.

222. David Smith
Posted Jul 26, 2009 at 10:55 AM | Permalink | Reply

Interesting Accuweather article on the cool US Summer:
3,000 Low Temperature Records Set in US in July

223. Gene Nemetz
Posted Jul 26, 2009 at 3:15 PM | Permalink | Reply
224. Don
Posted Jul 27, 2009 at 3:53 AM | Permalink | Reply

From THE AGE newspaper (and yes – weather is not climate)
… the first reconstruction of [Australian] weather patterns for the four years to 1791, published today in The Australian Meteorological and Oceanographic Journal … Contemporary reports are backed up by an analysis of the meticulous weather journal kept by Lieutenant William Dawes, a scientist who sailed on the First Fleet. … That journal was rediscovered in the archives of the Royal Society in London in 1977. It, along with First Fleet logbooks and diaries, has been used by Professor Karoly, Joelle Gergis and Rob Allan to plot the daily temperatures and barometric pressure between September 1788 and December 1791. … The data were then compared with modern measurements taken from Sydney’s Observatory Hill weather station — located just 500 metres from the site where Dawes worked. “He gets the right seasonal variations, the right sort of maximum and minimum temperatures and very accurate pressure variations,” Professor Karoly said. … He said studying Australia’s climate variability before the 20th century was vital work, as it allowed present changes to climate trends to be viewed in a broader historical context.

225. Anthony Watts
Posted Jul 27, 2009 at 11:44 AM | Permalink | Reply

RE 178: I’ll take even silly, half-felt apologies. The point is that you crossed a line and, in the interest of keeping decorum, I felt an apology was needed.

Instead, you took your ball and went home.

The offer remains.

226. Andrew
Posted Jul 27, 2009 at 12:59 PM | Permalink | Reply

Apropos of my earlier question about how a change in OHC is related to surface temperature change, I see RP Sr is on this issue:
http://climatesci.org/2009/07/27/what-does-a-global-average-2-degrees-c-increase-mean-with-respect-to-upper-ocean-heat-content-change-part-i/

Can’t wait for part 2.

227. Posted Jul 27, 2009 at 2:02 PM | Permalink | Reply

Lonnie Thompson will be featured on NOVA’s Science Now at 9PM on PBS, July 28. On the online version July 29, Thompson will answer selected questions submitted by viewers via
http://www.pbs.org/wgbh/nova/sciencenow/0405/04.html.

Does anyone have a good question about when he’s going to archive his data, or whether the “Dr. Thompson’s Thermometer” attributed to him by Al Gore in AIT is really just the HS spliced together with instrumental temperatures, and not based on his ice core research at all?

There’s no guarantee that all questions will be asked, of course.

228. michel
Posted Jul 27, 2009 at 2:47 PM | Permalink | Reply

Well, ford, I have been banned from tamino. Who does not even have the courtesy to say he has banned one. I take it as a compliment, and wear it with pride! Go and do thou likewise.

229. Håkan B
Posted Jul 27, 2009 at 2:58 PM | Permalink | Reply

Hu McCulloch #195

Actually, the centigrade scale was invented by Celsius; in Sweden we still call it the Celsius scale. Although Celsius put the freezing point at 100 degrees and the boiling point at 0, this was reversed shortly after his death in 1744. It is known that Carl von Linné used it in his Hortus Upsaliensis (16 December 1745), with 0 as the freezing point and 100 as the boiling point.

230. tty
Posted Jul 27, 2009 at 3:01 PM | Permalink | Reply

Re 195

The Celsius thermometer scale was invented by the Swedish physicist Anders Celsius in 1741, and his original thermometer was used for the observations at Uppsala from that date (it is still preserved at the Meteorological Institution of Uppsala University).
Celsius, who was originally an astronomer, was very conscious of the need for careful calibration of instruments and wrote a paper, “Observationer om twänne beständiga grader på en thermometer” (Observations on two persistent degrees on a thermometer), in the Proceedings of the Swedish Academy of Sciences in 1742. In this paper he showed that the melting point and boiling point of water are extremely stable but vary with barometric pressure, which must therefore be corrected for when calibrating a thermometer.

231. Posted Jul 28, 2009 at 7:43 AM | Permalink | Reply

RE #488, I have sent the following question for Thompson to NOVA:

Dr. Thompson, according to your online vita, you were an official scientific advisor to Al Gore’s award-winning book and movie, “An Inconvenient Truth.”

One of the most dramatic pieces of evidence Gore presents is a graph of past temperatures that he calls “Dr. Thompson’s Thermometer.” He says that it is based on your ice core research, and that it provides independent confirmation of Michael Mann’s disputed “Hockey Stick” temperature reconstruction.

My question for you is, is the graph that Gore calls “Dr. Thompson’s Thermometer” really yours, or is it in fact just Mann’s “Hockey Stick”, spliced together with recent instrumental data?

If so, have you ever publicly clarified that this is not in fact your work?

If you have, where can the public find that clarification? Is there, for example, a press release on your Byrd Center website?

Or if there is no such clarification, don’t you have a responsibility to the public as Gore’s official scientific advisor to correct such errors, at least as they pertain to your own work?

J. Huston McCulloch
Professor of Economics and Finance
Ohio State University

Selected questions submitted before the webcast tomorrow will be answered by Thompson on the NOVA webpage on August 3.

There’s still time for someone else to ask him when he’s going to archive such-and-such data (eg Bona Churchill — search CA for Gleanings on Bona Churchill etc). Unfortunately, Steve is on the road and may not have the opportunity to get a question in on time.

232. Posted Jul 28, 2009 at 7:46 AM | Permalink | Reply

Re #490, 491, thanks, Hakon and tty!

233. John M
Posted Jul 28, 2009 at 5:14 PM | Permalink | Reply

Hmmm. Is the EPA outsourcing their mileage calculations to GISS?

…he went to the Environmental Protection Agency’s fueleconomy.gov Web site on Saturday to double-check the fuel economy rating for his 1987 Mercury Grand Marquis. When he had visited previously, the car’s combined city and highway fuel economy was rated at 18 miles per gallon, making it eligible for the program.
But on Saturday, he found something different: The fuel economy for his car had been raised to 19 mpg — one mile per gallon over the maximum fuel-efficiency allowed under the Car Allowance Rebate System (aka Cash for Clunkers). As a result, he became ineligible for a trade-in credit worth up to $4,500.

…as part of the official launch, the EPA conducted “quality assurance and quality control effort regarding fuel economy calculations on more than 30,000 vehicle model types spanning the past 25 years,” according to an e-mail sent by EPA spokesman Dale Kemery.

234. VG
Posted Jul 30, 2009 at 4:06 AM | Permalink | Reply

BTW Warmistas should be jumping with glee, as the AMSU temps have jumped dramatically during July 2009. Just you watch as all of a sudden AMSU will become “really very reliable,” as distinct from criticizing J Christy and R Spencer and Co. LOL

• Posted Aug 1, 2009 at 6:07 AM | Permalink | Reply

Re: VG (#498),
No, since unlike you I don’t hang my hat on any convenient data that supports a favored position.
The UAH problem with the changeover of the satellite has not been resolved, and until it is, the monthly anomalies don’t tell us much. In any case, if the pattern in UAH data since 2003 is to continue, an increase in anomaly for July would be expected following the annual minimum in May/June.

• Andrew
Posted Aug 1, 2009 at 11:08 AM | Permalink | Reply

Re: Phil. (#500), Okay, what gives Steve? I responded to this comment, saying that it is funny that Phil is calling himself a “warmista” and that he is confusing the daily AMSU product, which currently comes from NOAA-15, with the UAH monthly product, which comes from AQUA.

I assume you have no issue with my trying to set the record straight. Would I be correct, then, in surmising that you have not intentionally deleted those comments and the issue is technical? Thanks.

• Posted Aug 1, 2009 at 11:32 AM | Permalink

Re: Andrew (#501),

I didn’t call myself a ‘warmista’; I responded to that post because I was one of those who have been critical of the UAH product. I was not confusing the two products, but I don’t regard the AMSU data as reliable because it is uncorrected for drift; the fact that it’s ‘up’ in July as opposed to ‘down’ in June has no bearing on that decision. The monthly product I don’t regard as reliable for the reason stated. I’m not responsible for the terminology that VG uses.

• Andrew
Posted Aug 1, 2009 at 1:56 PM | Permalink

Re: Phil. (#502), Let me rephrase: you implicitly accepted the label of “Warmista” by responding to “Warmistas should be jumping with glee…” with “No, since unlike you I don’t hang my hat…” (emphasis added). I found that amusing, frankly!

Now, as for the monthly UAH product: it’s an interesting issue which may be resolved soon. But as “one of those” critical of UAH, I hope you are at least aware that the effect has almost zero influence on the trend (contrary to all the hand-wringing that the “discovery” led to).

And again, you confused a reference to the daily AMSU data (which, yes, requires corrections; what else is new) with the monthly UAH product. It is not VG who has the terminological problem; it’s you who tried to switch the topic from the daily data to the monthly UAH data. Why? I don’t know. To make a point about your “objectivity”? Apparently.

235. Posted Aug 1, 2009 at 4:14 AM | Permalink | Reply

Is Climate Change here already?

http://www.wired.com/wiredscience/2009/07/hightides/

236. Gunnar
Posted Aug 1, 2009 at 8:07 PM | Permalink | Reply

You guys seem to be getting a little tense, so I just wanted to point out that ACD Science is Unquestionable!

The continents rest on massive tectonic plates. Until the beginning of the Industrial Revolution in the mid 18th century, these plates were fixed in place and immobile. However, drilling for oil and mining for minerals has cut these plates loose from their primordial moorings and left them to drift aimlessly. “The potential for damage is truly catastrophic,” said Hans Brinker, a spokesman for the International Panel on Continental Drift (IPCD). “The continents are adrift due to the ruthless capitalist exploitation of the environment for profit. Unless immediate steps are taken to halt all oil and mineral extraction, we can expect a massive surge in earthquakes and volcanoes by next Tuesday.” The representative seemed close to tears during his announcement, a clear indicator of the severity of the threat.

If we all* become poor, tectonic plate movement will stop immediately. Socialism is the only way to preserve the earth and humanity!

*”All” only applies to North America and Europe as usual.

http://www.thepeoplescube.com/red/viewtopic.php?t=1668

This theory is more coherent than the one you are discussing, so you ‘Continental Drift Deniers’ need to shut up, or you won’t get any good health care!

237. TAG
Posted Aug 2, 2009 at 2:41 PM | Permalink | Reply

I have been trying to understand as much as I can about the controversy over climate models discussed in many postings (from Gerald Browning and others) on this blog. This is what I got from it. Does this analysis have any merit?

It is common in scientific investigations to take mathematical models beyond the point where they can be mathematically justified. This can be acceptable because the results are tested by experiment. The models themselves are representations of a physical theory and function as forms of approximate physical reasoning that carry a physical justification. The fact that they cannot be justified on a strict mathematical basis is offset by their constant reference to experiment. They are a means to the end of suggesting appropriate experiments.

Climate models go beyond this. They cannot be justified mathematically; the mathematics only functions if non-physical assumptions are made. This creates major issues. When tested, their results do not agree with experiment. This does not result in rejection of the model but causes the models to be adjusted with more non-physical parameterizations. Additionally, the models are not regarded as a means to suggest experiments but are taken as a model of reality. If there is data that cannot be fitted to a model, then the data is regarded as suspect, not the model.

The models take on the role of reality. They function as justifications for the preconceptions of their creators.

• Craig Loehle
Posted Aug 3, 2009 at 6:41 AM | Permalink | Reply

Re: TAG (#505), Pretty good summary.

• Ron Cram
Posted Aug 4, 2009 at 7:34 AM | Permalink | Reply

Re: TAG (#505),

You got several of the main points. One of the issues that really bugs me with the modelers is when they talk about doing “experiments.” “Experiment” is a word that should be reserved for nature; computer modeling runs are not experiments. Lately, I have been seeing the term “mathematical experiments.” I do not consider this much of an improvement. As you point out, changes to the model only change the hypothesis slightly. One modeler claimed that when he made changes to the model to improve accuracy on one parameter, another metric also grew closer to reality; to the modeler this proved he was “working on something real.” Hogwash.

At one time, I worked on a computer model of the stock market. I could almost perfectly hindcast a given stock or index, but the computer model had no predictive value. If it had predictive value, I would be Bill Gates rich today.

• John M
Posted Aug 5, 2009 at 5:07 PM | Permalink | Reply

Re: Ron Cram (#509),

At one time, I worked on a computer model of the stock market. I could almost perfectly hindcast a given stock or index, but the computer model had no predictive value. If it had predictive value, I would be Bill Gates rich today.

Ron, your problem was that you didn’t consult enough climate scientists. The current issue of Nature has an editorial pointing out if only economists were as good at modeling as climate scientists, we wouldn’t have been surprised by the economic meltdown.

The field [economic modeling] could benefit from lessons learned in the large-scale modelling of other complex phenomena, such as climate change and epidemics

Hmmmm, come to think of it, climate models are always predicting meltdowns. You know, maybe there’s something to this after all.

• John Baltutis
Posted Aug 5, 2009 at 5:16 PM | Permalink

Re: John M (#519),

Not to mention the mad cow thing in England a couple of years ago, based on those accurate epidemic models.

238. Posted Aug 4, 2009 at 6:24 AM | Permalink | Reply

I just sent the following to WUWT Tips & Comments. CA readers may want to read the article as well.

Anthony —
The New York Times Science Times section has an article this AM, “Nobel Halo Fades Fast for Panel on Climate,” at http://www.nytimes.com/2009/08/04/science/earth/04clima.html.

It’s mainly about the IPCC’s strategy for AR5 to regain the momentum that seems to have been lost since 2007.

However, there are some interesting quotes by John Christy and Gavin Schmidt. Christy: “It just feels like the IPCC has gone from being a broker of science to a gatekeeper.” Schmidt warns that computer models are still in early stages of development and that higher resolution won’t magically increase their accuracy.

Christopher Field, the chairman of one section of AR5, says that an important thrust will be “psychological and sociological research on how people act in the face of uncertain but substantial threats.” Does this mean that people who question the IPCC’s spokesmanship for science must be certifiably crazy?

Whooda thunk Gavin doesn’t have full confidence in climate models?

• James Lane
Posted Aug 4, 2009 at 7:09 AM | Permalink | Reply

Gavin was talking about regional forecasts. Doesn’t mean others don’t want to invoke them, e.g.

http://www.theaustralian.news.com.au/story/0,25197,25877361-11949,00.html

I particularly like:

One Japanese model predicts 149.7mm more rain by 2099 across Australia and one German model predicts 128.1mm less per year. Australia’s long-term, nationwide rainfall average is about 450mm a year.

• Mark T
Posted Aug 4, 2009 at 8:59 PM | Permalink | Reply

Re: James Lane (#508), 149.7 mm and 128.1 mm? There’s a point at which you begin to look foolish with such precision included in your estimates. Not that I would buy 150 mm and 130 mm, but Gavin wouldn’t look nearly as foolish with two significant digits rather than four.

Mark

• Posted Aug 4, 2009 at 11:10 AM | Permalink | Reply

Christopher Field: “psychological and sociological research on how people act in the face of uncertain but substantial threats.”

Here’s what it SHOULD say: “psychological and sociological research on how people act in the face of uncertain but substantial imaginary threats hyped by media and political entities to the point where they are believed as fact.”

So does that mean this whole AGW thing is a social engineering experiment?

• Follow the Money
Posted Aug 4, 2009 at 6:09 PM | Permalink | Reply

Hu, near the end of the Times article,

Christopher Field, a participant and chairman of one section of the forthcoming assessment, said that an important focus was psychological and sociological research on how people act in the face of uncertain but substantial threats.

“We’ve identified the nature of the problem, and social science shows it’s of the toughest category,” said Dr. Field, who directs the Carnegie Institution department of global ecology at Stanford.

Imagine that. Possibly a whole chapter dedicated to understanding “deniers.” When one cannot pin resistance on the oil industry (now fully in the carbon trading camp), psychologize the resistance as mental defect or irrationality.

• Follow the Money
Posted Aug 5, 2009 at 4:25 PM | Permalink | Reply

Re: Follow the Money (#515),
Update to my #515

Psychological barriers hobble climate action

WASHINGTON (Reuters) – Psychological barriers like uncertainty, mistrust and denial keep most Americans from acting to fight climate change, a task force of the American Psychological Association said on Wednesday.

Policymakers, scientists and marketers should look at these factors to figure out what might prod people to take action, the task force reported at the association’s annual convention in Toronto.

Numerous psychological barriers are to blame, the task force found, including: uncertainty over climate change, mistrust of the messages about risk from scientists or government officials, denial that climate change is occurring or that it is related to human activity. /cut

and the money quote,

It identified other areas where psychology can help limit the effects of climate change, such as developing environmental regulations, economic incentives, better energy-efficient technology and communication methods.

Send us psychologists more money!!

239. Ron Cram
Posted Aug 4, 2009 at 7:41 AM | Permalink | Reply

On Roger Pielke’s blog
Nicola Scafetta Comments on “Solar Trends And Global Warming” by Benestad and Schmidt. I highly recommend this comment. It is amusing to me to see Gavin Schmidt try to become Steven McIntyre. Too bad Schmidt is not nearly as good at it. McIntyre would have contacted Scafetta to inquire about his methods first before publishing. There is no evidence from this comment that Schmidt ever did that. Perhaps Schmidt is trying to live by his own words of “doing your own work.” To me, this is a comedy of errors. And Scafetta is well within his rights to rip Benestad and Schmidt for doing sloppy work and making unrealistic and contradictory claims.

240. Posted Aug 4, 2009 at 1:54 PM | Permalink | Reply

RE #493,
Lonnie and Ellen Thompson’s answers to selected questions posed by NOVA Science Now viewers are now online at http://www.pbs.org/wgbh/nova/sciencenow/0405/04-ask.html.

Either NOVA didn’t select my question about the authorship of “Dr. Thompson’s Thermometer”, or else the Thompsons were allowed to choose which to answer, and opted to skip over mine.

There were no questions about Bona Churchill, Guliya, Dunde, etc, either.

Their answers did include warnings about “contrarians” and websites that don’t end in .gov (RealClimate excepted), and praise for “gold standard” journals like Science and Nature.

• Kenneth Fritsch
Posted Aug 4, 2009 at 3:25 PM | Permalink | Reply

Either NOVA didn’t select my question about the authorship of “Dr. Thompson’s Thermometer”, or else the Thompsons were allowed to choose which to answer, and opted to skip over mine.

There were no questions about Bona Churchill, Guliya, Dunde, etc, either.

There were questions published that the Thompsons either waved off, that were softball in nature, or that were off topic. No give and take of a true science-based discussion. Can you say RCIPCCNOVA?

241. Posted Aug 4, 2009 at 3:08 PM | Permalink | Reply

I don’t think this has been discussed here before – apologies if I’m wrong.

Studying Uncertainty in Palaeoclimate Reconstruction: a network (SUPRAnet)

This appears to be a new international research effort, led by a UK researcher.

242. TAG
Posted Aug 5, 2009 at 9:38 AM | Permalink | Reply

from Pielke Sr’s blog

Because of the severe and naïve error in applying the wavelet decomposition, Benestad and Schmidt’s calculations are “robustly” flawed. I cannot but encourage Benestad and Schmidt to carefully study some book about wavelet decomposition such as the excellent work by Percival and Walden [2000] before attempting to use a complex and powerful algorithm such as the Maximum Overlap Discrete Wavelet Transform (MODWT) by just loading a pre-compiled computer R package.

There are several other gratuitous claims and errors in Benestad and Schmidt’s paper. However, the above is sufficient for this fast reply. I just wonder why the referees of that paper did not check Benestad and Schmidt’s numerous misleading statements and errors. It would be sad if the reason is because somebody is mistaking a scientific theory such as the “anthropogenic global warming theory” for an ideology that should be defended at all costs.

Nicola Scafetta, Physics Department, Duke University

http://climatesci.org/category/guest-editor-weblogs/

243. Andrew
Posted Aug 15, 2009 at 12:08 PM | Permalink | Reply

Hurricane season’s finally pickin’ up:

http://www.weatherstreet.com/hurricane/2009/Hurricane-Atlantic-2009.htm

It’s just a TS, but MAN! What a slow start. Hasn’t taken this long since the AMO shift at least!

244. Rob Spooner
Posted Aug 20, 2009 at 2:07 PM | Permalink | Reply

I’ve been asked to move a comment to this thread (or unthread), so here it is. I have run into a small problem with some NASA methodology. Looking at http://climate.nasa.gov/keyIndicators/ and just eyeballing the graph on the right, Sea Level Change 1993-present, I find that the data corresponds very closely with a straight line that they helpfully provide. The straight line begins at 1993.0 and ends around 2009.5. It rises from about -23 mm to +20 mm, or 43 mm, during a period of 16.5 years.

The caption reads “3.4 mm/year (estimate).” Now my methodology for getting the average would be to divide the rise of 43 mm by the 16.5 years, but that gives 2.6 mm/year. NASA would seem to be using some other methodology. Not division? I guess it’s something so important I wouldn’t understand.
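The endpoint arithmetic in the comment above can be checked with a short script. Note the -23 mm and +20 mm values are eyeballed from the graph, so the result is approximate only:

```python
# Endpoint slope of the NASA sea-level graph, using values read off
# the plot by eye (-23 mm at 1993.0, +20 mm at ~2009.5) - approximate.
rise_mm = 20 - (-23)          # total rise over the period, mm
years = 2009.5 - 1993.0       # elapsed time, years

rate = rise_mm / years        # simple endpoint slope
print(f"{rate:.1f} mm/year")  # 2.6, versus the captioned 3.4
```

An endpoint slope is cruder than a least-squares fit through all the data, but with data that hug the plotted straight line this closely, the two should agree to well within the 0.8 mm/year discrepancy at issue.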

• DeWitt Payne
Posted Aug 20, 2009 at 2:24 PM | Permalink | Reply

You get about 3.4 if you only look at the data through the end of 2005, i.e. 13 years. Apparently they didn’t bother to recompute the slope of the line through the data extending to 6/09. BTW, it’s 22 in 6/09, not 20, but that only increases the rate to 2.7 mm/year.

245. Dennis Corrigan
Posted Aug 20, 2009 at 5:53 PM | Permalink | Reply

http://www.bbc.co.uk/iplayer/console/b009y1st

Not quite the engineering-quality study on doubling CO2 that we await, but pretty much one-note on the subject.

Biased-BBC says:

IN THE HUDSON…

The Hudson and Pepperdine, to be precise. I was alerted by an intrepid B-BBC reader to this “comedy” on R4 this afternoon. It’s all about global warming and is in full-on propaganda mode. Laugh? I nearly paid my license fee. I couldn’t get past the first 8 minutes. I dare you to listen.

http://biased-bbc.blogspot.com/2009/08/in-hudson.html

246. frost
Posted Aug 20, 2009 at 9:19 PM | Permalink | Reply

I recently saw a graph posted by Jean S. showing the models versus observed temperature, where everything was set to zero sometime in the 1940s. I’ve just gone through a bunch of recent posts and couldn’t find it. Can anybody provide a pointer?

Thanks.

247. Nathan Kurz
Posted Aug 23, 2009 at 1:20 PM | Permalink | Reply

An epic story on the process of submitting a scientific comment:

http://scienceblogs.com/catdynamics/2009/08/how_to_publish_a_scientific_co.php

Perhaps some here might appreciate the parallels to Steve’s travails.

248. Mark H.
Posted Aug 24, 2009 at 6:46 AM | Permalink | Reply

If you didn’t know better, you would have sworn that Steve M. wrote this horror story:

http://www.scribd.com/doc/18773744/How-to-Publish-a-Scientific-Comment-in-1-2-3-Easy-Steps

249. Charlie
Posted Aug 24, 2009 at 11:51 AM | Permalink | Reply

Copenhagen Synthesis Report Updated. Again.

Perhaps others have already noted this, but yet another version of Copenhagen Synthesis report has been issued.

The revision date in the metadata of the latest Adobe PDF file at Copenhagen University is August 11th.

I don’t know what all of the changes are this time around, but it is clear that the Fossil Fuel Emission graph (page 9/36, above Figure 5) has changed.

It continues to amaze me that an important document like the Copenhagen Synthesis Report doesn’t have simple revision tracking and revision notes.

I have again contacted Australian National Univ to tell them that their copy is out of date, and have requested that they again issue a press release telling everyone that a revised version is available. I don’t think any other organization, including the host, Copenhagen Univ, has bothered to mention the updates.

250. See - owe to Rich
Posted Aug 24, 2009 at 2:43 PM | Permalink | Reply

[Also posted on CA Forum.] Isn’t there something fishy about July’s HadCRUT3 figures? They seem to agree with June’s to 3 places in both NH and SH. Such an event would be unprecedented in human history!

2009/05 0.398 0.414 0.383 0.564 0.232 0.398 0.392 0.565 0.232 0.565 0.232

2009/06 0.499 0.515 0.483 0.643 0.355 0.499 0.493 0.644 0.354 0.644 0.354

2009/07 0.499 0.515 0.483 0.693 0.306 0.499 0.493 0.693 0.305 0.693 0.305

Rich.
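The repeated values in the rows above can be checked mechanically; a minimal sketch (the rows are pasted from the comment, and I make no assumption about which HadCRUT3 column is which):

```python
# Compare the June and July 2009 rows column by column and flag
# which columns repeat to all 3 decimal places.
jun = "0.499 0.515 0.483 0.643 0.355 0.499 0.493 0.644 0.354 0.644 0.354"
jul = "0.499 0.515 0.483 0.693 0.306 0.499 0.493 0.693 0.305 0.693 0.305"

same = [i for i, (a, b) in enumerate(zip(jun.split(), jul.split())) if a == b]
print(same)  # indices of columns that repeat exactly: [0, 1, 2, 5, 6]
```

Note that by this comparison it is the first columns of each block that repeat, while the later columns in each row differ between the two months.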

251. henry
Posted Aug 25, 2009 at 6:10 AM | Permalink | Reply

Found on Reuters

ADDIS ABABA, Aug 24 (Reuters) – African leaders will ask rich nations for $67 billion per year to mitigate the impact of global warming on the world’s poorest continent, according to a draft resolution seen by Reuters on Monday.

Ten leaders are holding talks at African Union (AU) headquarters in the Ethiopian capital to try to agree a common stance ahead of a U.N. summit on climate change in Copenhagen in December.

Experts say Africa contributes little to the pollution blamed for warming, but is likely to be hit hardest by the droughts, floods, heatwaves and rising sea levels forecast if climate change is not checked.

The draft resolution, which must still be approved by the 10 leaders, called for rich countries to pay $67 billion annually to counter the impact of global warming in Africa.

And they’ll do what with this money? Buy an AC for every hut?

Instead of allowing the “rich” nations to use the money to cut their own CO2 levels, Africa says “keep it up, just send us money.”

And since the world’s two largest contributors to CO2 levels are called “developing nations”, they won’t have to pitch in to this impact fund.

On a side note, I did receive a letter from a Nigerian prince asking me to help move 5 million dollars out of the country. I told him to keep it; sounds like their country needs it more…

252. Posted Aug 25, 2009 at 7:30 AM | Permalink | Reply

African nations are the most vulnerable to AGW policy, not climate change. They need energy development more than the west and will be denied it through any Copenhagen deal.

So I say go for it. Make the west pay for stifling economic development through insane limitations on CO2 emissions.

253. srp
Posted Aug 25, 2009 at 2:12 PM | Permalink | Reply

Given the many, many complaints about how the climate science community handles its software, you might be interested in this general survey of natural scientists’ software practices. As I’ve tried to explain on some earlier comment threads, science and engineering cultures differ significantly and this survey perhaps illustrates these differences a bit more clearly.

Basically, most scientists are “power users” rather than software professionals and most of their software is intended to be used only by themselves. The social and professional context for a piece of software used to help in one’s own research is completely different from that for software used to control, say, nuclear power plants. The problem is that we need more of the latter standard for doing public policy but the climate science community is still in single-researcher mode.

• Andrew
Posted Aug 25, 2009 at 5:42 PM | Permalink | Reply

Re: srp (#532), You’ll forgive us engineers for thinking that scientists should get their asses in gear on that issue.

Now, for everybody. I think you will find a certain line in this amusing, in light of the secrecy of the CRUTemp data:
http://www.latimes.com/news/nationworld/nation/la-na-climate-trial25-2009aug25,0,901567.story

The chamber proposal “brings to mind for me the Salem witch trials, based on myth,” said Brenda Ekwurzel, a climate scientist for the environmental group Union of Concerned Scientists. “In this case, it would be ignoring decades of publicly accessible evidence.”

How can I take these people seriously? Publicly accessible? REALLY???

• srp
Posted Aug 26, 2009 at 7:46 PM | Permalink | Reply

Re: Andrew (#533),

I don’t disagree, although as far as the temperature data are concerned I think that the research scientists are not the solution – Steve M’s proposal that it be handled the way Census and Bureau of Labor Statistics data are handled is the best idea.

The problem with trying to turn climate models into nuclear-power-plant-quality software systems is that we have a very good scientific understanding of how nuclear power plants work, while we are nowhere near that (and may never be) for the Earth’s climate system. One aspect of the whole geoengineering debate that some people are missing is that we might get improved insight about how the climate works by doing experiments, similar to how engineered systems like emergency core cooling systems have been tested.

In one study of a biotech instrumentation firm, we observed scientists complaining that the engineers wanted to turn their technology into a process while the scientists were still trying to figure out what it could do and how to scale it up. The engineers complained that the scientists wouldn’t release their data and refused to answer questions about why they believed certain things about the technology–things the engineers wanted to see hard data on. Sound familiar?

254. John M
Posted Aug 25, 2009 at 5:57 PM | Permalink | Reply

srp and Andrew,

You guys just have to understand how science is done these days.

See the link posted by Mark McKinnon

More here

255. Punch My Ticket
Posted Aug 26, 2009 at 10:43 PM | Permalink | Reply

19 minutes well spent, the first 16 entertaining to all, the last 3 poignant for CA regulars.

Hans Rosling shows the best stats you’ve ever seen

• KevinUK
Posted Aug 27, 2009 at 8:47 AM | Permalink | Reply

PMT, thanks for the link to the Hans Rosling video. It puts Steve’s R-generated charts to shame ;-). I’m off to gapminder.org just now to see what they are all about. I agree with you: it was a most enjoyable 19 mins, and the message at the end to ‘free the data’ so that we can all analyse it (even using R, Steve!) is definitely poignant for CA. I think the title is wrong though. It should say ‘Hans Rosling shows the best animated data you’ve ever seen’.

KevinUK

• BarryW
Posted Sep 2, 2009 at 7:41 PM | Permalink | Reply

Wow, that was awesome. So data “misering” isn’t just with climate data. I wonder what a gapminder chart of temps vs urbanization would look like animated by country.

• Dennis Corrigan
Posted Sep 4, 2009 at 9:06 AM | Permalink | Reply

Similar – at least to my mind – is the excellent:

http://www.ted.com/talks/lang/eng/peter_donnelly_shows_how_stats_fool_juries.html

256. Posted Aug 31, 2009 at 4:29 AM | Permalink | Reply

FWIW (and by my assailable counting math :-/ ), when someone creates the next installment of Unthreaded, [i.e., n+3], it’ll be #40.

257. Bob Koss
Posted Sep 3, 2009 at 10:40 AM | Permalink | Reply

tty,
Since the subject was off-topic for the other thread I’m replying here and linking back.
Article.
My comment.

You seem to be assuming facts not in evidence. In my post I asked what the normal rate of extinction is, because I don’t think it is possible to quantify. I would be interested in how you came up with your figure of 100 times normal for birds. Could it be conjecture, subtly skewed by numerous repetitive assertions in the press similar to what I pointed out in that article?

I’m going to dispute your figure and assert that the bird extinction rate is between 75% and 125% of the normal rate, based on no evidence whatsoever. Whose figure is likely to be closer to reality?

I know little about extinction-related events. I suspect most of the ones you remember were of species that have been considered rare since discovery. Those may well have been on their way to extinction even if humans didn’t exist. The passenger pigeon extinction, circa 1900, was without a doubt caused by humans. But how many extinctions of other thriving populations can be attributed to humans? I suspect very few.

The USFWS disagrees with you about Bachman’s Warbler. They list it as endangered. You can use the bottom left Search For a Species: box to find it. Use my spelling.

Perhaps your memory dropped an extinction modifying bit such as ‘almost’ or ‘possibly’.
Maybe you’re getting like me.

• tty
Posted Sep 3, 2009 at 1:17 PM | Permalink | Reply

Re: Bob Koss (#540),

Well you happen to be dead wrong. My estimate is based on an analysis of the fossil and historical record of birds which I am very familiar with. The average “half-life” of a bird species (the time it takes for 50 % of species to go extinct) was about 2 million years until about 10,000 years ago. The figure hasn’t changed much since the Pliocene, and seems to have been much the same back in the Oligocene 30 million years ago (Phosphorites de Quercy fauna).
However, the rate started rising sharply in the latest Pleistocene, and during the last 10,000 years more than 500 bird species have gone extinct. That is 5% of all birds and equates to a half-life of c. 100,000 years. The 20 species lost in the last 60 years equate to a half-life of about 15,000 years. So yes, I think that the extinction rate at the present time is about 100 times higher than normal.

Incidentally, there is really no doubt that almost all of those 500 extinctions were caused directly or indirectly by humans. They start as soon as humans arrive in an area, but never before. In many cases we even know the specific cause (overhunting, habitat destruction, introduction of new predators, introduction of exotic diseases). And yes, all those birds were rare just before they went extinct, but a lot of them weren’t particularly rare originally. To take examples from the US, neither the Carolina Parakeet, the Heath Hen, the Ivory-billed Woodpecker nor the Eskimo Curlew was rare 200 years ago (incidentally, the Ivory-billed Woodpecker and Eskimo Curlew weren’t included in my figure of 20 extinctions, since it is not absolutely certain that they are extinct). On the other hand, the Labrador Duck and Bachman’s Warbler were never common, at least not in post-Columbian times. And I don’t have much hope for Bachman’s Warbler; nobody has seen or heard one for sure since 1964, and it has been searched for, believe me.
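The half-life arithmetic above can be sketched under the simplest assumption, a constant per-species extinction hazard; the round total of about 10,000 living bird species is my assumption for illustration, not a figure from the comment:

```python
import math

def half_life(extinct_fraction, years):
    """Half-life implied by a constant per-species extinction hazard:
    survival = 0.5 ** (t / T), so T = t * ln(2) / -ln(1 - fraction)."""
    return years * math.log(2) / -math.log(1 - extinct_fraction)

# ~500 of ~10,000 bird species (5%) lost over the last 10,000 years
print(round(half_life(0.05, 10_000)))          # roughly 135,000 yr
# 20 of ~10,000 species lost in the last 60 years
print(round(half_life(20 / 10_000, 60)))       # roughly 21,000 yr
# speed-up relative to the pre-human ~2-million-year half-life
print(round(2_000_000 / half_life(20 / 10_000, 60)))  # on the order of 100x
```

The exact half-lives come out somewhat above the c. 100,000 and 15,000 years quoted, but the order of magnitude, and hence the roughly 100-fold speed-up over the 2-million-year background, holds either way.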

• Bob Koss
Posted Sep 4, 2009 at 8:48 AM | Permalink | Reply

Re: tty (#541), thank you for elaborating on your reasoning. Having only a smattering of knowledge on the subject I find the information informative.

• D. Patterson
Posted Sep 15, 2009 at 2:12 PM | Permalink | Reply

Re: tty (#541),

Have you determined a comparison of avian extinction rates versus total forestation and deforestation throughout the period of avian species?

258. hengav
Posted Sep 3, 2009 at 10:06 PM | Permalink | Reply

I just finished watching The Nature of Things (CBC) on Antarctica. A direct Suzuki quote was: “in this part of the peninsula, average temperatures have risen 6 degrees in the last 50 years”.

Does anybody know of a part of the Antarctic that has warmed that much?

259. Dave Dardinger
Posted Sep 3, 2009 at 10:48 PM | Permalink | Reply

Here’s an interesting article concerning something that’s almost, but not quite, on topic here. I believe that a good while back there was a discussion about the distribution of numbers, such as street numbers. While this article ostensibly concerns strategy in certain sorts of games, I think it’s closely related to the number-statistics problem.

260. hengav
Posted Sep 3, 2009 at 11:08 PM | Permalink | Reply

I found these long stations from the CA archive:
http://www.climateaudit.org/?p=1989

I can’t find anything larger than 0.23 ± 0.09 degrees per decade in the 50-year reconstruction.

261. scp
Posted Sep 8, 2009 at 9:40 AM | Permalink | Reply

I’m not able to watch it yet, but this just came across my RSS feed. It may be of interest to some CA readers…
http://www.ted.com/talks/james_balog_time_lapse_proof_of_extreme_ice_loss.html
James Balog: Time-lapse proof of extreme ice loss
“Photographer James Balog shares new image sequences from the Extreme Ice Survey, a network of time-lapse cameras recording glaciers receding at an alarming rate, some of the most vivid evidence yet of climate change.”

262. Posted Sep 8, 2009 at 7:07 PM | Permalink | Reply

Maybe somebody could help me here. I’m puzzled as to why GISS warms so much in October. I was curious, per my comment above where I noted that 1978-2008 was very similar to 1911-1941, about the seasonality of the trends. In the global mean over 1978-2008, GISS has October showing the largest trend. In case anyone is wondering whether this is an effect of latitude, the most rapid warmings at high northern latitudes occur between mid September and mid December. There is also another Arctic rapid warming in March-April, which is accompanied by a mirror minimum in trends in the high southern latitudes, but there is no similar effect in the September-December trends in the southern latitudes; in fact, those are some of the few months which show warming at the lowest latitudes.

http://data.giss.nasa.gov/gistemp/seas_cycle.html

Compared to 1911-1941 the situation is odd, as that period is more symmetric through the year, though there is of course no data for Antarctica. The Deep Arctic also warms strongly from Nov-Feb and the warming is more latitudinally uniform, apart from the sharp trends in the Deep Arctic in boreal winter, whereas there is strong warming throughout the NH down to 30° latitude in the later warming. The northern latitudes also experience more summer warming.

All very interesting. Are these differences attributable to something? AGW? Hansen screwing around? (I note that the overall warming over these periods is basically equal in HadCRUT, while the later warming is considerably greater in GISS.) Does anyone produce similar zonal/seasonal maps with non-GISS data?
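For anyone wanting to reproduce this kind of per-calendar-month trend comparison, here is a minimal sketch. The function and the synthetic series are my own illustration (with an artificial extra October trend baked in), not GISS’s actual data or method:

```python
import numpy as np

def monthly_trends(anomalies: np.ndarray, start_year: int) -> np.ndarray:
    """OLS trend (deg/decade) for each calendar month of a monthly
    anomaly series beginning in January of `start_year`.
    Incomplete trailing years are ignored."""
    n_years = len(anomalies) // 12
    a = anomalies[: n_years * 12].reshape(n_years, 12)
    yrs = start_year + np.arange(n_years)
    trends = np.empty(12)
    for m in range(12):
        slope, _ = np.polyfit(yrs, a[:, m], 1)  # deg per year
        trends[m] = slope * 10.0                # deg per decade
    return trends

# synthetic example: 0.02 deg/yr in every month, doubled in October
rng = np.random.default_rng(0)
series = np.concatenate(
    [0.02 * (y - 1978)
     + np.where(np.arange(12) == 9, 0.02 * (y - 1978), 0.0)
     + rng.normal(0, 0.01, 12)
     for y in np.arange(1978, 2009)]
)
t = monthly_trends(series, 1978)
print(np.argmax(t))  # 9, i.e. October shows the largest trend
```

Applied to the actual GISS, HadCRUT, etc. series, the same function would give directly comparable month-by-month trend profiles for 1911-1941 versus 1978-2008.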

263. John M
Posted Sep 9, 2009 at 8:02 PM | Permalink | Reply

Nature has an editorial on data sharing.

And not a word about climate science.

264. Gerald Browning
Posted Sep 10, 2009 at 10:56 PM | Permalink | Reply

Climate Model Sensitivity

Gee, it appears that a climate model could be sensitive to a parameterization:

CGD SEMINAR
DATE: Tuesday, 15 September 2009
TIME: 3:30 p.m.
LOCATION: Mesa Lab, Main Seminar Room
NCAR, 1850 Table Mesa Drive
SPEAKER: Erich Fischer, CGD, NCAR
TITLE: Quantifying uncertainties in projections
of climate extremes – a perturbed land
surface parameter experiment
ABSTRACT:
Socio-economic and ecological impacts of changes in frequency and
intensity of climate extremes reach far beyond effects of mean
climate change.
Changes in climate extremes in response to a doubling of CO2 and
corresponding uncertainties are explored using a perturbed physics
ensemble. Based on CCSM 3.5 with a mixed-layer ocean, a 108
member ensemble experiment is performed by perturbing five poorly
constrained CLM land surface parameters individually and in all
possible combinations.
While the ensemble range of climate sensitivity is found to be
substantially smaller than in corresponding atmospheric ensembles,
temperature variability changes are highly sensitive to land surface
parameter changes. These variability changes have strong
implications for the tails of the distribution, the extreme events.
Consequently uncertainties of cold and heat extremes induced by
poorly constrained land surface parameters are very large.
Furthermore, simple land surface parameter perturbations are
revealed to regionally alter the sign of the precipitation response to
increased greenhouse gas concentrations. Projections of droughts
and heavy rainfall events are thus highly sensitive to land surface
parameters.
For more information, contact Gaylynn Potemkin, email potemkin@ucar.edu, tele. 303-497-1618

Jerry

265. DeWitt Payne
Posted Sep 11, 2009 at 9:21 AM | Permalink | Reply

I hope CCSM 3.5 is a big improvement over CCSM 3.0. The CCSM 3.0 runs used in AR4 had 21st-century trends nearly twice the average, which appears to be too high anyway.

266. Gerald Browning
Posted Sep 11, 2009 at 10:57 PM | Permalink | Reply

All,

The point is that in one of the first parameterization sensitivity tests I have seen, the climate model fails miserably. It is interesting that the test was of a land surface parameterization. If you recall, the surface drag term was the dominant contributor to the apparent improvement in a short-term forecast in Sylvie Gravel’s manuscript on this site. In her manuscript the artificial nature of the parameterization is evident, and it clearly shows how an artificial term can make things look better even though the term is not physically accurate. This new test just affirms that the parameterizations are tuned to balance an unrealistic enstrophy cascade caused by unrealistically large dissipation. If that artificial balance is disturbed, the capriciousness of the balance is revealed.

Jerry

267. John M
Posted Sep 12, 2009 at 6:48 AM | Permalink | Reply

What rhymes with thixotropy??

Tricks on a tree and mixology.

It’s all yours Mosh.

268. bender
Posted Sep 12, 2009 at 9:58 AM | Permalink | Reply

Iceland (Miller):
Asking too much of the data.
Varved record, but no correlation to met data.

Ok, Hu, Ken, et al, go ahead and poke fun at my skepticism.

269. Calvin Ball
Posted Sep 12, 2009 at 10:10 AM | Permalink | Reply

Thixotropy is pronounced such that it rhymes with “picks a copy”.

• John M
Posted Sep 12, 2009 at 10:24 AM | Permalink | Reply

Close, but that puts the emph-a-h-h-h-sus on the wrong syl-a-h-h-h-ble.

Thix a-h-h truh pee

But, poetic license is aloud in limericks.

(Got a feeling we’ll again be given an opportunity to pronounce “Zamboni”, soon)

• Calvin Ball
Posted Sep 12, 2009 at 10:41 AM | Permalink | Reply

Re: John M (#87), I’ve always heard it with no discernible stress at all. But in any event, the ‘O’ is short.

270. Posted Sep 13, 2009 at 3:04 AM | Permalink | Reply

I know this may seem to be off thread, in which case I apologise, but can anyone explain why no annual mean temperature data are available for Mauna Loa Slope Observatory since 1992, so that consequently we have to rely on Jacoby tree ring data (if any)? It appears that when the Clinton-Gore administration took office in Jan 1993, an Executive Order was issued that NOAA was on no account to record temperatures on cold days. At least that is what is evident from the NOAA data for that site. For example, in January 2008 and January 2009, no temperatures were recorded on 13 and 12 days respectively, so no valid mean was available for either month (or for the year in the case of 2008). Of course I am not suggesting malfeasance, only that all the Presidents’ money is not enough to finance temperature data collection at Mauna Loa. Nor do I dream of implying that Mauna Loa temperatures have been suppressed since 1992 because, set against the rising atmospheric CO2 which Pieter Tans heroically compiles there despite the cold, they refute the IPCC claim that rising [CO2] causes rising temperature. Requests for this data to tcurtin@bigblue.net.au will be met asap.
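For what it’s worth, the “no valid mean” behaviour described above is what falls out of a completeness rule of roughly this shape. The 9-missing-day threshold and the function itself are an illustrative assumption, not NOAA’s documented procedure:

```python
from statistics import mean
from typing import Optional

def monthly_mean(daily_temps: list,
                 max_missing: int = 9) -> Optional[float]:
    """Mean of a month's daily temperatures, with None marking days
    that were not recorded. Returns None (no valid monthly mean) if
    more than `max_missing` days are absent; the threshold is an
    illustrative assumption."""
    present = [t for t in daily_temps if t is not None]
    missing = len(daily_temps) - len(present)
    if missing > max_missing:
        return None
    return mean(present)

# a January with 13 of 31 days unrecorded -> no valid monthly mean,
# and hence no valid annual mean for that year either
jan = [20.0] * 18 + [None] * 13
print(monthly_mean(jan))  # None
```

Under any rule like this, one sufficiently gappy month is enough to void the annual mean, which matches the pattern the comment describes for 2008.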

271. Posted Sep 13, 2009 at 5:03 AM | Permalink | Reply

re Severian at #33: Yes indeed, and the behaviour of Hansen (GISS) in refusing even to include Mauna Loa in his data sets, or of NOAA in failing to ensure complete records since 1992, exemplifies your point in spades vis-à-vis the IPCC. What doctor, confronted by a patient complaining of fever, takes not his or her temperature but cites someone else’s temperature from 5 years ago and 5,000 miles away as proof that said patient does not have a fever? Until the IPCC addresses the temperature records at Mauna Loa, Cape Grim, Pt Barrow et al., none of which shows any trend against rising [CO2], its AR5 will be even worse, if possible, than its TAR (based on the hockey stick) and AR4 (not much better).

272. Posted Sep 13, 2009 at 7:05 AM | Permalink | Reply

Geoff (#35), thanks, but I could not go back beyond July 2009 from that link. I gather from Bill Kininmonth that I have to pay to get that kind of data from BoM, but if you have the actuals, to save me a bob, do please send them to me at
tcurtin@bigblue.net.au

• Geoff Sherrington
Posted Sep 14, 2009 at 5:16 AM | Permalink | Reply

IIRC, the AWN used to be daily, but maybe not. They are monthly now. I do not have daily up to the present. If “as it happens” is important to you, maybe there is some help from Laurier Williams [feedback@australianweathernews.com]

Sorry for my mistake, Geoff.

273. John M
Posted Sep 14, 2009 at 6:55 PM | Permalink | Reply

How often have we heard that peer review is the “gold standard” and blogs don’t count?

The episode has highlighted the way that blogging can immediately bring together expert opinion on a given topic, or, as Braddock puts it: ‘Poorly reviewed papers claiming novelty can be expected to be rapidly dissected in the blogosphere.’

I guess climate science is different.

• Calvin Ball
Posted Sep 14, 2009 at 8:18 PM | Permalink | Reply

Re: John M (#571), actually, poorly reviewed papers are rapidly dissected in the blogosphere. But the Team doesn’t care. They just ignore it. There’s hard science, and there’s herd science.

• John M
Posted Sep 14, 2009 at 8:42 PM | Permalink | Reply

Indeed. I was pointing out the irony (always a risk on a blog) of all the climate science defenders who are so quick to dismiss the value of blogs (except for RC of course).

You’d think these computer savvy folks would be more comfortable with the way things are done in the 21st century.

Other fields are beginning to realize it, but with climate science…

What was that old joke about the jackass and the two-by-four?

274. nevket240
Posted Sep 14, 2009 at 8:57 PM | Permalink | Reply

Copenhagen is getting closer. I can smell it. Why is it that CO2 is now a listed pollutant but propaganda is OK???

“Scientists find CO2 link to Antarctic ice cap origin
Mon Sep 14, 2009 7:48am EDT
By David Fogarty, Climate Change Correspondent, Asia
SINGAPORE (Reuters) – A team of scientists studying rock samples in Africa has shown a strong link between falling carbon dioxide levels and the formation of Antarctic ice sheets 34 million years ago.

The results are the first to make the link, underpinning computer climate models that predict both the creation of ice sheets when CO2 levels fall and the melting of ice caps when CO2 levels rise.

The team, from Cardiff, Bristol and Texas A&M Universities, spent weeks in the African bush in Tanzania with an armed guard to protect them from lions to extract samples of tiny fossils that could reveal CO2 levels in the atmosphere 34 million years ago.

Levels of carbon dioxide, the main greenhouse gas, mysteriously fell during this time in an event called the Eocene-Oligocene climate transition.

“This was the biggest climate switch since the extinction of the dinosaurs 65 million years ago,” said co-author Bridget Wade from Texas A&M University.

The study reconstructed CO2 levels around this period, showing a dip around the time ice sheets in Antarctica started to form. CO2 levels were around 750 parts per million, about double current levels.

“There are no samples of air from that age that we can measure, so you need to find something you can measure that would have responded to the atmospheric CO2,” Paul Pearson of Cardiff University told Reuters.

Pearson, Wade and Gavin Foster from the University of Bristol gathered sediment samples in the Tanzanian village of Stakishari where there are deposits of a particular type of well-preserved microfossils that can reveal past CO2 levels.

“Our study is the first that uses some sort of proxy reconstruction of CO2 to point to the declining CO2 that most of us expected we ought to be able to find,” Pearson said on Monday from Cardiff.

He said that CO2, being an acidic gas, causes changes in acidity in the ocean, which absorbs large amounts of the gas.

“We can pick that up through chemistry of microscopic plankton shells that were living in the surface ocean at the time,” he explained.

Evidence from around Antarctica was much harder to find.

“The ice caps covered everything in Antarctica. The erosion of sediments around Antarctica since the formation of the ice caps has obliterated a lot of the pre-existing evidence that might have been there.”

“Our results are really in line with the most sophisticated climate models that have been applied to this interval,” Pearson added. The results were published online in the journal Nature.

“Those models could be used to predict the melting of the ice. The suggested melting starts around 900 ppm (parts per million),” he said, a level he believes could be reached by the end of this century, unless serious emissions cuts were made.

(Editing by Tomasz Janowski)”

regards

• DeWitt Payne
Posted Sep 14, 2009 at 10:19 PM | Permalink | Reply

Re: nevket240 (#574),

Levels of carbon dioxide, the main greenhouse gas, mysteriously fell during this time in an event called the Eocene-Oligocene climate transition.

[my bold]

As I said to someone else who sent me a link to this press release, how do you know which was cause and which effect? Was there something that caused the initial cooling and the cooling ocean started sucking up CO2 or was there something that started sucking up CO2 first? In either case, what was the cause? I have no problem with CO2 amplifying a temperature change, but you have to have a mechanism to start it.

275. pete m
Posted Sep 14, 2009 at 11:12 PM | Permalink | Reply

Excuse my ignorance, but whatever happened to the data that Steve obtained from the server at the uni? Did he agree to delete it or did they just ignore him, therefore allowing him to make it available here?

cheers

276. Posted Sep 15, 2009 at 12:21 PM | Permalink | Reply

This sounds relevant to the Barton-Chu exchange discussed on CA threads #5880 and 5902. Comments appear to be closed there, so I’m posting it here:

NOVA
PRESENTS
Arctic Dinosaurs
Tuesday, September 15 at 8pm ET/PT on PBS
How did dinosaurs survive and even thrive in the gloom of the dark
and frigid polar regions? Arctic Dinosaurs explores this intriguing
but little-known enigma in contemporary paleontology. The program
follows a unique field expedition, covered exclusively by NOVA, as
it sets out for Alaska’s North Slope to defrost a jackpot of new
fossil clues. Researchers combine extreme engineering and perilous
fossil hunting–including blasting a deep tunnel into the permafrost
to collect fossils trapped beneath the icy soil–revealing
provocative new clues as to how the polar dinosaurs lived and to
their final extinction. Read about the perils of filming in this
remote Alaskan wilderness from producer Chris Schmidt on the
program’s companion website.

http://www.pbs.org/wgbh/nova/arcticdino

277. Calvin Ball
Posted Sep 16, 2009 at 12:55 PM | Permalink | Reply

Interesting. I just did a whois on realclimate, and now they’re anonymous.

Nope, they’re not embarrassed by their associations; they proudly display them for the world to see. Or at least they did a year ago.

• Bob Koss
Posted Sep 16, 2009 at 5:17 PM | Permalink | Reply

Re: Calvin Ball (#579), I just looked and whois now shows Environmental Media Services Washington DC as the registrant.

278. Calvin Ball
Posted Sep 16, 2009 at 6:12 PM | Permalink | Reply

Must be a buggy whois agent that I used. I did: http://www.whois.net/whois/realclimate.org, and it came up with:

WHOIS information for realclimate.org :

Domain ID: D105219760-LROR
Domain Name: realclimate.org
Created On:
Expiration Date:
Sponsoring Registrar: eNom, Inc. (5065-EN)
Status: ok
Name Server:
Name Server:
Registrant ID: Unknown
Registrant Name:
Registrant Organization: Unknown
Registrant Street1: Unknown

It looks like other ones are finding it. Sorry. False alarm. This whois.net site isn’t reliable.

279. DeWitt Payne
Posted Sep 17, 2009 at 2:30 PM | Permalink | Reply

I was looking at the global sea ice area anomaly at Cryosphere Today and noticed something that is probably coincidental. It will be decades before we could possibly know.

This is purely eyeball; I probably have enough data to do a better estimate, but I haven’t gotten around to it yet. The global sea ice anomaly looks to be nearly flat to about 2000; then there may be a negative trend to at least 2007. GMST was increasing until about 2000 and has been remarkably flat since. Ocean heat content has also been remarkably flat for the last five years or so. Ice acts as an insulator between the ocean and the atmosphere, so more ice area means colder surface temperatures and less radiation to space. So does sea ice act as a thermostat, opening if the temperature gets too high? There are all sorts of complications for any sort of quantitative model; for one, ice albedo is higher than water’s, so more sunlight is reflected. It is falsifiable, though: we can reject the hypothesis if global sea ice area continues to decline and GMST starts increasing again.
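A step beyond the eyeball estimate would be fitting separate OLS trends on either side of a candidate break year. Here is a sketch on synthetic data; the series and the break year are placeholders, not the actual Cryosphere Today anomaly:

```python
import numpy as np

def split_trends(t: np.ndarray, y: np.ndarray, t_break: float):
    """OLS slopes (units of y per unit t) before and after t_break."""
    pre, post = t < t_break, t >= t_break
    slope_pre = np.polyfit(t[pre], y[pre], 1)[0]
    slope_post = np.polyfit(t[post], y[post], 1)[0]
    return slope_pre, slope_post

# synthetic monthly anomaly: flat to 2000, then declining, plus noise
rng = np.random.default_rng(1)
t = np.arange(1979, 2009, 1 / 12)
y = np.where(t < 2000, 0.0, -0.1 * (t - 2000)) + rng.normal(0, 0.05, t.size)

s_pre, s_post = split_trends(t, y, 2000)
print(round(s_pre, 2), round(s_post, 2))  # near 0.0 and near -0.1 per year
```

With the real anomaly series one would also want to scan the break year rather than fix it at 2000, and to check whether the post-break slope is distinguishable from the pre-break one given the noise.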

• BarryW
Posted Sep 17, 2009 at 6:14 PM | Permalink | Reply

Bob Tisdale had a blog entry on the effect of the El Niño, showing a graph of the temps propagating to the polar regions, with about the same time line you’re contemplating.

• bender
Posted Sep 18, 2009 at 3:14 AM | Permalink | Reply

So does sea ice act as a thermostat, opening if the temperature gets too high?

A polar iris. Why not?

• bender
Posted Sep 18, 2009 at 3:18 AM | Permalink | Reply

We can reject the hypothesis if global sea ice area continues to decline and GMST starts increasing again.

Although there could be a lag of several years between the two processes due to thermal inertia of the oceans.

• BarryW
Posted Sep 18, 2009 at 7:40 AM | Permalink | Reply

Re: bender (#587),
Notice the graph is asymmetric in Arctic/Antarctic heating, yet the initial temp rise is at least visually centered. Currents/winds? The lack of sea ice reduction in the Antarctic would also seem to support the hypothesis.

• bender
Posted Sep 18, 2009 at 7:50 AM | Permalink

Re: BarryW (#591),
Yes, I saw that. Just difference in ocean basin geometry would be my guess. Faster heat diffusion to the SH than NH. Makes sense to this non-climatologist.

• BarryW
Posted Sep 18, 2009 at 1:22 PM | Permalink

Re: bender (#592),

Thinking about it after reading your comment, that makes sense in terms of heat vs. temp. The heat would be transferring to a smaller body of water in the northern hemisphere while diffusing in the southern, right? So you would expect a difference in temp but maybe not in total heat transfer.

I wonder what the effect would be on the winds? Could that be a driver for the ice being pushed out of the arctic basin?

280. nevket240
Posted Sep 18, 2009 at 1:24 AM | Permalink | Reply

Don’t worry about the bloody sea-ice, this is serious. Already AGW is causing shortages.
regards,

African Condom Shortage Said to Worsen Climate Impact (Update2)

By Jim Efstathiou Jr.

Sept. 17 (Bloomberg) — Unwanted pregnancies in poor countries have led to higher demand for land and water, resources already taxed by climate change, according to research to be published by the World Health Organization.

Runaway population growth in countries such as Ethiopia and Rwanda where contraceptives are in short supply is exacerbating drought and straining fresh water supplies, said Leo Bryant, lead author of the study. Of 40 nations reviewed, 37 said rapid population growth worsened environmental damage.

Climate change has been blamed by scientists for increasing droughts, pushing up sea levels and causing floods from heavy rainfall in countries across the globe. The impact can be worse in developing nations where food and water already are in short supply and there is little funding to help communities adapt.

“It’s time to start looking at the environmental relevance of family planning,” Bryant, an advocacy manager for the London-based reproductive health-care provider Marie Stopes International, said yesterday in a telephone interview. “Reproductive health services ought to be integrated into the climate adaptation strategy.”

Bryant analyzed national plans to adapt to climate change submitted to the United Nations by 40 poorer countries. Most said demographic trends were “interacting” with climate change to speed the degradation of natural resources and raise the risk of extreme weather events. He said the findings are set to be published in November by the Geneva-based World Health Organization, which coordinates UN health policy.

Rwanda Family Planning

Population growth rates “have significant impacts on the state of the environment, aggravating vulnerability and adaptation needs,” the Pacific island nation of Kiribati said in a report to the UN. “In this respect, population policy is an important consideration of adaptation strategies.”

The 33-island archipelago risks being submerged in coming years because of higher seas and may purchase land elsewhere to relocate its people, President Anote Tong said in February.

In Rwanda, where only 10 percent of adults have access to reproductive health-care services and protection such as condoms, demographic pressures are forcing a migration to less-populated areas already prone to drought and desertification, Bryant said. In Bangladesh, a higher sea level is shrinking fresh water supplies even as a growing population demands more water.

East African Drought

Climate change and poor management of water resources are causing a “severe” drought in eastern Africa, according to the Netherlands-based Wetlands International, which promotes the restoration of wetlands. Kenya’s Lake Naivasha, normally a 30,000-hectare (74,000-acre) site, is in danger of vanishing, the group said.

“Unsustainable water use and pollution has driven the local farmers and fishermen into a situation where they can no longer live off the basic support and benefits of the wetlands,” an article on the group’s Web site said today.

Only six of the 37 countries that said population growth compounds water scarcity or threatens biodiversity proposed solutions through climate-adaptation plans, Bryant said. The world’s population is projected to grow from 6.8 billion people at present to 9.2 billion by 2050.

“We’re not in any way proposing that government should start telling people how many children to have,” Bryant said. “Children should be by choice. The problem is that a majority of people in sub-Saharan Africa don’t have that right because they don’t have access to contraception.”

To contact the reporter on this story: Jim Efstathiou Jr. in New York at jefstathiou@bloomberg.net.

Last Updated: September 17, 2009 09:28 EDT

OMG!!!!

• henry
Posted Sep 18, 2009 at 3:21 AM | Permalink | Reply

Re: nevket240 (#584), the story is believable, except that China’s had the one-child-per-family rule for quite a while now.

Hasn’t slowed them down much.

281. bender
Posted Sep 18, 2009 at 3:23 AM | Permalink | Reply

The arcs in BarryW’s graph suggest that the lag could be several years. 3? 4? A wavelet analysis should resolve that parameter. Lucia?
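Short of a full wavelet analysis, a plain lagged cross-correlation would already bound the lag parameter. A sketch on synthetic monthly series (these are placeholders, not the actual ENSO or sea ice data):

```python
import numpy as np

def best_lag(x: np.ndarray, y: np.ndarray, max_lag: int) -> int:
    """Lag (in samples) at which y correlates best with x shifted forward."""
    corrs = [np.corrcoef(x[: len(x) - k], y[k:])[0, 1]
             for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

# synthetic monthly series: y echoes x 30 months (2.5 years) later
rng = np.random.default_rng(2)
x = rng.normal(0, 1, 600)
y = np.roll(x, 30) + rng.normal(0, 0.3, 600)
print(best_lag(x, y, 60))  # 30
```

On real series one would first remove trends and seasonality, since both inflate correlations at every lag; a wavelet analysis would additionally show whether the lag is stable over time.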

• See - owe to Rich
Posted Sep 18, 2009 at 11:22 AM | Permalink | Reply

Re: bender (#589), to my eye it looks more like 2.5-ish years than 4. If we use 2.5 to predict, then the La Niña of early 2008 would peak in Arctic coolth in summer 2010. That suddenly makes me more sanguine about a further recovery of ice next year. But in 2011 it could be heading back down again.

We need to be careful about this (first with better statistical analysis) because the period of data is PDO- and AMO-positive. There could be interactions with those things.

Rich.

282. bender
Posted Sep 18, 2009 at 3:33 AM | Permalink | Reply

Tisdale also mentions inexplicable lags between sea level anomalies and ENSO events. Maybe it is the product of lagged Arctic sea ice melt (?), consistent with DWP’s polar sea ice “iris effect”. Should check the literature to see if this has been reported already.

283. Gerald Browning
Posted Sep 18, 2009 at 2:53 PM | Permalink | Reply

All,

But the climatologists claim to know everything about the sun:

Sun Surprisingly Active During Low Point in Cycle
By Rachael Rettner
SPACE.com Staff
posted: 17 September 2009
09:03 am ET

Just looking at the number of sunspots doesn’t provide a full picture of how the sun’s solar energy impacts Earth, a new study suggests. The findings contradict previous thinking about how the sun behaves during low points in its solar cycle.

“What we’re realizing is that the sunspots do not tell the whole story,” said Sarah Gibson, a scientist from the National Center for Atmospheric Research (NCAR) in Boulder, Colo.

Sunspots are areas of concentrated magnetic activity that appear as dark dots on the sun’s surface. The number of spots periodically rises and falls in what has become known as the solar cycle. This cycle lasts about 11 years, taking roughly 5.5 years to go from “solar minimum,” or a period of time when there are few sunspots, to the cycle peak, or “solar maximum,” during which there are many sunspots.

During cycle peaks — the next one is expected in 2013 — there are frequent solar flares and geomagnetic storms, events that send out radiation that can bombard the Earth’s atmosphere, damaging satellites and disrupting power grids. Scientists say that a really bad solar storm, akin to one that started fires along telegraph lines in 1859, could bring modern society to its knees.

Conversely, cycle minimums were thought to be very quiet times, periods when the Earth would not experience as many blasts of solar energy. But Gibson’s study shows that this is not necessarily the case.

Full force in 2008

Gibson and her colleagues compared measurements from two different cycle minimums — one from 1996 and one from 2008. They analyzed a type of solar energy called the solar wind — streams of charged particles that accelerate out from the sun’s extremely hot atmosphere. The solar wind, unlike short-lived solar storms, streams from the sun pretty much constantly but with varying intensity.

They found that, while the solar winds intersecting the Earth largely disappeared in 1996, they continued to hit the Earth at full force in 2008.

These results show that “what we thought was a typical solar minimum wasn’t, and what we’re seeing now is a different animal,” Gibson said.

Scientists previously thought that during solar minimums, solar winds would simply blow out the top and the bottom of the sun, and not come out near the equator. Since the Earth is close to the same latitude as the sun’s equator, it shouldn’t experience much solar wind during a low point in the solar cycle.

“If you imagine holding a hose and you hold it straight up, you would spray up, and if you had a friend standing nearby, they might get a little wet, but not soaked,” Gibson said. But during the current solar minimum in 2008, the fire hose was still pointing at the Earth.

“The last two solar minimums, this didn’t happen. When the sunspots went away, these fire hoses went away too,” she said.

In fact, the solar wind’s effect on the Earth’s radiation belt — a ring of charged particles around the planet — was three times greater in 2008 than in 1996. While the effects of solar winds aren’t as drastic as those of a solar flare, they can still interfere with satellite orbits and radio communications.

Changing as you read this

This year, the solar winds are starting to taper off. However, Gibson was surprised that the reduction of winds lagged so far behind the decrease in sunspots.

What could account for this difference in minimums? Gibson thinks it may have something to do with the fact that this cycle’s minimum has been historically wimpy — there were fewer sunspots during this minimum than during any minimum in the last 75 years.

“In a minimum when you have a really strong polar field, it can clamp down everything else at lower latitudes down towards the equator, and really the only action is what’s coming out the poles, and you get what we thought was a classic solar minimum picture, and you don’t get any wind escaping,” Gibson said. “But because we have such a weak magnetic field this cycle compared to last cycle minimum … the polar field is not as strong, it can’t clamp down, and stuff kind of escapes, you get these streams that squirt out at lower latitudes near the equator instead of just at the poles,” she said.

Scientists are trying to learn more about the solar cycle to understand why it occurs and what accounts for cycle differences.

The solar cycle is “something that’s become very important to understand in the space age because we have all these satellites, we have astronauts out there, and you have to know what space weather they’re likely to face,” Gibson said.

The study was led by Gibson, and the research team included scientists from NCAR’s High Altitude Observatory, the University of Michigan, NOAA and NASA. The results will appear this week in the Journal of Geophysical Research. The research was funded by NASA and the National Science Foundation.

* Gallery: Solar Storms
* Video: How Space Storms Wreak Havoc on Earth
* The Great Solar Storm of 1859

Note that I wrote a manuscript with Tom Holzer (NCAR HAO) and was shocked at what I discovered about the

Jerry

284. nevket240
Posted Sep 18, 2009 at 8:33 PM | Permalink | Reply

http://www.marketoracle.co.uk/Article13528.html

I’m sure that after reading this article you feel the same way as I did. It is long but worth it.
I’m not of the conspiracy crew; instead, I find the remarks about the government agency involved to be a remarkable image of GISS.

regards from a cold, wet Southern OZ.

• See - owe to Rich
Posted Sep 19, 2009 at 11:19 AM | Permalink | Reply

Hey, mate (mite in Strine?), you can’t complain about cold and wet, you’ve just had your warmest winter ever, or so we are reliably (?) informed.

Cheers,
Rich.

• BarryW
Posted Sep 19, 2009 at 11:57 AM | Permalink | Reply

Re: nevket240 (#596),

this will get snipped, but I can’t let this go without comment. That article is basically bunk. Fires can cause structural failure; see here.

• nevket240
Posted Sep 19, 2009 at 8:47 PM | Permalink | Reply

Re: BarryW (#598),

BarryW, if you read what I wrote you will come to the conclusion that I was not agreeing with anything to do with the conspiracy aspect. Don’t the comments about the Govt Dept involved resonate?

Rich, you have a very strong accent. The winter was warm enough to ensure a brilliant snow season. Some of OZ was warm, but I can assure you, as a shift worker, that the nights here in Vic have been blo*dy cold. Colder than last year.

regards

285. Rob Spooner
Posted Sep 19, 2009 at 12:23 PM | Permalink | Reply

When a new helioseismic model was reported in June to have successfully predicted that sunspots would finally start appearing in quantity, it was seized upon in the press as an “explanation” even though it was pretty close to speculation. A trend with one data point, and one incomplete instance that is not inconsistent, is hardly newsworthy.

Three months have now passed and the theory is now based on one confirming data point and one exception. Shouldn’t there be another press conference, in which the claims of the first press conference are disclaimed?

286. Gerald Browning
Posted Sep 19, 2009 at 4:18 PM | Permalink | Reply

Robert Spooner (#599),

NCAR publishes favorable news releases on its web site all the time. Anyone who has performed scientific research knows that it can take years before a particular theory is validated or invalidated. So the news media should be ashamed of themselves for publishing results that are so fresh, and NCAR knows better but wants the publicity.

Jerry

287. Posted Sep 23, 2009 at 7:02 PM | Permalink | Reply

Pls ignore this. I’m just testing (for myself) the “code” tag for tabularized data:
.

9/5/2009 5340156 -25625 -27678
9/4/2008 4868906 -58125 -42031
9/5/2007 4484531 -43594 -25759
9/5/2006 5934531 -782 -3259

.
9/5/2009 5340156 -25625 -27678
9/4/2008 4868906 -58125 -42031
9/5/2007 4484531 -43594 -25759
9/5/2006 5934531 -782 -3259
.
9/5/2009 5340156 -25625 -27678
9/4/2008 4868906 -58125 -42031
9/5/2007 4484531 -43594 -25759
9/5/2006 5934531 -782 -3259
.

-Thx

288. DeWitt Payne
Posted Sep 24, 2009 at 1:32 PM | Permalink | Reply

The Archer MODTRAN page wasn’t working when I tried it just now. I’m hoping it’s a temporary glitch. I’m assuming that if he died or something, it would show up on a Google search and there’s nothing new there.

289. Pat Frank
Posted Sep 24, 2009 at 10:21 PM | Permalink | Reply

Larry Gould has published my response letter in the Fall edition of the newsletter of the New England chapter of the APS. Larry is a physicist on the faculty of the University of Hartford, and is co-Editor of the newsletter. He takes a strongly critical position in opposition to AGW. His web page is here, with lots of links and commentary.

For anyone who wants to see the letter in NES-APS situ, it’s here, about half-way down the page.

My letter was in response to a prior letter by Emeritus Prof. Frank Levin in support of AGW, that appeared in the Fall 2008 NES-APS newsletter, here.

Here’s the text of my reply:

Editors
New England Chapter

Editors:

I still find myself impressed when a trained physicist engages a scientific question with a polemic, as Prof. Levin has done concerning climate warming (APS NES Fall 2008). The real climate-warming question is attribution and is very simple: ‘Can the observed climate warming be scientifically attributed to anthropogenic CO2?’ A validly physics-based answer makes superfluous all polemical arguments. One would have expected a trained physicist to approach a scientific question with that clarity, but Prof. Levin did not.

One must look deep within Working Group 1 (WG1) Chapter 9 of the IPCC’s Fourth Assessment Report (4AR) to find candid descriptions of attribution studies, and in the Chapter 9 Supplement Appendix 9.B: “Methods Used to Estimate Climate Sensitivity and Aerosol Forcing”, one reads:

[D]iscrepancy between observed and simulated temperatures [can be] due to, for example, observational uncertainty, forcing uncertainty or internal climate variability. These uncertainties affect the width of the likelihood function, and thus the width of the posterior distribution. … The prior distribution p(q) that is used in this calculation is chosen to reflect prior knowledge and uncertainty (either subjective or objective) about plausible parameter values, and in fact, is often simply a wide uniform distribution[,indicating] that little is known, a priori, about the parameters of interest… Even so, the choice of prior bounds can be subjective. … Alternatively, expert opinion can also be used to construct priors (Forest et al., 2002; 2006). Note, however, that expert opinion may be overconfident (Risbey and Kandlikar, 2002) and if this is the case, the posterior distribution may be too narrow.

The consensus view is thus that prior bounds on critical parameters within climate models are poorly constrained, subjective, and likely to reflect over-confidence. WG1 Chapter 8 Supplemental Figures S8.5 and S8.14, among other figures, show model errors far greater than the ~2.7 W/m2 GHG forcing the same models are purported to detect.

These levels of uncertainty do not allow any reliable assignment of attribution. The large disparity between the actual state of the science and the confidence expressed in the recent IPCC Summary for Policymakers should give any physical scientist skeptical pause.

The “Hockey-stick” proxy temperature constructions were discussed in prior replies, but it should be further noted that there are no known biomarkers or physical metrics whatever that allow extraction of a temperature from a tree ring. All judgments of specifically temperature-limited tree-growth are qualitative and subjective. For example, D’Arrigo, et al., 2000:

“The trees, of Siberian pine and larch (Larix sibirica Lebedour), were sampled at or very near timber-line (exceeding 2200 m elevation) in settings where temperature, rather than drought stress, appears to be the limiting factor to growth. We base this interpretation on the presence of lush ground vegetation, water seepage, correlations with climate, and other considerations (Jacoby et al., 1996).”[1]

Jacoby, et al., 1996 say:

“The sampling location is that of a typical tree-line site where temperature should be the factor that limits tree growth. The vegetation is more lush here than at some (drier) high-elevation sites in the U.S. This difference indicates that precipitation should not be a growth limiting factor.”[2]

This is all too typical. The D’Arrigo, et al., reference to a prior qualitative judgment adds nothing. From these qualitative judgments, proxy tree-ring reconstructions are represented to tenths of a degree Celsius.

It is not controversial that principal components are strictly numerical constructs with no distinct physical meaning. Nevertheless, statistically renormalized principal components derived from tree ring series have been assigned the physical meaning of temperature.[3, 4] Statistics is not a substitute for physics. Purely statistical assignments do not confer physical meaning.

Such methodological errors are rife in modern climate science, and the global average surface temperature (GAST) record is a final example fundamental to the entire modern clamor: In paragraph 18 of the most recent and definitive estimation of the GAST,[5] the statistical uncertainty in monthly mean temperatures is assessed incorrectly as sixty sequential measurements of one observable rather than, correctly, as sixty independent observables measured once sequentially. This mistake erroneously and significantly reduces the reported statistical uncertainty in the GAST, and must have repeatedly passed peer review.

In a recent article at arXiv (http://arxiv.org/abs/0809.3762) Richard Lindzen documented the character assassinations and other shameful goings-on in climate science, and in his 2008 Erice Conference presentation (http://www.climateaudit.org/?p=3651) Steven McIntyre has related the obscurantism, institutional negligence, and outright obstructionism he has repeatedly encountered in auditing proxy climate reconstructions.

It is long past time for physicists to critically intervene for the integrity of their discipline.

Patrick Frank

1. D’Arrigo, R., et al. (2000) Trans-Tasman Sea climate variability since 1740 inferred from middle to high latitude tree-ring data. Climate Dynamics 16: 603-610
2. Jacoby, G.C., R.D. D’Arrigo, and T. Davaajamts (1996) Mongolian Tree Rings and 20th-Century Warming. Science 273: 771-773
3. Mann, M.E., R.S. Bradley, and M.K. Hughes (1998) Global-scale temperature patterns and climate forcing over the past six centuries. Nature 392: 779-787
4. Ammann, C.M. and E.R. Wahl (2007) Importance of the geophysical context for statistical evaluation of climate reconstruction procedures. Climatic Change 85(1-2): 71-88
5. Brohan, P., et al. (2006) Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. J. Geophys. Res. 111: D12106, 1-21
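[Editorial note: the statistical distinction drawn in the GAST paragraph of the letter above — sixty repeated measurements of one observable versus sixty different observables each measured once — can be illustrated with a toy numerical sketch. All numbers here are invented for illustration; this is not Brohan et al.’s actual procedure.]

```python
import math
import random

random.seed(0)

# Invented figures: 60 daily mean temperatures in one month.
# Day-to-day weather spread ~3 C; each daily reading also carries
# an instrument error of ~0.2 C.
weather_sd = 3.0
instrument_sd = 0.2
true_daily = [15 + random.gauss(0, weather_sd) for _ in range(60)]
readings = [t + random.gauss(0, instrument_sd) for t in true_daily]

n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))

# Treating the 60 readings as repeated measurements of ONE observable
# divides the full day-to-day spread (real weather, not noise) by sqrt(60):
sem_one_observable = s / math.sqrt(n)

# Treating them as 60 DIFFERENT observables, each measured once, only the
# instrument error can average away in the mean:
sem_sixty_observables = instrument_sd / math.sqrt(n)

print(f"sample sd of daily means:        {s:.2f} C")
print(f"'one observable' SEM:            {sem_one_observable:.3f} C")
print(f"instrument error, averaged only: {sem_sixty_observables:.3f} C")
```

The first treatment counts genuine day-to-day weather variation as if it were measurement noise of a single quantity, which is the conflation the letter objects to.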

290. Posted Sep 25, 2009 at 4:46 AM | Permalink | Reply

Pat Roache has published a new book on Verification and Validation of computer software. Additional information here.

The prices of his earlier V&V book and his classic on CFD have been reduced by half.

291. Posted Sep 27, 2009 at 3:48 PM | Permalink | Reply

Thanks mosh for the counter view. It all fits so well.

292. Micky C
Posted Sep 27, 2009 at 3:57 PM | Permalink | Reply

For the claim that the uptick in ring width equals a temperature increase for some special trees to have any chance of being correct, you would need to show that all other measurable parameters associated with a tree are characterised in some way and are, as best you can say, independent of temperature. You would also need to adequately demonstrate why the majority of other trees, sampled by what you believe or can show is the same method, do not show such a correlation.
Cherry-picking tree ring widths that go up with temperature, with no reason other than a belief that ring widths are being influenced by temperature, is interesting but untenable without proper characterisation.
Reason and logic would initially conclude that, on the balance of things, the trees with the uptick are the anomalous ones.
So sorry lads, no short cuts. More work needed.

293. Fred
Posted Sep 27, 2009 at 4:06 PM | Permalink | Reply

This all seems so strange, so bizarre, so can’t be happening since we have been told over and over and over again, for years and years that the “Science is settled”.

Maybe the new mantra will now be the “Science was settled”

Can’t wait to read all about it in Nature and the NYT.

294. Francis Atchison
Posted Sep 28, 2009 at 10:26 AM | Permalink | Reply

Numerical/theoretical/hand-waving arguments should carry little or no weight in decisions regarding CO2 emission control because of the serious consequences of
1. Implementing them if they are wrong or
2. Not implementing them if the CO2 lobby is right.

The urgent requirement is for definitive measurements that show the role of CO2 in the spectrum escaping the earth system;
that is, a competent team needs to be built to put a series of satellites equipped with infra-red spectrometers into orbit that
will give the answer to this one question.
Despite my first statement: The strong argument against CO2 in the atmosphere being the main cause of temperature
rise is that this implies a positive feedback system (CO2 in the atmosphere and in the ocean have to be in chemical
equilibrium and the ocean is the dominant source of CO2; the density in the ocean is proportional to the logarithm of
the partial pressure of CO2 in the atmosphere, Henry’s relation, see John J. Carroll and Alan E. Mather, “The System
Carbon Dioxide-Water and the Krichevsky-Kasarnovsky Equation,” Journal of Solution Chemistry, vol. 21,
pp. 607-621, 1992), which cannot be true for a system that has already survived a few thousand million years.
The Mauna Loa, Hawaii, CO2 measurements seem to be presented by IPCC and the UN as representative. Some
simple arithmetic may be used to examine their implication. The sea temperature in the region of Hawaii shows an
annual variation of about 3°C (22/24 to 24/27°C) [http://www.nodc.noaa.gov/dsdt/cwtg/hawaii.html]. Using Henry’s
relation suggests that this would cause a fluctuation of the CO2 concentration between ±7 and ±10 ppmV during the year.
The fluctuations themselves correspond to an emission/removal rate of about 1.5 × 10^14 kg CO2/year, and the general
rise rate to about 1.7 × 10^13 kg CO2 extra remaining in the atmosphere every year.
The year 2004 value for the CO2 emission corresponds to 2.2 × 10^13 kg CO2/year: Are the values really representative?
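[Editorial note: the mass figures in the comment above can be checked with a back-of-envelope sketch. The total atmospheric mass (~5.15 × 10^18 kg), the molar masses, and the ~2.2 ppmv/yr rise rate are assumed standard round values, not taken from the comment.]

```python
# Convert a ppmv change in atmospheric CO2 into kilograms of CO2.
M_ATM = 5.15e18              # total mass of the atmosphere, kg (standard value)
M_CO2, M_AIR = 44.01, 28.97  # molar masses, g/mol

kg_per_ppmv = M_ATM * 1e-6 * (M_CO2 / M_AIR)  # ~7.8e12 kg CO2 per ppmv

# A +/-10 ppmv seasonal swing is 20 ppmv peak to peak:
seasonal_flux = 20 * kg_per_ppmv   # ~1.6e14 kg CO2/yr, vs the comment's 1.5e14
# The long-term rise of roughly 2.2 ppmv/yr:
rise_rate = 2.2 * kg_per_ppmv      # ~1.7e13 kg CO2/yr, matching the comment

print(f"{kg_per_ppmv:.2e} kg CO2 per ppmv")
print(f"seasonal emission/removal ~ {seasonal_flux:.1e} kg CO2/yr")
print(f"annual accumulation       ~ {rise_rate:.1e} kg CO2/yr")
```

Within round-off, this reproduces both the ~1.5 × 10^14 kg/yr seasonal flux and the ~1.7 × 10^13 kg/yr accumulation quoted above.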

• Posted Sep 28, 2009 at 2:47 PM | Permalink | Reply

Francis, a few remarks: seawater and fresh water are completely different things for CO2: much more CO2 dissolves in seawater than fresh water can absorb at the same temperature and (partial) pressure. That is a matter of pH and the presence of calcium (as bicarbonate).

The oceans are not the dominant source of CO2; the oceans are a net sink of CO2 (about 2 GtC per year), together with vegetation (about 1.4 GtC/year), of the roughly 8 GtC/year humans emit. Local seawater temperatures play a role in local release/absorption by the oceans, but the release/uptake is relatively slow and can’t keep pace with the speed of the emissions, nor with the speed of vegetation growth/decay. That is the reason one sees a seasonal variation in the CO2 data.

For the ocean uptake and release, see the work of Feely et al. at: http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml

• Francis Atchison
Posted Sep 29, 2009 at 6:04 AM | Permalink | Reply

Re: Ferdinand Engelbeen (#81),
Thanks for the comments Ferdinand; I should have said ‘reservoir’ rather than ‘source’ for the ocean; at high atmospheric CO2 concentrations it will act as a sink, at low concentrations as a source. However, the balance of CO2 between ocean and atmosphere is maintained by a large number of processes with large variations in time constant; it is not obvious whether this set of processes results in an overall under- or over-damped system, so the effective time constant at short times is difficult to predict (one estimate I saw put atmosphere-ocean equilibrium at a 5 to 10 year time constant). I used Henry’s constant values implied by measurements made on south Atlantic surface sea water. I am still somewhat amazed by the Hawaii results; the site is somewhat remote from large sources of trees/soil, yet the implied emission rate corresponds to the estimated total emission from the world’s vegetation (or 1/3 of the emission of vegetation plus soil); these two taken together are more than balanced by removal from photosynthesis, and the processes take place in the same areas ………

295. Richard S Courtney
Posted Sep 28, 2009 at 5:00 PM | Permalink | Reply

Eric (at #42 and at #73), Ferdinand (at #67, at #74 and at #80) and Kuhnkat (at #75):

As I have repeatedly pointed out to Ferdinand in the past, it is simply not true to assert as he does (at #67):

“once the bubbles in the ice cores are closed, there is simply no migration anymore. At that point the ice is 40 years old (for the Law Dome ice cores). What matters is that there is gas migration top down for the upper 70 somewhat meters until full closing, which makes that the average gas age at that depth is only 10 years. After full closing, the 30 year difference between ice age and gas age doesn’t change anymore,”

and it is completely false to assert as Ferdinand does (at #74).

“Gas diffusion is mainly a matter of pore diameter and diffusion speed, this can be calculated and the theoretical calculation was confirmed by measuring CO2 levels at different depths in the still open pores of firn of the Law Dome ice cores.”

Gas diffusion through pores is slow and can be calculated but it is a minor reason for the redistribution of CO2 in ice. Ionic diffusion of dissolved CO2 is much more important.

Ice is slippery because ice surfaces have a coating of liquid water at all temperatures down to -33 deg.C. This was first discovered in 1859 by Michael Faraday and it has been confirmed by many independent studies since then. NMR and XRD studies indicate the existence of the liquid layer results from fewer chemical bonds near the surface.

Ionic diffusion of dissolved gases occurs through liquid water. And CO2 dissolves to form carbonic acid. Ionic diffusion is rapid and results in dissolved ions moving from regions of high concentration to regions of lower concentration.

So, if the ice takes 30 years to solidify (as the IPCC asserts) then the CO2 will be distributed throughout 30 years of ice thickness: the effect on observed concentrations would be similar to taking a 30-year running mean of CO2 concentrations from ice that sealed in one year each year.

And the solidified ice is polycrystalline so it has liquid water on its crystal interfaces (for the same reason). Ionic diffusion from bubbles must also occur through this liquid.

Ferdinand supports his assertions by saying:

“Despite that, the CO2 ranges of completely different ice cores overlap each other, but with increasing smoothing towards the past, as longer time periods need smaller layers.”

Well, of course they “overlap each other” because they are intercalibrated. And this intercalibration also explains Ferdinand’s statements at #80 concerning overlaps between ice core data sets.

There are several other faults with ice core data in addition to the ionic diffusion issue. The stomata data may not be perfect indications of past atmospheric CO2 concentrations, but they are clearly superior to indications of past atmospheric CO2 concentrations from ice cores.

Followers of CA know trees are not good indicators of past temperature. Ice cores are worse than that as indicators of past atmospheric CO2 concentrations.

Richard

PS
I do not intend to here repeat my never-ending disputes with Ferdinand concerning oceanic CO2 emissions. Suffice it to say that there is no evidence – only assumption – that the oceans are net absorbers of CO2, and an immeasurably small change to ocean surface pH would easily permit the oceans to be responsible for all the observed rise in atmospheric CO2 concentration in recent decades.

• Posted Sep 29, 2009 at 9:13 AM | Permalink | Reply

Richard, I don’t want to repeat our discussions of the past here, but this is a short reply on the main points:

- There is some “liquid” water at the surface of ice. True, but in general it is only a few atoms thick at the ice-air surface, and far less “liquid” (be it chaotic) at the intercrystalline level.
Against a migration/solubility theory are the 20-year overlap between the Law Dome ice core data and the South Pole data (for “warm” ice cores, averaging −20°C) and the equality of CO2 levels in closed bubbles and still-open pores in the transition zone. Neftel (1985) and Etheridge (1996) confirmed that no migration occurred below a certain density of the ice in the still-open pores (still above the closing depth).
More important is the fixed ratio over 800,000 years between CO2 levels and “temperature” (d18O and dD isotope ratios) in the Vostok and Dome C ice cores (averaging −40°C). If there were the slightest migration over time, the ratio for the oldest glacial/interglacial transitions would fade out.

Liquid water at the surface is no problem either, as the measurements are done under vacuum over a cold trap (−70°C) where ice and CO2 are effectively separated.

- As far as I know, the CO2 measurements are what they are for each ice core separately. The instruments are (of course) intercalibrated, the data not. Etheridge for 3 cores at Law Dome found a 1 sigma value of 1.2 ppmv. Different ice cores are within 5 ppmv for the same average gas age. If you have any knowledge of intercalibration of the data, I am very interested.

- Stomata data have their own problems, like the fact that CO2 on land is not well mixed and has a (variable!) positive bias compared to “background” CO2 levels. And the stomata data are calibrated against… ice cores.

- One can assume a lot of things, even that the ocean’s pH is the cause of the CO2 increase in the atmosphere and not the opposite. But then we have a mass balance problem: humans emit 8 GtC/yr to the atmosphere. Of this, vegetation absorbs about 1.4 GtC/yr (based on oxygen measurements). If the oceans were a net source, we should see an increase of more than 6.6 GtC/yr in the atmosphere, but the measured increase is about 4 GtC/yr… So where does the rest of the emissions go?
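[Editorial note: the mass-balance argument in the reply above reduces to one line of arithmetic. The GtC/yr figures are the round numbers as quoted in the comment; this sketch is an editorial addition, not the commenter’s own calculation.]

```python
# Carbon mass balance, GtC/yr (round figures as quoted in the comment):
human_emissions   = 8.0
vegetation_uptake = 1.4   # inferred from oxygen measurements
observed_increase = 4.0   # measured rise in atmospheric CO2

# Whatever is emitted but neither left in the air nor taken up by
# vegetation must have gone somewhere else; the argument attributes
# it to a net ocean sink:
implied_ocean_sink = human_emissions - vegetation_uptake - observed_increase
print(f"implied net ocean sink: {implied_ocean_sink:.1f} GtC/yr")

# If the oceans were instead a net SOURCE, the atmospheric increase
# would have to exceed emissions minus vegetation uptake:
print(f"increase required for ocean-as-source: > {human_emissions - vegetation_uptake:.1f} GtC/yr")
```

The implied sink of about 2.6 GtC/yr is close to the ~2 GtC/yr ocean uptake cited earlier in the thread; the observed 4 GtC/yr rise falls well short of the 6.6 GtC/yr an ocean source would require.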

296. Richard S Courtney
Posted Sep 29, 2009 at 1:48 PM | Permalink | Reply

Ferdinand:

This is not the place to revisit our unending debate. And I am only going to make this one response whatever else you post here.

All the arguments in your post at #85 assume there are no errors in the data for ice core analyses of past atmospheric CO2 and for measured present atmospheric CO2 sources and sinks. But, in fact, the inherent errors of all the estimates for these data are greater than the variations being assessed.

All your arguments at #85 are based on your failure to acknowledge the limitations of the data. For example, you ask me;

“If the oceans were a net source, we should see an increase of more than 6.6 GtC/yr in the atmosphere, but the measured increase is about 4 GtC/yr… So where does the rest of the emissions go?”

The “rest of the emissions” go into the carbon cycle.

The estimates of the various natural flows of CO2 in and out of the atmosphere have inherent errors greater than the magnitude of the human emissions of CO2 (which is less than 7 GtC/yr). Therefore, it is ridiculous to assert that the sequestration of emissions from natural and human sources can be accounted to an accuracy of +/- 7 GtC/yr.

The CO2 in the air fluctuates with the seasons during each year. And this seasonal fluctuation to the CO2 in the air is an order of magnitude greater than the emission of CO2 from all human sources each year. Therefore, the annual change to the CO2 in the air is the residual of the seasonal changes.

If it is assumed that the rise in atmospheric CO2 is accumulation of some of the emission from human sources then in some years all of that emission accumulates and in other years none of it does. This fact alone indicates that the natural system can sequester all the emission from human sources because the system clearly does sequester all of it at least in some years.

I fail to understand why you think the emission from human activities is special. The system cannot and does not distinguish between the CO2 molecules emitted from natural sources and the CO2 molecules emitted from human activities.

I do not know if the emissions from human activities are or are not a significant contributor to the rise of atmospheric CO2 in recent decades. And anybody who thinks he/she knows is deluded. For an explanation of why this is so, see:
Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’, Energy & Environment, v. 16, no. 2 (2005)

Richard

297. stan
Posted May 29, 2009 at 8:22 AM | Permalink | Reply

As for skill in debating, I don’t really think that is necessary in this case. I think that anyone debating someone from the team has a tactic that should win every time, regardless of their debating skill. Simply insist at the start that both sides limit their discussion of the science to quality science and avoid speculation and unsupported hypotheses. To wit: any models relied upon must have been validated, any temperature trends cited must have been recorded on thermometers that were sited and regularly calibrated in accordance with basic minimum scientific standards, and any studies cited must have been conducted in accordance with the scientific method. By scientific method I mean that the work has been replicated by independent third parties expert in the field (obviously, if sufficient information to allow replication has not been made available, the work fails to qualify as sound science).

If the team member agrees, he can’t make a case. If he doesn’t, you get to start the debate with a discussion about his need to rely on work that most people don’t consider quality science.

298. Pat Frank
Posted May 30, 2009 at 4:01 PM | Permalink | Reply

Re: Steve McIntyre (#9), Schneider “If these guys think they are “winning” why don’t they try to take on face to face real climatologists at real meetings – not fake ideology shows like Heartland Institute – but with those with real knowledge – because they’d be slaughtered in public debate by Trenberth, Santer, Hansen, Oppenheimer, Allen, Mitchell, even little ol’ me.” (bolding added)

Someone should notify Monckton of Brenchley that Steve Schneider wants to take him up on the standing challenge to debate the changing climate.

299. John Baltutis
Posted May 30, 2009 at 5:43 PM | Permalink | Reply

Re: Pat Frank (#26),

Schneider challenges skeptics to a debate; Pielke says “OK”

300. Pat Frank
Posted May 30, 2009 at 6:12 PM | Permalink | Reply

Re: John Baltutis (#27), Very interesting, John, thanks. The gauntlet has been cast, and taken up. The consequence should be informative, whatever happens.

301. Gerald Machnee
Posted Jun 3, 2009 at 12:13 PM | Permalink | Reply

Re: Scott Brim (#7),

Warmers of my acquaintance strongly question the value and credibility of everything that has been done so far by CA, WUWT, the two Jeffs, RyanO, Lucia etc. concerning Steig’s paper — the primary objection being that none of the people participating in this exercise have any recognized climate science credentials.

What you have described is close to an ad hominem attack. The named individuals are well educated and very capable and have more statistical capabilities than Steig. Their work is very credible, even if some magazines do not like it. It does not matter how great a climatologist you are, if you misuse statistics, your work is for nought. So maybe your friends want to do some serious analysis or replication?

302. John S.
Posted Jun 3, 2009 at 12:44 PM | Permalink | Reply

Re: Scott Brim (#7),

“Credibility” can have two meanings: 1) formal academic standing and 2) substantive comprehension, shown by real-world performance. Because “climate science” is manifestly lacking the latter, except in the PR arena, substantive criticism of its methods and conclusions is widely found not only in blogs, but in professional circles in all the diverse disciplines that apply to the complexities of climate: meteorology, thermodynamics, statistics, and system analysis, to name the most pertinent. Now that econometrics has been put on firmer analytic foundations, I know of no other field of study where such a stark dichotomy exists. It is only in the use of Navier-Stokes equations that this dismal, attention-grabbing science stands on firm ground.

303. Jonathan Baxter
Posted Jun 3, 2009 at 12:50 PM | Permalink | Reply

Re: Scott Brim (#7),

Warmers of my acquaintance strongly question the value and credibility of everything that has been done so far by CA, WUWT, the two Jeffs, RyanO, Lucia etc. concerning Steig’s paper — the primary objection being that none of the people participating in this exercise have any recognized climate science credentials.

My answer to your warmer acquaintances would be that none of the paper authors are recognized statistics authorities, and they’re not merely applying a simple T-test. If the statistics is wrong, the rest doesn’t matter.

I could be the world’s foremost expert on color, but if I use dodgy statistics to prove black is white does that make me right?

304. John
Posted Jun 3, 2009 at 5:37 PM | Permalink | Reply

Re: Scott Brim (#7),

Scott, the argument to authority is quite an ancient logical fallacy. There are numerous hazards in simply accepting an authority’s say on any issue. First there is the simple question: when considering a science of a phenomenon as complex as climate, is it possible that anyone could legitimately claim to be an authority? Is there a sufficiently good understanding of how climate works? The entire discipline of mathematical complexity and chaos has its earliest roots in climatological research. Second, assuming that we do have a “science of climate,” upon whose judgment do we recognize someone as an authority in that science, or accept that being an authority in one discipline allows someone to assert authority elsewhere? The fact, for example, that Steig, in his recent response, stated that he was teaching a class in PCA does not mean that he is actually an authority in mathematics or statistics, or that he necessarily understands PCA in detail for that matter (though undoubtedly better than I do). He studies climate. His assertion about teaching the class invites the reader to draw a conclusion without further warrant. Most importantly, the scientific process relies on the critical ability of practitioners for its utility.

The rote acceptance of received opinions, such as the longstanding disinclination of the geological community to consider Wegener’s ideas regarding continental motion, has demonstrably retarded the growth of scientific knowledge by decades, arguably longer. Mathematics and data analysis are skills that are universal to both the natural sciences and the social sciences. Analysis of spatially distributed data is employed broadly in geography, geology, ecology, and the social sciences, including economics and archaeology, to name but a few fields. The increasing trend toward specialization in science has gradually led many to consider the peculiar data of a field to be special in a manner that compares to magical thinking. The special aspects of any field’s data lie in how it is collected. In climate studies that means thermometers, barometers, hygrometers and the higher-tech successors to the instruments that hang on the walls of many homes. Once those raw data are collected, the process of converging, calibrating, or rectifying them to a standard is a problem common to many sciences, as is the process of analyzing the resultant processed data. Thus the real “authority” – the set of knowledge and skills requisite to understand and critique anything to do with the process, subsequent to the collection of the raw data – is present in many disciplines. This is amply seen on CA and the Air Vent. The appeal to the correct set of “peers” who “understand” the “special nature” of the data and how it is analyzed is mere mysticism. It invokes a “priestly” image of the proper “authority.” The insistence that the hoi polloi must accept the assertions of this “priesthood” is quasi-religious, demanding “faith,” and indeed has established “faithful” and “heretical” identities.

305. Mike Lorrey
Posted Jun 3, 2009 at 5:54 PM | Permalink | Reply

Re: John S. (#26), John, the Hockey Team has never, to my knowledge, run a single GCM that came within a fractal’s ass of Navier-Stokes math of any kind. So long as they continue to run them as a globe of gridded cells 100 km+ on a side, they will continue to fail hard in this regard.

306. Andrew
Posted Jun 3, 2009 at 7:34 PM | Permalink | Reply

Re: John (#216), All logical fallacies are ancient. Sloppy thinking has been around as long as thought itself. Aristotle just codified and named them, and that work was lost and found, updated and revised over the years.

307. Scott Brim
Posted Jun 4, 2009 at 7:24 AM | Permalink | Reply

.
This is my post in another thread which spawned John’s response:

Scott Brim from Comment #7 in the “TTLS in a Steig Context” thread:
.
Warmers of my acquaintance strongly question the value and credibility of everything that has been done so far by CA, WUWT, the two Jeffs, RyanO, Lucia etc. concerning Steig’s paper — the primary objection being that none of the people participating in this exercise have any recognized climate science credentials.
.
If one were to examine the various facets of knowledge and experience that one would expect to be necessary in examining Steig’s Antarctic paper, what percentage of the total background of the required knowledge and experience would each of the following areas cover?
.
– Climate physics and dynamics
– Sensor physics and dynamics
– Data collection/management methods and techniques
– Statistical analysis methods and techniques
– Integration of physics/dynamics, data, and statistics
.
If those doing a critical examination of Steig’s paper don’t include recognized professionals in each of these areas, does the examination have any real credibility?

These responses to my original post appeared in that thread:
.
Re: Jeff Alberts (#8)
Re: Kenneth Fritsch (#9)
Re: Steve McIntyre (#14)
Re: Craig Loehle (#15)

308. John S.
Posted Jun 4, 2009 at 9:34 AM | Permalink | Reply

I agree. But at least they use N-S, instead of some concoction.

309. Feedback
Posted Jun 9, 2009 at 5:40 PM | Permalink | Reply

Re: Jack (#13),

I’m sure you’ll all be bowled over by the “huge” impact it had.

I think what you are saying here is that “It doesn’t matter” – a well known phrase for readers of this blog. Obviously it does “matter” to many people to be able to say the words “hottest year on record”. See for example the Independent, January 1 2007:

A combination of global warming and the El Niño weather system is set to make 2007 the warmest year on record with far-reaching consequences for the planet, one of Britain’s leading climate experts has warned.

The leading climate expert was of course Phil Jones:

The warning, from Professor Phil Jones, director of the Climatic Research Unit at the University of East Anglia, was one of four sobering predictions from senior scientists and forecasters that 2007 will be a crucial year for determining the response to global warming and its effect on humanity.

I might be in snippable territory, but a quick search on the Internet tells me that the words “hottest year on record” are charged with a special significance these days. Because of this it’s important that those words are based on sound science and proper statistical analysis and methods (and, since Phil Jones is mentioned, on transparency). Steve (and Ross McKitrick) have shown that this isn’t always the case. Statisticians are specialists with specialist competence, and averaging temperatures and analyzing climate data is a statistical task.

As for 2007, as we all know, this forecast didn’t hit the bull’s eye either – but that’s another question.

Aside from that, and OT:
Re: jeff Id (#29),

climatoknowledgests

This must be the enhanced feedback effect – on language.

310. Mike Lorrey
Posted Jun 10, 2009 at 4:10 AM | Permalink | Reply

Re: ianl (#168), if you don’t keep quiet while hunting the snark, the boojum will softly and silently vanish away. Everybody knows THAT. Shhhhh.

311. Severian
Posted Jun 10, 2009 at 7:31 AM | Permalink | Reply

Re: rephelan (#219),

Wow, long comment in reply to a simple joke.

But, at the risk of this being snipped, I feel compelled to point out that while you may view Steig and the Antarctic paper as going against “authority” in that it says the Antarctic is warming whereas conventional wisdom says it’s getting colder, I would point out that its purpose is to shore up “authority,” that authority being conventional AGW thought.

In fact, if you look at many of the research papers out there over the past few years, they seem to almost deliberately be designed to undermine whatever “skeptical” meme is gaining hold in the larger debate. Whether we like it or not (and I for one don’t) this is not a subject that is confined to the musty back halls of academia or some obscure group of scientists, it is of critical importance, being of international scope, and there are indeed serious ramifications if it isn’t gotten right (either way, warming or not). We are, as I say quite unfortunately, in the era of “science by press release” and the larger argument is held in the media and government halls, not among scientists.

Take three areas for example: the Urban Heat Island effect, solar influence on climate, and Antarctic cooling. Each of these has become the subject of a “skeptical” meme: UHI has an effect that isn’t accounted for, solar activity is driving warming and cooling, and the Antarctic is cooling. Now examine three papers. UHI was allegedly removed as something to consider on the strength of a paper based on Chinese weather stations that were allegedly unmoving and of high quality, and that particular paper has been found severely wanting, to put it mildly. Lockwood and Fröhlich purported to disprove a solar link; again, the paper did not prove what was claimed. And now Steig’s Antarctic paper purports to prove the South Pole isn’t cooling, and we see its attendant problems and issues under discussion now.

Each of these papers generated exactly what was needed, whether by design or not: a press release that an uncritical media immediately trumpeted (and that in some cases other agencies, such as the IPCC, used as well) to allegedly disprove a skeptical meme that was getting traction. Each paper had serious issues, issues compounded by refusal to provide data and methods, or by providing them only grudgingly and incompletely, but that is never brought forth in the larger contest going on in the public sphere. Such doubts and issues remain sidetracked and hidden behind the scenes and in general are never publicly known. Rather than go back and revisit papers and research that have serious problems, we hear “we’ve moved on” from the proponents. Errors are not fully, or ever, addressed and corrected; instead further, often flawed, research is published and press-released to further shore up the standing of the “authority” and address the arguments against the orthodoxy.

Therefore I see these things as not going against authority, but serving to support and reinforce authority. Your mileage may differ, void where prohibited, results may not be typical, and other further disclaimers.

312. Posted Jun 10, 2009 at 9:24 AM | Permalink | Reply

I also notice that sea ice extent is dipping from its 30-year norm yet again.

That would be “30 year average”. We have no idea what the “norm” is.

313. See - owe to Rich
Posted Jun 10, 2009 at 1:35 PM | Permalink | Reply

Re: Severian (#253), that was an awesome posting, thank you. I usually start to skim over the longer postings, for reasons of efficiency, but you held me there.

Rich.

314. Hoi Polloi
Posted Jun 17, 2009 at 8:33 AM | Permalink | Reply

Re: Andrew (#5), “In the long run, we’re all dead”….

315. Posted Jun 17, 2009 at 9:43 AM | Permalink | Reply

Some people do.

316. Steve McIntyre
Posted Jun 17, 2009 at 3:46 PM | Permalink | Reply

I’m pretty certain that I’ve read Keynes more carefully than most readers. I was offered a PhD scholarship at MIT by one of the leading Keynesians of the day (Paul Samuelson) and I lived through the 1970s in real time. I’m not convinced that Keynes would have endorsed how his theories were interpreted by 1970s officials.

Having said that, my own interpretation of the present problems is that they are connected to quite fundamental trade issues, but I don’t want to spend time personally debating this view.

317. Kenneth Fritsch
Posted Jun 17, 2009 at 4:59 PM | Permalink | Reply

I also do not want to waste bandwidth here on economic/political issues, but I will say that I think you are wrong on trade issues. Trade issues are symptomatic of the problem and not the cause of it.

While I applaud your qualifications for a scholarship studying under Paul Samuelson, a Nobel prize-winner, I cannot refrain from noting that Samuelson in the 1980s was predicting that the USSR economy was soon to pass that of the US. Speaking of modeling, Samuelson was at the forefront of economic modeling and had no use for the Austrian school of economics that shunned empirical models in favor of theoretical arguments.

I’m not convinced that Keynes would have endorsed how his theories were interpreted by 1970s officials.

Perhaps then Keynes did not understand the “psychological” implications of his model. It is also interesting that he stated that his general theory (model) should be applicable to a more open economic system or to a command economy.

318. Ivan
Posted Jun 17, 2009 at 5:14 PM | Permalink | Reply

While I applaud your qualifications for a scholarship studying under Paul Samuelson, a Nobel prize-winner, I cannot refrain from noting that Samuelson in the 1980s was predicting that the USSR economy was soon to pass that of the US.

Samuelson actually said that in 1989!

319. Andrew
Posted Jun 19, 2009 at 9:16 AM | Permalink | Reply

Re: Gary Strand (#27), True, which is why I said “indeed” and then confirmed your statement. I was presenting it as evidence you were correct!

320. Dave Andrews
Posted Jun 21, 2009 at 2:10 PM | Permalink | Reply

Re: Andrew (#8),

It’s more than troubling. Jones, the Hadley Centre and the UK Met Office (now claiming to be able to model future climate down to 25km gridded squares – does anyone believe this?) are central to the IPCC project.

321. ianl
Posted Jul 6, 2009 at 9:30 PM | Permalink | Reply

Re: ianl (#107),

“The Team” snips critiques of the detail of climate models and associated statistical extrapolations used in policy formulation

Steve Mc snips critiques of the policies

A perfect circle

322. David Jay
Posted Jul 8, 2009 at 7:46 PM | Permalink | Reply

You may have missed the beginning of the discussion – Rahmstorf used this method to back up his assertion that the planet “may be responding more quickly than climate models indicate”, in effect saying that warming trends were on the upper end of the model ensemble projections.

In fact, for essentially any time period in the range [Current - (1-110)months] to current, the slope of global temperatures is NEGATIVE as this link shows:
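The trailing-window computation itself is easy to sketch. The script below illustrates only the windowing logic; the series is synthetic (the actual global temperature anomaly data are not reproduced here), so the printed count is illustrative, not a claim about the real record.

```python
import numpy as np

def trailing_slopes(temps, max_window=110):
    """OLS trend slope of the most recent `window` points, for each
    window length from 2 to max_window. `temps` is ordered oldest-first."""
    slopes = {}
    for window in range(2, max_window + 1):
        y = temps[-window:]
        x = np.arange(window)
        slopes[window] = np.polyfit(x, y, 1)[0]  # degree-1 fit; [0] is the slope
    return slopes

# Synthetic stand-in: a long rise followed by a recent mild decline, plus noise
rng = np.random.default_rng(0)
temps = np.concatenate([np.linspace(0.0, 0.5, 250),
                        np.linspace(0.5, 0.4, 110)]) + rng.normal(0, 0.01, 360)
slopes = trailing_slopes(temps)
neg = sum(s < 0 for s in slopes.values())
print(f"{neg} of {len(slopes)} trailing windows have a negative slope")
```

Substituting a real monthly anomaly series for the synthetic one would reproduce the check the comment describes.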

323. steven mosher
Posted Jul 27, 2009 at 11:31 AM | Permalink | Reply

When were you banned at WUWT? Did you get a mere time-out until you apologized, or did you get a lifetime achievement award? If you did the posting-under-two-different-names thing, there is nothing I can say.

324. thefordprefect
Posted Jul 27, 2009 at 11:40 AM | Permalink | Reply

Re: steven mosher (#176), It was effectively a lifetime one, until I made a rather silly apology which I would not have meant anyway! I do not do that sort of two-faced thing, so I took the award!

325. wkkruse
Posted Jul 29, 2009 at 4:25 PM | Permalink | Reply

Mosher, This is one of the best pieces I’ve read about the issue of AGW. It’s a position I’ve been gravitating to after reading this blog and other material on AGW for some time. Now I have a reference I can cite.

326. Posted Jul 29, 2009 at 8:04 PM | Permalink | Reply

Steven,

I guess you could then call me a Lukewarmer too, but I think it’s a bit overly simplistic to consider only GHG forcing and the UHI effect. I’d also disagree that GHG forcing is the best explanation for the warming we have seen. It obviously doesn’t explain the pre-1950 warming, as the change in forcing from the small increase in GHGs is too small. The IPCC doesn’t claim GHGs were the main forcing agent pre-1950, and post-1950 the observed temperature increases can be explained mostly by changes in oceanic forcing as described by Compo and Sardeshmukh [2008] and McLean, de Freitas, Carter [2009]. This runs contrary to the iconic statement made by the IPCC in 2007 about CO2 being responsible. Their only defence of that statement was that they could not think of any internal natural variability that could be responsible. Attribution only to GHG and UHI is not really tenable.

Further, there are positive reasons to suppose that the model projections for 2xCO2 are much too high. Papers by Lindzen and Choi [2009], Spencer and Braswell [2008] and Douglass and Christy [2009] have all been able to show, from empirical satellite observations, that there are likely no positive feedbacks from increased GHGs, or that the feedbacks are negative, even if you assume all observed warming in the latter part of the 20th century was due to increased GHG forcing.

I guess, though, these papers will eventually be challenged as the datasets are modified to become more in line with the models.

327. Geoff Sherrington
Posted Sep 12, 2009 at 11:59 PM | Permalink | Reply

What rhymes with “thixotropy”? Try “Sticks on the tree”. Unlike strip bark.

There has been a whole symposium devoted to the role of thixotropy and related processes in sedimentation, plus an e-book by J N Elliston “The Origin of Rocks and Mineral Deposits”, Oct 2005.

In a related publication we can read -

Geology is a science, a philosophy and an art – and so are diagenetic–catagenetic studies an integral member of sedimentology, petrology, and paleoenvironmental reconstructions. Likewise, metamorphic investigations (often mentioned here as a logical extension of diagenesis–catagenesis) are founded on petrographic + petrologic + geochemical “concrete” science – philosophy – “abstract” art. These three groups of phenomena are complexly intertwined and often inseparable, so that we deal with the art of science or the art of the scientific method; the art and science of philosophy, or the philosophy of science, for instance. That these phenomena are not based purely on “a play of words” and have an ever-increasing practical application in our computer age, will be demonstrated below.

328. Dave Dardinger
Posted Sep 27, 2009 at 3:39 PM | Permalink | Reply

Dear Substitute,

The best way of understanding it is by segregating those cores that exhibit a good correlation with modern temperatures from those that don’t.

This makes sense as long as we divide the instrumental temperature record into two parts: the part which doesn’t exhibit the divergence problem and the part which does. Then the total proxy ocean can be checked, and those chronologies (or perhaps those trees in a given chronology) which show good correlation in the non-divergence section can be separated out, and the out-of-calibration correlations of the selected series/trees can be checked. Of course, this will still require a lot of chronologies which aren’t available, in order to be sure the chronologies used in past papers haven’t been cherry-picked with respect to the whole universe of possible chronologies.

329. steven mosher
Posted Sep 27, 2009 at 3:44 PM | Permalink | Reply

Re: JS (#28),

JS, I don’t care about the size or direction of the uptick. I’m merely talking about the position one can take on this. I’m taking their side and trying to do so in a responsible fashion, aiming for their best argument; I find that is always a good practice. Briefly, and in broad brush strokes, here is what you have.

1. You have an instrument record from 1850 to the present. This record, like all historical records, has its challenges, but it’s all we have. So for the sake of moving forward, let’s accept it as the “best record”; we can discuss its issues elsewhere. So stipulated: we have a time series of instrumented temperatures.

2. We have the science of dendrology which tells us that certain trees can under certain conditions function as proxies for temperature. Granted, there are other confounding factors in certain times and places. But under some conditions ring characteristics correlate with temperature.

3. We have a large sample of cores, say 100. We examine their characteristics in the instrumented period and find that 50 of them have a good correlation (er, pick a number) with the instrumented record and 50 of them have a poor correlation.

What is the best approach to take?

1. Mash them together and only report that.
2. Segregate the cherries and only report them.
3. Report on all three cases:
A. the good (the cherries)
B. the bad (the merged)
C. the ugly (the censored)

I would think that any responsible analyst would do #3: you’d report the findings from the entire population, and then you’d report the findings if you picked cherries, on the assumption that the science was sound.

Climate science does #2.
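Mosher’s point can be illustrated with a toy simulation (every number below is hypothetical). Even when the “proxies” are pure red noise containing no temperature signal at all, screening on calibration-period correlation manufactures a composite that tracks the instrumental record, which is why reporting only the cherries is uninformative without the merged and censored cases.

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_years, calib = 100, 150, 50   # hypothetical sizes

# "Proxies" that are pure red noise: standardized random walks with
# no temperature signal in them whatsoever.
proxies = np.cumsum(rng.normal(size=(n_series, n_years)), axis=1)
proxies = ((proxies - proxies.mean(axis=1, keepdims=True))
           / proxies.std(axis=1, keepdims=True))

# A synthetic "instrumental" record covering the last `calib` years.
instrumental = np.linspace(0.0, 1.0, calib) + rng.normal(0, 0.2, calib)

# Screening step: keep the 50 series best correlated with the
# instrumental record over the calibration period (the "cherries").
corrs = np.array([np.corrcoef(p[-calib:], instrumental)[0, 1]
                  for p in proxies])
cherries = proxies[np.argsort(corrs)[-50:]]

# Compare calibration-period trends: all 100 series vs the screened 50.
def trend(y):
    return np.polyfit(np.arange(len(y)), y, 1)[0]

print("trend, all series:    %+.4f per year" % trend(proxies.mean(axis=0)[-calib:]))
print("trend, screened only: %+.4f per year" % trend(cherries.mean(axis=0)[-calib:]))
```

With noise-only input the screened composite shows a clearly positive calibration-period trend while the full-population composite does not, which is the screening artifact at issue.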

330. Posted Sep 27, 2009 at 3:52 PM | Permalink | Reply

Yes, exactly. Another thing that needs to be attended to here is whether there really IS any significant unobserved heterogeneity in the trees, or whether instead the ring sensitivity is just a high-variance thing and we are simply selecting thermometers from the upper tail of a homogeneous distribution.

There have been a couple of papers in my field on “likelihood-based data mining.” The techniques are meant to provide a way of classifying members of a population into K categories interpreted as behaviorally different “types” within the population. You use a Bayesian penalty function, probably something like a BIC, whenever you expand K. The techniques are meant to keep you from “overclassifying,” that is, inferring too many underlying types. You do have to make some assumptions about the within-type variance, but one can vary these to check the robustness of the estimated K and the classifications.

If there’s interest I can dig these up. It would be a way of looking at a collection of trees and asking whether there are “really” good and bad treemometers, or just a high-variance collection of correlation coefficients.
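A minimal sketch of that kind of penalized classification, assuming a one-dimensional Gaussian mixture fitted by EM with BIC as the Bayesian penalty (the actual papers’ techniques may differ in detail, and the function name `gmm_bic` and all data below are hypothetical). Correlation coefficients drawn from two well-separated “types” should make BIC prefer K = 2, while a single high-variance distribution should leave it at K = 1.

```python
import numpy as np

def gmm_bic(x, k, n_iter=300):
    """Fit a 1-D Gaussian mixture with k components by EM and return
    its BIC (lower is better). The model has 3k - 1 free parameters."""
    n = len(x)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)   # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    dens = lambda: (w / np.sqrt(2 * np.pi * var)
                    * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
    for _ in range(n_iter):
        resp = dens()
        resp /= resp.sum(axis=1, keepdims=True)                  # E-step
        nk = resp.sum(axis=0)
        w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk     # M-step
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    loglik = np.log(dens().sum(axis=1)).sum()
    return (3 * k - 1) * np.log(n) - 2 * loglik

rng = np.random.default_rng(7)
# "Two types": half the correlations near -0.1, half near 0.7
two_types = np.concatenate([rng.normal(-0.1, 0.08, 250),
                            rng.normal(0.7, 0.08, 250)])
# "One type": similar overall spread, but a single wide distribution
one_type = rng.normal(0.3, 0.42, 500)
for name, data in [("two types", two_types), ("one type", one_type)]:
    best_k = min((1, 2, 3), key=lambda k: gmm_bic(data, k))
    print(f"{name}: BIC prefers K = {best_k}")
```

The design point is the penalty term: each extra component costs 3·ln(n) in BIC, so a second “type” is inferred only when the likelihood gain from splitting the population exceeds that cost.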

331. Posted Sep 28, 2009 at 2:33 PM | Permalink | Reply

Re: kuhnkat (#75),

The work of Etheridge et al. on Law Dome (http://www.agu.org/pubs/crossref/1996/95JD03410.shtml) is on very fast-accumulating ice cores, two of them at 1.2 m ice equivalent per year. That covers the past century (the third one covers about a millennium), which makes a comparison with hemispheric/global climate over that time period not too difficult. Moreover, there is an overlap of about 20 years between the ice core (fully closed bubbles) and the continuous measurements at the South Pole, and they measured CO2 in the firn and in the ice at closing depth. The three cores were drilled with three different techniques (wet and dry); there were no differences outside the accuracy (1.2 ppmv, one sigma). Etheridge probably intended his investigation of the different influences of drilling and contamination as an answer to the objections of Jaworowski/Segalstad against ice core results.

I agree that the gas age – ice age difference over periods of hundreds of thousands of years is not that easy, but there is some help from several ice cores with quite different accumulation rates (thus different ice age – gas age lengths), which in the most recent period of the longer cores overlap each other. And the (SH oceans) temperature of that time can be deduced from d18O and dD levels.

That there are a lot of mechanical problems to overcome is certain, but the researchers have learned from past experience. That is why relaxation of up to one year on site, under cold conditions, is allowed. Despite that, it happens that there are cracks and that drilling fluid creeps in. In such cases, measurements around such parts give highly irregular results and a lot of high outliers; this is recognised, and such results are discarded.
See e.g. http://www.nature.com/nature/journal/v453/n7193/full/nature06949.html