Anthony Watts at UCAR

Anthony’s presentation at Roger Pielke Sr’s seminar at UCAR appears to have been well-received. He’s posted some interesting online reports at his blog and been written up here and here.

In response to the criticisms about the failure of sites to meet standards, Lawrimore of NOAA said that they already adjusted for these problems:

For the USHCN stations being checked by Watts and others, Lawrimore said there are checks to ensure the data is accurate. Some stations are placed on less-than-ideal sites, but he said it’s important to note the impact of those has been analyzed and accounted for.

It’s nice to know that the adjusters are on the scene. Given the number of adjustments that seem to be required, should we start describing Hansen and his colleagues as chiropractors?

123 Comments

  1. Sylvain
    Posted Aug 30, 2007 at 9:44 AM | Permalink

    “Some stations are placed on less-than-ideal sites, but he said it’s important to note the impact of those has been analyzed and accounted for.”

    In other words: why bother? We know what the data should say, and whatever it says we will adjust it to say what we want it to say.

  2. Phil
    Posted Aug 30, 2007 at 9:56 AM | Permalink

    New to the site. Trying to take it all in. One question: do you know, on average, how many adjustments are made to each individual data point?

  3. MarkW
    Posted Aug 30, 2007 at 9:59 AM | Permalink

    Prior to the start of Anthony’s efforts we were being told that the network was high quality.

    Now that it is being proven that the network has “issues”, we are being told that the problems were already known and have been accounted for.

    What will the story be next week?

    As TCO pointed out, there have been no studies to quantify just how much X square feet of asphalt 20 feet from a sensor contaminates the readings of that sensor. So how the heck are they “adjusting” the data? If they claim that they are making adjustments by comparing against nearby stations, then we know that they are blowing smoke.

  4. Posted Aug 30, 2007 at 10:03 AM | Permalink

    Insurance agents.

    To keep the money flowing.

  5. bernie
    Posted Aug 30, 2007 at 10:14 AM | Permalink

    Remember that the key issue is trend not absolute impact. What Anthony’s work highlights is that there are many possible effects that are “transient” year over year, and potentially cumulative. The possible importance of something like asphalt is that it could act as an amplifier so that if there was a trend increase in temperature the asphalt would amplify this existing trend. Obviously increasing the amount of asphalt or renewing the asphalt could produce a trend.

    The UHI effect is potentially important because, via population growth, reduction in greenspace, and increases in energy consumption, a UHI trend exists. Similarly, if it can be shown that take-offs and landings impact temperature records, then placing stations at airports may need to be adjusted for changes in air traffic volumes.

    All this could be mitigated if the new CRN has chosen sites that have no extraneous trend effects.

  6. Spence_UK
    Posted Aug 30, 2007 at 10:17 AM | Permalink

    I wonder if they use Toftness Radiation Detectors to determine where adjustments are needed?

  7. Bill F
    Posted Aug 30, 2007 at 10:21 AM | Permalink

    Nope…not good enough. I simply don’t trust them anymore. If he wants me to believe that they have done what Anthony is doing, he needs to publish it and make it available for public review. Simply saying “we already know what the biases are” without providing proof of how is the same thing Phil Jones did when he and Wang claimed to have verified the quality of the Chinese stations. If they want me to believe the biases have been accounted for, then they need to provide a station by station database listing the biases identified and the adjustment used to account for it. If they can’t do that, then Lawrimore is full of crap. If they have already done what he says they have done, then it should be very simple to copy all of the files over to a public server where people can access them and review what was done.

  8. Gary
    Posted Aug 30, 2007 at 10:40 AM | Permalink

    Lawrimore also said “I think any effort to better understand the observation system that’s used to collect data and analyze it is helpful.” So the NCDC should jump at the chance to join the SurfaceStations effort. Let’s assume good faith in their previous work, forgive any negligence, and get on with assembling all the original, unadjusted data and metadata in one place that has unfettered public access. Then we can debate the data rather than political agendas.

  9. BarryW
    Posted Aug 30, 2007 at 10:40 AM | Permalink

    So they “know” the sites have problems and they’re adjusting for it? Since they admitted they didn’t have any survey data on site conditions what crystal ball did they use for the adjustments?

    Sites can also have local heat islands build up as more structures and asphalt are added to the site, even though they are technically rural. This will create an artificial trend the same as UHI does.

    The worry I have is how they will use the CRN site data. They may decide it needs “adjustment” to normalize it to the historical record.

  10. steven mosher
    Posted Aug 30, 2007 at 11:05 AM | Permalink

    Well then. It’s clear. If adjustments can, have, and will remove bias, then why is CRN required?

    Do these bozos actually graduate?

  11. steven mosher
    Posted Aug 30, 2007 at 11:13 AM | Permalink

    re 9.

    Barry makes an excellent point. CRN is going to be the standard. Question: did the first correlation study of the CRN (the Ashland study) with the historical network prove that the historical network was accurate or not?

    Guys, notice the pattern of argument.

    1. Anthony criticizes the historical network.
    2. They play skeptic: show the IMPACT!
    3. We show studies (CRN, Gallo, Oke, etc.).
    4. THEY RESPOND: CRN will fix this.

    If there is no problem, CRN is a waste of money.
    Then they lean on CRN, because they know there is a problem.

  12. Posted Aug 30, 2007 at 11:53 AM | Permalink

    “Some stations are placed on less-than-ideal sites, but he said it’s important to note the impact of those has been analyzed and accounted for.”

    That’s wonderful. Now all he has to do is drag out the ledger for the auditors to glance over and approve, and everything will be just hunky-dory. Shouldn’t take more than a day or two once the ledger is produced.

    He did say he was producing the ledger, right?

  13. Slevdi
    Posted Aug 30, 2007 at 11:58 AM | Permalink

    #6 Spence_UK ‘Toftness Radiation Detectors’

    After proving that kinesiology doesn’t work:

    ‘When these results were announced, the head chiropractor turned to me and said, “You see, that is why we never do double-blind testing anymore. It never works!” At first I thought he was joking. It turned it out he was quite serious. Since he “knew” that applied kinesiology works, and the best scientific method shows that it does not work, then — in his mind — there must be something wrong with the scientific method. This is both a form of loopholism as well as an illustration of what I call the plea for special dispensation. Many pseudo- and fringe-scientists often react to the failure of science to confirm their prized beliefs, not by gracefully accepting the possibility that they were wrong, but by arguing that science is defective.’ —How People Are Fooled by Ideomotor Action by Ray Hyman, Ph.D

  14. Spence_UK
    Posted Aug 30, 2007 at 12:44 PM | Permalink

    Re #13

    Quite an amusing story.

    Double blind studies don’t work, and neither does the r2 statistic. We have anecdotal examples of both failing to work, therefore they are incorrect, silly.

    I wonder if climate scientists are conditioned such that their brains stick whenever they see a hockey stick?

  15. Posted Aug 30, 2007 at 1:14 PM | Permalink

    We should stop calling it the Urban Heat Island (UHI) effect on the station sites; it’s really an Anthropogenic Heat Island (AHI) effect, because it’s not just in urban areas. There’s evidence from plenty of the sites surveyed that paving, air conditioning, etc. occur at rural as well as urban station sites.

    I found the California Spatial Information Library DOQQs (aerial photos) for all of California (and it’s a good source), most from the mid 1990s, which can be compared to current Google photos to assess changes at the sites in the last 10 years or so. I’ve submitted such a comparison for Newport Beach, CA, and there are significant changes near the station site: removal of trees and expansion of parking lots.

    AHI is a one way effect — as population grows, AHI increases.

  16. Bob Meyer
    Posted Aug 30, 2007 at 1:14 PM | Permalink

    It’s nice to know that the adjusters are on the scene. Given the number of adjustments that seem to be required, should we start describing Hansen and his colleagues as chiropractors?

    How about we call them “climopractors”?

  17. SteveSadlov
    Posted Aug 30, 2007 at 1:18 PM | Permalink

    RE: #15 – Leon you are right on. I’ve also been promoting that terminology.

  18. MarkW
    Posted Aug 30, 2007 at 1:22 PM | Permalink

    BarryW @ 10:40,

    They’ve described this method several times before, so I wouldn’t be surprised if by “adjusted” they mean that they have compared each station to its neighbors (even if the neighbor is hundreds of miles away) and thus eliminated any anomalous data jumps.

    The problem with this method is that it does not catch slowly developing trends; this is doubly true if several of the sites are being contaminated at the same time.
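    A toy illustration of the limitation MarkW describes (my own sketch, assuming a simple difference-from-neighbor break test, not NOAA's actual code): an abrupt jump at one station shows up clearly in the station-minus-neighbor difference series, but a slow bias that grows at both stations at once leaves that difference unchanged.

```python
import numpy as np

def step_detected(station, neighbor, threshold=1.0):
    """Flag a break if the station-minus-neighbor difference series
    shifts abruptly between its first and second half."""
    diff = station - neighbor
    half = len(diff) // 2
    return abs(diff[half:].mean() - diff[:half].mean()) > threshold

years = np.arange(50)
rng = np.random.default_rng(0)
base = 0.01 * years + rng.normal(0, 0.1, 50)  # shared climate signal

# Case 1: an abrupt 2-degree jump at one station is caught...
jumped = base + np.where(years >= 25, 2.0, 0.0)
print(step_detected(jumped, base))                 # True

# Case 2: ...but a slow warm bias growing at BOTH stations
# (e.g. creeping encroachment) is invisible to the test
drift = 0.04 * years
print(step_detected(base + drift, base + drift))   # False
```

    The same blindness holds for any test that only looks at differences between stations: contamination shared by the comparison network cancels out.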

  19. Gunnar
    Posted Aug 30, 2007 at 1:26 PM | Permalink

    >> I’ve submitted such for Newport Beach CA

    Don’t forget, the readings are correct for their purpose. You folks are using the readings in a way not intended.

  20. Jerry
    Posted Aug 30, 2007 at 1:38 PM | Permalink

    Do these bozos actually graduate?

    LMAO – My thoughts exactly.

  21. Don Healy
    Posted Aug 30, 2007 at 2:11 PM | Permalink

    “Some stations are placed on less-than-ideal sites, but he said it’s important to note the impact of those has been analyzed and accounted for.”

    Just curious, but how does one make corrections for stations that are no longer in the location reported on the MMS system? A case in point is Cottage Grove, Oregon. The current location of USHCN Station #351897 is about one mile south of, and about three hundred feet higher in elevation than, the reported location, and has been in the new location for about four years. I emailed the contact listed on the MMS page about the error, with the correct coordinates, some time ago when I entered the data onto Anthony’s website, but in checking the page again today I see no indication of a change.

    It would appear that one does not need quality control measures if one has excellent “adjustment” methods.

  22. BarryW
    Posted Aug 30, 2007 at 2:52 PM | Permalink

    Re #18

    Yeah, I know they describe their methods, but as you say it handles discontinuities, not slow changes like encroachment. Without site surveys you can take a site that is slowly drifting from the ideal and wind up using it to adjust a good site, since you don’t know that the quality of the site has changed. It can damage the data from more than just the site in question. That’s what I meant by “crystal ball”: they state they know what they don’t seem to know.

  23. Sam Urbinto
    Posted Aug 30, 2007 at 4:10 PM | Permalink

    That is the entire point.

    1. The network is high quality.
    2. Here’s things that may influence readings.
    1. Oh, we knew about that. It’s why we said you were wasting time doing it.
    2. Why didn’t you say that?
    1. You never asked.
    2. Okay, so if you knew about it, why are the contaminations still there? And when did you survey the station to determine what was wrong and how to adjust for it?
    1. We don’t, we balance it against other stations nearby, then we don’t have to worry about specifics like going to the station, taking a long term photographic history, checking for influences and their impact, or finding out if it’s really rural or not or is in the location it’s listed as being in.
    2. So how is it high quality?
    1. We adjust for it.
    2. And how do you adjust for it?
    1. It’s a simple procedure.
    2. Can we have the adjustment code?
    1. No.
    2. No?
    1. Recreate it yourself and then you’ll get the same answer and know we’re doing it correctly.
    2. Um, I want to verify what you’re doing correctly does it, not that I can do it also.
    1. You’ll learn more if you do it yourself.
    2. Doing it ourselves will verify your methods?
    1. Of course. It’s all right there in the papers by Hansen et al, in plain, easy, simple-to-follow language instructions. Simple.
    2. But why can’t I have the code?
    1. It’s too messy and not understandable.
    2. It’ll get cleaned up when I examine it.
    1. A waste of time for you. Ours is bits of code that the people that wrote them run themselves, they’d be of no use to you anyway.
    2. If it’s so simple, why is it bits and pieces of code that only the person familiar with them can understand?
    1. It’s easy, simple! It just won’t work for you. Read the stuff and do it, you’ll see!
    2. What if I recreate it and get a different answer?
    1. Oh, you won’t. The network is high quality. We only use rural stations.
    2. How do you know…..

    And the circle begins again.

    BTW, this explains how to do the stuff that happens after the high quality US network finishes with its high quality data: http://www.ncdc.noaa.gov/gcag/gcagmerged.html

    I’d appreciate an opinion if it is also easy to understand, all in one place and easy to implement.

  24. jae
    Posted Aug 30, 2007 at 7:13 PM | Permalink

    23: ROFLMAO. As a key figure here says, “hey, it’s climate science!” It’s a new genre of science, don’t you know.

  25. John Norris
    Posted Aug 30, 2007 at 7:42 PM | Permalink

    re #23 Sam Urbinto

    Wow, well summarized. I may suffer from my own anti AGW filter when I read RC but that is EXACTLY as I interpreted their explanations.

    Perhaps most of us don’t agree with the full faith and confidence that Climate Science has put in the current set of measurements, and the genius in the adjustments, but I suspect that most of us can get behind CRN. I say full throttle with CRN: go get some decent data. My experience with stepping up to better quality data is that once the results are in, everyone gets some much needed education.

  26. jae
    Posted Aug 30, 2007 at 7:45 PM | Permalink

    23, Sam: you could work this sentence in somewhere and summarize the whole history of this blog: “It would be silly to do that.”

  27. Posted Aug 30, 2007 at 7:50 PM | Permalink

    “The merged land air and sea surface temperature…”

    I don’t even understand just what system GISS are studying. If it is the air temp 4 ft above land and Sea Surface temperatures, why not include the earth temperatures down to the frost line at the various sites (latitude, surface inclination, surface material, etc). What about all the mass of things other than air, water and dirt between 0 and 4 feet above the dirt? Just where is the system definition?

    If the seas have been analyzed for the depth to be included in heat capacity for model prediction (Brookhaven Lab paper), where is the similar info for the solid surfaces not covered by the seas? What happens in lakes where the depth is less than 100 – 150 ft?

    Shouldn’t scientists (especially rocket scientists) be using the Kelvin temperature scale (all that T^4 stuff)? How do we know that these corrections are an offset rather than a multiplier? Rather than both an offset and slope? Where are the statistical confidence levels?

  28. Gunnar
    Posted Aug 30, 2007 at 7:50 PM | Permalink

    >> I suspect that most of us can get behind CRN

    Do I understand correctly that you want to create a new land based measurement system to measure the climate? Will it be more comprehensive than the satellite system?

  29. jae
    Posted Aug 30, 2007 at 7:50 PM | Permalink

    Bottom line: it is impossible to “correct for x,” when you don’t have a suitable algorithm for correcting. The “average global temperature” is a sad statistic, indeed.

  30. jae
    Posted Aug 30, 2007 at 7:54 PM | Permalink

    This whole issue boils down to the same problem that makes “climate sensitivity” such a mystery: If a reasonable rationale existed, it would be explained here in a second.

  31. John Norris
    Posted Aug 30, 2007 at 9:00 PM | Permalink

    re #29

    Hmmm, good point Gunnar. Perhaps I am succumbing to the Climate Science rhetoric that the satellite data has problems. I seem to recall some arguments that the extreme latitudes were not measured correctly with the satellites. Also, there is the UAH RSS difference and I believe some debate over the meaning of warm or cool measurements at altitude.

    So perhaps you are suggesting that once the CRN data shows a lack of significant warming we will get some rhetoric over how the CRN stations are not teleconnecting properly, or some other story that those of us without proper climate science credentials just don’t understand.

    I guess that is why I like the idea of a new, transparent, surface measurement system, aimed at climate measurement in tenths of degrees, rather than for weather reports or forecasts where accuracy to a few degrees is adequate. CRN sounds simple, hard to corrupt, hard to justify adjustments (and adjustments to adjustments), and difficult to obfuscate – as I suppose the satellite data should be. I’ll have to reassess my expectations of an unimpeachable CRN result.

  32. John Norris
    Posted Aug 30, 2007 at 9:03 PM | Permalink

    Oops. #31 should be re #28, not re #29 …

  33. John Norris
    Posted Aug 30, 2007 at 9:11 PM | Permalink

    Perhaps it is getting late and I should give up on typing for the night.

    CRN Overview

  34. DeWitt Payne
    Posted Aug 30, 2007 at 9:17 PM | Permalink

    31

    I seem to recall some arguments that the extreme latitudes were not measured correctly with the satellites.

    They aren’t, but then again GISS and Hadley don’t do such a good job at high latitudes either. The MSUs look sideways at an angle, not straight down, and the satellites are in polar orbit; therefore, they can’t see the area around the poles. There is an additional problem with high-altitude cold locations, which eliminates the Tibetan Plateau, Antarctica and the spine of the Andes. OTOH, both the RSS and UAH interpretations have very high correlation with GISS and Hadley for the contiguous 48 states. That means there is at least one good reason to believe that satellite temperatures are valid for large parts if not most of the globe.

  35. David Smith
    Posted Aug 30, 2007 at 9:31 PM | Permalink

    Here is an aerial photo of the “rural” Columbia, MS MMTS site (yellow pin). Note the large nearby paved areas and buildings.

    While Columbia is not urban, neither is it pastoral.

    What a mess.

  36. Posted Aug 30, 2007 at 11:46 PM | Permalink

    California and other states have free copies of aerial photographs (DOQQs) taken by the USGS, apparently in the mid 1990s, which give a roughly 10-year-old baseline to compare against current Google or Microsoft maps images to see what’s changed since. I’ve done a comparison for Newport Beach and e-mailed it in to surfacestations.org (haven’t heard if they got it), but it shows removal of trees, expansion of parking lots, more cars, etc. For California, DOQQs can be obtained at

    http://gis.ca.gov/ims.epl

    then click on

    CaSIL GeoFinder to locate DOQQs in all counties of California. The DOQQs are big (45 MB) but can be downloaded, then trimmed in Photoshop to the area around a station site, and compared to a Google or Microsoft maps image of the same site. DOQQs are about 1 meter resolution.

    The USDA also has a website for accessing their DOQQ library, haven’t had time to check it out yet. It’s at

    http://datagateway.nrcs.usda.gov/

    But this way one can show what’s changed in the last 10 years at a station site.

  37. Posted Aug 31, 2007 at 1:20 AM | Permalink

    #37

    Or, to be more precise, they’ve been ‘stick bending’ (hockey stick bending, to be even more precise). It took a bit of doing and involved using a CENSORED folder containing some dodgy Fortran code, a novel application of equally dodgy statistical methods, and finally an egregious splicing technique, but they managed it in the end.

    Sadly they didn’t bargain on it all being audited by our resident Toto. They should have known what happens when you give a dog a bone. Especially when the dog is also a very good competitive squash player.

    KevinUK

  38. Posted Aug 31, 2007 at 1:43 AM | Permalink

    I should have been clearer.

    They are insurance adjusters.

    Their job is to insure all adjustments keep the iron rice bowl filled.

  39. Rob
    Posted Aug 31, 2007 at 3:49 AM | Permalink

    Can I ask a question here? Satellite data seems to agree with the “adjusted” temperatures from ground based stations (period 1980 – 1999), at least I quickly found this paper. So even if Waldo can’t be found on the ground, could he be orbiting in space?

  40. Robert Wood
    Posted Aug 31, 2007 at 6:06 AM | Permalink

    If I understand the methodology correctly, they average all readings, then adjust individual records to track the average?
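    Robert's description is close to the "reference series" approach often described in the homogenization literature: build a reference from the average of neighboring stations and shift a candidate record so its offset from that reference is consistent across a suspected break. A toy sketch (my own illustration with made-up numbers, not NOAA's actual procedure, which is the very thing commenters are asking to see):

```python
import numpy as np

def adjust_to_reference(candidate, neighbors, break_idx):
    """Shift the candidate after a suspected break so its offset from
    the neighbor-average reference series matches the pre-break offset."""
    reference = np.mean(neighbors, axis=0)
    diff = candidate - reference
    shift = diff[:break_idx].mean() - diff[break_idx:].mean()
    adjusted = candidate.copy()
    adjusted[break_idx:] += shift
    return adjusted

rng = np.random.default_rng(4)
n = 40
signal = 0.02 * np.arange(n)                       # shared regional trend
neighbors = [signal + rng.normal(0, 0.05, n) for _ in range(5)]
candidate = signal + rng.normal(0, 0.05, n)
candidate[20:] -= 1.0                              # e.g. a station move

fixed = adjust_to_reference(candidate, neighbors, 20)
```

    Note the built-in assumption: the neighbors are trusted. If the neighbors share a slow contamination, the "fixed" record inherits it.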

  41. Boris
    Posted Aug 31, 2007 at 7:06 AM | Permalink

    I guess he didn’t mention it?

  42. Gunnar
    Posted Aug 31, 2007 at 8:27 AM | Permalink

    #31 >> the extreme latitudes were not measured correctly with the satellites.

    John, let’s just reposition the polar satellites, or put new ones up in a stationary orbit over the poles.

    #31 >> idea of a new, transparent, surface measurement system … CRN sounds simple

    But impossible to be comprehensive, which is absolutely crucial for climate studies.

    #34 >> There is an additional problem with high altitude cold locations.

    DeWitt, wouldn’t this be because they only have one calibration? That is, they are calibrating for oxygen at a certain concentration, but on top of the Himalayas there is less oxygen, so it’s not correct. If I’m correct, they could fix this by having an additional calibration. This would require an accurate land-based temperature measurement, which might not be there right now, since it’s remote.

    #40 >> Satellite data seems to agree with the “adjusted” temperatures from ground based stations (period 1980 – 1999),

    Rob, this could be some agenda serving hand waving by AGWers. The data shows quite different results. It does not “agree”.

  43. Rob
    Posted Aug 31, 2007 at 8:41 AM | Permalink

    #43

    Okay Gunnar, I don’t have the data so have to go from what I read in the research papers. I just can’t believe they are all wrong or all made by people with an agenda. Although the AGW hypothesis seems dodgy to me on gut instinct, I’m really on shaky ground when I argue that that alone is enough to falsify it ;).

    However, obviously if you don’t have satellite observations before the 1970s, it’s pretty hard to separate out natural variability from any AGW trend from dodgy surface data or proxies for the latter half of the 20th century. So I think the most we can say is “maybe”, rather than “definitely yes or no”, a point I think the media and politicians are having a hard time recognizing.

  44. David Smith
    Posted Aug 31, 2007 at 9:15 AM | Permalink

    Re #43 Rob, I’ve looked at the satellite and ground data in some detail and found pretty good agreement. There is occasional divergence for short periods (which seems to be related to ENSO activity), and the ground trend is higher than the satellite trend, which needs to be resolved, but I think it’s clear that the globe has warmed since 1980.

    The problem, as I see it, are the earlier years, where data is sparse and adjustments and assumptions come into play. This is especially true for the oceans and polar regions, which happen to be where Waldo resides. The question is, is that just a coincidence or does Waldo thrive in the land of assumptions and adjustments?

  45. Gunnar
    Posted Aug 31, 2007 at 10:15 AM | Permalink

    >> I don’t have the data

    Well, let’s fix that right away: http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

    >> it’s pretty hard to separate out natural variability from any AGW trend

    If you approach it from a purely statistical point of view, making no assumptions about the known science involved, I would agree with you. But in that case, almost everything is mysterious.

    >> I’ve looked at the satellite and ground data in some detail and found pretty good agreement

    David, I’ve also looked at them in some detail, and it’s clear they diverge. The satellite data shows a cooling of .6 degrees since april of 1998. The land based measurements show increases since 1998. In the satellite data, the hockey stick blade is pointed down.
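    For what it's worth, trend claims like these are very sensitive to the chosen start date. A synthetic illustration (deliberately made-up numbers, not the UAH data itself): a series with a steady warming trend plus an exaggerated one-off spike shows a positive full-record trend, while a trend computed from the spike onward can come out negative.

```python
import numpy as np

def trend_per_decade(t, y):
    """OLS slope, converted from degrees/year to degrees/decade."""
    return 10 * np.polyfit(t, y, 1)[0]

rng = np.random.default_rng(1)
t = np.arange(1979, 2007, 1 / 12)          # monthly time axis
y = 0.015 * (t - 1979) + rng.normal(0, 0.15, len(t))
y[(t >= 1998.0) & (t < 1998.5)] += 1.0     # exaggerated El Nino-like spike

full = trend_per_decade(t, y)
since_spike = trend_per_decade(t[t >= 1998.0], y[t >= 1998.0])
print(f"full record: {full:+.3f} C/decade, from the spike on: {since_spike:+.3f}")
```

    Starting a trend at an extreme high point biases it downward; that is the statistical point behind Boris's "look at the entire record" objection, whatever one thinks of the underlying data.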

  46. Sam Urbinto
    Posted Aug 31, 2007 at 12:09 PM | Permalink

    O2converter, 27. Good questions.

    John, 29. Your last paragraph about the CRN being more transparent and all. Perhaps that’s why there’s not a lot of funding, updates on status, or rush to implement it totally? (I doubt it will invalidate anything though, just give us better measurements and more measurements of other variables) Although maybe not, it’s what, about 3/4 done in the US? (Their page is broken for the list)

    Gunnar, 31. I believe Dr. Hansen and/or Dr. Schmidt thinks they have enough to do so. I think a number of strategically placed, properly place, well monitored and maintained stations designed to track GW accurately (and take all the readings of everything pertinent) worldwide and up to standards would do a pretty good job. But perhaps it’s not comprehensive enough.

    Nobody in particular, everyone in general:

    Guess what? Yep…. Algorithms and details.
    http://www.ncdc.noaa.gov/crn/officialtemp.html
    http://www.ncdc.noaa.gov/crn/elements.html

  47. Boris
    Posted Aug 31, 2007 at 12:14 PM | Permalink

    David, I’ve also looked at them in some detail, and it’s clear they diverge. The satellite data shows a cooling of .6 degrees since april of 1998. The land based measurements show increases since 1998.

    As I said before, the sat likely picks up ocean changes better. Look at the entire record and the trends match, especially if you use RSS data instead of UAH.

  48. jae
    Posted Aug 31, 2007 at 12:28 PM | Permalink

    44:

    Re #43 Rob I’ve looked at the satellite and ground data in some detail and found pretty good agreement. There is occasional divergence for short periods (which seems to be related to ENSO activity) and the ground trend is higher than the satellite trend which needs to be resolved, but I think it’s clear that the globe has warmed since 1980.

    But maybe not since 1988…

  49. David Smith
    Posted Aug 31, 2007 at 12:41 PM | Permalink

    Looks like we have the makings of a discussion!

    I’ll post some satellite vs surface charts when I return to my home computer this evening.

  50. Boris
    Posted Aug 31, 2007 at 12:53 PM | Permalink

    Tamino compares the surface data to the RSS data here. The relevant graphs are in the updates.

  51. Jeff C.
    Posted Aug 31, 2007 at 1:00 PM | Permalink

    Here is the latest GISS vs. UAH LT: http://www.junkscience.com/MSU_Temps/MSUvsGISTEMP.html

  52. Posted Aug 31, 2007 at 1:21 PM | Permalink

    Is MSU data available for just the U.S. region? The surface data shows that the ROW has warmed more than the U.S. Does the MSU data show this as well? How does it compare?

  53. Sam Urbinto
    Posted Aug 31, 2007 at 1:31 PM | Permalink

    The page on Tamino’s site sure is crowded with data; all I got out of it was that there are simply too many graphs on the page, and not much else. Seems it depends on what you’re talking about as to what land vs. sat tells us. Corrected, uncorrected, LT, UT, MT, ground, etc. But it all looks pretty much the same, mostly. What’s a few decimal places between friends?

  54. David Smith
    Posted Aug 31, 2007 at 2:03 PM | Permalink

    Re #52 There is MSU data available for just the US (lower 48) and I think it matches GISS pretty well, now that the circa 0.15C adjustment to GISS has been made. However, 2006 matches poorly (GISS is noticeably warmer than the MSU data).

    Many good graphs are available here. They (surface and MSU) tend to show that the US has warmed less than the rest of the world.

  55. Paul Penrose
    Posted Aug 31, 2007 at 2:11 PM | Permalink

    The fact that the surface trends more or less match the sat trends is pretty much meaningless since we are talking about a time period of what, a little over 30 years? This is too short to mean anything if you want to talk about climate trends.
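    Paul's point can be quantified: monthly climate noise is autocorrelated ("red"), so the effective number of independent samples is far smaller than the number of months, and short-record trends carry wide error bars. A standard rough correction (a sketch of the usual lag-1 adjustment, not any agency's actual method) inflates the trend's standard error using the residual autocorrelation:

```python
import numpy as np

def trend_and_errors(y):
    """OLS trend of a monthly series, returning the naive standard error
    and one inflated for lag-1 autocorrelation via the effective sample
    size n_eff = n * (1 - r1) / (1 + r1)."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = max(n * (1 - r1) / (1 + r1), 2.0)
    se_naive = np.sqrt(resid.var(ddof=2) / ((t - t.mean()) ** 2).sum())
    return slope, se_naive, se_naive * np.sqrt(n / n_eff)

# 30 years of AR(1) red noise with NO true trend
rng = np.random.default_rng(2)
n = 360
y = np.zeros(n)
for i in range(1, n):
    y[i] = 0.8 * y[i - 1] + rng.normal(0, 0.1)

slope, se_naive, se_adj = trend_and_errors(y)
print(f"slope {slope:+.5f}/month, naive 2se {2 * se_naive:.5f}, "
      f"adjusted 2se {2 * se_adj:.5f}")
```

    With strongly red noise the adjusted error bar is several times the naive one, which is why a trend fitted over a few decades can look "significant" under naive assumptions and not under realistic ones.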

  56. Steve McIntyre
    Posted Aug 31, 2007 at 2:32 PM | Permalink

    I emailed Lawrimore asking him whether there were manuals or technical reports assessing the impact of adjustments needed to account for site biases. He replied as follows:

    This reporter misquoted me. Not only did he call me chief of NCDC, but he also misrepresented my discussion of adjustments for factors such as station moves, instrument changes, and changes in observer practices – which are accounted for. I had a 30 minute conversation with him and it appears he stuck together pieces that had no business being together and left out other critical points.

    Jay

  57. jae
    Posted Aug 31, 2007 at 2:35 PM | Permalink

    My 48: I meant not since 1998.

  58. fFreddy
    Posted Aug 31, 2007 at 2:44 PM | Permalink

    Re #56 Steve McIntyre

    …he also misrepresented my discussion of adjustments for factors such as station moves, instrument changes, and changes in observer practices – which are accounted for.

    So these things are accounted for ? So where are the manuals, etc. ?

  59. Sam Urbinto
    Posted Aug 31, 2007 at 2:51 PM | Permalink

    HA! Good luck on that one! lol, manuals. “What are we, babysitters?”

  60. Gunnar
    Posted Aug 31, 2007 at 2:53 PM | Permalink

    >> Here is the latest GISS vs. UAH LT

    The problem is that the information is lost in the aggregation. 1998 is one data point. However, you can still see that what I said is true: The satellite data shows a cooling of .6 degrees since april of 1998. The land based measurements show increases since 1998.

    >> The fact that the surface trends more or less match the sat trends is pretty much meaningless since we are talking about a time period of what, a little over 30 years? This is too short to mean anything if you want to talk about climate trends.

    Exactly. Statistics can sometimes be useful, but usually not: this batter is 6 for 10 in games that start before 3pm; the Maple Leafs have never lost when they have scored in the 22nd minute of the 1st period. Steve, sorry for the blasphemy. The plain fact is that the surface data says that it got hotter after 1998, and the satellite data says that April of 1998 was the high point and that it has cooled since then. Those are completely different pictures, and you can’t use statistical trends to change that.

    AGW proponents proclaimed to high heaven at the time that 1998 was caused by AGW. Their claim is costing my fellow Norwegians about a billion dollars per year. Live by the sword, die by it. Therefore, since it has cooled off since then, either CO2 has been reduced dramatically, or AGW is wrong.

  61. Bill F
    Posted Aug 31, 2007 at 3:49 PM | Permalink

    If I am not mistaken, they have been pretty open about how they do the time of observation and instrumentation change adjustments. It is the adjustments to take into account other biases due to station siting problems that Lawrimore sounds like he is backing away from. I can totally understand it if he was misquoted by the reporter. Given my own interactions with reporters on scientific issues, you are lucky if they get 25% of what you said right.

  62. Gerald Browning
    Posted Aug 31, 2007 at 5:37 PM | Permalink

    David Smith (#49),

The satellite temperature data is obtained by the inversion of an integral relationship (an extremely sensitive mathematical procedure even when the relationship is accurate). This relationship is quite complicated, especially in the presence of clouds, which affect radiation. From the tests that Sylvie Gravel ran, it was quite clear that the satellite measurements of temperature were of questionable accuracy, especially if there were no ground-based measurements to anchor the algorithm. Thus, over the oceans, the temperatures are arguably not trustworthy.

    Jerry

  63. STAFFAN LINDSTRÖM
    Posted Aug 31, 2007 at 6:41 PM | Permalink

#60 Gunnar… [before leaving home to work on a normally "free" night] If this planet cools substantially in the next 5-6 years, I can bet we'll have some revelations that CO2 was measured in the wrong way and/or in the wrong places. Compare Donald Rumsfeld "Footinthemouth": "There are known knowns…" sums it up nicely… But soon the resolution on Google Earth etc. will be so good that you can see whether small oaks are growing above 1000 m ASL in the Scandinavian Mountains… etc. etc. Well, gotta go… DID (Duty Is Duty)

  64. David Smith
    Posted Aug 31, 2007 at 8:07 PM | Permalink

    Re #62 I did not know that. Thanks, Jerry. Let me state what I think I am reading: the satellite-derived temperatures are anchored (something like a reality check) against surface observations, such that if the surface observations are flawed then the satellite-derived temperatures are also flawed, and flawed in the same direction.

    BTW, on the questions from earlier today on surface vs lower troposphere, here is a comparison of smoothed monthly surface and satellite-derived temperatures. I averaged NOAA and NASA for the surface and RSS and UAH for the satellite.

    The surface trend somewhat outruns the satellite trend, which to me is not realistic, and I suspect that the surface record still needs cleanup. But, that cleanup period covers less than 30 years and is only a piece of Waldo. My guess is that Waldo mainly resides amongst the older data, especially for the sparsely-sampled regions (like the oceans and polar regions).

  65. Gunnar
    Posted Aug 31, 2007 at 9:15 PM | Permalink

    >> If this planet cools substantially in the the next 5-6 years I can make a bet on we’ll have some revelations that CO2 was measured in the wrong way

Staffan from Sverige, I think you're right. The elephant in the room is the solar cycle. They are betting that the general public doesn't know about it. So they are already positioning themselves for cooling for several years, after which "global warming will start in earnest" in 2009, or something like that. However, they may not get a second chance at this, and they know it. Like a lot of manias, when reality proves it wrong, no one will say another word about it, and it will be like it never happened. Remember Y2K? I'd like to come to Sverige some day soon to go skiing, before I get too old.

  66. Gunnar
    Posted Aug 31, 2007 at 9:28 PM | Permalink

    #64, David, something is seriously wrong with your graph. The peak of the satellite curve is over .8, when it’s actually .78. It ends at .45, when it’s actually around .25 or something. I’m trying to bite my tongue about you proving me right so quickly about:

    There are three kinds of lies: lies, damned lies, and statistics.

    Please tell me it was some kind of honest mistake?

  67. Gerald Browning
    Posted Aug 31, 2007 at 9:53 PM | Permalink

    David Smith (#62),

That is essentially correct. If you peruse Sylvie's manuscript (link shown in the Exponential thread), she briefly mentions the integral relationship and the problems associated with using it to obtain temperatures from satellites. Typically the inversion of an integral relationship is an ill-posed problem (any small error will be amplified). That is why tomography (MRIs) uses multiple angles to reconstruct images of the brain, and even then there can be surface (skull) issues.
    To invert the vertical integral for satellites, usually an iteration is performed that uses something as the initial guess (e.g. the previous day's forecast). Whether or not this iteration converges is very questionable (let alone whether the integral relationship is accurate). Obviously it would be better to have many different angles from a satellite, similar to tomography. This has been investigated by Yuanfu Xie at NOAA, but there are still a number of problems that cannot be easily overcome. If you would like more info about any of these points, just let me know and I will be happy to expand or obtain some references.

    P.S. It is much easier to use wind data to update NWP models because it can be directly inserted
    into the model without any knowledge of cloudiness. And the wind (vertical component of vorticity) is the correct mathematical variable for slowly evolving solutions at all scales of motion.

    Jerry

  68. David Smith
    Posted Aug 31, 2007 at 10:32 PM | Permalink

    Re #66 Hello, gunnar. Thanks for taking the time to closely look at the graph.

    Two things: one, as mentioned, the plot is composed of smoothed data. Second, it is anomaly data, so that the Y-axis doesn’t have a lot of meaning. I’ll explain this in some detail tomorrow (bedtime now).

    Since I have the monthly anomaly data for GISS, NCDC, RSS and UAH on a spreadsheet I’ll be glad to display plots in any manner you might request. Also, I think I’ll post the spreadsheet, so that anyone can play with the data and make their own graphs.

  69. TCO
    Posted Sep 1, 2007 at 6:42 AM | Permalink

The key is how well they adjust. One can have a meaningful discussion without blustering in rage at a flat comment from a manager. For instance: list the kinds of possible interferences; list which are and aren't adjusted for, and how; comment on the validity of the various adjustments (some will be less questionable than others); and perhaps list the prevalence of different types of site issues. That would be a very useful way to frame the discussion with someone like this administrator. Blustering in rage, or making rhetorical demands for him to answer your questions, is not the path forward.

  70. Boris
    Posted Sep 1, 2007 at 7:23 AM | Permalink

    >> Here is the latest GISS vs. UAH LT

    The problem is that the information is lost in the aggregation. 1998 is one data point. However, you can still see that what I said is true: The satellite data shows a cooling of .6 degrees since april of 1998. The land based measurements show increases since 1998.

    >> The fact that the surface trends more or less match the sat trends is pretty much meaningless since we are talking about a time period of what, a little over 30 years? This is too short to mean anything if you want to talk about climate trends.

    Exactly. Statistics can sometimes be useful, but usually not…

So Gunnar is making a huge deal out of "cooling since 1998" (cherry-picking the El Nino year), then turns around and says 30 years, the standard time frame for averaging climate, is too short a time period. Gotcha.

  71. David Smith
    Posted Sep 1, 2007 at 7:58 AM | Permalink

    Here is my spreadsheet with the monthly global anomalies from UAH, RSS, GISS and NCDC. The UAH and RSS values are their lower troposphere (TLT) estimates.

    If anyone wants the raw data then simply click on the spreadsheet icon in the center and download the Excel spreadsheet. It can be fun to play with the data. It is updated through June 2007 and can of course be kept current.

    Regarding the chart I offered yesterday, the data is in the form of anomalies from base periods. The base periods vary and the y-axis really doesn’t mean much when one is comparing plots of anomalies. Anyway, as mentioned, the two curves are smoothed (1-2-3-2-1 smoothing). Also, the lower troposphere curve had 0.20C added to it to make it easier for eyeballs to see trends.

    What I’ll do is replot the curves using zero as the average value for each curve. I’ll do that later today.

  72. David Smith
    Posted Sep 1, 2007 at 8:15 AM | Permalink

    Re #71 A plot of the anomalies in which the y-axis is zeroed is here .

  73. David Smith
    Posted Sep 1, 2007 at 8:42 AM | Permalink

    Two final comments (I hope) on the plots in #72:

    First, note that there are times when the satellite-derived anomaly exceeds the surface (1998, for example) and times when the surface exceeds the satellite (2000, for example).

These divergences tend to be associated with ENSO activity (a big El Nino in 1998 and a somewhat big La Nina in 2000), and I think that makes physical sense. The two measurement systems are measuring somewhat different things (the satellite measures the lower troposphere) and ENSO affects the two in somewhat different ways, at least over the short term.

    Second, it’s easy to forget that there were two major volcanic eruptions in the first half of the satellite record. The first volcano (early 1980s) masked what would have been a very big El Nino and the second volcano (early 1990s) also masked a generally warm ENSO pattern. See the ENSO index here for specifics.

    Had those volcanoes not happened, then the temperature anomaly chart would have looked noticeably flatter for the 1979-2007 period. There would have been greater warmth in the first half of the chart.

    And, if one then looked at the longer time scale, it would be more evident that the global temperature rise of recent decades owes a lot to the temperature rise in the 1970s (our big, mysterious “climate regime change” of 1976).

  74. Gunnar
    Posted Sep 1, 2007 at 8:46 AM | Permalink

    #72, it still doesn’t seem right to me. It seems to have something dragging the average down at the beginning. I’m not sure what smoothing does for you.

    I’m starting to wonder if the false premise that climate folks have about the “signal” is the problem. The premise is that there is natural and unnatural variability. They treat natural variability as “noise”. Smoothing would be appropriate in the situation where you actually have noise, like EM interference or static in a video signal.

    In this case, there is no noise. It’s all valid data. No reason to smooth it. Please plot the actual data for both the UAH and the GISS. Do not average anything together.

  75. Gunnar
    Posted Sep 1, 2007 at 8:50 AM | Permalink

>> These divergences tend to be associated with ENSO activity

    Which is why the satellite is superior, since it doesn’t miss anything. ENSO activity is not noise and does not mask anything. We need to identify and analyze each major factor separately.

  76. Kenneth Fritsch
    Posted Sep 1, 2007 at 10:44 AM | Permalink

David Smith, I continue to be in essential agreement with what I believe you are providing as evidence that the GISS and satellite records are in reasonable agreement, particularly when we compare the anomaly trends over regional and global spaces. I had been of the view that the satellite records were less subject to uncertainty, but recently found otherwise on further research. Those records are adjusted also.

    If we assume that the GISS and satellite records are totally independent, then the comparison you make tends to strongly validate the common trend. Do you know for certain, or with high probability, that there is no cross-validation between the records in the adjustments (primarily of the satellite record to the surface)?

    The difference in temperatures that one sees between surface and satellite over short periods of time still bothers me, even with your explanations. Take a mild hump here and there out of the surface record over a period of 100 years and we could calculate significantly different trend lines. The bigger issue I have, and on which I would like to hear your comments, is the climate model projections (under the assumption of GHG warming) that the lower troposphere should be warming at a rate 1.2 times faster than the surface when globally averaged. The papers I have read on this deflect the discrepancy between measured and modeled temperature anomalies for the troposphere and surface (the largest discrepancy being in the tropics) by pointing to the wide range of uncertainties in the measured and modeled temperatures. If the models are correct on average and the satellite averages were correct, then the surface temperature trend must be reduced by approximately 20%. I do not think one can have it both ways, because the alternative explanation would be that we simply have too many uncertainties in measured and modeled temperatures to make the judgment. Of course, if the surface and satellite records are totally independent, and we judge them under that condition to give a reasonable common and equal trend, then the models on average are wrong and the whole GHG theory becomes less tenable.

    A further point of contention for me is whether we can compare localized records of temperature between satellite and surface, i.e. whether the satellite measurement has sufficient resolution. My contention in this matter is that climate change will vary by locality, making a global average less meaningful. I think this is my next project.

  77. John Norris
    Posted Sep 1, 2007 at 11:26 AM | Permalink

    re #62

    … the satellite measurements of temperature were of questionable accuracy, especially if there were no ground based measurement to anchor the algorithm …

    What? So the MSU satellite measurements are not totally independent from the surface measurements?

  78. David Smith
    Posted Sep 1, 2007 at 12:46 PM | Permalink

    Re #74 As requested ( link )

    Re #76, #77 As Jerry mentioned in #67, the satellite record is to some extent dependent on the surface observations, and errors in the surface record (GISS, etc) could also cause errors in the reported satellite record.

    I had always thought that the satellites were independent of the surface record and was taken aback when Jerry pointed out that they are not independent. That’s a bit depressing.

    I’ve read that models predict a 1.2x tropospheric warming versus the surface. Others reportedly predict less than that. As discussed before, the comparison (satellite vs surface) has problems and the things that Jerry mentioned may make that comparison an even greater problem.

    But, there’s hope for an alternate check of the models vs. actual measurements. This alternative has to do with the global climate models’ clear prediction of a much-warmer tropical upper troposphere versus the extratropical upper troposphere. The satellite data to examine this exists but I have not found it broken down by latitude. If I find it I will plot it.

  79. Gunnar
    Posted Sep 1, 2007 at 3:04 PM | Permalink

    >> Re #74 As requested ( link )

    Thank you David.

    Now, could you draw two trend lines for UAH, instead of one? Make the first one start in December of 1978 and end December of 1997, and the second one start January 1998 until present.

    If it looks like I expect, I think other engineers that have experience or knowledge of control systems will recognize this as a “step input”. And it’s no mystery what it was. Numerous direct hits from solar flares, delivering an enormous amount of energy to the earth.
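    Gunnar's request amounts to fitting separate least-squares trend lines to two sub-periods. A toy sketch of why a single trend line through a step-shaped series can mislead (synthetic numbers, not the actual UAH record):

```python
def ols_slope(t, y):
    """Ordinary least-squares slope of y against t."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    return num / sum((ti - tbar) ** 2 for ti in t)

# Synthetic "step" series: flat at 0.0, then flat 0.3 higher.
t = list(range(40))
y = [0.0] * 20 + [0.3] * 20

full_trend = ols_slope(t, y)            # positive: one line sees steady "warming"
pre_trend = ols_slope(t[:20], y[:20])   # zero before the step
post_trend = ols_slope(t[20:], y[20:])  # zero after the step
```

    A single fit reports a positive slope even though neither sub-period has any trend at all; only the two-segment fit reveals the step structure Gunnar describes.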

  80. Larry
    Posted Sep 1, 2007 at 3:40 PM | Permalink

    79, let’s be clear on terminology. Step input changes from one value to a second value, and stays there indefinitely. A pulse is a step up followed by a step down. As I’m sure you know, the responses are very different. Are you talking about a pulse?

  81. Gunnar
    Posted Sep 1, 2007 at 4:21 PM | Permalink

    >> Are you talking about a pulse?

    Good point, I meant a pulse or a step of a certain duration.

  82. David Smith
    Posted Sep 1, 2007 at 6:42 PM | Permalink

    Re #79 Here they are:

    UAH 1979-1997

    UAH 1998-6/2007

  83. Willis Eschenbach
    Posted Sep 2, 2007 at 1:37 AM | Permalink

    For a more accurate view of the surface vs troposphere question, here are GISS, HadCRUT3, MSU (tropospheric temperatures) from Christy and Spencer at University of Alabama Huntsville (UAH), and MSU from Remote Sensing Systems (RSS).

    This is the same data as the first graph, but using Gaussian averages instead of the raw data. Note the different y-scales. Of interest is the change in the satellite vs ground data post 2002, during which time the ground temperature records continued to rise, while the satellite records either showed dropping (RSS) or nearly level (UAH) temperatures.

    It is also notable that the GISS data is the most unlike the other three.

    Best to everyone,

    w.

  84. jae
    Posted Sep 2, 2007 at 8:12 AM | Permalink

    Willis: great plots. Those slopes will keep going down if we don’t see some rising temperatures in the next few years. The declines shown for the satellite data post 2002 are a little worrisome.

  85. John F. Pittman
    Posted Sep 2, 2007 at 9:18 AM | Permalink

    Great post, Willis. Thanks.

  86. David Smith
    Posted Sep 2, 2007 at 10:06 AM | Permalink

    Re #83 Excellent plots, willis, and the addition of HadCRUT3 is appreciated.

    The satellite/surface post-2002 divergence shows up well using Gaussian smoothing but is not so evident using simpler smoothings. Can you explain in layman’s terms why that would be? Thanks.

  87. Kenneth Fritsch
    Posted Sep 2, 2007 at 12:23 PM | Permalink

Here are some graphics of actual and modeled surface and troposphere temperatures, by latitude and longitude. The first image is from a yet-to-be-published paper, "Disparity of Tropospheric and Surface Temperature Trends: New Evidence", authored by David H. Douglass, Benjamin D. Pearson, S. Fred Singer, Paul C. Knappenberger, and Patrick J. Michaels and linked here.

    Click to access 0407075.pdf

    The second and third images are from this link:

    Click to access sap1-1-final-execsum.pdf

  88. Larry
    Posted Sep 2, 2007 at 1:23 PM | Permalink

    87, from the intro to the paper:

    The question of the degree to which Earth’s surface temperature is increasing is a climate problem of great interest. The pattern and magnitude of current and /or future warming has both ecological and economic implications. However, the science is not settled on these issues, as many outstanding questions remain.

    These guys are asking for it.

  89. Douglas Hoyt
    Posted Sep 2, 2007 at 1:26 PM | Permalink

    Willis (#83),

Perhaps GISS differs from the other time series because of the depopulation error described at http://www.climateaudit.org/?p=1798 (see comment 86). The error is probably not confined to just NYC.

  90. Kenneth Fritsch
    Posted Sep 2, 2007 at 1:35 PM | Permalink

    Sorry, I posted thumbnails above. See full size images from previous post below.

  91. David Smith
    Posted Sep 2, 2007 at 1:41 PM | Permalink

    Re #87

    The topic I’d like to explore is the GCM projections of changes in the atmosphere’s profile.

    From the IPCC report, here are model projections of temperature changes in the atmosphere at various heights. It shows considerably greater warming in the tropical upper troposphere than in the extratropical upper troposphere.

    The satellite data breaks out the tropics (20S-20N), which I’ve indicated on this modified IPCC graph ( link ). The inner (tropical) box should warm faster than the outer (extratropical) boxes. If we can find data for the temperature trends in those boxes then we should be able to see differential trends, or not.

    I’ve looked at the satellite weighting functions and it appears to me that TTS (total troposphere) would provide the best approximation of the boxes. I’ve not been able to find the data, however. I’ve e-mailed RSS with a request and will also ask UAH.

    Perhaps there’s some wrinkle that makes this a poor choice for examination, but I haven’t come across that.

  92. Kenneth Fritsch
    Posted Sep 2, 2007 at 1:59 PM | Permalink

    Re: #88

    These guys are asking for it.

The authors include some of the "bad boys" of climate science, so they will anticipate the "bought and sold by the fossil fuel industries" charge. It is my judgment that this statement, like the obligatory "it's AGW and we need to fix it now", really only reminds the reader where the authors sit on the political spectrum and has little to do with the science revealed in the paper.

    Re: #83

    There is a published satellite measurement that has a trend as high as 0.29 degrees C per decade. It is not quoted much, but was by the AR4. Using the same satellite inputs we can obtain trends from 0.14 to 0.29; a situation that does not give me a lot of confidence in the satellite measurements — or, at least, the rendering of them by various groups. Indeed we should really be looking at the claimed confidence limits of the measurements from the various sources, looking at how much they overlap and determining whether the measurements vary by source by more than statistically significant amounts.
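    Kenneth's suggestion of comparing claimed confidence limits can be sketched as follows: compute each group's decadal trend together with its standard error, and check whether the trend ± 2·SE intervals overlap. This is a generic OLS illustration on made-up monthly data, not the method any of the satellite groups actually use; note also that the i.i.d.-residual assumption understates the true uncertainty of autocorrelated climate series:

```python
import random

def trend_and_se(y):
    """OLS trend of a monthly series and its standard error, both
    expressed per decade (120 months). Assumes uncorrelated residuals,
    which understates real uncertainty for climate data."""
    n = len(y)
    tbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - tbar) ** 2 for i in range(n))
    b = sum((i - tbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx
    a = ybar - b * tbar
    s2 = sum((yi - a - b * i) ** 2 for i, yi in enumerate(y)) / (n - 2)
    return b * 120, (s2 / sxx) ** 0.5 * 120

random.seed(1)
# 28 years of invented monthly anomalies: 0.2 C/decade plus noise.
y = [0.2 * i / 120 + random.gauss(0, 0.1) for i in range(336)]
trend, se = trend_and_se(y)
# Two datasets "agree" when their trend +/- 2*SE intervals overlap.
print(f"{trend:.3f} +/- {2 * se:.3f} C/decade")
```

    Running this for each published rendering (0.14 to 0.29 C/decade) and checking interval overlap would make the "statistically significant disagreement" question concrete.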

  93. Paul Linsay
    Posted Sep 2, 2007 at 2:37 PM | Permalink

    #83, Willis,

    I’ve been saying the same thing that Gunnar says in #79 for a long time. The satellite time series is definitely not a simple straight line, it is a step function with one level pre 1998, and a second level post 2000 with a big El Nino pulse in 1998. Most of the fluctuations are due to El Ninos. From Wikipedia:

    Major ENSO events have occurred in the years 1790-93, 1828, 1876-78, 1891, 1925-26, 1982-83, and 1997-98.[15]
    Recent El Niños have occurred in 1986-1987, 1991-1992, 1993, 1994, 1997-1998, 2002-2003, and 2006-2007.

    Mark them on your raw data plot for the satellite data and then imagine what the temperature would look like without them. Flat.

  94. Gunnar
    Posted Sep 2, 2007 at 4:33 PM | Permalink

    >> it is a step function with one level pre 1998, and a second level post 2000 with a big El Nino pulse in 1998. Most of the fluctuations are due to El Ninos

I didn’t actually mention El Nino in #79, but I have in the past. I agree with you that an ENSO event was also a major factor in the 98-02 timeframe, along with extraordinary solar activity and the coincidental lack of a volcano.

    This is yet another example of how statistics can be very misleading. With a certain premise in mind (namely, that the temperature is gradually increasing), the statistician draws a single trend line.

  95. Ron Cram
    Posted Sep 2, 2007 at 9:23 PM | Permalink

    re:67
    Gerald,
    You wrote:

    To invert the vertical integral for satellites, usually there is an iteration performed that uses something as the initial guess (e.g. previous days forecast). Whether on not this iteration converges is very questionable (let alone if the integral relationship is accurate). Obviously it would be better to have many different angles from a satellite similar to tomography. This has been investigated by Yuanfu Xie at NOAA, but there are still a number of problems that cannot be easily overcome. If you would like more info about any of these points, just let me know and I will be happy to expand or obtain some references.

    Are you saying the satellite temp record is dependent on the surface temperature record? In Comment #78, David Smith seems to think that is what you are saying. I would like to know more about the process, specifically what percentage of an artificial warming bias would “teleconnect” to the satellite record? Let’s say it was determined that 0.3C of the recent warming was an artifact of UHI and poorly sited stations. How much of an impact would that have on the satellite record?

  96. Willis Eschenbach
    Posted Sep 3, 2007 at 1:51 AM | Permalink

    David Smith, thanks for the post. You say:

    Re #83 Excellent plots, willis, and the addition of HadCRUT3 is appreciated.

    The satellite/surface post-2002 divergence shows up well using Gaussian smoothing but is not so evident using simpler smoothings. Can you explain in layman’s terms why that would be? Thanks.

    Dunno … which “simpler smoothings” are you referring to?

    w.

  97. Willis Eschenbach
    Posted Sep 3, 2007 at 3:34 AM | Permalink

Paul Linsay, you raise an interesting point in your post:

    #83, Willis,

    I’ve been saying the same thing that Gunnar says in #79 for a long time. The satellite time series is definitely not a simple straight line, it is a step function with one level pre 1998, and a second level post 2000 with a big El Nino pulse in 1998. Most of the fluctuations are due to El Ninos. From Wikipedia:

    Major ENSO events have occurred in the years 1790-93, 1828, 1876-78, 1891, 1925-26, 1982-83, and 1997-98.[15]
    Recent El Niños have occurred in 1986-1987, 1991-1992, 1993, 1994, 1997-1998, 2002-2003, and 2006-2007.

    Mark them on your raw data plot for the satellite data and then imagine what the temperature would look like without them. Flat.

    Well, things are never quite that simple. All that removing the ENSO events can do is remove some of the humps. The underlying temperature data still contains a trend. A couple of graphs will illustrate what I mean.

There are several ways to measure the ENSO events, including the El Nino 3.4 Index, the Southern Oscillation Index (SOI), the Multivariate ENSO Index (MEI), and the Bivariate ENSO Timeseries Index (BEST). Here is a graph showing the indices, along with the UAH MSU tropospheric temperature. It seems to take about six months to a year for the ENSO variations to spread to the tropospheric temperature, so I have offset the MSU by one year.

    Note that the ENSO indices do not contain a trend, so I have compared them with the detrended tropospheric temperatures. The two regions where the fit is bad, following 1982 and 1991, are times of major volcanic eruptions (El Chichon and Pinatubo).

    However, this doesn’t remove the underlying trend from the tropospheric time series. Here is the UAH MSU troposphere temperature both with and without the El Nino effects:

    As you can see, eliminating the El Nino effect removes some of the swings, but does not affect the trend.
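    Willis's procedure (detrend the temperature series, regress it on the lagged ENSO index, then subtract the fitted ENSO component) can be sketched on toy data as follows; the series, lag, and amplitudes here are invented for illustration, not the real MSU or ENSO data:

```python
def ols(t, y):
    """Least-squares (slope, intercept) of y against t."""
    n = len(t)
    tbar = sum(t) / n
    ybar = sum(y) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
        / sum((ti - tbar) ** 2 for ti in t)
    return b, ybar - b * tbar

# Toy series: a linear trend plus an ENSO-like pulse that shows up in
# the "temperature" record `lag` steps after the index.
n, lag = 120, 12
enso = [1.0 if 40 <= i < 52 else 0.0 for i in range(n)]
temp = [0.002 * i + 0.3 * (enso[i - lag] if i >= lag else 0.0)
        for i in range(n)]

# Step 1: detrend the temperature (the ENSO index itself has no trend).
t = list(range(n))
b, a = ols(t, temp)
detrended = [temp[i] - (a + b * t[i]) for i in range(n)]

# Step 2: regress the detrended series on the lagged index, then remove
# the fitted ENSO component from the original series.
beta, alpha = ols(enso[:n - lag], detrended[lag:])
adjusted = [temp[i] - beta * enso[i - lag] for i in range(lag, n)]

# The pulse is (mostly) removed, but the underlying trend survives:
b_adj, _ = ols(t[lag:], adjusted)
```

    The regression recovers the pulse amplitude and strips the hump, yet the slope of the adjusted series is essentially unchanged, which is exactly Willis's point.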

    My best to everyone,

    w.

  98. Paul Linsay
    Posted Sep 3, 2007 at 6:09 AM | Permalink

    #97, Willis,

    Compare your analysis with the UAH plot. The UAH plot shows a clear step function, at least to me! BTW, no smoothing allowed, that introduces artifacts.

  99. Kenneth Fritsch
    Posted Sep 3, 2007 at 1:33 PM | Permalink

    RE: #98

    Compare your analysis with the UAH plot. The UAH plot shows a clear step function, at least to me! BTW, no smoothing allowed, that introduces artifacts.

Smoothing is used to show the presence of a trend. If one sees a step function (or whatever) in the unsmoothed graphics, that is the advantage of looking at the data both ways. Better still is to have a reasonable explanation for seeing a trend or a step function.

  101. Gunnar
    Posted Sep 3, 2007 at 1:46 PM | Permalink

    >> Smoothing is used to show the presence of a trend.

Don’t think so. I think the purpose of smoothing is to remove noise in the signal. In this case, there is no source of noise. But I agree that before drawing a trend line, it’s necessary to consider what is actually happening in the system. In other words, one cannot start with only knowledge of statistics. For example, one could look at temperatures from midnight until 2pm, presume a rising trend line, and conclude that we will soon burn up, or that people waking up is causing the temperatures to soar.

  102. Willis Eschenbach
    Posted Sep 3, 2007 at 2:53 PM | Permalink

    Paul, in #98 you say:

    #97, Willis,

    Compare your analysis with the UAH plot. The UAH plot shows a clear step function, at least to me! BTW, no smoothing allowed, that introduces artifacts.

    My blue line (in the second graph) is identical to the UAH plot, they’re the same data displayed in a different way. It does contain what appears to be a step function, but the step function disappears once the El Nino effect is removed.

    Also, I don’t understand your comment about smoothing in this case, as there is no smoothing used in the graph under discussion.

    w.

  103. David Smith
    Posted Sep 3, 2007 at 3:59 PM | Permalink

Re #96 Thanks, willis. I found an error (a missing minus sign) in my UAH database which, when corrected, produced a reasonably close resemblance in the final years.

    One question – your plots seem to end in early/mid 2007. Is that correct (does Gaussian averaging generate trailing averages) or is my read of the chart incorrect? Again, thanks for the various plots – they are illuminating.

  104. David Smith
    Posted Sep 3, 2007 at 4:44 PM | Permalink

In playing with smoothing I generated this chart of smoothed GISS and UAH data. It’s admittedly an odd smoothing (a 31-month moving average, which approximates the 2.5-year width of Willis’s filter). I set the 2007 end points to the same value so as to see how the anomaly patterns behaved relative to each other.

    What surprised me is that there are long periods when the two series agree, only to go through periods of divergence. I expected to see them more-or-less steadily drifting apart.

    I can attempt a partial explanation, involving ENSO, but it’s a shaky one.

  105. Paul Linsay
    Posted Sep 3, 2007 at 4:53 PM | Permalink

    #102, Willis,

How did you do the subtraction of the El Nino temperature profiles? The ENSO indices all have the same amplitude, which isn’t necessarily the case for the actual events. If you look at the UAH plot I linked, the step function is quite obvious, at least to me 😉

    Sorry, I was mistaken about the smoothing. I find it really, really annoying that all the climate time series have smoothed curves drawn through them. There’s generally no justification for it and it throws out a lot of information that’s in the fluctuations /rant>

  106. Gerald Browning
    Posted Sep 3, 2007 at 9:34 PM | Permalink

    Ron Cram (#95),

    Are you saying the satellite temp record is dependent on the surface temperature record? In Comment #78, David Smith seems to think that is what you are saying. I would like to know more about the process, specifically what percentage of an artificial warming bias would “teleconnect” to the satellite record? Let’s say it was determined that 0.3C of the recent warming was an artifact of UHI and poorly sited stations. How much of an impact would that have on the satellite record?

    When temperature retrievals from the satellites were inserted into the forecast model,
    they produced similar forecast accuracy to the insertion of wind data (see Sylvie Gravel’s
    manuscript under the Exponential Growth thread). This did not agree with the mathematical
    analysis in our updating manuscript. Then I realized that Sylvie had not removed
    the radiosondes from the temperature computations (integral inversion). When she ran the correct
    case (integral inversion only w/o any additional info), the temperature insertion led to much larger
    forecast errors, as expected. The logical conclusion is that the inversion depends on the use of
    data from the radiosondes. An analysis of the inversion of an ill posed process would not be easy.
    One first needs to know the accuracy of the physical approximation, the accuracy of the
    data used to invert the integral, and the sensitivity of the inversion to errors in the satellite measurements
    and radiosonde measurements. A feel for the sensitivity could be obtained by using the inversion
    w/o any external data and then comparing the answer to the correct observed temperature.
    But there can be errors in the radiosonde data too.

    Jerry

  107. Ron Cram
    Posted Sep 3, 2007 at 9:54 PM | Permalink

    re: 106
    Gerald,
    So if I am understanding you correctly, surface temperature readings do not in any way affect the satellite temp record. In the process you are describing, the radiosonde and satellite readings are used to control the behavior of computer climate models.

    So then errors in the surface record do not migrate into the satellite record. That is a good thing.

  108. Willis Eschenbach
    Posted Sep 4, 2007 at 12:09 AM | Permalink

    David, you say:

    One question – your plots seem to end in early/mid 2007. Is that correct (does Gaussian averaging generate trailing averages) or is my read of the chart incorrect? Again, thanks for the various plots – they are illuminating.

    I use an algorithm that has proven to be the most accurate at estimating the true Gaussian average at the end of a dataset. Where data is missing, the weighted average of the remaining data is simply scaled up to compensate. For example, if 10% of the weight needed for the Gaussian average is missing, the calculated average is multiplied by 1/(1-0.1). This is only an estimate, of course, but it is the best estimate I know of. It will be very close up to about a quarter of the filter width (FWHM), with increasing error after that. Since the filter is only 2.5 yrs wide, the average will not be off by much.
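    The endpoint trick described here can be sketched as follows (a minimal sketch, not Willis’ actual code; the 2.3548 factor converting FWHM to a standard deviation and the 3-sigma kernel cutoff are assumptions). Because the full kernel weights sum to 1, dividing by the sum of the weights that actually overlap the data is exactly the 1/(1 − f) scaling for a missing weight fraction f:

    ```python
    import numpy as np

    def gaussian_smooth_with_tail(x, fwhm=30):
        """Gaussian smoothing that renormalizes near the series ends:
        if a fraction f of the kernel weight falls outside the data,
        the weighted sum of the available points is scaled by 1/(1-f)."""
        x = np.asarray(x, dtype=float)
        sigma = fwhm / 2.3548          # FWHM -> standard deviation
        half = int(3 * sigma)          # truncate kernel at 3 sigma
        k = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
        k /= k.sum()                   # full kernel sums to 1
        n = len(x)
        out = np.empty(n)
        for i in range(n):
            lo, hi = max(0, i - half), min(n, i + half + 1)
            w = k[lo - i + half: hi - i + half]
            # w.sum() = 1 - f_missing, so dividing applies the 1/(1-f) scaling
            out[i] = np.dot(w, x[lo:hi]) / w.sum()
        return out
    ```

    In the interior the renormalization is a no-op (the full kernel is present); it only kicks in within about half a kernel width of either end.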

    w.

  109. Willis Eschenbach
    Posted Sep 4, 2007 at 12:22 AM | Permalink

    Paul Linsay, you raise several interesting questions, viz:

    #102, Willis,

    How did you do the subtraction of the El Nino temperature profiles? The ENSO indices all have the same amplitude which isn’t necessarily the case for the actual events. If you look at the UAH plot I linked, the step function is quite obvious, at least to me

    I subtracted the El Nino effect by regressing an El Nino index (I used the MEI, which has the highest correlation with MSU temperature), against the detrended MSU temperature. I then subtracted the regressed values (which are the amount of the MSU variation attributable to the El Nino) from the detrended temperature, and added the trend back in.
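    A minimal sketch of this detrend-regress-subtract procedure (hypothetical variable names; assuming ordinary least squares for both the trend and the index regression, as the description implies):

    ```python
    import numpy as np

    def remove_enso(temp, index):
        """Remove the ENSO-related component from a temperature series:
        1) detrend, 2) regress detrended temp on the ENSO index,
        3) subtract the fitted (ENSO-attributable) part, 4) add trend back."""
        temp = np.asarray(temp, dtype=float)
        index = np.asarray(index, dtype=float)
        t = np.arange(len(temp))
        # linear trend via least squares
        a, b = np.polyfit(t, temp, 1)
        trend = a * t + b
        detrended = temp - trend
        # regress detrended temperature on the ENSO index (e.g. MEI)
        slope, intercept = np.polyfit(index, detrended, 1)
        enso_part = slope * index + intercept
        return detrended - enso_part + trend
    ```

    On a synthetic series built as trend plus a multiple of the index, this recovers essentially the bare trend.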

    You say that the indices have the same amplitude, which is not the case for the events … how are you measuring the events, if not via the indices? In any case, the strength of the event is related to the area under the index rather than the amplitude of the variation — a short, sharp peak may not be as strong as a longer event with less amplitude.

    Finally, you say “I find it really, really annoying that all the climate time series have smoothed curves drawn through them. There’s generally no justification for it and it throws out a lot of information that’s in the fluctuations”. Me, I find it really annoying when they don’t have smoothed curves. I can see the information in the fluctuations with the naked eye, while the longer, slower changes are difficult to gauge by eye alone. Guess that’s what makes horse races …

    All the best,

    w.

  110. Posted Sep 4, 2007 at 5:19 AM | Permalink

    I was intrigued by the last sections of the combined surf/trop temperature plots above so I made my own plot combining all monthly anomalies of the four surface and satellite records for the last 5 years. I’ll try to emulate Willis and post the resulting graph here:

    Now, I’m wondering if in a world warming at unprecedented levels because of increasing GHGs, a 5 year period can exist where the following occurs:

    1) Basically no warming trend in either the surface or the troposphere (actually the linear trendline for RSS is negative and for CRU it is zero).

    2) The troposphere temperature anomalies are consistently below the surface ones for the whole 5 year period (need of “reconciliation” again?).

    Sure, the noise in a short period like this is big, and the different baselines for the anomalies must be taken into account. But still, while the former could potentially explain point 1, I find it difficult to imagine a physical explanation for point 2.

    In any case one surprising result for me was that the four lines seem reasonably well correlated at the global level for this period. At the regional or hemispheric level the disparity is much greater, with GISS always showing the warmest and most divergent record. But somehow it seems that all 4 sets manage to capture the same global trends, as is visible in the likely ENSO-related warming early this year followed by a marked cooling.

  111. Posted Sep 4, 2007 at 5:43 AM | Permalink

    Hmmm, why did my image link disappear after submitting the comment? It displays fine in the preview.

    Available here
    http://mikelm.blogspot.com/

  112. Gerald Browning
    Posted Sep 4, 2007 at 12:11 PM | Permalink

    Ron Cram (#107),

    That is not what I said. The updates were for short term weather forecasts on the order of 12 hours (see the manuscript cited above). After that the relative errors in the forecast were O(70%) due to errors in the boundary layer creeping up to higher levels. The satellite retrievals use an ill posed method to compute the temperature profile and that method involves all sorts of trickery. There must be information from some other source to help the inversion process, and even then the accuracy of the retrieved temperature profile is unknown. Over the oceans good luck unless there is a boat or submarine to anchor the method. I also clearly stated that the correct method would be to use something similar to tomography, i.e. where there are many satellite ray observations to compute the temperature profile. But that method also has serious problems. It has been known for a long time that the satellites have problems accurately determining the correct temperature profiles, and that is no surprise given the above mathematical difficulties. If someone can show that this is not the case, please provide a mathematical error analysis of the inversion process as currently used. I also pointed out that temperature is not the correct variable for insertion.

    Jerry

  113. Gunnar
    Posted Sep 4, 2007 at 12:53 PM | Permalink

    >> Over the oceans good luck unless there is a boat or submarine to anchor the method.

    You suddenly seem to be guessing here. There is no doubt that IR measurements need to be calibrated by independent temperature measurements. One need only look at a manual for an IR temperature instrument. If you ever worked in an engineering office, you would know that all instruments need to be calibrated. In our office, there was a special department that did nothing else. The first step of any qualification test was to check the calibration of every instrument.

    There is also no doubt that the satellite measurement system uses selected surface based measurements for this purpose. No “luck” is needed over the ocean, since there are well placed temperature measurement stations in the oceans.

    In addition, as far as I know, the problem is not “ill-posed”. This is a simple use of Planck’s Law. Please excuse me if I’m misunderstanding; I have not read all the posts above, and if this is already explained in detail, you don’t have to repeat it. Thermal imaging is a well known and mature technology. It is used in military, fire fighting, law enforcement, etc.

  114. SteveSadlov
    Posted Sep 4, 2007 at 12:59 PM | Permalink

    RE: #73 – In other words, the impacts of the great “0” to “1” bit flip / state change of 1979 would have looked more like a ringing step function response. But since the volcanic effects acted as a sort of series “inductor,” we ended up with a noisy saw tooth instead of a noisy square wave. My only question, as always, is, has the bit flipped back to “0” yet? We should know for sure in a couple years.

  115. Gerald Browning
    Posted Sep 6, 2007 at 8:11 PM | Permalink

    Gunnar (#113),

    1. State the accuracy of the satellite measurements both laterally and in
    relative error terms in the vertical direction.

    2. Write down the method for converting satellite data to temperature data in the presence of clouds and state the relative accuracy of that conversion in physical terms.

    3. State how cloudiness is measured at all levels by satellite and the relative accuracy of those measurements.

    4. Show how the conversion from satellite data to temperature data
    is performed without any independent data.

    5. State the source, type, and relative accuracy of any independent data that is used in the conversion.

    Once we have these facts, we can discuss analytically the posedness and accuracy of the process. No hand waving needed.

    Jerry

  116. Gunnar
    Posted Sep 7, 2007 at 10:18 AM | Permalink

    >> 4. Show how the conversion from satellite data to temperature data is performed without any independent data.

    Why do you ask this, when no one has said that they do this without calibrating against known surface-based temperature measurements? I merely pointed out that this requirement for calibration is not unique to satellite thermal imaging. All instruments need to be calibrated. For example, a thermometer doesn’t measure temperature; it measures how mercury rises in a cylinder.

    I was following you for a while, but when you said that luck was needed over oceans, you suddenly seemed to be guessing. And as for clouds, the image is of course degraded, but thermal imaging systems are able to see through clouds.

    1, 2, 3) Every instrument has errors, but are you saying you know something that the folks at UAH don’t know?

  117. Gerald Browning
    Posted Sep 7, 2007 at 11:58 AM | Permalink

    Gunnar (#116),

    These 5 questions were asked in order to quantitatively address all of the issues needed to clarify the accuracy of the conversion of satellite data to temperature. That is not a trivial process, and stating that it is just Planck’s law for black body radiation and that thermal imaging systems can see through clouds is very misleading. Please answer the 5 questions in detail and then the general reader can begin to see why satellite data is not the panacea that it is claimed to be.

    I also point out that my original comment to David Smith was that the temperature conversion from satellite data is not independent of in situ measurements from other sources, and is error prone for a number of reasons addressed in the questions you have been asked.

    Jerry

  118. Sam Urbinto
    Posted Sep 7, 2007 at 12:41 PM | Permalink

    Having done work requiring calibration to 6 decimal places with devices from national standards labs, as well as some IR thermal imaging array adjustment work, I wouldn’t think (unless orbiting the Earth is extremely smooth and the satellites very, very expensive) they’d stay as well calibrated as is claimed for as long as they’ve been in use.

    Note that this is not an argument from incredulity, simply a comment based upon some experience in this field.

  119. Gunnar
    Posted Sep 7, 2007 at 12:45 PM | Permalink

    Jerry,

    I don’t have to answer your questions, because I have never claimed any measurement to be error free. If you think that the satellite measurements are more erroneous than the UHI dominated surface network, the burden of proof is on you to demonstrate that. If you think the UAH guys are incompetent, then why not publish a paper showing how they are messing up. Again, are you saying you know something that the folks at UAH don’t know?

    My point is that the satellite data is comprehensive in a way that the surface based system cannot come close to matching. I also assert that comprehensive measurements are absolutely crucial to climate studies. I stand by what I actually said in #113, and not by the straw man you want to counter.

  120. Gunnar
    Posted Sep 7, 2007 at 12:49 PM | Permalink

    #118, Sam, they are not dependent on calibration prior to satellite launch. As someone else explained, they continuously calibrate with an on-board Oxygen sample, and are also continuously calibrated against live surface-based temperature measurements.

  121. Gerald Browning
    Posted Sep 7, 2007 at 8:42 PM | Permalink

    Gunnar (#119),

    You don’t have to answer my questions if you don’t know the answers or do not want to quantitatively analyze the accuracy of the temperature profiles obtained from satellite data using the current methods.

    I have not claimed that any measurement is error free. On the contrary, I have claimed that, given the errors in all of the components of the conversion process (errors from using the black body formula for clouds that are not black bodies, errors in the satellite data, and errors in any in situ data used to convert the satellite data to temperature profiles), one can do an analytical analysis of the accuracy of the retrieved temperature profiles, especially if the process is not ill posed as you have claimed.

    I find it amusing that you have not answered a single one of the 5 questions even though you have made a number of statements about Planck’s law, the transparency of the clouds, the use of in situ data with large
    errors but no quantification of those errors, the impact of those in situ errors on the conversion process, etc.

    At this point I am prone to believe that you are full of hot air.

    Jerry

  122. Gerald Browning
    Posted Sep 7, 2007 at 8:57 PM | Permalink

    Gunnar (#119),

    The UAH folks are not gods. Is there some reason that you believe that to be the case? The easiest way to settle this discussion is to produce the answers to the questions (which you are unable or unwilling to do). That is the reason they were asked – to stop the hand waving and do the analysis.

    Jerry

  123. Gerald Browning
    Posted Sep 7, 2007 at 9:13 PM | Permalink

    Gunnar (#119),

    A few more points for you to ponder. Global coverage does not necessarily imply accurate global coverage. And satellites have only been around for several decades, not long enough for climate info. If the satellite temperature is so accurate and helpful, why did it lead to worse short term forecasts when compared to in situ wind data? And if in situ surface temperature errors from the UHIs must be used in the conversion process, it is not a very good process.

    Jerry
