Did Canada switch from Engine Inlets in 1926 Back to Buckets?

Folland has been the leading IPCC authority on bucket adjustments. Folland et al 1993 carries out a comparison of early-1980s measurements from (presumably predominantly insulated) bucket and non-bucket instruments, arguing that the difference was about 0.08°C (less than the 0.12-0.18°C suggested in 2006 by Kent and Kaplan).

They reported a puzzling situation in the Gulf of Alaska where, according to the database used for the comparison, Canadian data came from “buckets or unknown instrumentation” (Folland allocates “unknown instrumentation” in with buckets for the purpose of his comparison, as shown below) and states that Japanese data is classified in their databases as also coming from “buckets or unknown instrumentation”. He goes on to observe that WMO 47 says that over 90% of Japanese data came from engine inlets and accordingly should have been in the opposite pool. One wonders why they wouldn’t simply have written to the Japanese and asked them to clarify the matter. How hard would that have been?

follan1.gif

Folland’s allocation of Canadian data to the bucket pool also caught my eye. The discussion of Brooks (1926!) included a comment from a Canadian saying that they were already obtaining accurate SST measurements from engine inlet pipes by 1926. I would be highly surprised if Canadian ships in the 1980s had reverted to the use of buckets. Again, how hard would it have been for Folland to have written to someone in Canada and asked for this information to be confirmed?

follan7.gif

I double-checked my reading of Folland et al 1993 to be sure that he had really allocated unknown measurements to buckets. If you don’t know how something was measured, what conceivable purpose would there be in including it in one’s comparisons? Here’s the original statement of methodology describing the construction of two groups: (a) buckets and unknown instrumentation; and (b) “unflagged” data assumed to come from engine inlets and hull sensors (see p 111).

follan10.gif

A reader might say that there could be some interpretation of this language under which unknown instrumentation was identified, but not necessarily pooled with buckets. If this were the only statement, then I’d be inclined to seek further clarification from the authors. However, later (p 112), they note that incorrect inclusion of engine inlets classified as “unknown” into the bucket class could lead to an under-estimate of the bucket/non-bucket differential, a differential that Folland et al 1993 cap at about 25%.

follan11.gif

Folland et al 1993 also has a couple of interesting comments on the timing of the introduction of insulated buckets into the UK fleet. On page 97, they stated that:

the U.K. voluntary observing fleet changed from the predominant use of uninsulated canvas buckets to that of insulated (black) rubber buckets in the 1960s and early 1970s.

Thompson et al 2008 stated that:

… after the mid-1960s are not expected to require further corrections for changes from uninsulated bucket to engine room intake measurements.

Now I’m not in a position to know the precise schedule of the transition from uninsulated to insulated buckets but, insofar as the U.K. voluntary observing fleet is concerned, Folland’s statement in 1993 that the transition was taking place in the 1960s and early 1970s hardly justifies the conclusion that the conversion had been completed by the mid-1960s. If the conversion was, say, only 33% or 50% complete, then some portion of the adjustment would get pushed later into the record. (And this is aside from the point previously made that insulated buckets seem to be intermediate between uninsulated buckets and engine inlets, and thus some portion of the total adjustment, whatever it is, has to be spread into the 1970s and later, when the conversion from insulated buckets to engine inlets was being completed.)
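The allocation arithmetic can be sketched with a toy calculation. The bias values and fleet fractions below are purely hypothetical assumptions (not figures from Folland et al 1993); the point is only that the fleet-average bias is a mix-weighted sum, so an incomplete conversion leaves a residual adjustment to be distributed into later years:

```python
# Hypothetical illustration only: the biases and fleet fractions are assumed
# for the sake of the arithmetic, not taken from Folland et al 1993.

def fleet_bias(frac_uninsulated, frac_insulated, frac_inlet,
               bias_uninsulated=-0.3, bias_insulated=-0.1, bias_inlet=0.0):
    """Mix-weighted average SST bias (deg C) across the measuring fleet."""
    assert abs(frac_uninsulated + frac_insulated + frac_inlet - 1.0) < 1e-9
    return (frac_uninsulated * bias_uninsulated
            + frac_insulated * bias_insulated
            + frac_inlet * bias_inlet)

# If the uninsulated-to-insulated conversion were complete by the mid-1960s,
# only the smaller insulated-bucket bias would remain:
complete = fleet_bias(0.0, 0.8, 0.2)   # -0.08 C
# If it were only 50% complete, a larger residual cold bias gets pushed
# later into the record:
halfway = fleet_bias(0.4, 0.4, 0.2)    # -0.16 C
print(round(complete, 2), round(halfway, 2))
```

Under these assumed numbers, half-completed conversion leaves roughly twice the residual bias, which is the sense in which part of the adjustment spills into the 1970s.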

And by the way, isn’t it bizarre that engine inlet techniques, already available in 1926, were still not being utilized in the 1960s?

References:
Brooks, C.F. 1926. Observing Water-Surface Temperatures at Sea. Monthly Weather Review 54, no. 6: 241-253. url
Folland, C. K., R. W. Reynolds, M. Gordon, and D. E. Parker. 1993. A Study of Six Operational Sea Surface Temperature Analyses. Journal of Climate 6, no. 1: 96-113.

71 Comments

  1. Posted Jun 1, 2008 at 8:54 PM | Permalink

    Steve-

    See slide 8 here:

    http://icoads.noaa.gov/marcdat2/John_Kennedy.ppt

    It is not clear what the pace of transition from non-insulated to insulated buckets was, and in fact, it seems no one knows. What happened in the UK may or may not be representative of the broader world. The notes to the slide say:

    “Currently lack the necessary metadata to crack this nut.”

    The uncertainties can be bounded (e.g., by assuming all insulated or all non-insulated), but apparently not reduced. Efforts to reduce the uncertainties will reflect arbitrary decisions or guesses.

  2. Alan S. Blue
    Posted Jun 1, 2008 at 9:47 PM | Permalink

    Was there absolutely no comparison done at the time of first introduction? There’s a veritable laundry list of reasons I’d run multiple concurrent instruments to observe some variable in a lab setting. Are there no ‘calibration charts’? No one wrote a paper in 1926: “A comparison of two temperature gathering techniques”?

    I can see the paper or report by whatever maritime commission being filed after finding the two devices ‘close enough.’ But there should be raw data of some sort. Yes?

  3. Jeff A
    Posted Jun 1, 2008 at 11:17 PM | Permalink

    I think you have to remember why these measurements were being taken, and it had little or nothing to do with climatic or weather conditions.

  5. Paul Maynard
    Posted Jun 1, 2008 at 11:55 PM | Permalink

    Accuracy of SST measurements

    This follows my post yesterday in Lost at Sea.

    (1) Can we assume that all methods of measurement were intended for engine management and not climate?
    (2) Can we assume that with any bucket method, accuracy greater than 1C is inconceiveable?
    (3) Do we have any data on engine intake accuracy? Hard to imagine that in such a hostile environment (salt water) any device would have been better than 1C!
    (4) It would be interesting to know why the Royal Navy were still using buckets as opposed to engine intakes (my suspicion is meanness, as presumably putting a thermometer in the engine intake would have been more expensive than a few tars swinging buckets on the end of a rope). Rope is a good insulator, by the way. Isn’t canvas? Has anyone compared canvas with wood with plastic?
    (5) Do we have any geographic plots of this data?

    Yet again, anomalies to 0.1C are being extracted from data that must have +/- 1C errors.

    Regards

    Paul

  6. nevket240
    Posted Jun 2, 2008 at 1:21 AM | Permalink

    The mention of rubber is a concern. Imagine a rubber bucket (dimensions?) being taken from a warm area inside the ship, thrown overboard, dragged through turbulent water, then hauled up. How far up? The Arctic air temp would surely affect both the turbulent water and the surface water in the bucket, which may already have been far warmer than the SST or air temp.
    Unless the distance from the seaman to the surface, the ship’s speed and the retrieval rate are constant, what’s the point of 0.03C?

    regards.

  7. KevinUK
    Posted Jun 2, 2008 at 5:30 AM | Permalink

    Sorry should have been ‘you then don’t report any adverse results’

    Oh, I also forgot: you obviously don’t disclose your methods, you claim IPR if anyone requests your source code, and if challenged you delay supplying your data for as long as you possibly can. After all, if you supply your data and methods then someone might only use them to prove that you were wrong, and we can’t be having that in climate science now, can we?

    And last but not least, you ensure that the only peer review carried out on your work is by ‘your mates’ and that your results are published in a ‘friendly’ journal where you know the editor(s) are sympathetic to ‘the cause’.

    KevinUK

  8. Gary
    Posted Jun 2, 2008 at 6:23 AM | Permalink

    And by the way, isn’t it bizarre that engine inlet techniques, already available in 1926, were still not be utilized in the 1960s?

    Bizarre? Maybe not. Several questions need to be answered. How many old ships built before 1926 and still operating forty years later had not been converted to inlet techniques and why? How extensively were inlet techniques adopted in new construction after 1926? How gradual or irregular was the transition between the two? The expertise of naval historians is needed here to understand the mix of reasons for the lag in implementation.

  9. Posted Jun 2, 2008 at 6:38 AM | Permalink

    The commencement of the use of engine inlet techniques was relatively late because, back in 1939, most ships were still coal burners and did not need cooling water: http://www.ozeanklima.de/English/Atlantic_SST_1998.pdf

  10. retired geologist
    Posted Jun 2, 2008 at 6:50 AM | Permalink

    snip – too much venting. Understandable, but take a deep breath.,

  11. Posted Jun 2, 2008 at 7:03 AM | Permalink

    I wonder – how does all this latest news on SST affect the error bars on temp change graphs?

  12. KevinUK
    Posted Jun 2, 2008 at 7:20 AM | Permalink

    Steve,

    Thanks for letting my #7 post stay (so far). Although it is largely ‘venting’, it is, I hope you agree, an accurate description of the practices that you have encountered during your very illuminating period of climate auditing over recent years.

    #10 retired geologist

    ” I just can’t believe that such poor quality data collection is used to promote the whole AGW story. Can it really be possible that having a sailor or deckhand toss a bucket overboard, haul it back in full of water, stick a thermometer in it, write down the temperature and then years or even decades later that number and many others like it gathered in equally sloppy fashion could be used to set international climate policy.”

    Yes, please believe it, and please tell all your friends and encourage them to visit this blog, as I have done with all my friends. Also encourage them to follow many of the links posted on the threads here and get them to make up their own minds about whether or not the data does in fact support the conclusions given in many of these so-called ‘peer-reviewed’ reports. Tell them that it’s well worth the time they’ll invest, as they will most likely rapidly reach the conclusion that it’s all somewhat of a ‘scam’. If they are taxpayers then perhaps they’ll be prepared to question why they are paying for all this, and maybe they’ll even be prepared to take some action, even if it is only to ‘spread the word’ further themselves.

    KevinUK

  13. Patrick Hadley
    Posted Jun 2, 2008 at 7:22 AM | Permalink

    I hesitate to post about statistics on a site run by and visited by experts, but it is pretty elementary that if you have enough readings accurate to 1 unit then you really can work out an average accurate to 0.01 units.

    This works on the assumption that the errors in measurement either through sloppiness or rounding are random and that given many thousands of readings they all balance out to give an almost zero effect.

    Obviously, when there are systematic errors all on the same side, as in the example about measuring sea temperatures using buckets, then giving averages to a high level of accuracy can be very misleading.
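    A quick simulation, with purely hypothetical numbers, illustrates both halves of this comment: zero-mean random reading errors of up to ±1 unit shrink roughly like 1/√n under averaging, while a shared systematic bias survives averaging untouched:

```python
import random

random.seed(0)
n = 100_000
true_sst = 15.0  # hypothetical "true" temperature, deg C

# Purely random reading errors of up to +/-1 C, zero mean: they average out.
random_readings = [true_sst + random.uniform(-1, 1) for _ in range(n)]
random_error = abs(sum(random_readings) / n - true_sst)

# The same readings plus a shared systematic cold bias (hypothetical -0.4 C,
# e.g. evaporative cooling of a canvas bucket): averaging cannot remove it.
bias = -0.4
biased_error = abs(sum(r + bias for r in random_readings) / n - true_sst)

print(random_error)   # small: the random part averages out
print(biased_error)   # close to 0.4: the systematic part survives
```

    The 0.4 C figure is an assumption for illustration; the structural point is that no amount of averaging reduces an error shared by every reading.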

  14. Richard Lewis
    Posted Jun 2, 2008 at 8:05 AM | Permalink

    On a related thread I posted the following:

    http://www.tc.gc.ca/marinesafety/bulletins/1989/08-eng.htm

    “It is common practice to provide a supply of low pressure steam or compressed air to maintain clear cooling water intakes. However, experience has shown that such arrangements will not maintain clear inlets on ships operating in anything but the lightest ice conditions.”

    Just out of curiosity I had googled “engine inlets” or some such phrase, and the link above was one of the first to be returned. I posted my (apparently largely unnoticed) comment to make the point that the variables in pursuing the pre-1979 surface temperature record are so vast as to make it a fool’s errand.

    Then comes this post …. So how many Canadian (Russian, Norwegian, et al) vessels inject(ed) “low pressure steam or compressed air to maintain clear cooling water intakes” and when did they begin doing it? And how frequently and at what latitudes? And what adjustments should be made to the reported “surface temperatures” from these vessels?

    How many more unknown unknowns before it is reasonably concluded that a meaningful global temperature history is impossible to achieve?

    I’m neither a scientist nor a statistician, but I engaged in managerial economic evaluations for 30 years and I have followed the “global warming” debate closely for the past 20 years. The hand-waving type of “analysis” that has flowed from the early breathless reports of the Mauna Loa CO2 readings from NCAR, et al, would have been thrown out of 90% of corporate research departments. (The 10% not rejecting such analytical sloth are, or will soon be, bankrupt.)

    Mr. McIntyre’s work has consistently demonstrated this point.

    Would that a concise, accurate and accessible (i.e., understandable by a serious lay reader) rehearsal of the irresolvable and intractable problems surrounding the surface temperature history of the planet were available!

    Mr. McIntyre and Dr. Wegman strike me as capable potential lead authors for such a crucial opus.

  15. KevinUK
    Posted Jun 2, 2008 at 8:16 AM | Permalink

    #13 PH

    Perhaps you should visit Numberwatch and also invest in buying JEB’s books.

    “enough readings accurate to 1 unit then you really can work out an average accurate to 0.01 units.”

    This is just the sort of statement that climate scientists use to justify their claimed (deluded) precision in deriving the global mean surface temperature anomaly for the last century. All those errors are not random (indeed, are any of them random?), so based on the central limit theorem they all cancel out, right? No, they don’t cancel out! The measurement errors are not all random (far from it), nor do they all have Gaussian distributions, so you can forget about applying the central limit theorem when it comes to climate science.

    KevinUK

  16. Howard
    Posted Jun 2, 2008 at 9:08 AM | Permalink

    Patrick and Kevin: There are random and systematic errors associated with recording data. Random errors would balance out as Patrick noted. An example of a systematic error from my experience is the measurement of groundwater depth. In this case, the foot and tenth of a foot marks are below the measuring point and it is very common that data recording errors are skewed to a false deeper water level reading. This is because the marks you can easily see are above the measuring point and the psychology of the mind pushes you to record the numbers that are more easily seen. Some of the sailors on board CA might want to chime in on the particular sources of potential systematic error when reading a SST thermometer.

    The different collection methods (different bucket types, engine inlet, etc.) would likely produce systematic errors skewed one way or another. However, without knowing what method was used to record unique readings, it seems that the generalized adjustments could potentially compound the errors. I’m sure the climate science profession is currently applying for grants to fund a study of the SST data collection methods such that the non-random systematic errors can be assessed and apportioned as appropriate.

  17. stan
    Posted Jun 2, 2008 at 9:33 AM | Permalink

    Steve wrote — “One wonders why they wouldn’t have simply written to the Japanese and asked them to clarify the matter. How hard would that have been?”

    Therein lies the crux of the matter. Quality science requires it. The groundless assumption (or wild-ass guess) is not quality science. And when the stakes are high, it doesn’t matter how hard it would have been. It must be done.

    I have a question for the scientists. Once upon a time, I cross-examined my share of witnesses in court. Shredding the credibility of these people would have been a simple and easy task. After which, no jury would have trusted anything they said — on this or any other subject. Once their carelessness would have been established, their credibility would have been zero. My question — do other scientists consider this kind of work credible? Does it influence the credibility associated with other work done by the same scientists?

    [I guess the question could be directed at Mann as well. After all the shortcomings demonstrated in his work, does he still have any credibility as a scientist?]

  18. retired geologist
    Posted Jun 2, 2008 at 9:57 AM | Permalink

    Stan #17

    I’m one scientist for whom none of the climate change data or climate change conclusions have any credibility at all. Since it now seems obvious that most of the raw data is suspect (I wanted to say that it’s just crap), it bothers me a great deal that, no matter how precise and correct the statistical auditing that Steve and others have done on work such as Mann’s, it’s basically just a climatic version of the trite phrase “garbage in, garbage out”. As to why people like Mann (I hesitate to use the term scientist) still have credibility, all I can say is, “beats the hell out of me”.

  19. Clark
    Posted Jun 2, 2008 at 10:11 AM | Permalink

    I have a question for the scientists. Once upon a time, I cross-examined my share of witnesses in court. Shredding the credibility of these people would have been a simple and easy task. After which, no jury would have trusted anything they said — on this or any other subject. Once their carelessness would have been established, their credibility would have been zero. My question — do other scientists consider this kind of work credible? Does it influence the credibility associated with other work done by the same scientists?

    Stan – it all depends on the particular scientific field. In my field (molecular biology), scientists are pretty critical of each others’ work. One problem with peer review is the fact that normally only a small number of people (2-3) review a given paper. When you submit a paper, you can “suggest” reviewers, and exclude specific reviewers (details depending on the particular journal), so that one can often craft the review environment. Another issue is that once someone publishes some high-profile papers, they can become the go-to person for a journal like Science or Nature, and strongly influence the type and slant of papers published there.

    You should remember that the editors of some journals who decide who reviews what papers, how to handle the reviews, and whether to reject or accept papers are not active scientists in their field. They are full-time employees of Nature Publishing (or whichever journal), and often have little experience in the lab beyond a PhD. Thus, they must rely heavily on the opinion of 1 or 2 reviewers. Other journals use active scientists as editors, and that seems a much better system to me.

    Also, reviewers are not paid for their efforts and do it on a voluntary basis. Some take it very seriously and go through every figure, while others are more lackadaisical and seem to just skim the papers that they read.

    Real problems can arise when a researcher is invested, in terms of money, career advancement, and reputation, in a particular outcome of an experiment. It is even worse when many researchers in the same field are invested in the same outcome. When that happens, getting the “right” result can take precedence over doing the experiment correctly. Not that these people are evil or badly intentioned; it’s just that choosing which replicate to present, or which dataset to rely upon, can be subtly influenced by where you think the models should lead, and is a subjective process. All scientists face these issues, because published papers present only the tip of an iceberg of perhaps years of experiments.

  20. UK John
    Posted Jun 2, 2008 at 12:13 PM | Permalink

    What Hadley and Climate Science need is a bigger computer to work out what the question was, they already have the answer.

    I am sure I read this in “The Hitchhiker’s Guide to the Galaxy”; it’s amazing how fact mimics fiction. By the way, the answer was 42 (or 0.7C) if you are a Climate Scientist.

  21. Posted Jun 2, 2008 at 12:21 PM | Permalink

    #15

    Because the measurement errors are not all random (far from it) and they do not all have gaussian distributions so you can forget about applying the central limit theorem when it comes to climate science.

    The central limit theorem doesn’t require a Gaussian distribution, just finite mean and variance, and independent samples. The Cauchy distribution is one example where the CLT doesn’t work, and there are examples where the distribution of the sample mean becomes more widely dispersed as the number of samples increases! But I think the general problem with applying the CLT to measurement errors is the requirement for independent observation errors (the randomness in your post?). Take a measuring tape, measure something 1000 times: how much more accurate is the average than the 1st measurement?
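    Both failure modes are easy to demonstrate numerically (all numbers below are hypothetical): sample means of Cauchy draws never settle down, and a fully correlated calibration offset, like the mis-marked tape, makes the average of 1000 measurements no more accurate than a single one:

```python
import math
import random

random.seed(1)

def cauchy():
    # Standard Cauchy draw via the inverse CDF; the sample mean of n such
    # draws is itself standard Cauchy, so averaging never tightens it.
    return math.tan(math.pi * (random.random() - 0.5))

cauchy_means = [sum(cauchy() for _ in range(10_000)) / 10_000
                for _ in range(5)]
print(cauchy_means)  # does not concentrate the way a finite-variance mean would

# The tape-measure point: a shared (fully correlated) calibration offset.
# Hypothetical: true length 10.0, tape off by +0.3, tiny independent jitter.
true_length, offset = 10.0, 0.3
measurements = [true_length + offset + random.gauss(0, 0.01)
                for _ in range(1000)]
mean_1000 = sum(measurements) / 1000
print(mean_1000)  # still ~10.3: averaging removed the jitter, not the offset
```

    Only the independent jitter is reduced by averaging; the shared offset passes straight through to the mean, which is the relevant analogy for a fleet-wide instrumental bias.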

  22. Mike
    Posted Jun 2, 2008 at 1:13 PM | Permalink

    Hey #14,

    “Would that an concise, accurate and accessible (i.e., understandable by a serious lay reader) rehearsal of the irresolvable and intractable problems surrounding the surface temperature history of the planet were available!”

    The best place to visit is surfacestations.org and the work of Anthony Watts. This is an ongoing project to survey (mostly by volunteers in the field) the historic temperature gauges in the United States. It clearly shows the problems of historic temperature readings, and the surfacestations.org site should be visited by everybody in the Global Warming (or the new replacement term, Climate Change) debate. Sometimes photos are the only way to reach the masses.

  23. henry
    Posted Jun 2, 2008 at 2:22 PM | Permalink

    Mike (22)

    Sometimes photos are the only way to reach the masses.

    And still there are those “climate scientists” that think the surface stations project is a waste of time: “A single picture is not data.”

    Ask them why they compare the Arctic ice pictures, then. If the installers of the original surface stations had taken pictures when they were installed, we could see the changes.

  24. H.Oldeboom
    Posted Jun 2, 2008 at 2:36 PM | Permalink

    I’ve been a merchant navy engineer. On our fleet (Shell Tankers Netherlands), sea water temperature thermometers were of the normal standard engine room design (including those on inlet condensers). These thermometers were not designed for scientific measurements but only to survive engine room conditions, and could not be read with high accuracy. Furthermore, the accuracy was also strongly related to the enthusiasm of the engineer concerned when writing these data in the engine room logbook.

  25. Bob Maginnis
    Posted Jun 2, 2008 at 3:02 PM | Permalink

    The sea chest of ships must be about 30 feet below sea level, and thus usually cooler than surface water, so the temps must have been usually skewed colder after the change from buckets to sea water intake. BTW, water has great heat capacity, and a bucket full of water doesn’t change temperature very quickly, insulated or not, especially if there isn’t a great delta T of water vs air temp. Try it yourself with a bucket of water.

  26. Posted Jun 2, 2008 at 3:05 PM | Permalink

    Re H.Oldeboom (24). I think this comment has got it in a nutshell: shipborne temperature measurements were never designed for the scientific accuracy with which they have been used in the global warming debate. The quality of the data was also dependent on the individual engineer’s or deckhand’s personal enthusiasm. This may not have been high.

    Frankly, I could not take seriously the idea of obtaining accurate temperature readings from a bucket, particularly when temperatures from specific depths of ocean were requested. How did you obtain water samples from varying depths using buckets?

  27. Jaye
    Posted Jun 2, 2008 at 3:18 PM | Permalink

    Obviously, when there are systematic errors all on the same side, as in the example about measuring sea temperatures using buckets, then giving averages to a high level of accuracy can be very misleading.

    To be absolutely clear “accuracy” describes the bias error in a set of measurements while “precision” is a measure of variance.

  28. Steve McIntyre
    Posted Jun 2, 2008 at 3:53 PM | Permalink

    #26, 27. I agree with the distinction between precision and bias. People have complained over and over about precision, but the data is what it is and people have to work with what they have. Let’s take the comments about precision as read and limit observations to ones about bias over time, as that’s the issue here.

  29. stan
    Posted Jun 2, 2008 at 3:59 PM | Permalink

    Clark (19),

    Thanks for the response, but the question is not directed toward publication and the review process. I realize that there are potential problems there. The question is what happens when a scientist has been exposed for having made major errors due to incompetence, laziness, unwarranted assumptions, wild-ass guesses, corruption or fraud? Do they continue to enjoy credibility in the scientific community? If so, why?

  30. Dave Andrews
    Posted Jun 2, 2008 at 4:09 PM | Permalink

    The following was posted on Watts Up: ‘Buckets,Inlets SSTs etc, part1’

    It’s lengthy, anecdotal but obviously based on experience over many years. It reinforces the many posts made by ex seamen both here and on Watts Up especially in relation to both the supposed warming bias in Inlet temps and the accuracy of measurements in the period Thompson et al considered. Methinks the latter may soon come to regret their foray into this area.

    “Jon Jewett (10:30:48) :

    I am a retired merchant marine engineer. I made my living on merchant ships from 1966 to 1999, when I retired as Chief Engineer of a 32,000 SHP steam turbine powered container ship.

    A ship moves through the water at between 12 knots and 22 knots. (Twelve knots for an old Liberty Ship from WWII, 22 knots fairly representative of new ships.) At even 12 knots, the speed is fast enough to cause turbulent flow of the sea water and thereby prevent a “warm” layer forming along the ship. In addition, very little of the ship is a significant heat source; only the area in way of the machinery spaces. Most of the ship is thermally inert cargo space that may be warmer or colder than the sea temperature. The main cooling water pump (Main Circulator) takes suction through the hull at a depth of 15 feet to 60 feet (depending on the ship) below the surface of the water. In addition, a Main Circulator will pump up to several thousand gallons per minute.

    In short, the concept that the water would be heated by a measurable amount is far fetched at best. The statement indicates that assumptions were made about the measurement process and there was no meaningful investigation before these opinions were voiced.

    Actually, there is cause for concern in using these numbers. Typically, either alcohol-type bulb thermometers or bimetallic thermometers were used. They were calibrated at the factory before they were installed, but never after that, as long as the readings were “close enough”. It was not until the late ‘60s that thermo-electronic (e.g. RTD and thermistor) methods started to enter the fleet. It is to be expected that, particularly with the older readings, each one may be on the order of up to 10 degrees off, with a random distribution.

    Regards,

    Steamboat Jack”

  31. Bob Koss
    Posted Jun 2, 2008 at 4:22 PM | Permalink

    What sort of standard is used when measuring the temperature of water in a bucket?

    How likely were any such standards to be followed? Especially if the weather is cold or stormy.

    Is any old starting temperature good enough for the thermometer to be inserted in the bucket?

  32. Steve McIntyre
    Posted Jun 2, 2008 at 4:30 PM | Permalink

    Again, folks, the issue is bias over time. If the errors are random measurement errors, then they are not matters that I wish to bother discussing here. Unless you can give a reason why the things change over time, relax.

  33. jeez
    Posted Jun 2, 2008 at 4:36 PM | Permalink

    How about a steady change in how far Sailors had to bend over to take measurements?

    BACKGROUND: The secular trend in the height of the US population has been almost neglected in a comparative perspective, despite its being a useful indicator of early-life biological conditions. AIM: The study estimated the height of the US population and compared it to Western European trends after World War II. SUBJECTS AND METHOD: The complete set of NHES and NHANES data were analyzed, collected between 1959 and 2004 by the National Center for Health Statistics, in order to construct trends of the physical stature of US-born men and women limited to non-Hispanic blacks and whites. Also analyzed was the trend in the height of US military personnel whose parents were also born in the USA. The trends and levels were compared with those of several European populations. RESULTS: The increase in the physical stature of US adults slowed down by mid-century concurrent with a substantial acceleration in height attainment in Western and Northern Europe. Military data corroborate this finding in the main. After being the tallest population in the world ever since colonial times, Americans are now shorter than most Western and Northern Europeans and as much as 4.7-5.7 cm shorter than the Dutch, who are the tallest in world today. CONCLUSION: Given the well-established relationship between adult stature and early-life biological welfare, it was hypothesized that either American diets are sub-optimal or that the universal health care systems and social safety net of the European welfare states are providing a more favorable early-life health environment than does the American health care system.

    PMID: 17558591 [PubMed – indexed for MEDLINE]

  34. retired geologist
    Posted Jun 2, 2008 at 4:37 PM | Permalink

    Re # Steamboat Jack’s comment about readings being “close enough”. Doesn’t it seem that all data used to support AGW doesn’t have to be any better than “close enough”, whether tree rings or temperature measurements. Or, as is often said, “Close enough for government work”. Or at least for government funded research.

  35. David Smith
    Posted Jun 2, 2008 at 6:08 PM | Permalink

    Here’s a 1963 paper on intake vs (special) bucket temperatures. It attributes a +1F bias to intake, apparently due to heat input into the instrument (thermowell conduction), among other factors, rather than actual heating of the water.

    Link

    Steve: I mentioned this in an earlier post.

  36. EJ
    Posted Jun 2, 2008 at 6:28 PM | Permalink

    stan says:

    June 2nd, 2008 at 9:33 am

    Stan – “Therein lies the crux of the matter. Quality science requires it. The groundless assumption (or wild-ass guess) is not quality science. And when the stakes are high, it doesn’t matter how hard it would have been. It must be done. I have a question for the scientists. Once upon a time, I cross-examined my share of witnesses in court. Shredding the credibility of these people would have been a simple and easy task. After which, no jury would have trusted anything they said — on this or any other subject. Once their carelessness would have been established, their credibility would have been zero.”

    As a registered engineer, I have a license which I put on the line for all the recommendations, reports and calculations I proffer. Apparently, scientists can say oops, I didn’t know better, I am sorry and trust me, it won’t happen again. No authoritative consequences for negligence or fraud. Sure, maybe a harsh review in some obscure journal or report, but no career-ending judgement.

    If I mess up, or choose not to share my methods, I lose my license and can no longer legally claim to be an engineer.

    In climate science, someone has to put his name on the line, for better or worse, for any recommendation(s) he makes. He needs to archive all reports, methods, data and calculations for validation and verification.

    Mann et al, Jones et al and many others etc., having been shown to be unscientific, unprofessional and downright wrong, should no longer be allowed to practice science, let alone continue to be authors and reviewers.

    Are there no boundaries, no consequences?

  37. David Smith
    Posted Jun 2, 2008 at 6:37 PM | Permalink

    Barnett’s 1984 paper on SST measurements is here. I didn’t notice an earlier link in this thread.

  38. jeez
    Posted Jun 2, 2008 at 8:40 PM | Permalink

    *crickets*

    On a more serious note since my joke above bombed.

    Due to a war-based economy, it would not be unlikely that every single thermometer put to sea by the US, warships or merchant marine, during WWII was made by a single manufacturer. The same could be true for other seafaring powers. This is certainly a potential time-based bias. Whether they were alcohol-based or mercury, single-sourcing of thermometers could easily introduce measurement bias of the order of magnitude under discussion.

  39. EJ
    Posted Jun 2, 2008 at 9:11 PM | Permalink

    How many boxes of thermometers were available for a voyage? How many did we break along the way? Were they calibrated?

    Were they …?

    Let alone the buckets and standard methods. Or were there any standard methods?

    If not, then it is anybody’s guess.

    Steve: Once again, complaints about precision or lack of precision are understood. Unless you can connect such an observation to changing bias over time, let’s leave this sort of thing alone. And yes, there were standard methods – whether they were observed or not is a different issue – so again, hold the complaining unless you’ve got some evidence to add.

  40. EJ
    Posted Jun 2, 2008 at 9:41 PM | Permalink

    Sorry Mr. McIntyre

  41. LadyGray
    Posted Jun 2, 2008 at 10:13 PM | Permalink

    Take a measurement tape, measure something 1000 times, how much more accurate is the average than the 1st measurement?

    That is an oversimplification. What if there are 1000 measurement tapes, each used once? Are those tapes made by the same manufacturer, or are they all from different manufacturers, or some mixture of many different manufacturers? Are they in English units, or Metric, or some mixture of the two? What temperature was the air when the measurement was taken? What is the designation for partitions of the unit measurement, is it in decimals or fractions? Did the same person take all measurements? Was there any written guidance for how to take the measurements? What was the humidity of the air when the measurement was taken?

    Science should be precise, both in what it knows and what it doesn’t know.

    Steve:
    This issue has nothing to do with this post. Let’s not spend bandwidth on the Central Limit Theorem.

  42. Clark
    Posted Jun 2, 2008 at 11:53 PM | Permalink

    stan says:
    June 2nd, 2008 at 3:59 pm

    Thanks for the response, but the question is not directed toward publication and the review process. I realize that there are potential problems there. The question is what happens when a scientist has been exposed for having made major errors due to incompetence, laziness, unwarranted assumptions, wild-ass guesses, corruption or fraud? Do they continue to enjoy credibility in the scientific community? If so, why?

    Scientists in my field caught in outright fraud (e.g., simply making up data) are finished in terms of funding, publications and (usually) a job. But I don’t see any evidence that that is what goes on in the climate science talked about here. Rather, it’s subjective judgements about data adjustments, and those are always arguable (clearly one point of this blog is to discuss them in a comprehensive, rigorous way to get at a more accurate product). Plus, fields often evolve slowly, and standing assumptions change over time in a way that early works aren’t rejected immediately but lose favor over a period of years as more robust and accurate approaches supplant them. It’s certainly true that some scientists are more respected in their field because the work they publish is consistently high quality, while others are held in less high esteem.

    One analogy I don’t think is accurate is one mentioned above about the fallibility of scientific conclusions versus engineering designs. Science is ALWAYS wrong. Scientific conclusions are always based on incomplete information, and we expect that anything we conclude will be found to be wrong in some substantial way in later studies. That’s why I find it disturbing when certain fields are so strongly wedded to a particular conclusion. I try to teach the students in my lab and in the classroom that data is the king, and our all-too-fallible interpretations are simply working models to try to put the data in context.

  43. tty
    Posted Jun 3, 2008 at 12:01 AM | Permalink

    #10

    The commencement of the use of engine inlet techniques was relatively late, because back in 1939 most ships were still coal burners and did not need cooling water

    This is simply not so. Every steam-powered ship, whether coal- or oil-burning, using either reciprocating machinery or turbines, has a condenser which must be cooled by sea-water. Indeed “condenseritis” has been the bane of “steamboats” since way back.

  44. Posted Jun 3, 2008 at 2:35 AM | Permalink

    Where does all this leave the PDO? Presumably the data comes from a higher preponderance of U.S. shipping? To what extent does that affect the offset required to correct the data? Does the disappearance of the 1940-70 cooling leave the multi-decadal PDO phase shift dead in the water?

  45. Pete
    Posted Jun 3, 2008 at 6:01 AM | Permalink

    Sea state influences the ocean surface temperature gradient vs. depth due to wave-action mixing. Could there be a bias one way or the other that correlates to sea state and varies based on bucket vs. engine coolant inlet methods? Bias may also change depending on whether air temp was above or below the water temp. I wouldn’t be surprised if some sea state data is available to correlate. It could be complicated by the duration/history of recent weather in a given area….

    The ship hull and length may introduce some more variation, especially for larger sea states that change bucket methodology (copied the last one, made it up, measured the spray/rain water, etc.) and affect engine-related coolant inlet flow away from a steadier state.

    The engine variations in higher sea state could be biased upward for warmer air temp if engines are working harder and increasing wall temp influences on the thermometer. For cooler air temp, the surface layer cooling may counter the wall temp upward bias.

    As a 1st cut, I’d suggest picking the larger ships that are (?) less influenced by sea state. Alternatively, look at “all” ships, but focus on the lower sea states.

  46. Posted Jun 3, 2008 at 1:56 PM | Permalink

    Re# Steve: “…hardly justifies the conclusion that conversion had been completed by the mid-1960s”

    I have just seen the UK Met Office/Hadley press release (29 May), available on its homepage, that again argues that the problem had been cleared up by the 1960s and that any necessary adjustments would not have any: “…significant effects on 20th century warming trends”

    I’ve got to go to my UK bed now but I look forward to reading trans-Atlantic opinions on the Met Office position on this topic when I wake up in the morning.

  47. David Smith
    Posted Jun 3, 2008 at 2:58 PM | Permalink

    Info from a 1938 paper on North Atlantic practices –

    For 1912 to 1931: Each day the ship’s observer takes one or more observations of a number of weather elements during the time that the ship is under way. The water-surface temperatures are recorded as a part of this routine observation, and those used in the present compilations were taken by either of two methods, namely the “bucket” method, or the “intake” method. The data gathered by ships using these two modes of observation have been here assumed to be homogeneous and have been combined into one mass of data.

    The greater number of the observations, about three-fourths, were made by the bucket method, and the other quarter of the observations by the intake method.

    The bucket observations are made by drawing up a sample of the surface water from the sea to the deck of the ship in a canvas or metal bucket, and immersing a thermometer, designed for this specific purpose, into the surface water sample. The observer is instructed to stir the thermometer in the water sample until the mercury column comes to a stationary height. The correct temperature of the water sample may then be read from the thermometer. In the intake method of observation, the water temperature is read from a permanently installed thermometer whose bulb is immersed in the condenser intake, where the water flows past the bulb of the thermometer as it enters the ship.

    Text from here.

    Bold type is by me.

  48. DocMartyn
    Posted Jun 3, 2008 at 3:24 PM | Permalink

    Look at this plot of average ship tonnage vs. year for the 20th century.

    Now, bigger ships have a bigger draft, until they hit natural limits: the depth of the Panama Canal. So we have deeper- and deeper-drafted vessels being made. Now most of the UK traffic would have been in the Atlantic and Indian oceans, whereas US-flagged ships would have needed to operate in both the Pacific and Atlantic.
    It makes sense to use bigger ships in the Pacific, so my guess is that the US used, on average, deeper-drafted vessels. In the late 60’s and early 70’s, Japanese shipbuilding took hold and the UK and US began buying ships from the Far East – the same mass-produced vessels. UK/US ships would then be the same.

  49. Harold Pierce Jr
    Posted Jun 3, 2008 at 11:24 PM | Permalink

    RE: Gavin, The Grinch Who Stole Harold’s Comment!

    Here is a comment I just posted at Gavin’s Garage re the buckets. The Grinch has a closet full of my comments with empirical data from Quatsino. GO: http://www.fogwhistle.ca/bclights to see its location.

    If Gavin can’t stand the heat, then he should get the hell out of the kitchen!

    RE: #63

    “By the way, Spencer’s first article on this subject (Part 1) was titled: Atmospheric CO2 Increases: Could the Ocean, Rather Than Mankind, Be the Reason?”

    He is probably right. Here is a snippet of data from the Quatsino, BC weather station (Elev. 7 m, Lat. 50 deg N), located near the very northern tip of Vancouver Island, for Sept 21 over the 1990-2000 and 2001-2007 intervals.

    1990-2000: mean Tmax 20.4 deg C, mean Tmin 9.4 deg C.

    2001-2007: mean Tmax 14.5 deg C, mean Tmin 10.1 deg C.

    Delta Tmax: -5.9 deg C; Delta Tmin: +0.7 deg C.

    I have data for the entire record which starts in 1895 for this site.

    For this one day, the sunlight is constant over the yearly sampling intervals and the photoperiod is 12L:12D. Tmax is a measure of the sea breeze coming in off the Pacific Ocean, whereas Tmin is the “forest breeze” coming down and out of the old-growth forest on the steep slopes that rise up from the sea. This is why the Tmin doesn’t change much.

    The drop in Tmax was abrupt: for 2000, Tmax 19.5; for 2001, Tmax 14.5; delta T, -5.0 deg C.

    Presumably the temperature drop is due to the PDO shifting into a cool phase after 30 years in a warm phase that started ca. 1970. Data from the lightstation (Elev. 21 m) was similar (delta Tmax, -4.9 deg C) except that the delta for the Tmin metric was -1.9 deg C.

    Buckets! Smuckets! I say let’s get back to basics: old-fashioned actuarial meteorology. And then call ’em as we see ’em.

    I can’t wait for Sept 21, 2008. Which way will Tmax go? I don’t know, but I’m not holding my breath.

    They say global warming stopped in 1998. At this site it is 2001, which is probably due to its more northern location. However, 7 years is too short a time to draw any conclusions. This could just be another curve ball that Mother Nature likes to throw to keep us guessing.

  50. Geoff Sherrington
    Posted Jun 4, 2008 at 12:08 AM | Permalink

    Re # 17 Stan

    I copped a poor lawyer in a Court. Here is your quote modified to reflect what happened:

    “I have a question for the lawyers. Once upon a time, I questioned my lawyer in court. Shredding the credibility of this person was a simple and easy task. After which, no jury would have trusted anything he said — on this or any other subject.”

    I’m not proud to recount this – the guy was taking the hard stuff – but there’s not much mileage in lawyers attacking scientists or vice versa. The reason is, we are not homogeneous groups with like individual capability.

    Much better to be positive in the knowledge that science, as other disciplines have, has contributed a lot to our daily comfort. Yes, there are bad scientists, but concentrate on the good ones.

  51. Harold Pierce Jr
    Posted Jun 4, 2008 at 1:07 AM | Permalink

    RE #50

    In general, scientists are not bound by any code of professional ethics, unlike lawyers, doctors, engineers, etc. snip

    The chief analyst in some laboratories, for example ore assay labs, however, is legally responsible for the results when he signs his name to the report, since the results can greatly affect the stock price of a small company exploring some “hot” new property.

    Gavin the Grinch whacked my comment because it is the first set of empirical evidence that falsifies the AGW hypothesis.
    Temperature data from Death Valley also falsifies the hypothesis that CO2 causes global warming, as does the data from Alice Springs, AU. Gavin whacked my comment about these sites as well as the link to the late John Daly’s website.

  52. Dr Slop
    Posted Jun 4, 2008 at 1:24 AM | Permalink

    Harold (#51)

    Try googling “society code of conduct” and you’ll see plenty of codes of professional ethics. You should be looking at compliance with those codes, rather than claiming they don’t exist.

  53. jeez
    Posted Jun 4, 2008 at 2:49 AM | Permalink

    Some raw data which may be of interest.

    http://arcweb.archives.gov/arc/arch_results_detail.jsp?&pg=3&si=0&nh=5&st=b

  54. MarkW
    Posted Jun 4, 2008 at 4:33 AM | Permalink

    Many container ships and tankers outgrew the Panama Canal years ago. That’s why they are talking about expanding the canal.

  55. Harry Eagar
    Posted Jun 4, 2008 at 10:03 AM | Permalink

    The average size of a ship is 6,000 tons?

    I haven’t seen a ship that small — fishing boats aside — offshore in over 30 years.

  56. kuhnkat
    Posted Jun 4, 2008 at 10:28 AM | Permalink

    Jeez,

    Apparently your URL is session-specific. That is, others cannot use it to get to where you were. We get a session time-out page.

  57. jeez
    Posted Jun 4, 2008 at 10:48 AM | Permalink

    Oops.

    ARC Identifier: 597848
    Title: Marine Meteorological Journals, 1879 – 1893
    Creator: Department of Agriculture. Weather Bureau. (1890 – 06/30/1940) (Most Recent)
    Department of the Navy. Bureau of Navigation. Hydrographic Office. (1866 – 05/09/1898) (Predecessor)
    Type of Archival Materials:
    Textual Records
    Level of Description:
    Series from Record Group 27: Records of the Weather Bureau, 1735 – 1979
    Location: Archives II Reference Section (Civilian), Textual Archives Services Division (NWCT2R[C]), National Archives at College Park, 8601 Adelphi Road, College Park, MD 20740-6001 PHONE: 301-837-3510, FAX: 301-837-1752, EMAIL: (Contact us): archives.gov/contact/
    Inclusive Dates: 1879 – 1893
    Part of: Record Group 27: Records of the Weather Bureau, 1735 – 1979
    Arrangement: Arranged numerically by log number.
    Scope & Content Note:
    This series consists of volumes entitled “Meteorological Journal” (a regulation Navy-issue publication) which were to be completed by masters of merchant vessels during their voyages. Continuing the pioneering work undertaken by Lt. Matthew Fontaine Maury, United States Navy, the U.S. Navy’s Hydrographic Office endeavored to enlist the help of commercial vessels in compiling data regarding meteorological conditions at sea. These meteorological journals were completed by the captains of over 650 merchant vessels transiting the Atlantic, Indian, and Pacific Oceans. Although the majority of the vessels represented in the journals were American, vessels from Great Britain, Germany, Austria, France, Canada, Italy, and other nations are included.

    Each journal documents the passages of one vessel, usually for a two-month period of time. Entries were made daily, noting particulars of the weather during the day. Also noted are the course and speed of the vessel, measured at two-hour intervals, along with the distance traveled.

    Each journal contains instructions for the proper notation of entries, and instructions for use and care of the principal weather measuring instruments: thermometer, aneroid barometer, and mercurial barometer.

    Access Restrictions:
    Unrestricted
    Use Restrictions: Unrestricted
    Specific Records Type:
    journals (accounts)
    Finding Aid Type: Index
    Finding Aid Note: These records are indexed by the series “Index to Marine Meteorological Journals, ca. 1940” (ARC Identifier 597847).
    Finding Aid Source: Weather Bureau
    Variant Control Number(s):
    Master Location Register (MLR) Entry Number: NC 3 127

    Copy 1
    Copy Status: Preservation
    Extent: 163 linear feet, 4 linear inches
    Count: 378 Legal Archives Box, Standard
    Storage Facility: National Archives at College Park – Archives II (College Park, MD)
    Media
    Media Type: Bound Volume
    Container ID: Boxes 1-378
    Physical Restrictions Note:
    Researchers are to only use the microfilm publication.

    Index Terms

    Subjects Represented in the Archival Material
    Meteorology

  58. Harold Pierce Jr
    Posted Jun 4, 2008 at 12:23 PM | Permalink

    RE: #52

    David Baines of the Vancouver Sun routinely reports stories of scams, swindles, rip-offs and fraud perpetrated by stockbrokers, and how these guys get off and don’t go to jail. The sub-prime mortgage scandal is a good example. These guys were selling mortgages to people who really didn’t qualify, were breaking the rules, and they knew it. But are any of them going to jail? Not a chance.

    snip – no need to vent.

  59. MPaul
    Posted Jun 4, 2008 at 2:38 PM | Permalink

    UC #21 and Steve throughout — I think the issue is not random variation vs assignable cause variation in the data; rather, it’s the bias introduced by the adjustments themselves. Take random samples from an unknown distribution, average groups of samples, and the CLT will evenly distribute random variation. Fine. If some source of non-random bias gets introduced, then there are statistical tools that allow us to detect the bias. Fine. Now if one takes the data and adjusts it based on some incorrect schema — then all bets are off. Presumably the schema is designed to remove the effects of non-random bias in the data. But if the schema’s not “right” then the schema itself becomes the source of bias. And it seems to me that the schema doesn’t need to be very far off for it to significantly distort the signal we seek. We see this over and over again in climate science. Climate scientists shouldn’t be speculating on when such-and-such change got introduced. Statistical techniques applied to the raw data should be able to tell us.
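The distinction drawn above — random error that averages out versus a wrong adjustment schema that becomes a bias of its own — can be sketched in a toy simulation. All numbers here are made up for illustration (a flat true value, a hypothetical +0.3 C "bucket correction" applied where none was warranted); nothing is taken from real SST data.

```python
import random

random.seed(42)

# True signal: a flat value of 0.0; each reading carries random error (sd 0.5)
true_value = 0.0
readings = [true_value + random.gauss(0, 0.5) for _ in range(10000)]

# Random error averages out: the raw mean lands close to the truth
raw_mean = sum(readings) / len(readings)

# Now apply an adjustment schema built on a wrong assumption, e.g. treating
# all readings as cold-biased buckets needing +0.3, when in fact none were.
# The schema itself becomes the bias, and no amount of averaging removes it.
adjusted = [r + 0.3 for r in readings]
adj_mean = sum(adjusted) / len(adjusted)

assert abs(raw_mean - true_value) < 0.05   # random error: averaged away
assert abs(adj_mean - true_value) > 0.25   # schema error: persists in full
```

The point of the sketch is only that averaging is powerless against a systematic offset introduced after the fact, which is why the correctness of the schema matters so much more than the sample count.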

  60. claytonb
    Posted Jun 4, 2008 at 6:27 PM | Permalink

    #59,
    You mean something like the USHCN algorithm that was supposed to detect station changes?

  61. stan
    Posted Jun 5, 2008 at 9:26 AM | Permalink

    Geoff (50),

    Say what?! This has absolutely nothing to do with criticizing the role of science or lauding lawyers.

    My concern is for science and its long term credibility. This is about responsibility in science — not just the responsibility of any scientist in doing his own research, but the responsibility of scientists to police their own. Steve has demonstrated that a lot of the climate “science” supporting the claims of impending climate catastrophe is the result of shoddy work (see e.g. his Ohio St presentation on Mann). More significantly, the review process which is supposed to catch such shoddiness has clearly failed on a consistent basis.

    Scientists have to police this for two reasons. First, many of the political “solutions” being proposed will cause tremendous suffering to billions of people. Scientists with the expertise to set the record straight have a moral duty to do what they can to keep that suffering from happening. Second, climate alarmists are claiming to speak for the entire scientific community. Unrebutted by scientists, the public will accept that claim as true. [If Al Gore tells the world you agree with him, the world will believe you agree with him unless you set the record straight.] Eventually, the truth will come out and the backlash will likely be proportional to the damage. The public will blame the scientific community whose “consensus” was used to propel the politics and policy. I would think that scientists would have a strong personal incentive to see that the public’s inaccurate impression is corrected.

  62. Pete
    Posted Jun 5, 2008 at 4:57 PM | Permalink

    #61: I believe it is not just the public that will blame the scientific community; the politicians will also. That could be good if the politicians then enact constructive changes to improve scientific practices/ethics and public education, which includes reading sites like this one.

    NASA- Please put error bands on your temperature curves.

  63. MC
    Posted Jun 5, 2008 at 5:05 PM | Permalink

    #61 I’m a scientist (a physicist), and though it is true that science is not “policed” in the same way, the degree of scrutiny depends on the field. Particle physics or materials science has little time for shoddy method. Climate science is inherently a bit more “wooly”, as the principle is to try and distill some patterns and behaviours in a multi-parameter system. However, this does not excuse bad scientific method.

    #59 and general. As an experimental physicist I feel I need to make the point that precision of a measurement (as in how close it is to a theoretical steady state value) is the key to all this, not the trend per se. This was stated much earlier, in that trends are being derived from data with larger measurement errors. Secondly, if your instrument only measures to +/- 1 degree, then it does not matter how many samples you take; the accuracy will not get any better. The measurements belong to what is called in maths a bounded set. You only know that your measurement lies between two limits. That is it. The only way you can improve accuracy is if you calibrate against a simultaneous measurement with better known accuracy. Any other Gaussian or CLT treatment must be backed up by test or specifically stated as an assumption. This is scientific method.

    Hence the problem we see with the SST is that it looks like the precision of the instruments is much worse than 0.1 degrees. In fact it looks to be 2-3% or more of the absolute reading for a lot of the historic readings, which, when a mean is taken away, results in hundreds of percent uncertainty. The conclusion is simple: the accuracy of the SST record is nowhere near good enough to resolve 0.1 degree trends, so we cannot use it and need to make better measurements.

    We do not have a useable understanding (beyond units of temperature) of how average temperatures have varied over the century. I do not need to go into statistics. The raw data is enough to tell me this.
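The bounded-set point above can be illustrated with a minimal sketch (hypothetical numbers throughout): when the measured quantity is steady and there is no independent noise to dither the readings, every sample is identical, so averaging cannot recover resolution the instrument does not have.

```python
# Hypothetical instrument that resolves only to the nearest whole degree,
# reading a steady true temperature of 15.4: every sample is the same 15,
# so the average never gets closer to the truth than the resolution allows.
true_temp = 15.4
readings = [round(true_temp) for _ in range(1000)]  # all read 15
mean = sum(readings) / len(readings)
assert mean == 15.0  # still 0.4 deg off after 1000 "samples"
```

This is the no-dither case MC describes; the averaging argument only helps when independent random error straddles the quantization boundaries, which itself is an assumption that has to be tested, not presumed.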

  64. retired geologist
    Posted Jun 5, 2008 at 8:20 PM | Permalink

    Physicist #63

    Your post basically confirms what I’ve been saying in several posts. There simply is no raw (as-is) data of sufficiently high quality to say very much at all about what the temperature of the atmosphere or the oceans is with any degree of accuracy. I’ve always thought that if the data is good, then no adjustments or mathematical manipulations should be needed. Certainly there is not enough accuracy to have a “consensus” on climate change.

  65. Geoff Sherrington
    Posted Jun 5, 2008 at 8:58 PM | Permalink

    Re # 61 Stan and # 63 MC

    Let’s not get at cross purposes – also this is OT. I write on CA because of dissatisfaction with the standards of climate and green/political science.

    In my career we were required to lodge all significant results and reports with a government repository. Our corporate survival depended on good science and others were run out of the country and ended up in jail for poor work and falsification and I have no sympathy.

    When a personal career depends on the excellence of your work and on delivering the goods, you tend to pay more attention to quality than those whose performance is judged (say) by number of peer-reviewed papers produced, indifferent as to importance.

    I am not exaggerating greatly to claim that proper science is under assault as never before. Galileo the Sequel comes to mind.

  66. stan
    Posted Jun 6, 2008 at 8:09 AM | Permalink

    63,

    “if your instrument only measures to +/- 1 degree then it does not matter how many samples you take the accuracy will not get any better.”

    What is the proper way to view the errors introduced by improper thermometer siting (see Watts)? It isn’t a simple matter of calculating a warming bias of X amount. The siting problems can vary by type and by degree (extent) and influence temperature readings by different amounts at different times of the year. If it can be shown that temperature recordings at a majority of sites can be wrong by a range of 0 to 5 degrees, how can such error be resolved to demonstrate a trend measured in tenths of a degree?

    [At my kids’ swim meets, parents are given stopwatches which purport to measure to one hundredth of a second. Two parents timing the same swimmer can differ by more than half a second. It seems that Watts’ survey is showing the same type of “operator” error exists for surface temperature stations.]

  67. MPaul
    Posted Jun 6, 2008 at 10:09 AM | Permalink

    Ugh, it’s been too long and my skills are rusty. Sorry Steve for getting into an elementary topic (and sorry for reducing this to a practical engineering discussion). If you have a population of data of unknown distribution, and you take random samples from that population and average subgroups of samples, the new distribution (of the averaged subgroups) will be approximately normally distributed and any random measurement errors will be remapped symmetrically about the new mean. So sayeth the CLT (if I’m remembering correctly).

    Let’s say you were measuring the bacteria level of frozen peas on a manufacturing line. Every 10 minutes you randomly take 5 samples, measure the bacteria level of each and average the five. Do this procedure a sufficient number of times. The distribution of the sample averages will be normal. You can calculate the mean of this distribution and you can calculate the standard deviation. Now let’s say you keep doing this every 10 minutes for several weeks. 99.73% of your sample averages will fall within your 3 sigma limits. If you now get 3 sample averages out of 5 consecutive sample averages that are outside of the 3 sigma limits, then it is highly, highly probable that something changed — something non-random. This is what an engineer would call assignable cause variation. There’s a whole branch of statistics dedicated to developing techniques that separate random variation from assignable cause variation. The manufacturing industry depends on them.

    You would never, ever, ever see Green Giant ‘adjusting’ the bacteria data the way climate scientists routinely adjust data. Such a practice would be considered reckless and would probably be the stuff of lawsuits. Rather, they rely on strict analytical procedures to determine when something changed.
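The control-chart procedure described above can be sketched in a few lines. Everything here is illustrative (a made-up baseline level of 10.0, subgroups of 5 as in the peas example, and an injected shift of +3 units standing in for an assignable cause):

```python
import random
import statistics

random.seed(7)

def subgroup_mean(mu, sd=1.0, n=5):
    """Average of n random samples drawn while the process runs at level mu."""
    return sum(random.gauss(mu, sd) for _ in range(n)) / n

# Phase 1: establish the center line and 3-sigma limits from
# in-control subgroup averages (200 subgroups at the baseline level).
baseline = [subgroup_mean(10.0) for _ in range(200)]
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Phase 2: monitor. A real shift of +3 units shows up as subgroup
# averages outside the limits, detected rather than "adjusted" away.
shifted = [subgroup_mean(13.0) for _ in range(5)]
out_of_control = sum(1 for m in shifted if not (lcl <= m <= ucl))
assert out_of_control >= 3  # the assignable cause is flagged
```

The design choice is the one MPaul describes: the limits come from the data's own in-control variation, and anything beyond them is treated as a change to be investigated, not a number to be corrected.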

  68. Pofarmer
    Posted Jun 6, 2008 at 5:42 PM | Permalink

    What is the proper way to view the errors introduced by improper thermometer siting (see Watts)? It isn’t a simple matter of calculating a warming bias of X amount.

    First, I would look at completely rural stations. And by rural, I’m not talking about stations at an airport 5 feet from a tool shed. Then I’d look at large-city and smaller urban stations as separate sets. I don’t think there’s anything to be gained by trying to create homogeneous data from non-homogeneous data sets.

    The next answer would be to throw it all out and start over.

  69. John F. Pittman
    Posted Jun 7, 2008 at 5:55 AM | Permalink

    #66-68

    What is the proper way to view the errors introduced by improper thermometer siting (see Watts)? It isn’t a simple matter of calculating a warming bias of X amount.

    I think the proper way to state it is that the range of error is ±x, not just +x. As discussed on this thread http://www.climateaudit.org/?p=3114 #153, it appears an audit of the claimed precision and accuracy of the century temperature trend and its sd/se is needed.

  70. Juraj V.
    Posted Oct 6, 2009 at 3:25 PM | Permalink

    I have been thinking about CA’s bucket SST issue and I have realized that, by that abrupt adjustment based on false sampling premises, “they” managed to cut down the top of the SST warm period by 0.3 deg C before it fully peaked in the ’50s.
    By applying the sampling issue correctly, the SST graph should look more like this: http://blog.sme.sk/blog/560/190772/hadsst2corr.jpg
    70% of Earth is covered by ocean; above-ocean air temperatures (which were derived from SST before sat data became available) have 70% weight in the global data sets. When “they” prematurely cut down the SST warm peak, they also removed the top of the warm peak from the global temperature data sets in the ’50s-’60s.
    Had the corrected SST data been used, oceans in the ’50s would have been as warm as oceans in the 2000s.
    I stand by my case that this “all engine intake since 1945” premise has actually been another artificial manipulation of the global temperature data sets. Those [self-snip] simply cut the warm SSTs down and thus cooled down the whole inconveniently warm period.


One Trackback

  1. […] Buckets and Engines, The Team and Pearl Harbor, Bucket Adjustments: More Bilge from RealClimate, Rasmus, the Chevalier and Bucket Adjustments, Did Canada switch from Engine Inlets in 1926 Back to Buckets?; […]