HadSST3

A new HadSST3 version has recently been published. It starts the process of unwinding Folland’s erroneous Pearl Harbour bucket adjustment, an adjustment that has been embedded in HadSST for nearly 20 years.

Folland’s erroneous adjustment was originally criticized at CA in 2005 here and discussed further at length in March 2007 at CA here, a post in which I observed that no climate scientist had made any attempt to validate Folland’s bizarre adjustment. I also observed that correcting Folland’s error (through a more gradual and later changeover to engine inlets than the worldwide overnight change that Folland had postulated after Pearl Harbour) would have the effect of increasing SSTs in the 1950s, in turn potentially eliminating or substantially mitigating the downturn in the 1950s that was problematic for modelers.

However, not until Thompson et al 2008 (submitted January 2008; published May 2008) was the problem with the Folland adjustment clearly acknowledged by the “community”. The importance of Thompson et al in resolving the problems arising from the Folland adjustment was credited by Susan Solomon and Phil Jones in the commentary accompanying the Nature article. Both lead author David Thompson and co-author Mike Wallace, though very prominent climate scientists, had negligible (or no) publishing history on the topic; as one commenter at James Annan’s blog put it, they came out of “left field”. Thompson was an ozone specialist. The other co-authors, John Kennedy of the Hadley Center and Phil Jones of CRU, were, of course, actively involved in the field.

Now over three years later, in a new SST edition (HadSST3), the Hadley Center has accepted and implemented Thompson et al’s criticism of Folland’s Pearl Harbour adjustment. Instead of implementing an overnight changeover to engine inlets in December 1941 as before, the changeover is now phased in through the mid-1970s. This results in changes to SSTs between 1941 and ~1975.

In the new edition, they introduce a new adjustment for an intermediate changeover from uninsulated buckets to insulated buckets. Under the Pearl Harbour assumption, insulated buckets had either been disregarded (because of the assumption that a changeover to engine inlets had been complete at Pearl Harbour) or were held to be of only marginal significance and thus not included in the calculations (the apparent position of Rayner et al 2006).

However, this previously non-existent adjustment is important to HadSST3. There is evidence (discussed at CA here) of buckets being widely used into the 1970s, despite the Pearl Harbour assumption. By distinguishing between uninsulated buckets and insulated buckets and providing for a changeover from uninsulated buckets (pre-WW2) to insulated buckets by the 1970s, most of the effect of a gradual changeover is allocated prior to 1975, thus limiting changes after the 1970s, where there is also a satellite record that would need to be reconciled. While there is still an effect for changeover from insulated buckets to engine inlets after the 1970s, there is also evidence that the introduction of buoys in this period results in an offsetting cold bias. I referred to these issues in the final post of my 2008 series on Thompson et al 2008 here. I did one more post on SST at the time: a short consideration of the ICOADS data set here.

The new HadSST3 dataset still contains some seemingly arbitrary assumptions. They assert that 30% of the ships shown in existing metadata as measuring SST by buckets actually used engine inlets, and proceed to reallocate the measurements on this assumption:

It is likely that many ships that are listed as using buckets actually used the ERI method (see end Section 3.2). To correct the uncertainty arising from this, 30±10% of bucket observations were reassigned as ERI observations. For example a grid box with 100% bucket observations was reassigned to have, say, 70% bucket and 30% ERI.
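Kennedy et al 2011 do not publish code for this step, but the quoted description might be sketched roughly as follows. Treating “30±10%” as a Gaussian draw per grid box is my assumption, and the function name is hypothetical:

```python
import random

def reassign_buckets(n_bucket, n_eri, frac_mean=0.30, frac_sd=0.10, rng=None):
    """Move a randomly drawn fraction (~30 +/- 10%) of a grid box's
    bucket observations into the ERI (engine-room intake) category."""
    rng = rng or random.Random(0)
    # Draw the reassignment fraction and clamp it to [0, 1].
    frac = min(max(rng.gauss(frac_mean, frac_sd), 0.0), 1.0)
    moved = round(n_bucket * frac)
    return n_bucket - moved, n_eri + moved

# The example from the quotation: a grid box recorded as 100% bucket
# observations becomes roughly 70% bucket / 30% ERI.
bucket, eri = reassign_buckets(100, 0)
```

Note that the total observation count is preserved; only the method labels (and hence which bias correction is applied) change.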

The supposedly supporting argument at the end of Section 3.2 is as follows:

It is probable that some observations recorded as being from buckets were made by the ERI method. The Norwegian contribution to WMO Tech note 2 (Amot [1954]) states that the ERI method was preferred owing to the dangers involved in deploying a bucket. This is consistent with the first issue of WMO Pub 47 (1955), in which 80% of Norwegian ships were using ERI measurements. US Weather Bureau instructions (Bureau [1938]) state that the “condenser-intake method is the simpler and shorter means of obtaining the water temperature” and that some observers took ERI measurements “if the severity of the weather [was] such as to exclude the possibility of making a bucket observation”. The only quantitative reference to the practice is in the 1956 UK Handbook of Meteorological Instruments HMSO [1956] which states that ships that travel faster than 15 knots should use the ERI method in preference to the bucket method for safety reasons. Approximately 30% of ships travelled at this speed between 1940 and 1970.

This adjustment would reduce the difference between HadSST2 and HadSST3, though the size of the impact was not reported in Kennedy et al 2011. I think that it is reasonable to hope for more conclusive documentary support for overwriting actual data, particularly given that the changes described in Kennedy et al 2011 arise from unwinding previous adjustments that were themselves made without documentary support.

Another somewhat quirky methodology of Kennedy et al 2011 is reported as follows:

Some observations could not be associated with a measurement method. These were randomly assigned to be either bucket or ERI measurements. The relative fractions were derived from a randomly-generated AR(1) time series as above but with range 0 to 1 and applied globally.

I have no idea at present why one would do things this way or what its effect is. It seems like an odd methodology.
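For concreteness, the quoted procedure might look something like the following sketch; the AR(1) coefficient, the min-max rescaling to [0, 1], and all names here are my assumptions, since the paper gives no further detail:

```python
import random

def ar1_fraction_series(n, phi=0.9, seed=42):
    """Generate an AR(1) time series and rescale it to the range [0, 1],
    for use as a global 'fraction of unknown-method obs that were buckets'."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x]  # rescale so min=0, max=1

def assign_unknown(rng, bucket_frac):
    """Randomly label a single unknown-method observation."""
    return "bucket" if rng.random() < bucket_frac else "ERI"

fracs = ar1_fraction_series(60)  # e.g. one fraction per year
rng = random.Random(0)
labels = [assign_unknown(rng, f) for f in fracs]
```

One apparent consequence of such a scheme is that the bucket/ERI split for unknown-method observations wanders autocorrelatedly through time rather than sitting at a fixed climatological fraction, which presumably feeds into the spread of the published realisations.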

Overlooked thus far is the impact of the HadSST revision on the relationship between HadSST and CRUTEM, both said to be “independent” series. The three series – HadSST2, HadSST3 and CRUTEM – are shown in the figure below from 1940 (the point of departure) to 2006:

Figure 1. Global HadSST2 (black), HadSST3 (red) and CRUTEM (green).

Although the results of these series are often said to be mutually supporting, between 1975 and 2006 CRUTEM increased substantially more than HadSST, with both HadSST versions very similar in this period: CRUTEM 0.243 deg/decade; HadSST 0.135 deg/decade. Over the 1940-2006 period, the difference in trends is 0.082 deg/decade, resulting in a cumulative difference of about 0.54 deg C (66 years at 0.082 deg/decade).

Over the 1940-2006 period illustrated in the above figure, the HadSST trend is reduced by 35% from 0.074 deg/decade (HadSST2) to 0.048 deg/decade (HadSST3). The decrease in 1950-2006 trend from HadSST2 to HadSST3 is 29.5%, a value that is “remarkably similar” to the figure of 30% postulated in 2008 by Pielke Jr. (Both Pielke’s calculation and my related calculations were restricted to the effect of unwinding the Pearl Harbour assumption and were thus not apples-to-apples to the present HadSST2-HadSST3 differential, which incorporates other adjustments.) Given the closeness of Pielke’s 30% to the actual decrease in 1950-2006 trend from HadSST2 to HadSST3, it is remarkable that Schmidt singled this estimate out for particular contumely (while not criticizing the 20-year failure of IPCC scientists to unwind the erroneous Folland method.)

Rather than reporting the change in trend for the HadSST series that had been illustrated in the first two figures of the realclimate post (SST series had been at issue in Thompson et al 2008 and the subsequent discussion), Schmidt estimated changes in trend for HadCRU by combining the HadSST changes with unchanging CRUTEM (70% HadSST; 30% CRUTEM), only reporting the decrease in land-and-ocean trend (and not the decrease in SST trend.) On a 70% basis, the 29.5% decrease in HadSST trend equates to about 20% on HadCRU basis (Schmidt reported 17%.)
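The arithmetic behind the two ways of presenting the change can be checked in a few lines. The sketch below uses the 1950-2006 HadSST trends quoted in the comment thread (0.097 and 0.068 deg C/decade); the CRUTEM trend of 0.17 deg C/decade is an assumed value, chosen only so that a combined land-and-ocean figure can be illustrated:

```python
# 1950-2006 trends in deg C per decade.
sst2, sst3 = 0.097, 0.068   # HadSST2, HadSST3 (quoted in the comment thread)
land = 0.17                 # assumed CRUTEM trend, for illustration only

# Drop expressed on the SST trend itself.
sst_drop = (sst2 - sst3) / sst2                      # ~0.30

# Weighting 70% ocean / 30% land with an unchanged land trend dilutes
# the decrease when expressed on a land-and-ocean (HadCRU-style) basis.
combined2 = 0.7 * sst2 + 0.3 * land
combined3 = 0.7 * sst3 + 0.3 * land
combined_drop = (combined2 - combined3) / combined2  # ~0.17
```

With these inputs the SST-basis decrease is about 30% while the 70/30 land-and-ocean basis yields about 17%, illustrating how the choice of basis changes the headline number.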

Which is the “right” way of presenting the calculation: SST or combined land-and-ocean? There are pros and cons to each. Given that CRUTEM and HadSST are held to be independent series, I think that it is definitely important to visualize the impact of the change from HadSST2 to HadSST3 against CRUTEM, particularly since the discrepancy between the two series increases with the HadSST3 revisions. The original blog discussion at CA was entirely focused on SST (CRUTEM never being mentioned), though one graphic used HadCRU even though the discussion was about HadSST. Be that as it may, in a discussion of HadSST, even at realclimate, it wouldn’t have been out of place for Schmidt to actually mention the decrease in 1950-2006 trend from HadSST2 to HadSST3: 29.5%.


63 Comments

  1. Hector M.
    Posted Jul 12, 2011 at 11:50 PM | Permalink

    Steve, a hyperlink is missing in your sixth paragraph, where you write “There is evidence (discussed at CA here)”.

  2. nevket240
    Posted Jul 13, 2011 at 12:13 AM | Permalink

    Talk about shooting one’s foot.
    My eyes are not that great but do I see an implied cooling in SST3 that was not in 2?
    What has been done is to accentuate the climate cycle shift of the mid 70’s.
    MMmmm. This is a tactical shift. I am pondering why.
    regards

    • Jit
      Posted Jul 13, 2011 at 7:02 AM | Permalink

      The other way about – version 2 had the cooling in the mid 40s – 50s. The two versions are pretty similar after about 1975, because it is assumed that methodological changes are finished then.

  3. Posted Jul 13, 2011 at 12:29 AM | Permalink

    Looking at HADSST3, in the words of Uncle Floyd, “Oooh! Isn’t that scary boys and girls!”

  4. Don Keiller
    Posted Jul 13, 2011 at 4:05 AM | Permalink

The comparison between CRUTEM and HadSST 2 & 3 looks like very good evidence for the contamination of the land-based record by UHI.

    • Jit
      Posted Jul 13, 2011 at 6:58 AM | Permalink

Not sure about that. But what seems to be overlooked in the discussion of UHI is that the “rural” sites are themselves not brilliantly sited. Landscape-level changes in ground cover mean that rural stations have higher max temps and probably lower minimums than in the “natural” situation.

      The impacts of these changes are probably not significant in the industrialised world now, but are ongoing elsewhere (deforestation, for example).

      Steve: this is a diversion within the present topic.

  5. Gavin
    Posted Jul 13, 2011 at 5:04 AM | Permalink

    More misrepresentation. Both your and Pielke’s projected changes were for HadCRUT3, not HadSST2/3. Rewriting history to make you look better in retrospect when the statements are available at the click of a mouse is childish.

    • TheChuckr
      Posted Jul 13, 2011 at 9:13 AM | Permalink

      No Gavin, snip

      Steve: please do not engage with other readers like this.

    • Steve McIntyre
      Posted Jul 13, 2011 at 9:58 AM | Permalink

      Re: Gavin (Jul 13 05:04),

      Gavin, you allege that the post contains “misrepresentations”, a serious allegation. Let me ask you a question: in preparing your post on this topic, did you calculate the difference between HadSST2 and HadSST3 trends and not report it? Or did you neglect to do this calculation? Simple question.

      While it’s gratifying that you look to old CA blog posts and comment threads for guidance on scientific practice, the difference in trend between HadSST2 and HadSST3 seems to me like a fundamental calculation in appraising the impact of the unwinding the Folland error on SST indices, regardless of whether I had turned my mind to the question in a quick calculation several years ago.

      As to your allegation of “misrepresentation”, I re-read the post and do not see any misrepresentations. Perhaps you can identify the sentence or sentences that you believe to contain a misrepresentation.

      • Posted Jul 13, 2011 at 1:32 PM | Permalink

        Steve,
        For us tyros, could you please answer Gavin in simpler terms. Most importantly, which dataset were you originally discussing? I do not see how your reply relates to his statement (not understanding the links relating them).

        It is rare to see a real climate advocate here, and that makes your exchange of great interest.

      • Nicolas Nierenberg
        Posted Jul 14, 2011 at 12:07 AM | Permalink

I don’t understand, Steve. Gavin is referring to HADCrut vs. HadSST. He is saying that your analysis had to do with global temperature, not sea surface temperature. Your response had to do with two different series of sea surface temperatures. Why did you ask him to compare two different sea surface trends in response? That’s just confusing.

        As to the big picture, I agree. This is a big change in temperature trends which is interesting.

        • Posted Jul 14, 2011 at 12:54 AM | Permalink

          Let me try. Steve wrote:

          While it’s gratifying that you look to old CA blog posts and comment threads for guidance on scientific practice, the difference in trend between HadSST2 and HadSST3 seems to me like a fundamental calculation in appraising the impact of the unwinding the Folland error on SST indices, regardless of whether I had turned my mind to the question in a quick calculation several years ago.

          Steve long ago questioned Folland’s assumption of an abrupt change after Pearl Harbor and it turns out HadSST3 largely agrees with him on that. He’s indicating the kind of questions that are really worth asking as a result. The fact that instead Schmidt wants to score points about an irrelevance indicates that the obsession with McIntyre and Pielke has become extremely unhealthy.

        • oneuniverse
          Posted Jul 14, 2011 at 5:39 AM | Permalink

          As far as I can tell, Gavin’s claim that “Steve McIntyre predicted that the 1950 to 2000 global temperature trends would be reduced by half” seems to be based on a comment Steve made in reply to a comment by Pete Webster. The relevant portion of their exchange seems to be concerned only with SST’s.

          Pete’s comment (in full, my emphasis):

          Hi Steve,

          Re 90: You quote Phil Jones as stating that he would not release his data so that you could destroy it..25 years investment and etc. But when I went back to the source of the quote I found it to be one of your list of 15 reasons not to release data. So is there a real attribution to Jones? Can you point to a source of this quote?

          The reason I find this interesting is that a group of us developed in the 1990′s the “TOGA data disclosure policy”. Simply, those who make measurements, compile data sets and etc. have 2 years of “ownership” before the data and its sources must be made public. Data remained under PI wraps for years before this and there was never availability for research or subsequent scrutiny. I thought we had made a difference! DOE was party to this agreement I seem to remember. So I am interested when/where Jones made this statement (who I presume is funded by DOE).. If he did he is in violation of the TOGA protocol.

          BTW, if the blip in the SST record were due to the bucket/intake transition, would one expect all of the blips in each ocean basin to be in phase?

          Regards

          Peter W

          Steve’s comment (in full, my emphasis):

          Posted May 29, 2008 at 5:59 PM | Permalink | Reply | Paste Link

          Hi, Peter, thanks for saying hello.

The statement was made to Warwick Hughes who forwarded me a copy of his email. Von Storch disbelieved that any scientist could say such a thing and contacted Phil Jones directly for confirmation. He reported that Phil Jones confirmed the statement in his presentation to the NAS panel reported on CA here, with link to PPT.

          The TOGA protocol is a reasonable one. There are some high-level policies that I’ve reviewed previously (See Archiving Category old posts), that require this sort of policy. NSF – Paleoclimate doesn’t require it, although some other NSF departments do. The DOE agency funding Jones doesn’t require it either. We contacted them a couple of years ago and they said that Jones’ contract with them did not entitle them to make any requirements on him for the data. This was a couple of years ago. If I’d been in their shoes and funding someone who was making me look bad, I’d have told them – maybe you beat me on the last contract, but either you archive your data or that’s the last dime you ever get from us. But DOE were just like bumps on logs.

          The only lever on Jones has been UK FOI. Willis Eschenbach has been working on this for years. Jones wouldn’t even disclose station names. 6 or 7 different attempts have been made finally resulting in a partial list. After repeated FOI actions, we got a list of the stations in Jones et al 1990, a UHI study relied on by IPCC to this day. As noted in CA posts and by Doug Keenan, Jones et al 1990 made claims to have examined station metadata that have been shown to be impossible. In any other discipline, such misrepresentations would have the authors under the spotlight.

          My understanding is that the adjustment was applied across the board to all oceans, so, yes, it would apply to all oceans. To be sure, you’d have to inspect their code, which is, of course, secret.

Also there’s more to this than a WW2 blip. As I observed in a companion post, Thompson et al are simply lost at sea in assessing the post-1960s effect. The 0.3 adjustment previously implemented in 1941 is going to have a very different impact than Thompson et al have represented.

          If the benchmark is an all-engine inlet measurement system, as seems most reasonable, then the WW2 blip is the one part of the record that, oddly enough will remain unchanged, as it is the one part of the historical record that is predominantly consistent with post-2000s methods. Taking the adjustments at face value – all temperatures from 1946 to 1970 will have to be increased by 0.25-0.3 deg C since there is convincing evidence that 1970 temperatures were bucket (Kent et al 2007), and that this will be phased out from 1970 to present according to the proportions in the Kent et al 2007 diagram.

          Recent trends will be cut in half or more, with a much bigger proportion of the temperature increase occurring prior to 1950 than presently thought.

          The problem IMO is with Steve’s use of a HadCRU3 land+sea plot (with & without adjustments), which I find confusing. The graphic first appeared in Mar 2007, in the post titled “The Team and Pearl Harbor”, in the comments of which Steve notes:

          As a very quick first pass at ballparking what the effect was, I used the above implementation on the HadCRUT3 global average (I realize that there’s land data in this, but I was using this series in connection with testing a point made by UC and it was handy for me – if I do more work on this, I’ll tidy this up.)

          Unfortunately, Steve neglects to tidy up when the graphic reappears in the May 2008 CA post “Nature “Discovers” Another Climate Audit Finding”, which is the article criticised by RealClimate. In the article, there’s an updated version of the graphic, still using HadCRUT3.

          I think a fair criticism can be made that Steve plotted HadCRUT3 and adjustments to it, while actually discussing SST’s. Although he acknowledged that he used HadCRUT3 out of convenience (it was at hand) to get a ballpark figure, the use of different datasets does lend itself to confusion (and wilful misinterpretation), and he did plot it again later instead of switching to the appropriate SST’s.

Gavin’s claim that “Steve McIntyre predicted that the 1950 to 2000 global temperature trends would be reduced by half”, however, appears to be based on a clumsy conflation of Steve’s comment on SST’s to Pete Webster, and the HadCRUT3 graphic.

        • oneuniverse
          Posted Jul 14, 2011 at 5:45 AM | Permalink

          (Formatting correction : In my quotation of Steve’s comment to PW, the two paragraphs between the bold-type paragraphs should also be in bold.)

        • Steve McIntyre
          Posted Jul 14, 2011 at 6:29 AM | Permalink

          You say:

          The problem IMO is with Steve’s use of a HadCRU3 land+sea plot (with & without adjustments), which I find confusing. The graphic first appeared in Mar 2007, in the post titled “The Team and Pearl Harbor”, in the comments of which Steve notes:

          As a very quick first pass at ballparking what the effect was, I used the above implementation on the HadCRUT3 global average (I realize that there’s land data in this, but I was using this series in connection with testing a point made by UC and it was handy for me – if I do more work on this, I’ll tidy this up.)

          Unfortunately, Steve neglects to tidy up when the graphic reappears in the May 2008 CA post “Nature “Discovers” Another Climate Audit Finding”, which is the article criticised by RealClimate. In the article, there’s an updated version of the graphic, still using HadCRUT3.

          I think a fair criticism can be made that Steve plotted HadCRUT3 and adjustments to it, while actually discussing SST’s. Although he acknowledged that he used HadCRUT3 out of convenience (it was at hand) to get a ballpark figure, the use of different datasets does lend itself to confusion (and wilful misinterpretation), and he did plot it again later instead of switching to the appropriate SST’s.

          I entirely agree with your comments, both as to the primary interest in the SST record and the form of calculation.

          While Gavin now says that people were only interested in the impact on the global land-and-sea record – and this may have been Gavin’s primary interest from the point of view of IPCC public relations – other people, including Climate Audit readers were first interested in the impact on SSTs. Thompson et al 2008 stated:

          The results shown here are based on the current version of the UK Met Office Hadley Centre SST data set (HadSST2; ref. 9).

          Consider for example an immediately subsequent thread SSTs and Hurricanes – which talked about the potential impact of the changes on the then active issue of SSTs and hurricanes. Judy Curry wrote in:

          we are obviously interested in the implications of this SST issue for hurricanes.

          Roger Pielke wrote in:

          Obviously, the first thing that needs to be done is develop the new time series of SSTs. Doing so will likely make the arguments over historical storm counts look simple and tame.

          Kenneth Fritsch said:

          I am interested in what the “adjusted” SSTs would look like.

          In my quick pro forma calculation, I had adopted the idea of a simple linear removal of the 0.3 deg C adjustment as opposed to Folland’s 1941 one-step change. I first used a scenario derived from a suggestion from reader Carl Smith based on the Kent diagram showing predominant bucket usage in 1970.

          In response to Thompson et al 2008, the importance of adjusting for the implementation of insulated buckets was pointed out by a reader and the importance of this in phasing the changeover was immediately highlighted in CA posts. My above reply to Peter Webster was written before the insulated bucket issue was raised. (I immediately posted updates on all then current threads alerting readers to this issue.)

          Reader David Smith proposed the following revised linear adjustment:

          I adjusted Kaplan SST for the MDR. The adjustment started at +0.3C and linearly declined to zero at 1980. This is sort of a pseudo – Carl Smith adjustment.

          He refers to a couple of images which haven’t survived the move to WordPress and which I’ll try to retrieve from him.

          The following day, I reviewed the bidding adopting a line echoing David Smith’s saying:

          I’m headed away for the weekend, but I’ll redo my rough guess based on these variations in a day or two.

          However, I didn’t return to the issue though any Climate Audit “prediction” would have been along the lines of this post.

          Schmidt says that I’ve had “plenty of time” to do the calculation. Indeed, I have. Had someone asked me to do so, I probably would have. The calculation only takes a couple of minutes. The eventual correction turns out to be fairly close to the David Smith scenario – a little quicker, but a linear estimate along the lines of the Climate Audit pro forma style was an entirely reasonable approach and hardly one that “wiser heads” should have turned their noses up at.

          As oneuniverse observes, it is too bad that I didn’t observe the original caveat of the earlier post:

          As a very quick first pass at ballparking what the effect was, I used the above implementation on the HadCRUT3 global average (I realize that there’s land data in this, but I was using this series in connection with testing a point made by UC and it was handy for me – if I do more work on this, I’ll tidy this up.)

          I re-examined the code and it clearly shows that I was illustrating the sort of effect that a 0.3 deg C adjustment would have using a related series rather than estimating the impact on global land-and-ocean. I’ve done lots of calculations in which things are distributed 70% ocean and 30% land and if I were trying to calculate the effect on land-and-ocean, I would have done a calculation that way.

          If I’d done a calculation on May 31 reflecting the discussion at CA, it would have been a calculation on HadSST2 using a scenario along the lines of David Smith’s. This would have resulted in a decrease in 1950-2006 HadSST trend of about 35%. (The corresponding calculation for the Carl Smith-type scenario in the first diagram was actually more like 60%).

          If I’d done a calculation on May 31 on HadCRU using a scenario along the lines of David Smith’s, this would have resulted in a decrease in 1950-2006 HadCRU trend of 20%. (The corresponding calculation for the Carl Smith-type scenario in the first diagram was about 36%.).

          There was nothing wrong with the idea of making a pro forma estimate of the impact through a linear adjustment. “Wiser heads” were clearly incorrect about this. To make an accurate estimate of the impact, it was necessary to include the effect of the new insulated bucket adjustment – a point discussed at Climate Audit and which I would have included in any Climate Audit estimate.

          Re-examining the code for my calculation confirms the point.

        • Nicolas Nierenberg
          Posted Jul 14, 2011 at 11:35 AM | Permalink

          Thanks Steve, I think that is a great discussion of the history. But the history of a couple of blog posts is, I agree, not the most interesting thing about this.

So up until now what we have heard is how the “untuned” climate models have done such a great job of modeling the temperature record. I predict that a continued lack of tuning with a series of “improvements” will now conform much more closely to the now revised temperature record. This will not be a deliberate effort on anyone’s part, but is just human nature. We are already seeing movement towards reassessing the role of aerosols even though there is limited data.

        • Steve McIntyre
          Posted Jul 14, 2011 at 12:51 PM | Permalink

Peter Huybers wrote an interesting article last year observing that there was substantial covariance in the choice of parameterizations for climate models, with the net result that the results were more similar than they would have been from an independent draw from the options.

        • oneuniverse
          Posted Jul 15, 2011 at 5:28 AM | Permalink

          Thanks Steve.

          “Compensation between Model Feedbacks and Curtailment of Climate Sensitivity” (Huybers 2010)
          DOI: 10.1175/2010JCLI3380.1

          “Observations used to condition the models ought to be explicitly stated, or there is the risk of doubly calling on data for purposes of both calibration and evaluation.”

    • Roger Pielke, Jr.
      Posted Jul 13, 2011 at 10:11 AM | Permalink

      Gavin, You are still confused. I see no evidence that Steve confuses ocean and surface temps in this post or elsewhere. That is a new red herring from you after the others failed to stick.

      Even after having it explained to you, you apparently do not understand the difference between a sensitivity analysis – a “what if?” exercise – and a prediction. I provided 3 such exercises at the time and even solicited your views on any other value that might be considered (one was for a 15% change, you don’t see me crowing about being “right” based on that, do you?) You declined the offer.

More generally, why is the misplaced effort from you to score blogwar points on obscure posts from 3 years ago worth sullying your reputation on? Why not focus on the science of the new data set rather than on me and Steve?

    • Anonymouser
      Posted Jul 13, 2011 at 3:02 PM | Permalink

      Gavin,
      When you write at Real Climate:
      “While wiser heads counselled patience, Steve McIntyre predicted that the 1950 to 2000 global temperature trends would be reduced by half”
are you aware how much like BS that sounds to anyone who reads the article and understands that Steve was pointing out the difference that a PHASED IN bucket adjustment made, rather than the ALL AT ONCE adjustment that was made in the 1940s? It was not a PREDICTION; it was a perfectly valid, serious, and relevant COMMENT illustrating the effect of the difference. Given the new study has done exactly the same thing – used a phased-in approach, albeit with an earlier termination date – Steve’s comment was completely legitimate.

      To reasonably scientifically literate people this kind of attitude at Real Climate is a tremendous turn off, and greatly damages the legitimacy of “climate scientists”, if you people really care about that any more. It makes it difficult to accept Real Climate is really objective about anything (and yes I tell my less scientifically literate family and friends about this behaviour of climate scientists if AGW comes up in conversation – they are not impressed).

      • glacierman
        Posted Jul 13, 2011 at 3:18 PM | Permalink

        What does the team and their apologists always use as a defense to having their failed predictions pointed out? “they weren’t predictions…..they were scenarios based on various assumptions…..”

        This makes Gavin’s obsession with trying to slam a “prediction” by Steve so darned ironic.

    • pax
      Posted Jul 13, 2011 at 3:27 PM | Permalink

      Going by the links provided by Gavin it does seem like the original posts talked about the possible implication on the global trend. So when Steve writes:

      “Given the closeness of Pielke’s 30% to the actual decrease in 1950-2006 trend from HadSST2 to HadSST3, it is remarkable that Schmidt singled this estimate out for particular contumely.”

      That would be a misinterpretation if Pielke at the time talked about the implications on the global trend, which he seemed to do:

      “A 30% reduction in the IPCC’s estimate in temperature trends since 1950 would be just as important as a 50% reduction, and questions of its significance would seem appropriate to ask.”

  6. Posted Jul 13, 2011 at 5:49 AM | Permalink

    The Folland anomaly wasn’t only problematical for modelling, it was problematical for simple curve fitting. See http://wattsupwiththat.com/2011/06/06/earth-fire-air-and-water/ … which also hits on the land-water difference.

  7. Hu McCulloch
    Posted Jul 13, 2011 at 6:00 AM | Permalink

    Congratulations, once again, Steve! The CA index is overdue for a category entitled “Imitation is the sincerest form of flattery.”

  8. pat
    Posted Jul 13, 2011 at 6:42 AM | Permalink

    snip – sorry, policy

  9. Posted Jul 13, 2011 at 7:04 AM | Permalink

    Nice post. While it is nice to see further evidence impeaching Gavin’s bizarre post, I think you have hit upon the more important issue here: given the importance of this issue for the science, why does Real Climate think the most important question is “what does this mean for how people should think about McIntyre and Pielke?” rather than the science itself? Weird and obsessive, it seems.

    • Steve McIntyre
      Posted Jul 13, 2011 at 12:19 PM | Permalink

      I’ve submitted a comment at Real Climate asking Gavin what difference in trend he got between HadSST2 and HadSST3? It’s in moderation.

      • PJB
        Posted Jul 13, 2011 at 4:24 PM | Permalink

        Excuse the intrusion, but from Gavin’s response to your “odd” question (the # of the post, perhaps? ;-) ):

        the 1950-2006 trend went from 0.097 deg C/dec to 0.068 deg C/dec (mean of all realisations) a 31% drop

        since the “real” value is .068, wouldn’t that mean that the actual change was .029/.068, or a 42.6% drop?

        sorry, couldn’t resist.
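        PJB’s quip is the usual choice-of-base point: the same 0.029 deg C/dec reduction reads very differently depending on which trend you divide by. A throwaway sketch in Python, using only the numbers quoted from Gavin’s reply (purely illustrative):

```python
# Trends quoted in Gavin's reply (deg C/decade, 1950-2006).
old_trend = 0.097   # HadSST2
new_trend = 0.068   # HadSST3 (mean of all realisations)
drop = old_trend - new_trend            # 0.029 deg C/decade

drop_vs_old = drop / old_trend          # drop measured against HadSST2
drop_vs_new = drop / new_trend          # the same drop measured against HadSST3
print(round(100 * drop_vs_old, 1))      # ~29.9
print(round(100 * drop_vs_new, 1))      # ~42.6
```

        (Gavin’s “31%” presumably comes from the unrounded trends; the rounded values give just under 30%.)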

      • theduke
        Posted Jul 13, 2011 at 5:52 PM | Permalink

        Steve’s question and response from Gavin at RC:

        Gavin, what did you get as the difference in 1950-2006 trend between HadSST2 and HadSST3, the series illustrated in the second figure of your post?

        [Response: Odd question. The data are available and anyone can calculate the different trends, I don’t think I have any special method or anything, but for completeness the 1950-2006 trend went from 0.097 deg C/dec to 0.068 deg C/dec (mean of all realisations) a 31% drop (uncertainties on OLS trends +/-0.017 deg C/dec; for 100 different realisations of HadSST3 the range of trends is [0.0458,0.0928] deg C/dec). Your post insinuates that I deliberately did not give these numbers – that is false. I calculated the changes in trends for the metrics that people had talked about previously, and especially the ones they had graphed (and you will note that your graph is of the change to HadCRUT, not HadSST; as was RP’s). You are of course aware that oceans are 70% of the world, not 100%. – gavin]

        • HaroldW
          Posted Jul 15, 2011 at 6:27 AM | Permalink

          From Gavin’s response at RC: “…for 100 different realisations of HadSST3 the range of trends is [0.0458,0.0928] deg C/dec[ade, for 1950-2006]…”

          Is it correct to read this as: given the HadSST3 values and error bars, our estimate of the temperature trend, even with 50 years’ data, has a 3-sigma width of around a factor of 2?
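          HaroldW’s factor-of-two reading can at least be checked against the quoted range itself (a quick Python check; whether the 100-realisation spread should be read as “3-sigma” is his interpretation, not something stated in Gavin’s reply):

```python
# Spread of OLS trends across the 100 HadSST3 realisations quoted by Gavin
# (deg C/decade, 1950-2006).
lo, hi = 0.0458, 0.0928
ratio = hi / lo
print(round(ratio, 2))  # ~2.03: the highest trend is about twice the lowest
```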

      • Posted Jul 14, 2011 at 8:04 AM | Permalink

        I have done a calculation here which I think shows the effects on trend for both the SST and the land/sea averages. At the end of the post are two graphs which show trend to 2006 as a function of starting year.

  10. Fischbeck
    Posted Jul 13, 2011 at 7:46 AM | Permalink

    Concur with Pielke. The lack of curiosity by RealClimate aficionados as to the implications of the broad assumptions used to smooth the data highlights their true interests. Claiming that 30% of the personnel on ships around the world were confused when recording the source of their SST records is one heck of an assumption, and by the way, the “error” only goes in one direction (“bucket” is really “inlet,” never the other way around). No first year graduate student of mine would be permitted to assert such a claim without exploring how it impacts the confidence of their conclusions.

  11. Posted Jul 13, 2011 at 8:59 AM | Permalink

    The Pearl Harbor assumption reminds me of a modeling class way back in the last century. In constructing a model for some system, like a grazing land, we had some relations from the literature to use, but to make it complete we often had to make up something–“let’s assume X”. Ok for a class exercise (though the prof could have pointed out how to do a sensitivity analysis of our assumption…) BUT not in the real world.

    • JEM
      Posted Jul 13, 2011 at 4:33 PM | Permalink

      I’m inclined to think that much of the background work that’s gone into the field of climate science was first-pass, “well, for now we’ll assume…” supposition that subsequently ended up etched in stone as grant money and careers became dependent on blessing it as accurate to multiple decimal places.

  12. John
    Posted Jul 13, 2011 at 9:03 AM | Permalink

    Gavin isn’t =- snip

    Steve – please do not engage in this way with other readers.

  13. Steve McIntyre
    Posted Jul 13, 2011 at 9:43 AM | Permalink

    ##FUNCTIONS
    # download a Hadley/CRU annual diagnostic series and return it as a ts object
    gethad=function(url="http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual") {
    D=read.table(url) #dim(D) #158 12 #start 1850
    names(D)=c("year","anom","u_sample","l_sample","u_coverage","l_coverage","u_bias","l_bias","u_sample_cover","l_sample_cover",
    "u_total","l_total")
    temp=D[nrow(D),]==0 #flag placeholder zeros in the final (incomplete) year
    D[nrow(D),][temp]=NA
    x=ts(D[,2],start=D[1,1])
    return(x)
    }

    # OLS trend in deg C per decade
    trend=function(x) lm(x~I( (1:length(x))/10) )$coef[2]

    ##DATA

    hadcru3=gethad(url="http://hadobs.metoffice.com/hadcrut3/diagnostics/global/nh+sh/annual")
    crutem=gethad("http://hadobs.metoffice.com/crutem3/diagnostics/global/nh+sh/annual")
    sst2=gethad("http://hadleyserver.metoffice.com/hadsst2/diagnostics/global/nh+sh/annual")
    sst3=read.table("http://www.climateaudit.info/data/gridcell/hadsst3/ihadsst3_-1_mean_0-360E_-90-90N_n_su.dat")
    #downloaded from KNMI
    sst3=sst3[sst3[,1]>=1880,] #missing values before 1880 in KNMI
    sst3=ts(sst3[,2],start=sst3[1,1])

    #png("d:/climate/images/2011/sst/before_and_after2.png",width=600,height=400)
    start0=1940
    par(mar=c(3,4,3,1))
    ts.plot(window(sst2,start0,2006),ylab="deg C",xlab="",ylim=c(-.4,.8),lwd=2)
    title("HadSST: Before and After")
    lines(window(crutem,start0,2006),lty=1,col=3,lwd=2)
    lines(sst3,lty=1,col=2,lwd=2)
    legend("topleft",col=c(1,2,3),lwd=c(2,2,2),legend=c("HadSST2","HadSST3","CRUTEM"))
    #dev.off()

    ##TRENDS 1950-2006
    trend(window(sst2,1950,2006) )
    # 0.09894413
    trend(window(sst3,1950,2006) )
    # 0.0697371
    1- trend(window(sst3,1950,2006) )/trend(window(sst2,1950,2006) )
    # 0.2951871
    trend(window(crutem,1950,2006) )
    # 0.1513508
    trend(window(hadcru3,1950,2006) )
    # 0.1171312

    .3*trend(window(crutem,1950,2006) )+.7* trend(window(sst3,1950,2006) )
    # 0.0942212
    1 – ( .3*trend(window(crutem,1950,2006) )+.7* trend(window(sst3,1950,2006) ) )/ trend(window(hadcru3,1950,2006) )
    # 0.1955925
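    For readers without R handy, the blended-trend arithmetic at the end of the script can be cross-checked in a few lines of Python, using the trend values printed above (the crude 30% land / 70% ocean weighting is the same one the script uses):

```python
# Trend values printed by the R script above (deg C/decade, 1950-2006).
sst2    = 0.09894413   # HadSST2
sst3    = 0.0697371    # HadSST3
crutem  = 0.1513508    # CRUTEM3 (land)
hadcru3 = 0.1171312    # HadCRUT3 (combined)

sst_drop      = 1 - sst3 / sst2            # ~0.2952: the ~30% SST trend reduction
blend         = 0.3 * crutem + 0.7 * sst3  # crude land/ocean recombination
combined_drop = 1 - blend / hadcru3        # ~0.1956: implied combined reduction
print(round(sst_drop, 4), round(blend, 4), round(combined_drop, 4))
```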

    • Posted Jul 14, 2011 at 2:41 AM | Permalink

      There he goes again! Posting his code! Clearly he is NOT a real Climate Scientist!

      /SARC

  14. BRIAN M FLYNN
    Posted Jul 13, 2011 at 9:53 AM | Permalink

    “Which is the “right” way of presenting the calculation: SST or combined land-and-ocean?”

    Combined land-and-ocean data would appear to further mask a cooler ocean, and its continuing capacity to absorb atmospheric CO2. The “right” way? – show both.

  15. Doug Proctor
    Posted Jul 13, 2011 at 10:18 AM | Permalink

    Steve, you say: “Which is the “right” way of presenting the calculation: SST or combined land-and-ocean?”

    With due respect, I consider that it is completely and absolutely inappropriate to present any temperature statistic that combines land and SST data. The trends and magnitude of changes, positive and negative, are different. (It is actually inappropriate to combine all contiguous US temperature data to represent what has gone on within the contiguous mainland US: the variation across the continental contiguous US is so different that what an average change/data point is has questionable interpretable value.)

    The global warming phenomenon is not global in its effects. Land temperatures have risen much more than those of the seas, the northern hemisphere more than the southern, and the Arctic more than the northern hemisphere as a whole. This would not be a big issue if temperatures had risen everywhere – globally – just more in some areas than in others, but that is not the case. Many, many regions haven’t changed at all, and in some places, like South America, recent changes have been negative, not positive.

    snip: over-editorialing

  16. Oxbridge Prat
    Posted Jul 13, 2011 at 10:45 AM | Permalink

    An old friend used to work on sea surface temperatures for the Met Office; I guess it was on HADSST but at the time I didn’t know about such things. His off-the-cuff view was that the Russians recorded temperature data carefully and reported it accurately, but for security reasons usually lied about where they were at the time of the recording. By contrast the American ships were more than happy to tell you where they were, but their sailors probably just made up the data.

  17. timetochooseagain
    Posted Jul 13, 2011 at 11:05 AM | Permalink

    correcting Folland’s error (through a more gradual and later changeover to engine inlets than the worldwide overnight change that Folland had postulated after Pearl Harbour) would have the effect of increasing SST in the 1950s, in turn, potentially eliminating or substantially mitigating the downturn in the 1950s that was problematic for modelers.

    I believe it is worth examining whether this correction has brought the data into better agreement with models. It would seem not:

    BTW, I see a potentially interesting feature of the HADSST3 curve in the plot above: the dip in temperatures following Pinatubo seems to be significantly reduced. Not sure why, but it might matter to analyses of the response to that eruption compared with models.

    While Gavin is here, I would like to ask (quite politely!) whether the folks at GISS are planning on implementing an adjusted set of SST data in their products. My impression is that so far only the Hadley guys are out there trying to do this.

    • timetochooseagain
      Posted Jul 13, 2011 at 11:06 AM | Permalink

      Oh, the image is courtesy of Bob Tisdale.

  18. JohnH
    Posted Jul 13, 2011 at 11:35 AM | Permalink

    This change in the temp trend will cause GCMs that “fit” before to be “wrong” now. The models will have to be re-estimated. In the past, critics of the GCMs have charged that the models were “tuned” to fit the past temp data by adjusting the effect of aerosols. If the models are now re-estimated, what will it mean if the new model estimates are said to “fit” well, and it is seen that the coefficient for aerosols (or other means of fitting the aerosol data) has changed? Does that mean that the models got aerosols wrong before, but now (anticipating the new model estimates) have got aerosols “right”?

    I also understand from past criticism that the coefficient for aerosols is not estimated by normal statistical techniques such as regression analysis, or by imposing on the GCM the known physical effect of aerosols on temperature; rather, the coefficient is determined as a residual effect that makes the overall model work. If the new model estimates are said to “fit” the new temp data, and the aerosol effect is determined as a residual and not based on the quantifiable scientific effect of aerosols on temperature, then what does that tell us about the GCMs? It seems to me this will be a good test of the importance of “fudge factors.”

  19. Posted Jul 13, 2011 at 1:00 PM | Permalink

    Fascinating post. Thank you Steve. I am looking forward to the fact based discussion here (or at realclimate) with Gavin.

  20. Kenneth Fritsch
    Posted Jul 13, 2011 at 1:05 PM | Permalink

    I appreciate that SteveM has taken the discussion of the Kennedy paper a step or two further in noting the specific assumptions that are required for SST adjustments (homogenizing) and the lack of evidence and documentation for those assumptions. These discussions of uncertainties (which should be the major science concern) can too quickly get lost in the sidebars.

  21. AJ
    Posted Jul 13, 2011 at 2:02 PM | Permalink

    Here’s a bit of analysis that probably isn’t worth a pinch, but I thought I’d share anyway :)

    A few weeks ago over at The AirVent, there was a post with a periodogram of HadCruV3 showing a spike with the period of about 65 years:

    http://noconsensus.wordpress.com/2011/06/20/amending-the-past/

    Since this period corresponded to half of the 130 year HadCruV3 record, I suspected that it pointed to an anomaly around 1945 which could be related to the bucket adjustment issue.

    When I saw Steve’s code posted here, I decided to produce similar periodograms by pasting on some code. The results indicate that the half-record period spike is still evident. In relation to the full-record period, the half-record period signal becomes stronger in HadSST3 vs. HadSST2. This same half-record period spike is also present in the CRU land record.

    Here’s the code that I pasted after Steve’s code:

    end0=2006;

    # Plot periodograms of sst2, sst3, and crutem

    windows()
    spsst2<-spectrum(window(sst2,start0,end0), spans=c(3,2), main="sst2", log='no')

    windows()
    spsst3<-spectrum(window(sst3,start0,end0), spans=c(3,2), main="sst3", log='no')

    windows()
    spcrut<-spectrum(window(crutem,start0,end0), spans=c(3,2), main="crutem", log='no')

    # Save the spectral densities and name them according to period length (years instead of frequency)
    specsst2<-spsst2$spec
    names(specsst2)<-format(1/spsst2$freq,digits=3)

    specsst3<-spsst3$spec
    names(specsst3)<-format(1/spsst3$freq,digits=3)

    speccrut<-spcrut$spec
    names(speccrut)<-format(1/spcrut$freq,digits=3)

    # Do a quadratic fit on crutem and plot periodogram
    y<-window(crutem,start0,end0)
    x1<-start0:end0
    x2<-x1^2

    quadmod<-lm(y~x2+x1)
    p<-ts(predict(quadmod),start=start0)
    windows()
    plot(y,main="crutem",ylab="deg C")
    lines(p)

    windows()
    spcrutfit<-spectrum(window(p,start0,end0), spans=c(3,2), main="crutem quadfit", log='no')

    speccrutfit<-spcrutfit$spec
    names(speccrutfit)<-format(1/spcrutfit$freq,digits=3)

    # Display spectral densities of first five cyclic periods of each time series

    specsst2[1:5]
    specsst3[1:5]
    speccrut[1:5]
    speccrutfit[1:5]

    • David L. Hagen
      Posted Jul 13, 2011 at 7:22 PM | Permalink

      snip – OT

  22. Posted Jul 13, 2011 at 8:27 PM | Permalink

    Steve McIntyre,
    Very nice post. Two observations:
    1. It appears difficult (impossible?) to imagine that a significant revision in SST history in the 1940-1975 period will not have a substantial impact on the overall surface average; the SST portion ought to represent ~70% of it. The discrepancy is clear in the differences between year-on-year trends for the SST and overall averages in 1943-1965 compared with those post-1970. Another shoe needs to drop.

    2. I suspect that with the revised SST series it will be easier to relate the ocean heat accumulation history to the surface temperature history…. these should be closely related.

  23. Posted Jul 14, 2011 at 11:14 AM | Permalink

    Steve, one point of clarification. Thompson et al. did not really address the 1941 discontinuity induced by Folland. They were concerned with a 1945 discontinuity due to the return of UK ships to the global monitoring fleet sample. UK ships went from 0% to about 50% of the Met Office sample in one year, while US ships went from 80% to about 40%. Thompson et al. noted that the UK ships were using buckets while US ships were mainly using engine intakes. So they noted that buckets were in widespread use even after 1941, but didn’t exactly enlarge upon the implications of this. They said:

    From the country-of-origin information in Fig. 4, it is clear that the SST archive—and hence the mix of measurement methods— continued to evolve considerably during the decades following 1941.

    That’s about as far as they go. I guess that is what you mean by saying the problem was “clearly acknowledged” by the community in Thompson et al. But I think the acknowledgment should really be credited to the work of Kent et al.–I don’t really think a typical reader of Thompson et al. would have picked up on the implications. If I knew how to embed graphs in a comment I’d juxtapose Folland and Parker (1995) Fig 19 with Kent et al 2007 Fig 2f so people could see just how far off the bucket adjustment assumption was. You might even have done this already in an earlier post.

    • timetochooseagain
      Posted Jul 14, 2011 at 11:29 AM | Permalink

      Ross, embedding images should be a simple matter of HTML: an img tag of the form <img src="url">.

      I hope that helps, and I am fairly certain it should work.

  24. Steven Mosher
    Posted Jul 14, 2011 at 3:39 PM | Permalink

    Faced with a correction to SST, you have two choices:

    1. Look for a blog post to attack;
    2. Examine its implications for your work.

    You have to watch this whole presentation (Jan 2011), where at the end a questioner asks the presenter if they have considered the forthcoming correction to SST.

    http://ams.confex.com/ams/91Annual/flvgateway.cgi/id/17273?recordingid=17273

    Back in this time period the new SST estimate was making the rounds (I heard about the dip removal via the grapevine).

    Also a great presentation for natural variability fans

    • timetochooseagain
      Posted Jul 14, 2011 at 4:03 PM | Permalink

      Indeed it is an interesting presentation, but I suspect it is mostly a case of statistical over-analysis to tease signals out of the data – basically it’s trying too hard to separate out features of the data. He does make some interesting (and I think quite good) points about the models’ “fit” to the data, however.

      Someone should see what effect these changes to the sea surface temperature data have on the calculations of various so called “indices” of “variability”. There could be quite a lot of mis-interpretation of “PDO” and “AMO” etc “signatures” or “patterns” in the data if these things are taken into account. Seeing if it changes the “AMO” will be relatively easy, just using the data available on climate explorer. I’ll see what happens, and perhaps get back to everyone. “PDO” will require some more difficult calculations since it is not calculated in a simple way (although it will definitely change them, because the global anomalies are explicitly removed from the data when deriving the “PDO” and we know the global anomalies are changed).

      • timetochooseagain
        Posted Jul 14, 2011 at 5:25 PM | Permalink

        This is what the corrections look like they do to the “AMO”:

        I describe what I did here:

        http://devoidofnulls.wordpress.com/2011/07/14/new-sst-record-effects-on-amo/

      • Posted Jul 15, 2011 at 5:55 AM | Permalink

        Timetochooseagain: The EOF/PC analysis at the KNMI Climate Explorer cannot be used to determine the effect on the PDO of the changes from HADSST2 to HADSST3, at least not back to 1900. The EOF/PC analysis through the Climate Explorer requires at least 75% valid points, and the HADSST3 data for the North Pacific north of 20N do not reach that threshold in the 1940s, so there’s a decade-long gap.

        The PDO is an odd dataset anyway. It was originally calculated (and still is) until 1981 using UKMO SST data, which is obsolete—replaced by HADSST released in 2001, HADSST2 released in 2006, and now HADSST3. From 1982 to 2001, JISAO uses the obsolete Reynolds OI.v1 SST data for the PDO. It’s only since 2002 that the PDO is made up of a current dataset, Reynolds OI.v2.

    • Sean
      Posted Jul 15, 2011 at 1:20 PM | Permalink

      Steve Mosher, any chance you could summarize the question and the answer? I listened to the link you posted but I couldn’t hear the question and so didn’t really understand the answer.

  25. Manfred
    Posted Jul 15, 2011 at 3:25 AM | Permalink

    How do you “correct the uncertainty” by making an unproven assumption ?

    So, if we buy into the story that 30% of measurements labelled buckets were actually done by inlets, the temperature increase over 70 years was approx. 0.3 deg; if we don’t buy into it, the adjustments would have to be about 1/(1-0.3) ≈ 1.43 times as large, i.e. 43% higher, and the temperature increase over 70 years would reduce to approx. 0.2 deg.
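    Manfred’s scaling can be made explicit (a Python sketch of his back-of-envelope logic; the 30% mislabelling figure is the assumption under discussion, and the 0.3/0.2 deg endpoints are his):

```python
# If 30% of observations labelled "bucket" were really engine-inlet readings,
# the bucket correction is effectively applied to only 70% of that population.
mislabelled = 0.30
scale_up = 1 / (1 - mislabelled)     # factor if the 30% assumption is dropped
print(round(100 * (scale_up - 1)))   # ~43: adjustments ~43% larger
```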

  26. harry
    Posted Jul 15, 2011 at 8:30 AM | Permalink

    I have 3 questions to ask in follow-up to this change. It’s nice that Gavin’s here; I’d ask at his website but they seem to always have technical issues and lose my posts.
    1) Is it common in “settled science” for something as fundamental as SST trends over 50 years to be out by 30%? Isn’t that kind of like announcing that you got the speed of light wrong and it is now only 2×10^8 m/s?
    2) Are there many papers in “peer-reviewed journals” that depended on the previous figures for their results and conclusions? Is it common for such papers to be withdrawn? I understand the bar is pretty low in climate science, i.e. you get to use contaminated data upside-down and not a single climate scientist complains, but I’m guessing this is a tad more fundamental.
    3) Is the reduction by 30% going to lead to more of Trenberth’s missing heat or less? You’d think they’d be a little more careful about losing all that heat; next they won’t be able to locate data and confidentiality agreements. [Steve: this last is totally unrelated]

    • harry
      Posted Jul 15, 2011 at 3:01 PM | Permalink

      Why is it unrelated?

      snip – unrelated to this specific issue here and blog editorial policy is to discourage attempts to prove or disprove AGW in 3 sentences.

      • harry
        Posted Jul 15, 2011 at 4:28 PM | Permalink

        I wasn’t trying to prove or disprove anything. It is a genuine question. If the oceans aren’t storing as much energy as expected, then it must mean the CO2 forcing is lower, or the energy is being stored elsewhere. This doesn’t disprove AGW; in fact it presupposes it, since we are talking about a CO2 forcing. It just means that the scale may be lower, which does have consequences. I’d expect one of those consequences is that it provides an avenue for UHI to take a greater role in explaining surface temperatures, accounting for the disparity between the expected energy balance and the measured retained energy (i.e. we got the energy imbalance wrong). This too might lead to an overall reduction in the estimate of CO2 forcing, but that doesn’t disprove AGW, it just makes it a tad less worrying.

        I expect you want to fix up my previous comment. I have it listed as “awaiting moderation” and “snipped”. I don’t think it can be both.

        Steve- nothing wrong with the question. However, I’ve long ago made the editorial decision that if I allow debates about AGW from first principles on every thread, then every thread looks the same. This thread is about specific adjustments – let’s stick to that.

  27. ferd berple
    Posted Jul 15, 2011 at 2:45 PM | Permalink

    If there was an instrumentation problem after 1945, then the climate models using data up to 1945 should have predicted unexpected warming/cooling after 1945 as compared to the instrument records.

    Why wasn’t any unexpected warming/cooling reported by the climate models after 1945? This should have alerted climate scientists to a likely data error.

  28. Posted Jul 21, 2011 at 3:27 PM | Permalink

    Mr. McIntyre,

    There is proof positive that the 1941 ‘adjustment’ is false conjecture. Many should recall that prior to the US being in WWII, we were supporting Britain in the North Sea with massive convoys and pre-positioning forces in response to Japan’s earlier invasions.

    The number of hulls in the oceans did not dramatically jump in 1941; they were growing rapidly throughout the 1930s in response to ever-increasing tension due to the expansion of Nazi Germany and Japan.

    Just more AGW fudge baked into the mix

  29. P. Solar
    Posted Feb 16, 2012 at 3:58 AM | Permalink

    I have just plotted the hadSST3 adjustment against the original ICOADS data.

    By scaling it to 67%, the game becomes clear: they have quite simply engineered this adjustment to remove the early 20th c. rise that is problematic for their hypothesis.

    The late 19th c. fall similarly gets a 50% reduction.

    That is why the arguments seem so contrived.

  30. P. Solar
    Posted Feb 16, 2012 at 4:11 AM | Permalink

    Manfred: How do you “correct the uncertainty” by making an unproven assumption?

    This is a very good point. I just spotted that phrase while rereading the paper. By definition you cannot correct an uncertainty.

    What they are in fact doing is _increasing_ the uncertainty by questioning the reliability of the metadata and suggesting this could require a wholesale change to the record.

2 Trackbacks

  1. [...] there is a new record of sea surface temperatures and a clever fellow on the Climate Audit thread says: Someone should see what effect these changes [...]

  2. […] and HadSST3; […]
