Reconciling Model-Observation Reconciliations

Two very different representations of consistency between models and observations are in popular circulation. On the one hand, John Christy and Roy Spencer have frequently shown a graphic which purports to show a marked discrepancy between models and observations in the tropical mid-troposphere, while, on the other hand, Zeke Hausfather, among others, has shown graphics which purport to show no discrepancy whatever between models and observations. I’ve commented on this topic on a number of occasions over the years, including two posts discussing AR5 graphics (here, here), with an updated comparison in 2016 (here) and in 2017 (tweet).

There are several moving parts in such comparisons: troposphere or surface, tropical or global. The choice of reference period affects the rhetorical impression of time series plots. Boxplot comparisons of trends avoid this problem. I’ve presented such boxplots in the past and have updated them for today’s post.
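
For concreteness, here is a minimal sketch in R of the kind of trend-boxplot calculation involved (synthetic data stands in for the model runs and observations; this is not the script behind the figures):

# Boxplot comparison of 1979-2017 trends (toy data for illustration only)
set.seed(123)
yrs <- 1979:2017
# columns = 102 model runs of annual anomalies, built around a 0.28 deg C/decade trend
runs <- replicate(102, 0.028 * (yrs - 1979) + rnorm(length(yrs), sd = 0.1))
trend_per_decade <- function(x) 10 * coef(lm(x ~ yrs))[2]
model_trends <- apply(runs, 2, trend_per_decade)
obs_trends <- c(UAH = 0.13, RSS3.3 = 0.14, RSS4 = 0.19)  # observed values quoted below
boxplot(model_trends, ylab = "deg C/decade", main = "Model-run trends, 1979-2017")
points(rep(1, 3), obs_trends, pch = 19, col = "red")
text(1.1, obs_trends, names(obs_trends), adj = 0)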

I’ll also comment on another issue. Cowtan and Way argued several years ago that much of the apparent discrepancy in trends at surface arose because the most common temperature series (HadCRUT4, GISS, etc.) spliced air temperature over land with sea surface temperatures. This is only a problem because, within CMIP5 models, trends for air temperature (TAS) over ocean diverge from trends for sea surface temperature (TOS). They proposed that the relevant comparandum for HadCRUT4 ought to be a splice as well: of TOS over ocean areas and TAS over land. When this was done, the discrepancy between HadCRUT4 and CMIP5 models was apparently resolved.

While their comparison was well worth doing, there was an equally logical approach which they either didn’t consider or didn’t report: splicing observations rather than models. There is an independent and long-standing dataset for night marine air temperatures (ICOADS). Combining this data with surface air temperature over land would avoid the problem identified by Cowtan and Way. Further, NMAT data is relied upon to correct/adjust inhomogeneity in SST series arising from changes in observation techniques, e.g. Karl et al 2015:

previous version of ERSST assumed that no ship corrections were necessary after this time, but recently improved metadata (18) reveal that some ships continued to take bucket observations even up to the present day. Therefore, one of the improvements to ERSST version 4 is extending the ship-bias correction to the present, based on information derived from comparisons with night marine air temperatures.

Thus, there seem to be multiple reasons to look just as closely at a comparison resulting from this approach as at one from splicing model data, as proposed by Cowtan and Way. I’ll show the resulting comparisons without prejudging.

Troposphere

Spencer and Christy’s comparisons are for satellite data (lower troposphere). They typically show the tropical troposphere, for which the discrepancy is somewhat larger than for the global (GLB) troposphere shown below. The median value from models is 0.28 deg C/decade, slightly more than double the observed trends in UAH (0.13 deg C/decade) or RSS version 3.3 (0.14 deg C/decade). RSS recently adjusted their methodology, resulting in a 37% increase in trend (now 0.19 deg C/decade). The UAH and RSS3.3 trends are below all but one model-run combination. Even the adjusted RSS4 trend is less than all but two (of 102) model-run combinations.
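
Continuing the sketch above, the run counts come from a simple tally over the per-run trend vector (observed trend values from the text; with the toy data the counts are only illustrative):

# Tally of runs at or below each observed trend (cf. counts quoted above)
sum(model_trends <= 0.13)  # UAH
sum(model_trends <= 0.14)  # RSS v3.3
sum(model_trends <= 0.19)  # adjusted RSS v4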

The obvious visual differences in this diagram illustrate the statistically significant difference between models and observations. Many climate scientists, e.g. Gavin Schmidt, are deniers of mainstream statistics and argue that there is no statistically significant difference between models and observations. (See CA discussion here.)

CMIP5 and HadCRUT4

IPCC AR5 compared CMIP5 projections of air temperature (TAS) to HadCRUT4 and corresponding surface temperature indices (all obtained by weighted average of air temperatures over land and SST over ocean). In this case, the discrepancy is not as marked, but still significant. The median model trend was 0.241 deg C/decade (less than for the troposphere), while the HadCRUT4 trend was 0.181 deg C/decade (Berkeley 0.163). Berkeley was lower than all but six runs, HadCRUT4 lower than all but ten. Both were outside the range of the major models. As noted above, the basis of this comparison was criticized by Cowtan and Way, re-iterated by Hausfather.

Cowtan and Way Variation

As noted above, Cowtan and Way (followed by Hausfather) combined CMIP5 models for TAS over land and TOS over ocean, for their comparison to HadCRUT4 and similar temperature data. This had the effect of lowering the median model trend to 0.189 deg C/decade (from 0.241 deg C/decade), indicating a reconciliation with observations (0.181 deg C/decade for HadCRUT4) for surface temperatures (though not for tropospheric temperatures, which they didn’t discuss).

ICOADS NMAT and “MATCRU”

The ICOADS air temperature series is closely related to SST series. There is certainly no facial discrepancy which disqualifies one versus the other as a valid index. There are major and obvious differences in trends between the ocean series and the land series. The difference is larger than in models, but models do project an increasing difference over the next century.

One wonders why the standard indices (HadCRUT4) combine the unlike series for SST and land air temperature rather than combining two air temperature series.  As an experiment, I constructed “MATCRU” as a weighted average (by area) of ICOADS and CRUTEM.  Rather than the consistency reported by Cowtan-Way and Hausfather, this showed a dramatic inconsistency – not unlike the inconsistency in tropospheric series prior to the recent bodge of RSS data.
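
The construction itself is just an area-weighted average; a minimal sketch (toy anomaly series stand in for ICOADS and CRUTEM, and the 0.71/0.29 ocean/land split is the standard area fraction, assumed here rather than taken from the actual weights):

# "MATCRU": area-weighted blend of marine air and land air anomalies (sketch)
blend_matcru <- function(nmat_anom, crutem_anom, ocean_frac = 0.71) {
  stopifnot(length(nmat_anom) == length(crutem_anom))
  ocean_frac * nmat_anom + (1 - ocean_frac) * crutem_anom
}
set.seed(1)
dec <- seq_len(468) / 120                     # 39 years of months, time in decades
nmat   <- 0.09 * dec + rnorm(468, sd = 0.10)  # weaker marine air trend (toy)
crutem <- 0.26 * dec + rnorm(468, sd = 0.15)  # stronger land trend (toy)
matcru <- blend_matcru(nmat, crutem)
coef(lm(matcru ~ dec))[2]  # ~0.71*0.09 + 0.29*0.26 = 0.14 deg C/decade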

Conclusion

What does this all mean? Are models consistent with observations or not? Up to the recent very large El Nino, it seemed that even climate scientists were on the verge of conceding that models were running too hot, but the El Nino has given them a reprieve. After the very large 1998 El Nino, there was about 15 years of apparent “pause”. Will there be a similar pattern after the very large 2015-16 El Nino?

When one looks closely at the patterns as patterns, rather than to prove an argument, there are interesting inconsistencies between models and observations that do not necessarily show that the models are WRONG!!!, but neither are they very satisfying in proving that the models are RIGHT!!!!

  • According to models, tropospheric trends should be greater than surface trends. This is true over ocean, but not over land. Does this indicate that the surface series over land may have baked in non-climatic factors, as commonly argued by “skeptics”, such that the increase, while real, is exaggerated?
  • According to models, marine air temperature trends should be greater than SST trends, but the opposite is the case. Does this indicate that SST series may have baked in some non-climatic factors, such that the increase, while real, is exaggerated?

From a policy perspective, I’m not convinced that any of these issues – though much beloved by climate warriors and climate skeptics – matter much to policy.  Whenever I hear that 2016 (or 2017) is the warmest year EVER, I can’t help but recall that human civilization is flourishing as never before. So we’ve taken these “blows” and not only survived, but prospered. Even the occasional weather disaster has not changed this trajectory.

161 Comments

  1. Lance Wallace
    Posted Nov 18, 2017 at 4:01 PM | Permalink

    Must be embarrassing for a Canadian to see the Canadian model leading the pack, as it were.

    What is “Under_3”? The Russian model?

    • MikeN
      Posted Nov 18, 2017 at 4:38 PM | Permalink

      Negative feedback operates in many ways…

      • Follow the Money
        Posted Nov 23, 2017 at 2:02 PM | Permalink

        Feedbacks? Here is mine: With each model remove its assumption for increasing temps due to increasing atmospheric CO2 concentration.

        I predict the models adjusted so will align with instrumental temps much more convincingly.

    • Steve McIntyre
      Posted Nov 18, 2017 at 5:10 PM | Permalink

      Should have explained – models with only 1 or 2 runs. I grouped them together for the boxplot.

  2. Posted Nov 18, 2017 at 4:49 PM | Permalink

    Reblogged this on ClimateTheTruth.com.

  3. Posted Nov 18, 2017 at 4:56 PM | Permalink

    I see the structure of the debate differently. If you look at the trend range, you have the Canadian and the UK HadGEM2 models holding up the upper end of predictions (you have a token one at the lower level, there has to be one). The rest are just so many dogs in the race, running in the middle. If one greatly expanded research funding in models, a whole host of these would spawn, but they would roughly occupy the *same* prediction space. Models that closely matched present-day (in this instance 1979-2017) trends would show reduced warming in the future, which would directly conflict with why models get funded. This is not to say models are funded in order to produce warm trends, but that they are funded in the context of the study of global temperatures and their policy implications. Low-trend models would be self-defeating in such a milieu. The metric for model success is the absolute value reached 70-80 years from now, not their match with present-day trends. It is little wonder that not a single one of them is apparently parameterized in any satisfactory way to match present trends, yet they are the very embodiment of physics for one hundred years out in the future. In other words, the evolutionary and selection pressures on climate models do not include their capability in matching present reality.

  4. mpainter
    Posted Nov 18, 2017 at 6:16 PM | Permalink

    The lack of a La Nina explains the higher global temperature anomaly for the past year. This means that Pacific SST is stuck on warm. Normally, a super La Nina follows on the heels of a super El Nino, but that did not happen and so the models seem to be more accurate now than a few years ago.

    BUT, this warmth is due to an aberrant ENSO, not to the radiative physics of CO2. Hence the models, which did not forecast this aberration, cannot be said to be a reliable means of prognostication.

    For those not in the know, La Nina is an accelerated meridional overturning circulation wherein cold water is brought to the surface in the equatorial Pacific, acting as a global coolant.

    • mpainter
      Posted Nov 19, 2017 at 11:47 AM | Permalink

      La Nina 2017 has officially arrived, according to NOAA indices. The course of any La Nina is unpredictable, however. If it resembles the circa 2000 La Nina, it will be deep and prolonged and it will again expose the models as overhot. And so we see that the vagaries of nature hold the issue in suspense.

    • Alan Longhurst
      Posted Nov 19, 2017 at 1:26 PM | Permalink

      “For those not in the know” La Nina is the direct consequence of the return of trade winds to their normal strength after a Nino so that renewed surface wind stress plus the Coriolis effect generate upwelling of cool water along the equatorial region: go read it up in any decent, old-fashioned text-book on physical oceanography published after about 1970….

      The Meridional Overturning Circulation is an entirely different process, in a different ocean, usually acronymed as the AMOC…

      • mpainter
        Posted Nov 19, 2017 at 1:59 PM | Permalink

        Alan, by claiming that meridional overturning circulation is confined to the North Atlantic, you reveal your ignorance.

        Most definitely meridional overturning circulation is in the Southern hemisphere.

        YOU go read up. Start with Antarctic Convergence Zone. The rest of your comment is compounded of your ignorance. So read up and return if you wish more instruction.

        • Posted Nov 19, 2017 at 2:18 PM | Permalink

          The thermohaline circulation is one of those lovely ideas we have no means of measuring. We can calculate that it simply does not work as a heat/salt engine. If it exists, it is mechanically pumped, most likely by wind.

        • mpainter
          Posted Nov 19, 2017 at 2:30 PM | Permalink

          Gymnosperm, I doubt that southern hemisphere overturning circulation is thermohaline. Wind is the best bet, imo, as the driver. However, it seems clear to me that salinity difference drives the AMOC.
          What do you think?

        • mpainter
          Posted Nov 19, 2017 at 2:55 PM | Permalink

          To forestall semantic quibbling, please understand that “meridional overturning circulation” refers to the process whereby water subducted at the poles is circulated toward the equator (meridionally) and thereby upwelled to the surface.

          Also, please understand that upwelling must equal subduction and the two are coupled, that is, one cannot occur without the other also occurring, simultaneously.

        • mpainter
          Posted Nov 19, 2017 at 3:02 PM | Permalink

          Also, it needs to be understood that eastern boundary currents are part of this meridional overturning circulation (transport and upwelling of water originating as polar water previously subducted).

        • Posted Nov 19, 2017 at 3:37 PM | Permalink

          Mpainter, I think Alan’s point was that overturning is not necessarily meridional, and in the case of ENSO it is mostly equatorial, driven by equatorial atmospheric Walker Circulation. Atlantic Meridional Overturning is also a major player affecting GMST, likely greater than ENSO. There is no equivalent PMOC now but there is evidence of one existing in the Pliocene as reported here.

        • mpainter
          Posted Nov 19, 2017 at 4:08 PM | Permalink

          Wrong, Ron. Bad wrong and confused. Read up on Antarctic convergence. You simply do not know what you are talking about.

        • Posted Nov 19, 2017 at 4:47 PM | Permalink

          Mpainter, admittedly climate science is not my day job and I don’t claim to be publishing anything in a journal, as Steven Mosher aptly points out. However, I am citing my sources and as I Google “Antarctic convergence” and ENSO I get one paper that is looking for cross influences. Here is a quote from the abstract:

          It is clear that some evidence of ENSO can be found in the Antarctic meteorological and ice-core records; however, many of the relationships tend not to be stable with time, and we currently have a poor understanding of the transfer functions by which such signals arrive at the Antarctic from the tropical Pacific.

          So I think it’s clear you were thinking about AMOC, which has one leg in the Antarctic convergence. BTW, I think it’s interesting to note that meridional overturning warms the planet while equatorial overturning cools the planet (or at least the surface, by helping to break up the thermocline where the upwelling occurs). With meridional overturning the upwelling is in a polar region (which is already cold). The heat transport is almost all on the surface. This smoothes the equatorial-polar gradient, which warms the planet in two ways. Firstly, by melting the highly reflective polar sea ice and replacing it with highly absorbent ocean. Secondly, objects radiate in proportion to the fourth power of surface temp; thus spreading the heat over a larger surface reduces radiative efficiency.

          Perhaps the loss of the PMOC in the Pliocene was heavily responsible for the transition into the Pleistocene (Quaternary Ice Age), along with the reduction of CO2. This would throw a monkey wrench into the current consensus theory that only looks at CO2. OTOH, if we see evidence of a PMOC reviving, that could make things worse than we thought.

        • mpainter
          Posted Nov 19, 2017 at 5:00 PM | Permalink

          Ron, you should not comment on matters in which your understanding is deficient. Equatorial upwelling is by _definition_ meridional overturning circulation.

        • mpainter
          Posted Nov 19, 2017 at 5:08 PM | Permalink

          And Ron, you seem determined not to inform yourself (and address your ignorance) by reading about the Antarctic convergence I referred to.
          Please do not address me again on this subject.

        • Posted Nov 19, 2017 at 5:34 PM | Permalink

          “Equatorial upwelling is by definition meridional overturning circulation.”

          Partly true. Thanks to the Coriolis force there is a 5 degree deflection of the currents in either direction away from the equator. http://www-das.uwyo.edu/~geerts/cwx/notes/chap11/equat_upwel.html

          Otherwise, the current is all east-west, dragged by surface winds.
          https://atmos.washington.edu/gcg/RTN/Figures/RTN12.html

        • mpainter
          Posted Nov 19, 2017 at 5:51 PM | Permalink

          Ron, you have not read about Antarctic convergence, have you?

        • mpainter
          Posted Nov 20, 2017 at 4:50 AM | Permalink

          From Encyclopedia Britannica online:

          “Within the Antarctic Convergence zone, the cold, dense surface waters of the circumpolar ocean sink and flow northward, thus creating a major meridional circulation system.”

          This is what Ron Graf has trouble comprehending.

        • Mark - Helsinki
          Posted Nov 20, 2017 at 5:48 AM | Permalink

          La Nina and El Nino should be removed when comparing models with observations.

          As such, the trend then skirts the 95% rim.
          Truth be told, most warming in hindcast is tuned in because the models cannot reproduce the warming with just model physics; therefore, future warming in model output is tuned in.

        • Posted Nov 20, 2017 at 12:52 PM | Permalink

          For the record, I and others were commenting on mpainter’s comment: “For those not in the know, La Nina is an accelerated meridional overturning circulation wherein cold water is brought to the surface in the equatorial Pacific, acting as a global coolant.”

          If mpainter was referring to the ocean current global conveyor, aka thermohaline conveyor or THC, then he is partly correct. There is a leg of the THC that upwells in the equatorial region in the Indian Ocean. This has nothing to do with La Nina or ENSO.

        • mpainter
          Posted Nov 20, 2017 at 2:29 PM | Permalink

          I am entirely correct. La Nina upwelling is meridional overturning circulation. I said nothing about thermohaline in my original comment, Ron. You are the sort that falsely attributes expressions to those you dispute.

          My original comment was concise and I stand by it. Ron, you comment from ignorance and now try to cover your utter incomprehension by strawman diversions. Enough of your ignorant blathering. I post my single reference to thermohaline from above. You ignore everything I say on this topic:

          Posted Nov 19, 2017 at 2:30 PM | Permalink
          Gymnosperm, I doubt that southern hemisphere overturning circulation is thermohaline. Wind is the best bet, imo, as the driver. However, it seems clear to me that salinity difference drives the AMOC.
          What do you think?

        • Frank
          Posted Nov 20, 2017 at 4:19 PM | Permalink

          To the best of my knowledge, there are two separate phenomena.

          In the Equatorial Pacific, we normally have equatorial upwelling in the east and downwelling in the west, aka the Western Pacific Warm Pool. When that circulation slows or even reverses, the much warmer SSTs in the eastern equatorial Pacific warm the planet as a whole – El Nino. This oscillation isn’t meridional. It is driven by changes in equatorial trade winds, which are normally weakest in the spring and change direction when warmer SSTs start moving east.

          However, the upwelling of cold water off the west coast of South America is also part of the meridional overturning of the ocean that begins with the sinking of cold salty water near the poles (thermohaline circulation) that forms the characteristic deep water found at the bottom of the major oceans. That water returns to the surface in a variety of locations that are nearer the equator, but not necessarily in the tropics. Variations in this overturning are suspected to be responsible for the apparent 65-year oscillation in temperature around the Atlantic (AMO).

          Given two systems that can produce upwelling off the western equatorial coast of South America, it wouldn’t be surprising to find that they interact with each other.

    • mpainter
      Posted Nov 21, 2017 at 12:26 AM | Permalink

      The Humboldt current is categorized as an eastern boundary current. These are meridional currents that are an important aspect of ocean overturning circulation; they are all wind driven and generate much upwelling due to the Ekman effect.

      The Humboldt current runs offshore western South America in a northerly direction as far north as Peru.
      It plays an important role in ENSO. When the current is strong, La Nina conditions obtain: the current turns sharply westward at Peru and parallels the equator. This leads to the formation of a Walker circulation (trade winds) which accelerates the current and greatly enhances the upwelling. This Walker circulation is due to the SST difference: about 8°C cooler in the eastern Pacific than the western Pacific. The greater the SST difference, the stronger the trade winds. This is a coupled wind/SST process.

      Eventually the Humboldt current weakens, upwelling slows and stops. The Walker circulation falters and the SST of the eastern Pacific warms which finally puts an end to the trade winds. This is El Nino. With no (or weak) westward equatorial current, the sea surface warms in the tropical Sun and cools evaporatively, adding vast amounts of moisture to the atmosphere. This is the El Nino wetness that we know in the U.S.

      There is a similar ENSO type cycle off western Africa, generated in the same fashion. This involves the Benguela current, another northward flowing eastern boundary current. This, like the Humboldt, generates considerable upwelling and varies in strength and rate of flow. At its strongest it turns westward at the Bight of Africa and sets up a Walker Cell, just like the Humboldt at Peru. This has much less effect than the ENSO cycle, yet there is some. Northwest Brazil experiences drought during such times. This current eventually flows around northern South America into the Caribbean and into the Gulf of Mexico. It continues northwards as the Gulf Stream. So, it is seen that the Gulf Stream has its origins off the southwest coast of Africa in the cold-water Benguela current. These eastern boundary currents have a profound effect on climate and weather and are well worth study by those who seek to educate themselves on climate. Do not ask a climate scientist about these. They can tell you nothing.

      When one is properly educated in oceanography, one is less likely to be confused by the various aspects of ocean circulation.

      • mpainter
        Posted Nov 25, 2017 at 3:40 AM | Permalink

        It should be noted that Walker circulation is actuated by a pressure difference: high pressure in the eastern Pacific (cool SST), low pressure in the west Pacific (warm SST). Note also that this atmospheric circulation doesn’t involve cyclonic air movement; rather, the air movement (trade winds) from high to low pressure is an undeviating straight line flow. This is due to the fact that the Coriolis force is zero at the equator. At any other latitude this sort of air movement would be cyclonic because of the Coriolis force.

        It is this straight line air movement which sets up the equatorial current. The greater the SST difference between the east and the west Pacific, the stronger the wind and the stronger the current; the stronger the current, the greater the rate of upwelling/cooling. This is the “coupling” of ocean and atmosphere. So how is this process de-coupled? By weakening of the Humboldt current; reducing its rate of flow reduces the upwelling at the eastern terminus of the coupled Walker circulation. Thus the circulation falters according to the degree of weakening of the Humboldt. During El Nino, the current (and the upwelling) ceases altogether.

        There is confusion over whether upwelling at the equator is or is not meridional overturning. It needs to be understood that all deep water below the thermocline originated as water subducted in polar regions. Equatorial upwelling is by definition meridional circulation. Pay no attention to the thermohaline circulation charts that are found on the i-net. These are mostly conjectures by uninformed climate scientists. Upwelling (meridional overturning circulation) occurs in conjunction with all eastern boundary currents; there are no subsurface currents of polar water that connect to this upwelling as one might suppose by studying the blue lines that one sees on thermohaline circulation charts that pop up on the i-net.

    • mpainter
      Posted Nov 21, 2017 at 12:46 AM | Permalink

      I should add that these eastern boundary currents are variable and fluctuate in rate of flow and amount of upwelling over time. With the Humboldt, the flow rate is seasonal, usually slowing during the austral summer. This reduces upwelling, which has an effect on the fishing industry (the offshore Peru catch is the greatest in tonnage worldwide). With reduced upwelling, the catch is reduced. During El Nino, fishing stops. El Nino refers to the Christ child, as this is the season when the upwelling usually starts to fail. This is an ironical reference, as El Nino brings dearth to the fishing industry at Christmas time.

  5. Posted Nov 18, 2017 at 8:07 PM | Permalink

    Steve, please delete the prior posting of this comment that got caught in moderation. Thx.

    “There is an independent and long-standing dataset for night marine air temperatures (ICOADS). Combining this data with surface air temperature over land would avoid the problem identified by Cowtan and Way. Further, NMAT data is relied upon to correct/adjust inhomogeneity in SST series arising from changes in observation techniques, e.g. Karl et al 2015:”

    Karl (2015), in adjusting the SST with NMAT2, was citing Huang (2015). By using night marine air temperatures to normalize all sea data, Huang effectively transferred the tas trend to tos for ERSSTv4 (NOAA’s ocean temp index used by GISTEMP). Nic Lewis pointed this out in his CA post last year challenging Richardson (2016).

    Cowtan et al (2015) accept that the new NOAA data set “incorporates adjustments to SSTs to match night-time marine air temperatures and so may be more comparable to model air temperatures”. GISTEMP uses the NOAAv4.01 SST data set (ERSST4). Both NOAAv4.01 and GISTEMP show almost identical changes in mean GMST to that per HadCRUT4v4 from 1880-1899, the first two decades they cover, to 1995-2015…

    Nic was refuting Richardson et al’s claim that HadCRUT’s use of tos rather than tas accounted for a 9% low bias in GMST trend. And not only was post-Karl15 GISTEMP essentially already reporting GMST using tas for the sake of multi-decadal trend, but the tas that ERSST4 uses is biased about 5% high. How so? Well, the reason all along that tos was used for ocean temperature is that tas has data only for night, since that is the only time ship air readings are not confounded with solar heat radiating from the hull. By the way, this could also have biased temperatures as ships got larger through the decades.

    So rather than being a true Tavg, the night marine air temp is really a proxy for Tmin. Why does that matter? According to both models and observation, the global trends for Tmin are greater than for Tmax. Though models do not match the observed distribution of warming occurring mostly in the winter and high lats of NH, the models do in fact show that night times are warming faster than day times. Night sea air trends are 5% steeper than tas modeled Tavg.

    Kenneth Fritsch was kind enough to pull the following, using RCP 4.5 in the model mean, on Nic Lewis’s Richardson post here:

    Singular Spectrum Analysis Trends

    1861-2100: TMin = 2.383, TMean = 2.293, Ratio = 1.04

    1861-2005: TMin = 0.929, TMean = 0.885, Ratio = 1.05

    1976-2014: TMin = 0.724, TMean = 0.698, Ratio = 1.04

    1999-2014: TMin = 0.331, TMean = 0.321, Ratio = 1.03

    Linear regression Trends

    1861-2100: TMin = 2.764, TMean = 2.661, Ratio = 1.04

    1861-2005: TMin = 0.705, TMean = 0.670, Ratio = 1.05

    1976-2014: TMin = 0.790, TMean = 0.760, Ratio = 1.04

    1999-2014: TMin = 0.313, TMean = 0.303, Ratio = 1.03

    Nic commented in response: “Ken Thanks – a useful analysis. It makes the non-divergence between HadMAT2 and HadCRUT4 trends over the industrial period appear even more at variance with model simulations.”

    • Don Monfort
      Posted Nov 19, 2017 at 12:48 AM | Permalink

      Nice work, Ron.

    • Posted Nov 20, 2017 at 3:25 AM | Permalink

      Ron,
      “By using night marine air temperatures to normalize all sea data Huang effectively transferred the tas tend to tos for ERSSTv4”
      No, it doesn’t have that effect. Here is the relevant part of Huang’s paper (Sec 5):

      They fit global coefficients A from all the locations where they have both SST and NMAT. That doesn’t require complete NMAT coverage. The fitted monthly coefficients are averaged over months, and then subtracted from the climatology equivalent. That takes out the steady differences between SST and NMAT, like not being at the same time of day, but leaves a result that varies with year. So the adjustments don’t adjust to the trend of NMAT, only to annual discrepancies.

      • Posted Nov 22, 2017 at 4:34 AM | Permalink

        Nick, I think you have misinterpreted the ERSST4 ship SST bias adjustment.

        Suppose, to simplify the argument, that the climatological period SST – NMAT difference C_x,m is 1 K for all locations and all months. Assume further that in all locations NMAT increases faster than SST by 0.1 K/century, that is d_x,m,y has a negative trend of 0.1 K/century for all x and m.

        Then by Eq.(5) A_m,y will also have the same -0.1 K/century trend. Eq.(6) then implies that the bias correction B_x,m,y will have a trend of +0.1 K/century (A_y being the annual mean of A_m,y). Since the bias correction is added to SST measurements, it follows that the adjusted SST trend will be increased by 0.1 K/century, and will match the NMAT trend.

        The same applies if C_x,m varies by month – the A_m,y coefficients estimated for individual months by Eq.(5) will simply scale up or down, and that scaling will cancel out when Eq.(6) is applied, leaving the same 0.1 K/century trend added for all months.

        The form of Eq.(5) and (6) effectively spreads the global average SST – NMAT difference for each year in proportion to the climatological SST – NMAT difference for each location and month.

        In aggregate the ERSST4 SST bias adjustments do indeed adjust the trend of SST to the trend of NMAT. This does not apply over short periods, as the monthly fitting coefficients A are effectively 16-year low pass filtered.
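
        This algebra is easy to verify numerically. A toy check in R (one location, annual values, stylized versions of Eq.(5) and (6); made-up series, not the ERSST4 code):

        # Toy setup: NMAT warms 0.1 K/century faster than raw SST; offset C = 1 K
        years <- 1900:2000
        nmat <- 0.002 * (years - 1950)     # NMAT: +0.2 K/century
        sst  <- 1 + 0.001 * (years - 1950) # raw SST: +0.1 K/century
        d <- sst - nmat                    # SST - NMAT difference: trend -0.1 K/century
        A <- d                             # Eq.(5) fit collapses to d for one location
        B <- mean(d) - A                   # Eq.(6)-style correction: trend +0.1 K/century
        sst_adj <- sst + B                 # bias correction is added to SST
        trend <- function(x) 100 * coef(lm(x ~ years))[2]  # K/century
        c(raw = trend(sst), nmat = trend(nmat), adjusted = trend(sst_adj))
        # the adjusted SST trend now matches the NMAT trend, as argued above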

  6. Steven Mosher
    Posted Nov 18, 2017 at 8:10 PM | Permalink

    “One wonders why the standard indices (HadCRUT4) combine the unlike series for SST and land air temperature rather than combining two air temperature series. ”

    We worked on this for a while using a technique you suggested. NMAT had more issues (noise, bias, missing metadata) than SST. Picked the smaller dog’s breakfast.

    Still may do something with it as it helps in certain cases.

    • Posted Nov 18, 2017 at 8:21 PM | Permalink

      Night-time marine air temp (NMAT) is effectively a proxy for tas Tmin, not Tavg. And, as you well know, Steve, Tmin has a higher trend. So, with GISTEMP being normalized to NMAT since ERSST4, Huang15 via Karl15 added a spurious ~5% bias to the observed decadal GMST trend. I have a larger comment on this in moderation.

      • Steven Mosher
        Posted Nov 18, 2017 at 8:25 PM | Permalink

        Ron

        When you go through ICOADS and look at the whole sum of NMAT and SST, you will have standing to make an observation.

        Further, all satellite data is adjusted to a noon temperature. So I don’t even believe the series can be compared until all of these issues are addressed.

        When you publish on Karl, let me know. Code and data.

    • Posted Nov 18, 2017 at 8:29 PM | Permalink

      Steven, do you know if a study has compared the trend of NMAT data taken from large ships versus small? One might suspect that there could be a bias, as larger ships retain daytime heating of the superstructure longer into the night, just like a building. If so, there might be a non-climate effect in the NMAT historical data as the average vessel size grew over the decades in the global shipping fleet.

  7. Geoff Sherrington
    Posted Nov 18, 2017 at 8:21 PM | Permalink

    It is sad that so much argument has arisen, because it is possible to describe a common path whereby all parties agree that comparisons should involve specified terms, data types, sources and, if needed, assumptions.
    Such agreement should keep in mind that there is a purpose beyond statistical comparisons, that purpose being a better understanding of the world.
    The topic is annoying because of the many past refinements to data and their rapid acceptance or rejection. The C&W and Karl adjustments have had many objections, few satisfactory responses, yet now they seem entrenched. This is like politics, not science. Science tries for refinements towards the best answer, which means taking note of poor assumptions and proper estimates of uncertainty.
    Steve, in overview, do you have a recommendation for which comparisons should be most accurate and meaningful for better understanding the ways the world works? Geoff.

    • Steven Mosher
      Posted Nov 18, 2017 at 8:27 PM | Permalink

      All of the objections have been answered. Published and peer reviewed.

      That commenters on blogs don’t believe so is no surprise.

      And yes, the dataset has been updated.

      • mpainter
        Posted Nov 18, 2017 at 9:33 PM | Permalink

        Time and again we see that published and peer reviewed means little in climate science, except that shoddy science has been raised to the level of consensus. By this method we have a consensus of a greatly exaggerated effect of CO2. Where have you been, Mosher, when peer reviewed studies have been dissected and laid bare on CA?

        That commenters on blogs don’t believe so is no surprise.

      • Michael Jankowski
        Posted Dec 20, 2017 at 9:17 PM | Permalink

        Why does someone with a BA in English Literature write like a bot so frequently?

  8. Steven Mosher
    Posted Nov 18, 2017 at 8:22 PM | Permalink

    There is more to comparing the data than simply pulling the “relevant” fields from each source.

    If you start to slice through the data (for example, how do they compare over land and over sea? what does reanalysis look like, which Curry believes is the Best temperature record?), you’ll see that there are interesting differences you would not expect to find.

    Further, it is not entirely clear how to compare the tropospheric temperatures in models to get an apples-to-apples comparison with the averaging that the satellite retrievals use.

    The pattern of differences between the surface and the troposphere (most pronounced over snow-covered land) suggests that the UAH and RSS assumption of constant land emissivity may need to be checked.

    In order to calculate a temperature, both groups assume a single emissivity number for the land and then assume that it hasn’t changed since 1979 – while the land cover has changed and while snow cover has changed.

    Lots of theories about why the two records disagree. I remain skeptical that anyone has done it correctly.

    Finally, you can expect models to get many things wrong. From a prediction standpoint, they predict 1000s of data fields. AGW theory suggests a warming surface.. we see that.. AGW theory suggests a cooling stratosphere.. we see that.

    • Posted Nov 18, 2017 at 8:50 PM | Permalink

      “AGW theory suggests a warming surface.. we see that.. AGW theory suggests a cooling stratosphere.. we see that.”

      Steven, arguing that the Enhanced Greenhouse Effect is real and arguing that the effect is amplified 3 times by clouds and water vapor are two separate arguments. If the first is correct, then humans have improved the Earth’s climate. If the second is true, then we have only improved the climate if SLR can be prevented by geoengineering, such as seeding polar precipitation by mass installation of fountains blasting water into the air, etc. There is tens of trillions of dollars’ difference in implication between the model trend and the observed trend of GMST. Just so you know the debate, because I know you are new to it.

      • Pat Frank
        Posted Nov 18, 2017 at 11:49 PM | Permalink

        Ron, radiation physics predicts a cooling stratosphere.

        The reason is that the stratosphere is radiatively cooled. Increased CO2 enhances that cooling. That’s all.

        Radiation physics is not AGW theory.

        Steve Mosher has it wrong, but his is a very common mistake.

        • Steve McIntyre
          Posted Nov 19, 2017 at 1:12 AM | Permalink

          A long-standing CA editorial policy is to discourage comments on physics which attempt to prove or disprove AGW in a few sentences. I’d prefer that people discuss physics or solve AGW elsewhere.

        • Pat Frank
          Posted Nov 19, 2017 at 5:43 PM | Permalink

          I was discussing physics, Steve.

        • MikeN
          Posted Nov 19, 2017 at 8:06 PM | Permalink

          Which is what he wants discussed elsewhere.

        • Pat Frank
          Posted Nov 21, 2017 at 3:38 PM | Permalink

          Then Steve McI could have put his request under Mosher’s post, where the physics was first misrepresented. I’d not have replied.

        • Steve McIntyre
          Posted Nov 21, 2017 at 8:17 PM | Permalink

          yes. my editing is not consistent and often, as in basketball, the original offender passes unnoticed.

        • Posted Nov 23, 2017 at 2:56 PM | Permalink

          I am with Steve on the physics issue, as I like to read about the “math” issues on his website. The physics perspective can be followed elsewhere.

          Steve: it is purely editorial, not personal. It goes with blog editorial policy discouraging attempts to prove or disprove AGW in a single paragraph. Without such a policy, there was a climate version of Godwin’s law in which every thread turned into these well-worn disputes in about 12-20 comments, rendering them less interesting in my editorial opinion.

        • mpainter
          Posted Nov 25, 2017 at 5:05 AM | Permalink

          Steve can correct me if I’m wrong, but I don’t believe that Steve’s comment was directed specifically at Pat Frank. Rather, I believe that a comment by Don Monfort (nested within Pat Frank’s) was the real offender. Steve deleted Monfort’s comment without the usual “snip” signification. As I recall, Monfort addressed the physics of AGW in the forbidden fashion and I suppose that it was this that prompted Steve’s editorializing.

          Pat Frank in fact did not intend to refute AGW through principles of physics; he only pointed out that radiative cooling of the stratosphere was not a consequence of atmospheric warming as propounded by AGW hypothesis; and that it was incorrect to view stratospheric radiative cooling as connected to (or proof of) anthropogenic global warming, as Mosher and others supposed.
          On the other hand, this issue does involve complexities of atmospheric radiative physics, does it not? 🙂

    • mpainter
      Posted Nov 18, 2017 at 9:55 PM | Permalink

      “AGW theory suggests a cooling stratosphere
      we see that”

      ### ### ### ###

      I don’t believe we do. Not since Pinatubo in 1993, whose step down in stratospheric temperature cannot reasonably be attributed to CO2.

    • mpainter
      Posted Nov 18, 2017 at 10:00 PM | Permalink

      “AGW theory suggests a warming surface.. we see that ..”

      ### ### ### ###

      Warming since 1980 is better explained by natural causes, not by AGW. The truth is that the effect of CO2 on temperature has been vastly exaggerated. It is not a significant factor.

      • Steve McIntyre
        Posted Nov 18, 2017 at 11:58 PM | Permalink

        I think that people (including myself) have spent too much time parsing minutiae of warming and not enough on impacts. We are already a very long way towards doubling CO2 (esp if effect is logarithmic). If doubling CO2 was to have a very bad impact on human civilization, then we’re far enough along that it should be biting hard. But our civilization is experiencing unprecedented prosperity. Even if there is some extra impact on weather disasters, the effect in world terms is third-order effect relative to human prosperity.

        So when Gavin Schmidt announces that 2016 or 2017 was a record “hot” year, it’s worth noting that our civilization easily accommodated this “stress” and also achieved record prosperity.

        If we could purchase an actual “insurance policy” i.e. a policy that would fully protect us against adverse climate change for a sensible premium (even 1% of GDP), I’d be OK with that. I’m against pointless feel-good expenditures that, as “insurance”, are fraudulent.

        • mpainter
          Posted Nov 19, 2017 at 12:26 AM | Permalink

          Steve, it has been my view for years that atmospheric CO2 is entirely beneficial, the more the better. Any argument to the contrary is always alarmist and ill supported. As you point out, any index that one uses shows improvements for humankind since 1950. No detriment can be shown because of enhanced atmospheric CO2. Even the polar bears are thriving, as is true for all Arctic life. In fact, there are no alarmist arguments left that have not been demonstrated as false.

        • Posted Nov 19, 2017 at 3:46 PM | Permalink

          It’s a good point, Steve. If anything, this area of impacts is even more political and controversial than basic climate science issues. My perception is that there has been a concerted campaign to portray impacts as “worse than we thought” and only a few scientists like Cliff Mass have the courage to correct some of the huge distortions in the media.

        • Ben
          Posted Nov 19, 2017 at 4:06 PM | Permalink

          IPCC CO2 emission scenarios are built on a hypothesis of about 3% economic growth up to 2100, with the poorest countries’ GDP per capita reaching today’s US GDP per capita. These are inputs. Outputs of these scenarios are the temperatures.

          IPCC +4/6°C scenarios require as an input that people get much much richer than today.

        • mpainter
          Posted Nov 19, 2017 at 6:55 PM | Permalink

          Ben, meaning the outputs of appropriately tuned climate models.

        • mpainter
          Posted Nov 19, 2017 at 7:27 PM | Permalink

          Theoretically, higher SST means more rain hence more incidence of flood. But we are prepared for flood with our flood control structures and policies.

          Milder winters save on heating costs and associated cold weather costs. This is a benefit, of course. Fewer cold weather deaths, less inconvenience, less hazardous conditions. A very big plus for warmer temperatures.

          Theoretically, there should be less drought, less drought related crop loss and a longer growing season and consequently greater yields. Also, CO2 fertilization, greening of semi arid rangeland (Sahel) means more livestock. Also, more phytoplankton and larger food base in the oceans. I think these benefits are proven.

          Severe weather increase? It does not seem so. We see fewer severe tornadoes, fewer hurricanes, less drought, milder winters.

          Sea level rise? Tidal gauges on stable coasts show about 1.2-1.5 mm/year. Forget about satellite sea level data, it’s inaccurate. But even the exaggerated 3mm/year is only 12 inches/century; no cause for alarm. Accelerated SLR? Only in alarmist hype. I see no threat from SLR.

          On the whole, benefits far outweigh costs.

        • MikeN
          Posted Nov 19, 2017 at 8:09 PM | Permalink

          Ben, this is an internal contradiction. They get this high amount of warming from high CO2 from high economic growth in poor countries. Under all this wealth, the impact of global warming will not be as harsh for all these poor countries.

        • Ben
          Posted Nov 20, 2017 at 11:56 AM | Permalink

          @MikeN
          That is exactly my point. High end temperature scenarios require people to be rich and therefore to be almost weather insensitive.

        • Frank
          Posted Nov 21, 2017 at 12:36 AM | Permalink

          Steve wrote: “We are already a very long way towards doubling CO2 (esp if effect is logarithmic).”

          While I agree with much of what you wrote, there is a more appropriate way to describe how far towards a doubling we have come. The problem is that we have experienced a transient – not equilibrium – response to the forcing from rising GHGs (negated to an unknown extent by rising aerosols).

          The 43% increase from 280 ppm to 400 ppm is roughly half a doubling on a logarithmic scale (1.41^2 ≈ 2). The forcing from all well-mixed GHGs is 3.0 W/m2 (AR5), which is close to a doubling. However, about 0.7 W/m2 of that forcing is going to heat the deep ocean*, and the planet’s surface (atmosphere and mixed layer) has so far warmed enough to send an additional 2.3 W/m2 of net heat to space. That would mean that we are 62% of the way to a doubling (3.7 W/m2).

          This is before accounting for the cooling effects of aerosols. If we assume -0.5 W/m2 (a central estimate), then 3.0 W/m2 from WMGHG becomes 2.5 W/m2 of anthropogenic forcing, and 1.8 W/m2 after subtracting the 0.7 W/m2 going into the deep ocean. 1.8 W/m2 is roughly half a doubling. (This is on a logarithmic scale, since forcing is proportional to the log of the change in GHG.)

          So it would be more accurate to say that we are about HALFWAY to the equilibrium warming and impacts expected from a doubling of CO2.

          *In calculating ECS in energy balance models, ocean heat uptake (dQ = 0.7 W/m2) is subtracted from the forcing change (dF)

          ECS = F_2x * dT/(dF-dQ)

          (Hopefully Nic Lewis will correct any serious mistakes I have made.)

        • Frank
          Posted Nov 21, 2017 at 1:05 AM | Permalink

          When I concluded above that “it would be more accurate to say that we are about HALFWAY to the equilibrium warming and impacts expected from a doubling of CO2”, I should have noted the wide confidence intervals associated with some of these numbers. Half is a central estimate.

          To better see some of the absurdity associated with high climate sensitivity, we can rearrange the terms of the above equation to get:

          dT/ECS = (dF-dQ)/F_2x

          If ECS is high, then we are roughly 1/4 the way to equilibrium warming (dT/ECS = 1/4) and dF-dQ would be 0.9 W/m2.
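
          The arithmetic in these two comments is easy to reproduce. A worked check in R (values copied from above; the dT used in the energy-balance line is an illustrative assumption):

          # Check of the fractions above (F_2x = 3.7 W/m2 per AR5)
          F2x <- 3.7   # forcing for doubled CO2, W/m2
          dF  <- 3.0   # forcing from well-mixed GHGs, W/m2
          dQ  <- 0.7   # ocean heat uptake, W/m2
          aer <- -0.5  # central aerosol forcing estimate, W/m2
          (dF - dQ) / F2x        # ~0.62 of a doubling, ignoring aerosols
          (dF + aer - dQ) / F2x  # ~0.49, i.e. roughly halfway, with aerosols
          dT  <- 0.9             # illustrative transient warming, deg C (assumption)
          ECS <- F2x * dT / (dF + aer - dQ)  # energy-balance ECS on these inputs (~1.85 K)
          dT / ECS               # recovers (dF + aer - dQ)/F2x by construction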

    • Steve McIntyre
      Posted Nov 18, 2017 at 11:46 PM | Permalink

      If you start to slice through the data (for example, how do they compare over land and over sea? what does reanalysis look like, which Curry believes is the Best temperature record?), you’ll see that there are interesting differences you would not expect to find.

      I have had some work on land vs ocean in inventory for a long time. Unfortunately the land-sea mask at KNMI doesn’t work for tropospheric taz. It returns the same results for both masks. I wasted quite a bit of time trying to figure this out.

      I have an interesting homebrew technique for scraping data from KNMI using R, which works in most cases, but not when the masks themselves don’t work. A footnote on the taz page states that the mask doesn’t work, but it’s not easy to notice.
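
      For anyone wanting to replicate: most Climate Explorer series download as plain text with '#' comment headers, so the scraping amounts to little more than read.table. A minimal sketch (the URL is a placeholder, copy the raw-data link for the series you want; the year-plus-12-months layout is also an assumption, as some series come as year-month pairs):

      # Sketch: pull a Climate Explorer series into R (hypothetical URL, not a real path)
      url <- "https://climexp.knmi.nl/data/your_series_here.dat"
      raw <- readLines(url)
      dat <- read.table(text = raw[!grepl("^#", raw)],
                        col.names = c("year", month.abb), fill = TRUE)
      # flatten to a monthly series and fit an OLS trend in deg C/decade
      vals <- as.vector(t(as.matrix(dat[, -1])))
      tser <- ts(vals, start = dat$year[1], frequency = 12)
      dec  <- as.numeric(time(tser)) / 10  # time in decades
      coef(lm(tser ~ dec))[2]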

      • AntonyIndia
        Posted Nov 19, 2017 at 12:29 AM | Permalink

        Regarding KNMI Climate Explorer: amazing that this famed open project depends on the “hobby” of one Dutch physics/climate PhD (“The support team consists of one research scientist; next to my day job writing research papers, in practice I spent about a day per week on the Climate Explorer and related projects”) https://climexp.knmi.nl/about.cgi?id=someone@somewhere
        I guess the KNMI establishment let him continue because it became popular and Geert Jan publishes neat main stream climate stuff: https://www.researchgate.net/profile/Geert_Jan_Van_Oldenborgh/publications

      • Posted Nov 19, 2017 at 12:07 PM | Permalink

        I actually think this comment by our host deserves more discussion: “We are already a very long way towards doubling CO2 (esp if effect is logarithmic). If doubling CO2 was to have a very bad impact on human civilization, then we’re far enough along that it should be biting hard. But our civilization is experiencing unprecedented prosperity. Even if there is some extra impact on weather disasters, the effect in world terms is third-order effect relative to human prosperity.”

        The climate conversation has been proceeding at a breathless pace since 1988 (a rather arbitrary starting point, but… whatever). The warming period of 1975-1998 was unusual and worthy of discussion. The idea that humans contributed to some portion of that warming was just common sense, given the rapid pace of industrialization.

        But the warming achieved by 1998 has not slowed human progress in any field in the twenty years since. We grow more food, live longer and healthier lives, make more money, use a lot more energy, have more air conditioning, etc. etc. etc.

        Nor do the projected natural effects of climate change show any signs of appearing. Sea level rise, temperature rise, storms and droughts all seem to follow the basic schedules they had prior to 1945. With only 82 years left in the century it would seem appropriate for those projections for 2100 to be revisited.

        Instead of a sober assessment of the state of climate affairs, it seems to me that much of the consensus community has spent the last few years air-brushing the concept of atmospheric sensitivity out of the photograph, focusing instead on concepts like representative concentration pathways that start with the end result and work backwards and have baked in high levels of sensitivity in the background.

        Now it could be that our planet is in the same situation as those ads I see on lots of internet newsy sites: “26 pictures taken seconds before death,” that show someone (almost always an attractive young woman) smiling on a cliff. Some day I’ll click and find out what happened to her.

        But it seems only rational to also start examining some of the basic premises that led to projections of five feet of sea level rise by 2100, etc.

        When does the disaster start?

        • MikeN
          Posted Nov 19, 2017 at 8:11 PM | Permalink

          Thomas Fuller, I remember asking about this during the pause some time ago at Tamino’s. My suggestion was that the models that show less sensitivity should be given more weight because of the lack of warming. Tamino said the models show acceleration of warming.

        • jddohio
          Posted Nov 19, 2017 at 9:53 PM | Permalink

          T Fuller: Although coming from a somewhat different angle, you are essentially on the same page as Julian Simon. His basic premise was that over long time frames, human ingenuity outpaces scarcity and that, notwithstanding regular prophecies of doom, the human condition has been improving substantially over the last 200 years.

          Personally, as technology rapidly progresses, if increased CO2 is shown to have the propensity for substantial negative effects, I believe that it will be a trivial task to ameliorate the problem. I believe Nathan Myhrvold suggested possibly transporting dust into the upper atmosphere about 5 years ago.

          I realize a lot of people will react in horror to humans trying to manage the climate. However, if we really are facing catastrophic damages and the equivalent of death trains, the risks of geoengineering have to be compared to other scenarios.

          Personally, I see no reason to take any major steps now. However, unlike many situations, if we wait 10 or 20 years, the solutions will probably be cheaper and more effective (as technology improves) than if we took substantial, significant and necessarily costly steps now.

          JD

        • Posted Nov 20, 2017 at 11:26 AM | Permalink

          Yes, Tom, and then there is the question of what, if anything, we ought to do about it. I keep hearing that solar is now so cheap that we are on the verge of replacing fossil fuels. If that’s really true, we don’t need to do anything in the policy arena.

    • Geoff Sherrington
      Posted Nov 19, 2017 at 12:37 AM | Permalink

      Steven,
      Given this model uncertainty, should you not be cautioning against its use for any serious policy derivation?
      BTW, of course I studied the published aftermath of both C&W and Karl/Huang. Largely unconvinced. Repeat the need for proper estimation and reporting of error.
      Felt that the C&W extrapolation work of earlier times might benefit from study by mineral resource people who use similar methodology, but have not chanced across any such papers. Did Robert Rohde go down that cross check path? Geoff

  9. AntonyIndia
    Posted Nov 18, 2017 at 11:03 PM | Permalink

    Another angle: plant CO2 emissions higher than parameterised in models.
    “Land-atmosphere exchanges influence atmospheric CO2. Emphasis has been on describing photosynthetic CO2 uptake, but less on respiration losses.” And “Our analysis suggests Rp (=whole-plant respiration) could be around 30% higher than existing estimates.”
    https://www.nature.com/articles/s41467-017-01774-z

  10. Posted Nov 19, 2017 at 4:17 AM | Permalink

    Steve,
    “One wonders why the standard indices (HadCRUT4) combine the unlike series for SST and land air temperature rather than combining two air temperature series”
    It would be an obviously desirable thing to do. The problem is that NMAT coverage is too sparse, and there are big problems with homogeneity. The fact that it does have to be at night, because of unpredictable deck warming, already gives an indication of difficulty. And it isn’t any kind of average for the day, and can’t be. It’s more like a minimum, so hard to match with land.

    AR5 3.2.2.3 says of it
    “Overall, the SST data should be regarded as more reliable because averaging of fewer samples is needed for SST than for HadMAT to remove synoptic weather noise. However, the changes in SST relative to NMAT since 1991 in the tropical Pacific may be partly real (Christy et al., 2001). “

    Bob Tisdale says that NMAT must be good because, as said here,
    “Further, NMAT data is relied upon to correct/adjust inhomogeneity in SST series arising from changes in observation techniques”
    But they don’t use a NMAT global average to correct a SST average. Instead, they use individual NMAT readings as a reference in pairings in homogenisation. Then the poor coverage doesn’t matter.

    A further reason for preferring SST is that there is now a huge data resource from drifter buoys – the chief source for SST now. They don’t help with NMAT. Another plus for SST is that skin temperature can be measured by satellite. That is also too hard to homogenise to be used alone, but is a great check on SST measured by other means.

  11. stevefitzpatrick
    Posted Nov 19, 2017 at 8:25 PM | Permalink

    “…there are interesting inconsistencies between models and observations that do not necessarily show that the models are WRONG!!!, but neither are they very satisfying in proving that the models are RIGHT!!!!”

    Well sure, but the models are also inconsistent with each other, and on average well above measured reality, so most models have to be, well…. quite WRONG. Will there be continued warming? Sure, but nothing like the model average… more like 50% to 60% of the model average. If 0.15C per decade is not enough to motivate draconian public actions to restrict fossil fuels, then it is time to give it a rest.

    • Posted Nov 20, 2017 at 1:01 AM | Permalink

      “the models are also inconsistent with each other”
      The data isn’t brilliantly consistent either. Here I superpose a plot of just the UAH V6 and RSS V4 TMT tropical troposphere series on Christy’s graph. I’ve used the same processing – a 5 year moving average, adjusted so the trendlines pass through zero at 1979. The UAH is blue; the RSS is red. I have aligned the plots as best I can; both axes are shown. It’s a bit messy with the superimposed details, but it is all there. UAH and RSS are still well below the CMIP mean (which I haven’t checked), but this is Christy’s cherrypicked worst case.
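      In code terms, the alignment described above amounts to something like the following minimal sketch (Python; the series and variable names are placeholders, not the actual script):

      ```python
      import numpy as np

      def align_to_1979(year_frac, anom, window=60):
          """Smooth with a centered 60-month moving average and shift the series
          so that its OLS trendline passes through zero at 1979.0."""
          kernel = np.ones(window) / window
          smoothed = np.convolve(anom, kernel, mode="same")  # edge months are degraded
          slope, intercept = np.polyfit(year_frac, anom, 1)  # trend of the raw series
          offset = slope * 1979.0 + intercept                # trendline value at 1979.0
          return smoothed - offset

      # plotting align_to_1979(t, uah_tmt) and align_to_1979(t, rss_tmt) on one
      # axis reproduces the kind of superposition described above
      ```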

      • Steve McIntyre
        Posted Nov 20, 2017 at 1:23 PM | Permalink

        it’s disquieting that RSS recently adjusted their results upwards after the fact.

        • Posted Nov 20, 2017 at 7:04 PM | Permalink

          RSS was tracking with UAH until spring of 2016, when RSS changed their method of diurnal correction using GCMs and also decided to favor older microwave sounding unit (MSU) satellites that were diverging in the warm direction from a newer AMSU satellite. At the same time UAH chose with its version 6.0 to down-weight the MSU in favor of the AMSU, thus we have UAH and RSS diverging. The climate justice community was ecstatic. Headlines blared: “Sceptics lose their last talking point [except for UAH and sondes (weather balloons)].”

          With RSS’s paper covering their revision sailing through peer review while UAH’s paper drifts, Roy Spencer took the occasion of RSS’s publishing to post a detailed critique on his blog in July. Here is Spencer making his case that his horse will win in the long run.

          We have a paper in peer review with extensive satellite dataset comparisons to many balloon datasets and reanalyses. These show that RSS diverges from these and from UAH, showing more warming than the other datasets between 1990 and 2002 – a key period with two older MSU sensors both of which showed signs of spurious warming not yet addressed by RSS. I suspect the next chapter in this saga is that the remaining radiosonde datasets that still do not show substantial warming will be the next to be “adjusted” upward.

          Spencer sounds a tad cynical. Goodness knows why.

        • Posted Nov 20, 2017 at 7:22 PM | Permalink

          Spencer points out in another blog post that both RSS and UAH, along with sondes, still diverge from the IPCC models in tropical warming.

        • Posted Nov 20, 2017 at 7:37 PM | Permalink

          Steve,
          “it’s disquieting that RSS recently adjusted their results upwards after the fact.”
          They explained why. Two years earlier, UAH radically adjusted downward their results. They explained why, too. In fact satellite AMSU temperature has never been very stable.

          The average of the two now is probably close to what it was in early 2015.

        • Posted Nov 20, 2017 at 8:22 PM | Permalink

          Ron Graf,
          That graph compares CMIP something with various TLT results. Elsewhere, a graph is shown comparing CMIP with various TMT results. Yet the red CMIP5 curve looks to me absolutely identical, even to the little kinks. It is supposed to use a different weighting (never properly explained by Christy) for TMT and TLT.

          I’m pretty cynical about these plots. I’m even more cynical when I see presentations like Michaels at Cato where they strip off the headings from Christy’s graph and make no mention that it is tropical troposphere only, and not global.

        • Posted Nov 20, 2017 at 10:55 PM | Permalink

          Hi Nick,

          You said: “Yet the red CMIP5 curve looks to me absolutely identical, even to the little kinks. It is supposed to use a different weighting (never properly explained by Christy) for TMT and TLT.”

          Correct me if I’m wrong, but the little kinks that make CMIP5 appear to have accurately projected observations are deceiving. They were put there after the fact. I’m a conspiracy nut, you say? Well, maybe, but not for this. The CMIP5 corrects [actually likely over-corrects] after volcanic events. Their logic is that eruptions are unpredictable and, since they affect GMST, it’s only proper that the model’s RCPs should be revised (to cool the past). The Warren Commission was fake but the Apollo landings were real.

        • mpainter
          Posted Nov 20, 2017 at 11:09 PM | Permalink

          UAH is empirical, as ever stated by the authors. Their adjustment corrected for better agreement with balloon data. At that time, RSS was cooler than UAH. RSS corrected for no apparent reason other than to achieve a higher temperature.

          Now UAH agrees very well with balloon data, RSS runs much hotter. Climate science stripped naked, thank you RSS.

        • Frank
          Posted Nov 21, 2017 at 3:51 AM | Permalink

          Steve wrote: “it’s disquieting that RSS recently adjusted their results upwards after the fact.”

          UAH and RSS aren’t thermometers measuring temperatures at a fixed time and place every day (or even the daily high or low). On a typical day at the surface, temperature rises and falls about 10 degC (over land at least). So the time the temperature is measured (i.e., when the satellite is above a particular location) is critical, and the orbits of satellites gradually decay. You must correct for that. Furthermore, the satellites haven’t been using the same “thermometer” (MSUs and AMSUs) since 1979; those measurement devices are calibrated against each other while two units produce overlapping readings. I don’t think it is fair to say that one group or the other has recently adjusted their results and the other hasn’t. Both groups have been continuously refining their calculations and corrections – UNFORTUNATELY with both knowing how some of the choices they make will impact their final results. This is far from an ideal situation.
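          As a toy illustration of why a drifting observation time matters (the diurnal cycle shape and the amplitude here are invented for the example; real MSU/AMSU diurnal corrections are far more elaborate):

          ```python
          import math

          def diurnal_drift_adjustment(reading, obs_time_h, ref_time_h=14.0, amp=2.0):
              """Remove an assumed sinusoidal diurnal cycle so readings taken at
              drifting local times can be compared at a fixed reference hour."""
              def cycle(t_h):
                  # assumed cycle peaking at 15:00 local time; `amp` degC is made up
                  return amp * math.cos(2.0 * math.pi * (t_h - 15.0) / 24.0)
              return reading - cycle(obs_time_h) + cycle(ref_time_h)

          # as an orbit decays, obs_time_h drifts; without some such adjustment
          # the drift aliases into a spurious trend
          ```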

          UAH validates some of the choices they have made on the best way to process satellite data (radiances) by comparing their results to radiosondes. Great, but that may not give us two independent records that confirm each other. We may have satellite temperature trends that merely reflect radiosonde trends; any biases in the radiosonde trends could be in the UAH trend. RSS uses output from GCMs to confirm the correctness of some of the choices they made processing the same satellite data; their output will contain GCM biases. I’m not an expert in this field, so I may be exaggerating these problems. The scientists who peer review their papers likely understand how some of these choices affect the final trends. Do they give much more scrutiny to UAH’s choices and let RSS’s sail through review without challenge?

          In an ideal world, key scientists like Christy and Mears wouldn’t have access to the satellite data itself. They would simply design the algorithms needed to process the data and give those algorithms to others. When confronted with the problem of calibrating new sensors, they could propose and debate the merits of different approaches without knowing how those options would impact the final trends. If several approaches appeared equally good on paper, they could propose tests to distinguish which was best and possibly recognize that there was no way to determine the best method and agree to include the difference in the confidence interval.

        • mpainter
          Posted Nov 25, 2017 at 6:01 AM | Permalink

          Each radiosonde is independent of the others, giving an independent dataset. UAH agrees well with these roughly 800 datasets. No bias in these datasets has ever been demonstrated, except for claimed problems arising from improvements in radiosonde sensors, problems which are exaggerated and spurious, imo.

          UAH data reliability is confirmed by its agreement with radiosonde data. RSS produces data according to whatever suits its authors, as when they recently warmed its dataset and justified this warming by reference to theory and GCM product.

          There can be no question that UAH data is more accurate than RSS, being confirmed by radiosonde data deriving from some 800 independently operated radiosonde programs.
          In my view, RSS simply employs inferior methods in generating their products.

      • mpainter
        Posted Nov 25, 2017 at 6:15 AM | Permalink

        An empirical approach requires that algorithms be devised according to constraints. The “ideal” that the scientists who originate the algorithms should not access the product of their algorithms is ludicrous. Anyone who suggests such an “ideal” is unfamiliar with empirical methods. Such as a climate scientist who tinkers with the dubious GCMs.

  12. Mark - Helsinki
    Posted Nov 20, 2017 at 5:51 AM | Permalink

    Model means are meaningless

    Each model gets measured against obs. So that means 98% of runs are meaningless, with 2% of lucky strikes.

    Models are deterministic; there is no representation of what we don’t know in them. They will always warm and never cool, by design.

    Can we all just admit climate models are not science, not mathematics, not physics? They are junk

    • mpainter
      Posted Nov 22, 2017 at 11:55 AM | Permalink

      The Russians (one of their academies of science) have devised a climate model that agrees with observations. They achieved this simply by fiddling with some of the tunable assumptions.

      So why do not other modelers tune theirs to achieve agreement with observations? Perhaps because there would no longer be any support for alarmism.

      One of Steve’s posts this year shows this Russian model’s product.

      • mpainter
        Posted Nov 22, 2017 at 12:02 PM | Permalink

        To continue, the referenced Russian model adjusts parameters to achieve a product in agreement with observations. This is how work would ordinarily proceed in any field of science: the approach is constrained by observations, per accepted scientific methodology. Climate science eschews such an approach and disdains such constraints, and it deserves contempt on that basis.

  13. Mark - Helsinki
    Posted Nov 20, 2017 at 5:54 AM | Permalink

    There are three outcomes in climate
    1. It will warm – it has been warming for 3 to 4 hundred years; safe bet that it will keep warming for the next few decades
    2. It will cool
    3. It will plateau.

    Our best data shows almost a plateau.
    The safe bet, based on NO DATA, is that it will continue warming; this is what models are doing, making safe bets

  14. Posted Nov 20, 2017 at 9:17 AM | Permalink

    I would just note that the difference between the surface TAS plot above and the blended TOS-TAS amounts to how one processes the model data over the oceans. The result is a very large difference in trends (0.28 vs. 0.19 C per decade). One might ask, which is the “correct” number to determine when we cross the 2C threshold? Does anyone even know?

    This all strikes me as an exercise in phony precision, i.e., trying to get more information out of data and models than their precision really justifies. Certainly, whether the models pass or fail this very gross test of a global integral quantity is largely irrelevant for their intended purpose, which is to produce regional climate changes. They fail at that task, as pretty much everyone acknowledges.

  15. Matt E
    Posted Nov 20, 2017 at 10:44 AM | Permalink

    To me the most interesting outcome of all this is that the new “Cowtan and Way” blended model series had much lower trends (halved) that appear to reflect observations. If that’s true, it seems a pretty solid admission that the prior models touted by the IPCC and similar mouthpieces have vastly overprojected future temps. I’d be more sympathetic to the “warmists” cause if I saw them embrace these new numbers. I won’t hold my breath.

  16. Adam Gallon
    Posted Nov 20, 2017 at 12:35 PM | Permalink

    We’ve another “Pause Buster” paper.
    https://phys.org/news/2017-11-added-arctic-global-didnt.html
    “We recalculated the average global temperatures from 1998-2012 and found that the rate of global warming had continued to rise at 0.112C per decade instead of slowing down to 0.05C per decade as previously thought,” said Zhang.”
    It’s because the Arctic’s red hot!

    • mpainter
      Posted Nov 20, 2017 at 12:40 PM | Permalink

      Where there are no thermometers to confuse the issue.

  17. EdeF
    Posted Nov 20, 2017 at 1:38 PM | Permalink

    You can only compare measured temperatures with computer models for the 1979 to 2015 timeframe using computer programs written in 1979 or before. Setting a 2010 computer program back to “1979 initial conditions” ha ha ha hee hee ho, excuse the mirth, means you’ve had three decades to toss in all kinds of fudge factors.

    • mpainter
      Posted Nov 21, 2017 at 2:09 AM | Permalink

      Ah, good point. Start your projection at 1979 and let’s compare that to observations. A 1979 climate model (unrevised) will reveal the truth about these contrivances.

  18. Posted Nov 20, 2017 at 8:36 PM | Permalink

    Whenever I hear that 2016 (or 2017) is the warmest year EVER, I can’t help but recall that human civilization is flourishing as never before. So we’ve taken these “blows” and not only survived, but prospered. Even the occasional weather disaster has not changed this trajectory.

    Page 89 of Richard Tol’s book “Climate Economics” (2014) says “The best off country is Canada … which is rich, well-organized and rather cold. The [climate change] impact is positive throughout the 21st century, as are incremental impacts so that there is no incentive to reduce emissions.”
    Richard Tol is the original author of the FUND integrated assessment model (IAM), which is defined for 16 regions and is the most detailed IAM. Figure 6.3 implies that Canada benefits from emissions by US$101 billion per year by 2100, in 2016 dollars.

    On a global basis, the Julia implementation of FUND3.9 gives the following impacts per GDP by component, assuming a 3 °C equilibrium climate sensitivity (ECS):

    Nic Lewis calculated an ECS = 1.45 °C from empirical measurements, but did not correct for urban warming or the millennium warming cycle. Correcting for these factors gives an ECS of 1.0 °C, as shown at https://friendsofscience.org/index.php?id=2330

    Julia FUND3.9 gives the following impacts per GDP by component, assuming a 1.0 °C ECS:

    Using a 5% discount rate, the ECS = 3 °C and 1 °C cases give a discounted impact in 2018, in US2016 dollars, of -0.76% and +0.35% of GDP, respectively. Impact changes from 2018, and GDP for years 2018 through 2300, are discounted to 2018 in these calculations. In other words, greenhouse gas emissions (especially CO2) are a wonderful by-product of fossil fuel use.
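    As a minimal sketch of the discounting mechanics only (the impact stream below is a made-up placeholder, not FUND3.9 output):

    ```python
    def discounted_impact(impacts, base_year=2018, rate=0.05):
        """Discount a {year: impact} stream back to `base_year` at `rate`."""
        return sum(v / (1.0 + rate) ** (y - base_year) for y, v in impacts.items())

    # hypothetical illustration: at a 5% rate, impacts in 2100 and beyond are
    # discounted almost to nothing
    hypothetical = {2018: -0.01, 2050: -0.05, 2100: -0.20, 2300: -0.50}
    print(round(discounted_impact(hypothetical), 4))
    ```

    The choice of a 5% discount rate does much of the work here: an impact in 2100 is shrunk by a factor of (1.05)^-82, roughly 0.018, before it reaches the 2018 total.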

    The global storms and sea level rise impacts for ECS = 1.0 °C, discounted to 2018 at 5%, are 0.0019% and 0.0049% of GDP, so the media hype about storm and sea level rise damages is misplaced.

    Julia FUND3.9 calculates a social cost of carbon dioxide of +1.01 US$/tCO2 and -2.93 US$/tCO2 for ECS = 3.0 °C and 1.0 °C, respectively, for emissions in 2020, in 2016 dollars, using a 5% discount rate.

    The default emissions scenario in FUND assumes CO2 emissions per year increase from 30.4 gigatonnes CO2 (GtCO2) in 2017 to a maximum of 95.8 GtCO2 in 2115, which is a huge increase and very unlikely to occur.

  19. Posted Nov 22, 2017 at 5:07 AM | Permalink

    I calculate the January 1979 to September 2017 (latest month) trend in GMST per HadCRUT4v5 as 0.174 K/decade, not 0.181 K/decade.

    The idea that substituting air temperature for SST over the open ocean makes a significant difference to the GMST trend is not supported by observational evidence. Even in GCMs the effect on GMST trends of doing so is under 10%.

    The obvious apples-for-apples comparison of 1979-onwards global surface air temperature changes in models is with ERAinterim global 2 m air temperature. ERAinterim is the most highly regarded reanalysis product. For examining changes over time, ECMWF produces a version that is unaffected by possibly inhomogeneous NMAT data and which is adjusted for an inhomogeneity in the satellite ocean skin temperature data it uses. See http://climate.copernicus.eu/resources/data-analysis/average-surface-air-temperature-analysis.

    The 1979 to 2017 trend in the ERAinterim global 2 m air temperature is 0.182 K/decade, far below the model average of 0.241 K/decade.
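    For what it’s worth, the trend figures quoted above come down to an ordinary least squares fit on monthly anomalies; a minimal sketch (array names are placeholders):

    ```python
    import numpy as np

    def trend_k_per_decade(year_frac, anom):
        """OLS trend of a monthly anomaly series, in K/decade.
        `year_frac` holds fractional years (1979.0, 1979.083, ...)."""
        return 10.0 * np.polyfit(year_frac, anom, 1)[0]

    # e.g. trend_k_per_decade(t, hadcrut4) over Jan 1979 - Sep 2017 should
    # reproduce something like the 0.174 K/decade quoted above
    ```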

    • Posted Nov 22, 2017 at 11:15 AM | Permalink

      Nic,
      Agreed about the HAD4 trend.

      “The obvious apples-for-apples comparison of 1979-onwards global surface air temperature changes in models is with ERAinterim global 2 m air temperature.”

      The issue there is, does ERA have a better information source about 2-m air temp over sea than SST? I note in your link

      “Values over sea prior to 2002 are further adjusted by subtracting 0.1°C. This accounts for a change in bias that arose from changing the source of sea-surface temperature analysis.”
      IOW, the correction for transition from ships to buoys. That is applied to the 2m air temperature. That suggests strongly that the dominant source of info for the air temperature in ERA is SST. Later they say

      “Elsewhere [not over land], the background forecast model plays a stronger role, helping values of surface air temperature to be derived from other types of observation, such as of sea-surface temperatures and winds.”

      • Posted Nov 22, 2017 at 1:18 PM | Permalink

        Nick,

        “IOW, the correction for transition from ships to buoys.”

        No, it has nothing to do with that. The discontinuity arose in two steps, in mid 2001 and the start of 2002, both relating to the source of the SST analysis used in ERAinterim changing its calculation scheme.

      • Posted Nov 22, 2017 at 11:12 PM | Permalink

        Hi Nic Lewis, thanks for taking the time to develop such expertise and to audit so deeply in search of the truth.

        Nick Stokes, of all the “non-skeptics” in the climate science blogs you are likely the most knowledgeable and civil. I applaud you and wish others would follow your fine example. I came to the blogs after reading a WSJ editorial written by Curry and Christy in 2014. I was shocked at what I was hearing from actual climate scientists after reading so many articles of climate doom. The article mentioned Judith Curry’s blog and I decided to try to find the truth myself. It certainly has not been easy to learn all the science but it was easy to see a pattern. The more powerful the magnifying glass of audit the more errors of bias (and some outright deceptions) are exposed. Almost without exception they are in one direction: exaggerating climate sensitivity and anthropogenic attribution certainty.

        Nick, whether or not growing humanity’s environmental conscience is for the common good, do you agree there could also be a benefit to scrupulously avoiding exaggeration and false scientific claims?

        • Posted Nov 24, 2017 at 2:16 AM | Permalink

          For some time after I started following climate blogs I noticed that every time a skeptic brought up the divergence of surface records from UAH and RSS trends (which matched back then) a non-skeptic would automatically point out: a) that satellites do not measure temperature, and b) that the lower troposphere is not the surface. The answer to the first would always be “neither do thermometers measure temperature.” And it is surely sensible that, whether one is measuring the property of oxygen brightness or the expansion of mercury, it’s the rigor of instrument testing and calibration that’s the critical factor. On the second point, it took me over a year to find the expected ratio of GMST to TLT. After inquiring on the subject I would typically get replies like this one from a frequent blogger, who now works for BEST:

          Ron
          The surface temperature is not measured by satellite.
          The assumptions required to generate a “fake” temperature from satellites are quite stunning when you look at them. There is no calibration.
          Further, even if the surface temp is wrong, by a couple of tenths we still face a potential issue. http://rankexploits.com/musings/2015/paris-3-the-accord/#comment-141853

          Then about a month later I came right out and asked this person: “What is the history of the consensus opinion on the expected factor to correct (normalize) [TLT to GMST] one or the other’s trend to match up?” The answer:

          As I said.. I dunno, there really isnt a consensus as far
          as I know.. There are a couple papers.. not my specialty.

          The following day, researching the formation of BEST, I found this 2011 CA post that was devoted to this very issue of the expected trend of TLT. Astonishingly, I found that there was thought to be agreement between Gavin Schmidt and Richard Lindzen that the TLT should be warming at a rate 1.4X GMST. I presume that GISS models were outputting what was being called a “tropospheric hotspot.” The problem was that GMST was trending ~1.6X of both UAH and RSS, and multiplying that by 1.4 made an outrageously wide 2.2X divergence.
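          Spelled out (a restatement of the arithmetic in the paragraph above, not new data):

          ```python
          model_amplification = 1.4  # modelled TLT warming relative to GMST, per the 2011 thread
          gmst_vs_tlt = 1.6          # observed GMST trend relative to UAH/RSS TLT trends
          print(model_amplification * gmst_vs_tlt)  # ~2.2x total divergence
          ```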

          With this kind of discrepancy one starts to look for problems. One area that is a longtime issue is urban warming spuriously biasing the land record. HadCRUT was the lowest trending index in 2011, but even HadCRUT makes no correction for urbanization or micro-site bias. Climate scientist Robert Way looked in another direction: the satellite analysis had to be wrong. NOAA’s STAR mid troposphere index was running higher than UAH’s and RSS’s mid troposphere, so there was an indication that if STAR had a TLT index it would be higher and in line with HadCRUT. Steven Mosher commented next and pointed out that even if a TLT could be conjured up to meet HadCRUT, one would still have to deal with the models’ 1.4X amplified TLT warming. Robert, sensing a closing-in feeling, summoned the climate gods with the question: “I might need a reference for the model amplification factor of 1.4x. Where can I find that ?” Thundering through the clouds appeared a voice. Gavin:

          Nowhere. The expected land-only amplification of MSU-LT over SAT is close to zero (actually equivalent to a factor of ~0.95 +/- 0.07 according to the GISS model).

          The ocean-only tropical amplification is related to the moist adiabat which is not the dominant temperature structure over land since deep convection is mostly an tropical ocean phenomena. Ocean temperatures are rising slower than over land, therefore even if tropical land tropospheric temperatures were being set by a moist adiabat over the ocean, it would still have a smaller ratio with respect to the land temp.

          One wonders if the models might have once shown a warmer TLT but needed to get re-tuned. In any case, Gavin admitted that the TLT should be at least 0.95 of the faster-warming land surface record, and left unsaid what the specific, higher ratio was over the heat-uptaking ocean covering 70% of the planet’s surface.

          Consulting Wikipedia on this question I was first disheartened to read: “Satellites do not measure temperature.” Wikipedia is often a very slanted source for climate science info, I’ve found. It claims, for example, that “Mike’s Nature Trick” was not a deception but just a clever way to solve a tough math problem. Thus I was shocked to find this citation on modeled TLT warming:

          Globally, the troposphere should warm about 1.2 times more than the surface; in the tropics, the troposphere should warm about 1.5 times more than the surface.

          I think somebody needs to tell Gavin about this.

        • Posted Nov 25, 2017 at 12:39 AM | Permalink

          I meant to say “Mike’s Nature Trick” in Wikipedia does not refer to it as a deception. Not to spur a discussion on this, here is the standing sentence:

          In science, the term “trick” is slang for a clever (and legitimate) technique, in this case Michael E. Mann’s technique for comparing two different data sets.

          I am interested if anyone knows the history of CMIP modeling of the troposphere. Have models changed parameters to cool the troposphere? If so, is there a legitimate change in scientific knowledge for doing so, or is it just pressure to fit observation?

        • DaveS
          Posted Nov 25, 2017 at 6:44 AM | Permalink

          Wikipedia’s coverage of climate science has long been tightly controlled by the likes of William Connolley, so it should not be regarded as a reliable source of information. That spin on ‘trick’ originated in one of the whitewash Climategate inquiries – I think they made it up to get Phil Jones off the hook; the idea that it is some kind of commonly used term in ‘science’ is laughable.

        • Steve McIntyre
          Posted Nov 25, 2017 at 3:30 PM | Permalink

          the term “trick” can be used to describe a legitimate operation. The problem with various “tricks” to conceal the discrepancy between the Briffa reconstruction and other reconstructions is that they were not legitimate operations, but deceitful falsifications. The only people who don’t understand this are climate scientists and climate warriors.

        • Steve McIntyre
          Posted Nov 25, 2017 at 3:31 PM | Permalink

          the spin on trick originated with Gavin Schmidt on day 1 and was adopted by the Penn State Inquiry Committee – Penn State obviously not having a good reputation for searching misconduct inquiries.

        • Posted Nov 25, 2017 at 4:30 PM | Permalink

          ” the idea that it is some kind of commonly used term in ‘science’ is laughable”
          Google Ewald’s trick (basically Poisson summation).

        • Steve McIntyre
          Posted Nov 25, 2017 at 11:13 PM | Permalink

          the idea that Mann’s deletion of adverse data in IPCC 2001 is anything other than deception is laughable (basically “hide the decline”)

        • Posted Nov 26, 2017 at 9:57 PM | Permalink

          Once again Stokes shows how narrow his blinders are. Ewald summation is a mathematical method to get a series describing long-range interactions to converge faster by working with energies and Fourier calculations rather than their real-space values, then transforming back to real space. This is almost purely mathematical in nature. Hence, those crystallographers who use the Ewald trick are using it exactly as a mathematician would, and using “trick” exactly as a mathematician would.
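          For reference, the identity underlying Ewald’s trick is the Poisson summation formula (standard statement, added here for context):

          $$\sum_{n=-\infty}^{\infty} f(n) \;=\; \sum_{k=-\infty}^{\infty} \hat{f}(k), \qquad \hat{f}(k) = \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i k x}\, dx$$

          Ewald summation uses it to split a slowly convergent lattice sum into a short-range part summed in real space and a smooth long-range part summed rapidly in reciprocal (Fourier) space.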

          So Nick ‘defends’ Mann via implied ‘incorrectness’ of DaveS’s comment.

        • Posted Nov 27, 2017 at 5:15 AM | Permalink

          “implied ‘incorrectness’ of DaveS’s comment”
          The comment was incorrect. It said:
          “the idea that it is some kind of commonly used term in ‘science’ is laughable”
          I showed that it was indeed used, in this example, by crystallographers. The fact that the trick used mathematics is irrelevant to the usage. It is something that is clever and valid, and that is the term they use for it.

          What Jones described as Mike’s trick was also mathematically based.

        • Steve McIntyre
          Posted Nov 27, 2017 at 2:57 PM | Permalink

          Nick, as usual, refuses to confront the issue of Mann’s deletion of adverse data in IPCC 2001, which, as noted in a previous post, appears to meet all criteria of falsification as defined in codes of conduct. Deletion of adverse data is not “clever and valid”. Climategate letters revealed why Mann and others concealed the adverse Briffa results in IPCC 2001: they did so in order not to give “fodder” to skeptics and so as not to “dilute the message”.

          Nick Stokes is well aware of the intentional concealment of adverse data and, rather than give offence to the community, has stoutly defended the falsification in question.

          Nick doesn’t defend directly but through weaseling. Today Nick says, look, squirrel: Mike’s Nature trick is “mathematically based”. What on earth does this mean? That he used numbers in his calculation? What an irrelevant bit of nonsense. In 1998, Mann spliced proxy and instrumental data to construct his smoothed proxy diagram (as pointed out long ago at CA by Jean S and UC). He didn’t disclose the splice at the time. Then later he swore up and down that nobody in climate spliced proxy and instrumental data, saying that only fossil fuel funded miscreants would even dare make such a claim. But that was a lie – as Nick knows but will never admit. Instead, Nick will point to more and other squirrels.

        • John Bills
          Posted Nov 27, 2017 at 5:36 AM | Permalink

          Nick Stokes,

          The state of climate science in 2017 is this: https://climateaudit.org/2017/07/11/pages2017-new-cherry-pie/
          Just look who is part of pages 2k.

        • Posted Nov 27, 2017 at 10:34 AM | Permalink

          Nick, you really are straining at gnats and swelling camels. Mike’s Nature trick involves omitting adverse data and splicing in a different set of data to make the curve look “good”. That is not mathematical at all.

        • mpainter
          Posted Nov 27, 2017 at 10:52 AM | Permalink

          “swelling camels”

          Nick likes camels with lots of humps.

        • Posted Nov 27, 2017 at 4:16 PM | Permalink

          “Nick, as usual, refuses to confront the issue of Mann’s deletion of adverse data in IPCC 2001”
          There is a laudable editorial policy here deprecating efforts to sweep away the whole of AGW in a paragraph. It encourages sticking to the topic. I try to do that too. Here we had a simple fact at issue; the usage among scientists of the word “trick”. It does not embrace Mann’s “deletion of adverse data” etc. It is a simple issue of fact that can be resolved by looking at examples. People may want to use it as part of a larger argument, but if you don’t try to get the parts right first, you can’t make progress. There may well be things to say against Mann’s graphing, but the fact that Jones used the word “trick” is not one of them.

        • Steve McIntyre
          Posted Nov 28, 2017 at 1:43 AM | Permalink

          You are quite right that the mathematics usage of the term “trick” does not “embrace Mann’s ‘deletion of adverse data'”. On this point, we could not agree more.

          A technique must be clever and valid to qualify as a mathematical “trick”. It appears that we agree that “deletion of adverse data” in order not to give “fodder to skeptics” does not qualify as a clever and valid mathematical trick. In other words, it appears that we agree that Gavin Schmidt was wrong when he described Mann’s deletion of Briffa data (thus hiding its decline) as a clever mathematical trick. And likewise, it appears that we agree that the Penn State Inquiry Committee was also wrong when they described the deletion of adverse data as a clever mathematical trick. Rather than being a “clever” trick, it was falsification – a point that we seem to be in agreement on as well.

          As you observe, it is important to get the parts right and I am glad that you’ve finally agreed on these points.

        • mpainter
          Posted Nov 27, 2017 at 4:57 PM | Permalink

          As never before, Nick comes close to admitting that Mann used deceptive techniques in his “graphing”. Is doubt starting to worm its way into that bastion of AGW?

        • Posted Nov 28, 2017 at 12:00 AM | Permalink

          I do find Nick’s behavior in defending the indefensible a little odd, given his generally scientific tone. I’ve noticed too on other subjects that Nick has more confidence than I do in the science establishment. I’m assuming Nick is Australian and a retired civil servant. Maybe the soft money culture arrived down under much later than in the US.

      • Posted Nov 25, 2017 at 4:43 PM | Permalink

        I think, Nick, that you are confusing different branches of science. “Trick” is sometimes used in mathematics to indicate a clever but valid manipulation. I don’t think it’s used in other branches of science, particularly not in medicine. In the Mann context, you must consider the purpose, which was to “hide the decline.” Context tells us the truth.

        • Steve McIntyre
          Posted Nov 25, 2017 at 11:15 PM | Permalink

          Nick has defended the worst sort of climate offences over the years, never making the slightest concession. Discussion with him is pointless.

        • Posted Nov 26, 2017 at 3:00 AM | Permalink

          David
          “I don’t think its used in other branches of science”
          Ewald was a crystallographer.

        • Steve McIntyre
          Posted Nov 26, 2017 at 9:31 AM | Permalink

          The issue with Mann’s deletion of adverse data to conceal that the Briffa reconstruction went the “wrong” way is not that Phil Jones called a technique to do so a “trick” but that it was falsification because it omitted data or results such that the research was not accurately represented:

          (2) Falsification means manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.

          The longstanding attempts by Schmidt, Stokes and others to defend the indefensible have probably done more to undermine the credibility of the climate community than the original offense.

        • mpainter
          Posted Nov 26, 2017 at 4:29 AM | Permalink

          “Ewald was a crystallographer” who never engaged in deceptive science as did the likes of Michael Mann the climate scientist.

      • Posted Dec 1, 2017 at 3:16 PM | Permalink

        Reply to Nick Stokes: I’m a long-term fan of Steve McIntyre but, like Ron Graf, I’m also a fan of Nick Stokes’ persistent efforts to present the orthodox or consensus position in the best possible light – Nick is invariably courteous and relevant and doesn’t seem to take offence at attempts to put him down. He may sometimes (or often, I don’t know enough to judge) be wrong, but as an interested observer I value the opportunity to read his comments, which provide a standard against which alternative explanations can be assessed. Thanks both Steve and Nick
        If I may be excused a snide comment of my own, I do on the other hand tend to groan when I read comments from Mosher…now why should that be?

  20. joe
    Posted Nov 24, 2017 at 10:25 AM | Permalink

    Two observations regarding the accuracy of the models as claimed by the warmists

    1) As with all things in nature, there are cyclical patterns (long term, short term, mid term – however the periods are defined). The crests of these short-term cyclical temperature patterns seem only to reach the middle of the model predictions. If the models were more accurate, the predictions would cut through the middle of the cyclical patterns. It seems it is only because of the El Nino of 1998 and the El Nino of 2015/2016 that the observations have reached the middle of the predictions.

    2) The warmists will use the Hansen 1988 prediction (the B version of his prediction) as the standard to compare to the observed temps. However, I recall that there was a downward revision of the Hansen A/B/C models which is used as the comparison. It would appear that Hansen made a 0.2C to 0.25C downward adjustment in the second version of the 1988 model prediction.

    https://climateaudit.org/2008/01/16/thoughts-on-hansen-et-al-1988/

    https://www.skepticalscience.com/how-well-have-models-predicted-gw.html

    • mpainter
      Posted Nov 24, 2017 at 12:16 PM | Permalink

      If the modelers were true scientists, they would constrain their model parameters with observations, as befits proper scientific investigation. Such constraint would yield an empirical climate sensitivity of negligible value. But such an approach would put them out of business.

      • Frank
        Posted Nov 25, 2017 at 6:05 PM | Permalink

        mpainter: “If the modelers were true scientists, they would constrain their model parameters with observations, as befits proper scientific investigation. Such constraint would yield an empirical climate sensitivity of negligible value. But such an approach would put them out of business.”

        All of the warming rates shown in this post (the dotted red lines on each graph) lie between 0.1 and 0.2 degC/decade. They don’t indicate that climate sensitivity is negligible; but it could be half of what models suggest.

        There are several problems with tuning models to fit observed warming: 1) We don’t have very reliable measurements of observed warming. As this post shows, if models were tuned to agree with “observed warming”, the result would still depend on whose record of “observed warming” was used. 2) Not all warming (or temperature change) is forced. Some is “unforced” or internal variability. During a strong El Nino, the planet warms 0.3 degC in less than a year without any forcing being responsible. (That warming is mostly due to a slowdown in the exchange of cold deep water with surface water.) There was a period of unforced warming between 1925 and 1945, and Pauses in warming from 1945-1975 and 1998-2013 (though rising aerosols may have caused some of the former Pause). The AMO has produced enough warming and cooling in and around the Atlantic over the past two cycles to have an apparent amplitude of 0.25 degC on GMST. We also don’t know what role unforced variability played in phenomena like the LIA and MWP (though the LIA was clearly partially associated with a period of “low solar activity”).

        There are different opinions about how climate models should be tuned. If you intend to use your model to attribute observed warming to human forcing, then it is inappropriate and unethical to tune your model so that hindcast warming equals observed warming. As best I can tell from work with ensembles of models with perturbed parameters, there is no reason to assume that any scheme for tuning parameters one at a time produces anything better than a local optimum in a large multi-dimensional parameter space with a large number of such optima. An AOGCM can be tuned to produce almost any ECS between 1.5 and infinity.

        • mpainter
          Posted Nov 25, 2017 at 6:30 PM | Permalink

          Constraining climate sensitivity with observations implies a determination of what portion of warming is natural and what portion is attributable to AGW. This will require a thorough sifting of the evidence and merits of attribution as well. In other words, there will be real science, not climate science.

        • mpainter
          Posted Nov 25, 2017 at 6:58 PM | Permalink

          For example, if it were shown that natural factors were responsible for 80% of warming since 1980, then AGW would be shown to be of little concern. The merits of this study would be thoroughly sifted and weighed. This is the scientific way, but this approach is utterly eschewed by the present generation of climate scientists.

        • mpainter
          Posted Nov 26, 2017 at 4:13 AM | Permalink

          The step up in the global temperature anomaly at circa 2000, about 0.3°C, has been attributed to reduced cloud albedo. It certainly cannot be shown to be AGW. The global temperature anomaly has been influenced for the last two years by a super El Nino of unprecedented duration. Adjust for that and what is your warming trend since the step up? The GCMs are pretense, not real science. Real science conscientiously tests hypotheses against observations.

        • mpainter
          Posted Nov 27, 2017 at 5:40 AM | Permalink

          ” Adjust for that and what is your warming trend since the step up? ”

          In other words, we know that El Nino temperature spikes are a natural occurrence and not attributable to AGW. With this spike removed, the trend for the whole of this century would be flat.

        • Frank
          Posted Nov 27, 2017 at 12:31 PM | Permalink

          mpainter wrote: “Constraining climate sensitivity with observations implies a determination of what portion of warming is natural and what portion is attributable to AGW. This will require a thorough sifting of the evidence and merits of attribution as well. In other words, there will be real science, not climate science.” “For example, if it were shown that natural factors were responsible for 80% of warming since 1980, then AGW would be shown as of little concern.”

          The problem is that the only way to determine how much of warming is forced and how much is unforced is by using a model to convert observed forcing (like rising GHGs and the changes in cloud albedo you mention) into a (forced) change in temperature.

          Models should be validated by their ability to represent current climate accurately everywhere on Earth, including large seasonal changes.

        • mpainter
          Posted Nov 27, 2017 at 1:26 PM | Permalink

          Now I have two comments in moderation, Steve, for no apparent reason, or if there is a reason, could you please explain it for me, thanks.

        • Steve McIntyre
          Posted Nov 27, 2017 at 3:02 PM | Permalink

          straying into physics theory – a topic that I’d prefer be discussed elsewhere.

        • mpainter
          Posted Nov 27, 2017 at 2:57 PM | Permalink

          Yeah, well, we have already seen what the modelers can do. Time to try some genuine science.

      • Posted Nov 25, 2017 at 6:18 PM | Permalink

        As Frank points out, the tuning problem is difficult. It is claimed by Schmidt and others that they use the top-of-atmosphere radiation imbalance for tuning, and presumably some other parameters where good data are available. There is a recent paper about tuning that I think is the start of a trend towards more openness on the subject, which is a good thing.

        • mpainter
          Posted Nov 26, 2017 at 4:22 AM | Permalink

          Is Schmidt to be trusted with data?

        • mpainter
          Posted Nov 26, 2017 at 5:14 AM | Permalink

          “the tuning problem is difficult..”

          Especially for those who lack any sort of scientific training and who regard the injunction of “test your hypothesis against observations” as a devious stratagem to overthrow the Truth.

        • Posted Nov 26, 2017 at 12:27 PM | Permalink

          “There is a recent paper about tuning…” Do you mean this one: https://www.geosci-model-dev.net/10/3207/2017/gmd-10-3207-2017-discussion.html ? They claim that they don’t use the 20C records for tuning but something like the TOA imbalance. The editor of the journal, James Annan, has a blog where he wrote this article about the mentioned paper: http://julesandjames.blogspot.de/2017/09/practice-and-philosophy-of-climate.html . It’s worth reading and the summary is: Be careful! 😉

        • mpainter
          Posted Nov 26, 2017 at 3:43 PM | Permalink

          I scanned the Schmidt study link, thanks, and it confirms my opinion of the climate modeling confraternity.
          A recurrent phrase: “expert judgement”.
          Do not hold your breath waiting for a more thorough airing of climate modeling by the fraternity.

          Also: “The radiation imbalance in the 21st Century with observed SST must be positive with a target range of 0.5 to 1 W/m2”.

          Such over-precision can only mean one thing: no error bars with any basis in reality.

          Also, nowhere have I found any reference to photosynthesis which, according to one i-net source, absorbs 2% of all solar radiation incident on the surface. This is chemical sequestration of solar energy which is thereby removed from the earth’s energy budget. Is photosynthesis addressed by the climate modelers?

        • mpainter
          Posted Nov 27, 2017 at 10:48 AM | Permalink

          Steve, I’ve a comment in moderation, thanks.

        • Steve McIntyre
          Posted Nov 27, 2017 at 3:00 PM | Permalink

          you used a banned word – sorry about that. The comment strayed into physics, as well. 🙂 I’m not going to tell you the banned word.

        • Gerald Browning
          Posted Dec 11, 2017 at 1:02 AM | Permalink

          dpy6629,

          It continues to amaze me that anyone puts any faith in any climate model.

          For a hyperbolic system with multiple time scales, there is only one reduced system that accurately describes the slowly evolving solution in time that is also smooth in space (the one supposedly of interest to climate modelers). For the atmospheric equations of motion, that system is not the hydrostatic equations of motion that all climate models are based on; e.g., vertical columnar heating does not lead to a solution that is smooth (large scale) in space. In fact, that is why the models need unrealistically large dissipation to smooth the horizontal point discontinuities that arise from columnar heating (Browning, Hack, and Swarztrauber).

          It is also the case that the reduced system is always well posed for the initial-boundary value problem (IBVP). That is not the case for the hydrostatic equations (Oliger and Sundstrom) because it is not the correct reduced system.

          Browning and Kreiss (2002) derived the correct reduced system in the mid-latitudes.
          It involves the solution of two spatial elliptic equations needed to remove the sound waves and the gravity waves whose spatial portions satisfy those equations. The hydrostatic system does not solve those equations (that is the reason for the ill posedness of the IBVP), but the numerical climate models do force the solution of the elliptic balance equation between the pressure and the vertical component of vorticity by using a semi-implicit method.
          (The correct reduced system near the equator is much simpler – a direct balance between the vertical component of the wind and the heating).

          So the mathematical theory of hyperbolic systems with multiple time scales is complete. But if you look at the citations of our 2002 manuscript, it is cited only by 6 numerical analyst types. Now isn’t it interesting that it is not cited by any climate or weather modelers?

          Jerry

        • Posted Dec 12, 2017 at 9:51 PM | Permalink

          Yes, Jerry, I agree to some extent that modelers would be vastly better served by looking at the rigorous math. Your comment above is very clear and well stated. Thanks for posting it.

          I do think modelers are at least coming clean about tuning and I’ve seen a few good recent papers on the subject. They may still be hiding behind the aerosol forcing issue however.

  21. Posted Nov 26, 2017 at 1:05 PM | Permalink

    Well Frank, your reference looks interesting but the one I was familiar with is this one.

    http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-15-00135.1

    In searching I also found another long paper by, among others, Mauritsen and Stevens.

    http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full

    I would say climate scientists have actually done a good job recently of making model tuning more transparent and providing good information on the subject.

    • Frank
      Posted Nov 28, 2017 at 5:39 PM | Permalink

      dpy6629: IMO, there are massive problems with tuning models.

      1) Only a few publications have discussed the strategy used to tune a particular model (which can change in the future). If you use the historical record of warming to help tune your climate model, you are assuming that 100% of warming is due to the forcing we know about (with a great deal of uncertainty in the case of aerosols). If the IPCC then uses the same model to attribute warming to anthropogenic forcing, circular reasoning is involved and the conclusion is bogus.

      2) All AOGCMs participating in AR6 are supposed to disclose their tuning strategy (for the first time). Then we will know what climate phenomena models have been tuned to reproduce and what climate phenomena models correctly predict without having been tuned to do so. For example, all models predict the same OLR feedback through clear skies (water vapor plus lapse rate, about +1.1 W/m2/K). The same amount of feedback is observed during seasonal warming from space. Does this happen because models have been tuned to produce this much feedback or because other tuned parameters interact in such a way as to agree with observed feedback from clear skies?

      We also know about LWR feedback from cloudy skies, and SWR feedback from clear and cloudy skies, in response to seasonal warming (which is 3.5 K in GMST without anomalies, due to the lower heat capacity of the NH). No AOGCMs get these feedbacks right and they all disagree with each other. WG1’s reports spend a great deal of time comparing models with each other and have very little on how models compare with observations.

      3) Any tuning strategy that optimizes parameters one-by-one is likely to get caught in local optima (when there are non-linear relationships between parameters). I understand (but can’t cite a reference) that the “optimum” one reaches depends on the order in which parameters are tuned and what initial parameters are chosen. Climate models are tuned, but not tuned using a strategy expected to produce a globally optimal set of parameters.

      4) Experiments have been run with thousands of simplified models using key parameters chosen at random from within a physically plausible range (perturbed parameter ensembles). When judged on their ability to reproduce eight different climate observables, no part of the tested “parameter space” could be excluded or judged superior because it produced inferior or superior results for all eight observables. Some combinations were better for TOA OLR while others were better for precipitation. ECS ranged from 1.5 to 11.5 K.

      5) One section in AR4 recognizes that the output from their two dozen models doesn’t begin to systematically explore the “parameter space” that is consistent with our current climate and could represent our future. They warn against interpreting the spread in model output as the confidence interval for future projections. The IPCC uses their “expert judgment” to call the 90% confidence interval for model projections “likely” rather than “very likely” (as if policymakers would notice the difference).

      6) A recent paper by the GFDL group showed changes in the parameterization of convective precipitation could lower ECS from 3.0 to 1.8 K without causing a deterioration in the model’s ability to represent current climate.

      http://journals.ametsoc.org/doi/full/10.1175/JCLI-D-15-0191.1

      7) The climate feedback parameter (W/m2/K) tells us how much more heat is emitted as OLR or reflected as SWR as the planet warms. ECS (K/doubling) is its reciprocal multiplied by 3.7 W/m2/doubling; see the conversion sketch after the table below. Small changes in the feedback parameter at high climate sensitivity involve trivially small changes in the radiative cooling to space, changes that models probably can’t get right.

      0.0 W/m2/K = runaway GHE
      0.6 W/m2/K = 6.2 K/doubling
      1.2 W/m2/K = 3.1 K/doubling
      1.8 W/m2/K = 2.1 K/doubling
      2.4 W/m2/K = 1.5 K/doubling
      3.2 W/m2/K = 1.15 K/doubling (no feedbacks climate sensitivity)
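      The table is just the reciprocal relationship ECS = F_2x / λ with F_2x = 3.7 W/m2; a one-liner reproduces it:

      ```python
      def ecs_from_feedback(lam, f2x=3.7):
          """ECS in K per CO2 doubling from a feedback parameter lam in W/m2/K."""
          return f2x / lam

      for lam in (0.6, 1.2, 1.8, 2.4, 3.2):
          print(f"{lam} W/m2/K -> {ecs_from_feedback(lam):.2f} K/doubling")
      ```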

      • Posted Nov 30, 2017 at 12:30 AM | Permalink

        Frank, thanks for this detailed response. I too believe GCMs are plagued by inaccuracies, especially over long time scales, and that tuning is a black art designed to give plausibility to these questionable calculations. However, I do believe the literature on this tuning is better and more honest than what is available, for example, in computational fluid dynamics.

  22. Posted Nov 28, 2017 at 2:08 PM | Permalink

    dpy: I frequently cited the Mauritsen/Stevens paper (your 2nd link), where they remark: “2. A longer simulation with altered parameter settings obtained in step 1 and observed SST’s, currently 1976–2005 from the Atmospheric Model Intercomparison Project (AMIP), is compared with the observed climate.” The SST of the period mentioned seems much more plausible as a tuning target than the TOA imbalance, which is not directly observable at all, even with the most sophisticated recent technologies; see http://journals.ametsoc.org/doi/abs/10.1175/JAMC-D-16-0406.1
    “Uncertainties in absolute calibration and the algorithms used to determine Earth’s radiation budget from satellite measurements are too large to enable Earth’s energy imbalance to be quantified in an absolute sense.” (Cited from the conclusions.)

    • Posted Nov 28, 2017 at 2:14 PM | Permalink

      Steve, I have a comment in moderation. If I used a “banned word”, it was not intentional… English is not my native language! 🙂 AFAIK it was not a comment about basic physics either…

    • Lance Wallace
      Posted Nov 29, 2017 at 1:27 AM | Permalink

      Frank, I read the linked paper but could not find your quote in the conclusions section.

      • Posted Nov 29, 2017 at 12:14 PM | Permalink

        Lance, sorry, right quotation, the wrong link. The right one to the recent paper is: http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-17-0208.1

        • Lance Wallace
          Posted Nov 29, 2017 at 2:02 PM | Permalink

          OK, thanks, Frank. The quote at the beginning of a paragraph in the conclusions section apparently refers to the rather large 1-sigma uncertainty on the net flux of 4.3 W/m^2 in Table 5. However, the same table includes a much smaller value of 0.71 W/m^2 after applying a “constraint”. So is it +0.71 +/- 8.6 W/m^2 (95% range)? Not too helpful…

          But even if we got a good nonzero number for the net flux, would it be meaningful? Perhaps the net flux is positive for 500 years while the ocean is overturning or some other cycle is playing itself out, and then negative for the next 500.

        • Posted Nov 29, 2017 at 3:45 PM | Permalink

          That’s the basis of my argument against the Gavin et al. paper’s claim that they used the TOA imbalance for tuning their model. This value is not observable independently of the model estimates.

        • Frank
          Posted Nov 29, 2017 at 4:07 PM | Permalink

          frankclimate: It is my understanding that the TOA imbalance, averaged over a decade, is measured using ocean heat uptake and is 0.7 W/m2 for the ARGO period. So much of the planet’s effective heat capacity is in the ocean that the TOA imbalance and ocean heat uptake are essentially the same.
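          The unit conversion behind that equivalence is straightforward (the uptake rate below is an illustrative round number, not a citation):

          ```python
          # back-of-envelope: ocean heat uptake (J/yr) over the whole Earth surface -> W/m2
          EARTH_AREA_M2 = 5.1e14       # total surface area of the Earth
          SEC_PER_YEAR = 3.156e7

          ohc_trend_j_per_yr = 1.1e22  # assumed ARGO-era uptake rate, for illustration
          imbalance_w_m2 = ohc_trend_j_per_yr / SEC_PER_YEAR / EARTH_AREA_M2
          print(round(imbalance_w_m2, 2))  # ~0.68 W/m2, consistent with the 0.7 quoted
          ```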

        • mpainter
          Posted Nov 29, 2017 at 5:03 PM | Permalink

          And there are no measures of deep ocean temperature independent of the JPL Argo program, controlled by the likes of Josh Willis, concerning whom see the head post.

        • Posted Nov 30, 2017 at 11:37 AM | Permalink

          Frank, this is essentially true; anyway, the data before Argo (older than 2004) are more or less guesses or model-derived. For the tuning period (Mauritsen et al mention 1976…2005 for CMIP5) I’m not quite sure those data have the quality they would need to have…

        • Frank
          Posted Dec 2, 2017 at 6:25 PM | Permalink

          mpainter: “And there are no measures of ocean deep temperature independent of the JPL Argo program, controlled by the likes of Josh Willis concerning whom, see head post.”

          The ARGO program is not controlled by Josh Willis. The program (including the collection of raw data and its quality control) is overseen by a steering committee of about 15 that doesn’t currently include Josh Willis. The program produces QC-controlled data with metadata, not an official climate record from that data. Various groups have analyzed that data and they all come up with about the same answer for ocean heat uptake in the ARGO era, one that is disappointingly small for alarmists. (The faster the ocean is currently taking up heat, the greater the amount of “committed warming” that will occur after GHGs stop rising.)

          The early ARGO years (2003-2008) came in the middle of the Pause, and skeptics published several papers showing that the 0-700 m layer of the ocean wasn’t warming according to ARGO-only data. Josh Willis was unable to stop this. However, the trend after 2008 has clearly been upward, and the Pause now appears to have only affected the 0-700 m layer.

          If you go to Roger Pielke Sr.’s blog, you can see that Roger (a prominent skeptic and a big advocate of measuring global warming through OHC) relied on Josh Willis for the latest information on the subject and even has forwarded comments from skeptical blogs about ARGO data.

          The idea that a single individual or group is corrupting some aspect of climate science needs evidence to back it up. Steve McIntyre showed the corruption in climate reconstructions (particularly with respect to the MWP) and to some extent his skeptical position has won. The SPM for WG1 AR5 now says:

          “Continental-scale surface temperature reconstructions show, with high confidence, multi-decadal periods during the Medieval Climate Anomaly (year 950 to 1250) that were in some regions as warm as in the late 20th century. These regional warm periods did not occur as coherently across regions as the warming in the late 20th century (high confidence).”

          I don’t know if Steve objects to this statement, but it is far better than AR3 and AR4. Skeptics appear to be winning about the “hot spot”. Nic Lewis appears to have made a big impact with the divergence between ECS from AOGCMs and observations (EBMs). There are some phenomena in the ARGO record worth questioning. For example, more heat accumulating below 700 m than above. Maybe someone should look at it more closely. That might change things. Trumpian tweets won’t.

        • mpainter
          Posted Dec 2, 2017 at 7:30 PM | Permalink

          Wrong thread, Frank.

          Are you trying to make us believe that the Adjustocene is over at NASA JPL?
          I know it is at the EPA, where the execrable Gina McCarthy has been replaced.
          But who has cleaned up JPL? It needs to be done. Also GISS, another NASA agency, still run by the notorious Gavin Schmidt.

          In fact, why are these two sinkholes of public funding doing climate studies?

          JET_PROPULSION_LABORATORY

          GODDARD_INSTITUTE_of_SPACE_STUDIES

        • mpainter
          Posted Dec 3, 2017 at 1:43 AM | Permalink

          Bottom line, Frank: all of these NASA climate hyping hotbeds have been in disrepute for years and need a thorough housecleaning and re-direction.

    • mpainter
      Posted Nov 29, 2017 at 4:27 AM | Permalink

      “These demonstrate an order of magnitude improvement in relative accuracy for Edition 1 MERBE results over CERES and show that the latest CERES data are less accurate and stable than claimed.”

      study referenced by frankclimate.

      • mpainter
        Posted Nov 29, 2017 at 4:29 AM | Permalink

        From the abstract, that is

  23. 4TimesAYear
    Posted Nov 28, 2017 at 5:19 PM | Permalink

    Reblogged this on 4timesayear's Blog.
