The results of these regressions (correlations) are given below for the time periods and lengths in years noted. The first correlation was calculated by using the shorter time period to determine the SS trend, while the correlation in parentheses was calculated by first extracting the trend for the entire 1861-2100 period and then calculating the trend for the given time period.

240 years from 1861-2100: Correlation of SS-derived trends versus TCR = 0.84 (0.84).

95 years from 2006-2100: Correlation of SS-derived trends versus TCR = 0.69 (0.73).

40 years from 1975-2014: Correlation of SS-derived trends versus TCR = 0.63 (0.67).

40 years from 2006-2045: Correlation of SS-derived trends versus TCR = 0.63 (0.77).

15 years from 2000-2014: Correlation of SS-derived trends versus TCR = 0.60 (0.77).

The correlation between the SS trends and TCR increases with the length of the time period when the shorter time periods themselves were used to determine the trend; when the trends were instead determined from the entire 1861-2100 period, the correlation is considerably less dependent on the period length.
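The two ways of pairing a trend window with TCR can be sketched roughly as follows. This is a toy reconstruction with synthetic series, not the actual CMIP5 data or the SS procedure: a plain OLS slope stands in for the SS-derived trend, and all numbers (model count, TCR spread, noise level) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 30 "models", each with an annual temperature
# series (1861-2100, 240 years) whose underlying warming rate scales with
# an assumed TCR value, plus noise standing in for natural variability.
n_models, n_years = 30, 240
years = np.arange(n_years)
tcr = rng.uniform(1.0, 2.5, n_models)                    # assumed TCR spread
series = (tcr[:, None] * years / n_years                 # deterministic part
          + rng.normal(0.0, 0.15, (n_models, n_years)))  # "natural" noise

def ols_slope(y):
    """Least-squares slope of y against time (stand-in for the SS trend)."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

def corr_with_tcr(start, end, full_period_trend=False):
    """Correlate per-model trends over [start, end) with TCR.

    full_period_trend=True mimics extracting the trend from the whole
    1861-2100 series first (here a straight line, so its slope is the
    same in every window; a nonlinear SS trend would vary by window).
    """
    if full_period_trend:
        slopes = np.array([ols_slope(s) for s in series])
    else:
        slopes = np.array([ols_slope(s[start:end]) for s in series])
    return np.corrcoef(slopes, tcr)[0, 1]

print(corr_with_tcr(0, 240))    # long window: noise averages out, high correlation
print(corr_with_tcr(139, 154))  # 15-year window (2000-2014): noise degrades it
```

Even this crude setup reproduces the qualitative pattern above: short-window trends are noisier estimates of the deterministic slope, so their correlation with TCR drops.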

The question then is what these results mean with regard to the method used for extracting a deterministic trend that is determined mainly by GHGs – as would be the assumed case for TCR. A more direct question is whether the extracted deterministic (or at least secular) trend contains effects from natural variability, and whether that is what reduces the correlation (somewhat) when shorter time periods are used. I think the latter method of extracting a deterministic trend, in that it is not very length dependent, indicates that the trends are closing in on the value that would be predicted by the TCR value for the model, and is therefore a good measure of the same effects from which TCR is derived, i.e. GHGs in the atmosphere. I plan to go back and redo these calculations using both SSA and EMD (Empirical Mode Decomposition).
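For reference, a minimal generic SSA trend extraction looks something like the sketch below. This is textbook SSA (trajectory matrix, SVD, reconstruction from leading components), not necessarily the exact variant or window choices used in the calculations above; the test signal and window length are arbitrary assumptions.

```python
import numpy as np

def ssa_trend(y, window, n_components=2):
    """Basic singular spectrum analysis: reconstruct a smooth trend
    from the leading eigentriples of the trajectory matrix."""
    n = len(y)
    k = n - window + 1
    # Trajectory (Hankel) matrix: lagged copies of the series as columns.
    X = np.column_stack([y[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-r reconstruction from the leading components.
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Anti-diagonal averaging maps the matrix back to a length-n series.
    trend = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            trend[i + j] += Xr[i, j]
            counts[i + j] += 1
    return trend / counts

# Example: a smooth signal buried in noise, 240 "years" long.
t = np.linspace(0.0, 1.0, 240)
y = t**2 + 0.1 * np.random.default_rng(1).normal(size=240)
smooth = ssa_trend(y, window=60)
```

The slope of such a reconstructed trend over a sub-window is what would then be regressed against TCR.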

Meanwhile, if this approach is reasonable, I think it shows that the M&F paper has assigned a considerable portion of the deterministic part of the CMIP5 model series to the natural variability part. Counter to what M&F found in their paper, the strong correlation between SS trends and TCR in this exercise shows that the variation in TCR values for the individual CMIP5 models is a good predictor of a deterministic temperature trend even for periods as short as 15 years.

A feature of the deterministic trend that I was not able to investigate at this time was accounting for the effects of aerosols on the SS trends and TCR values by adding an aerosol proxy for each model to the regression. All models used the same aerosols for the historical period, but the effects still vary considerably from model to model, as shown by comparing the aerosol optical depths for some models.
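Adding such a proxy would turn the simple correlation into a multiple regression along these lines. Everything here is synthetic and hypothetical – the coefficients, the aerosol-optical-depth stand-in values, and the noise level are all invented for illustration; the real proxy would come from each model's diagnosed aerosol forcing or optical depth.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-model data: TCR, an aerosol-optical-depth proxy, and a
# temperature trend constructed to respond positively to TCR and
# negatively to aerosols (all values are made up for this sketch).
n = 20
tcr = rng.uniform(1.0, 2.5, n)
aod = rng.uniform(0.05, 0.20, n)                 # stand-in aerosol proxy
trend = 0.02 * tcr - 0.05 * aod + rng.normal(0.0, 0.002, n)

# Multiple regression: trend ~ intercept + TCR + aerosol proxy.
X = np.column_stack([np.ones(n), tcr, aod])
beta, *_ = np.linalg.lstsq(X, trend, rcond=None)
print(beta)  # [intercept, TCR coefficient, aerosol coefficient]
```

If the aerosol coefficient came out significant for the real data, it would indicate how much of the model-to-model trend spread the TCR-only correlations above are leaving unexplained.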

I am not sure if you were already aware that, according to M&F’s data table of models used, they elected not to use Forster 2013’s model IPSL-CM5B-LR despite having the values for F, a and k. Also, model NorESM1-M is missing from the list, but Nic thinks it an oversight. Lastly, I do not see the diagnosed values for four models – bcc-csm1-1, bcc-csm1-1-m, GISS-E2-R, MIROC5 – in the Forster 2013 data. Were these newly derived? If so, why?

Here is a good reference paper on model use:

http://envsci.rutgers.edu/~toine379/extremeprecip/papers/taylor_et_al_2012.pdf

In case you have not already read it: on page 494 they go over considerations for using CMIP5 data. They make the point, as Nic did, that the historical runs, due to arbitrary phasing of the PDO and other oscillations, squiggle randomly relative to each other. They also point out that k drifts significantly even in the calibration period and will need adjustment.

On my final thoughts on M&F: first, I believe their evaluation of the 36 models’ 114 runs for variability follows circular logic, and it is a head-scratcher how any models could add more insight into the variability of the observed record than the observed record itself. M&F seem to ignore the fact that the modelers created their wiggles with the benefit of the same record that M&F were comparing them to. In that light, fancy sentences like “Our interpretation of Fig. 1 tacitly assumes that the simulated multimodel-ensemble spread accurately characterizes internal variability, an assumption shared with other interpretations of the position of observed trends relative to simulated trends (for example the reduction in Arctic summer sea ice).” are junk designed to obscure the truth (that they have nothing).

In the 18 models’ 72 runs where they were evaluating dF, k, a, and dT over intervals, their output is entirely dependent on their own inputs, all derived from dT by Forster’s diagnosis. The CMIP5 data, according to Taylor et al., must be used with caution. Ocean uptake efficiency, k, has a drifting starting value and must be adjusted by time interval. This only makes sense: the further the deep ocean and thermocline vary from equilibrium temperature relative to the atmosphere, the higher the efficiency with which they will take up or give off heat to the atmosphere. This means k is not a linear function of the atmosphere’s delta T with itself but of the average delta T between oceans and atmosphere wherever they interface. This latter delta is constantly in flux due to high variability and constant drift. M&F assume it is linear. Although I cannot find the actual data values M&F plugged in, they admit that their values are not valid following volcanic eruptions. This would indicate that k is not adjusted.
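To make the point concrete: the energy-budget framing behind this kind of analysis predicts the temperature response roughly as dT = dF / (alpha + kappa), with alpha the feedback parameter and kappa the ocean heat-uptake efficiency. The numbers below are illustrative only, not diagnosed CMIP5 values, but they show how sensitive the predicted trend is to treating a drifting kappa as a constant.

```python
def dT(dF, alpha, kappa):
    """Temperature response to forcing dF under the constant-kappa
    energy-budget assumption: dT = dF / (alpha + kappa)."""
    return dF / (alpha + kappa)

# Illustrative values (W/m^2 and W/m^2/K), not diagnosed model values.
print(dT(3.7, 1.1, 0.7))  # kappa at its calibration value
print(dT(3.7, 1.1, 0.5))  # after a downward drift: larger predicted dT
```

A modest drift in kappa moves the predicted dT by a sizable fraction, which is why an unadjusted, constant k matters for interval-by-interval trend attribution.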

The final two problems are not unique to M&F. The first is the assumption that climate sensitivity, of which alpha is assumed to be the inverse, is linear in delta temperature. But the paleo-temperature record suggests there is an upper limit to T, suggesting an increasing resistance to rising above our current range. This can be explained by a cloud relationship to ERF and a cloud relationship to Clausius-Clapeyron. The second is the assumption that natural climate variability only exists on time scales shorter than 30 years (that the paleo-temperature record is a hockey-stick handle). Wow.

Due to my lack of statistics expertise I will leave criticism there to others, but I ask: is it kosher to do OLS on values that have already gone through a process of assumptions to linearize them? Kappa was derived in this manner from N. And F was determined through OLS previously from its assumed relationship to dT, k and a over a test period of abrupt change in F. It seems like a lot of smoothing going on.

================

It has … recently come to our attention that a paragraph in the 938-page WG II contribution to the underlying assessment refers to poorly substantiated estimates of the rate of recession and date for the disappearance of the Himalayan glaciers. In drafting the paragraph in question, the clear and well-established standards of evidence, required by the IPCC procedures, were not applied properly.

A footnote alludes to the second paragraph in section 10.6.2 of WGII, but with no mention of the claim in question or whether it is wrong, or true but simply not adequately substantiated with an appropriate citation.

The online HTML version of WGII 10.6.2 does not strike out the erroneous claim, but merely provides a hotlink to an erratum saying that lines 32-43 on p. 493 should be deleted. However, the HTML version gives no page numbers or line numbers!

It’s right there with people believing that Canute tried to order the tide back rather than demonstrate that the power of kings had no control over nature.
