All-Proxy CPS

As first fruits of my Mannian CPS emulation, I did a quick sensitivity test to see what results looked like for the AD800 network without the Tiljander upside-down sediments. I did the run with all proxies except the Tiljander sediments as a first cut, since Mannian screening of series that are supposed to be proxies is not a proven method. I’ve also simplified public access to my code quite considerably since yesterday, so it should be relatively easy for someone to experiment.

The top panel shows my emulation of the Mann NH and SH iHAD (land-ocean) reconstructions using the AD800 network (no splicing), together with the Mannian iHAD instrumental series (I haven’t parsed the relationship of this to HAD and am just using it as is). In the panel below that, I did the same thing using all proxies except the Tiljander sediments. As you see, there is a profound difference, particularly in the SH, where the 20th century is not in the slightest anomalous according to this information. Another odd point in the bottom-panel version is that the NH 11th century, believed to be one of the “warm” centers of the MWP, emerges, together with the late 17th century LIA, as one of the coldest times of the millennium – something that doesn’t make a whole lot of sense, even for MWP opponents. At present, I don’t know which proxies are leading to this result.

I haven’t been able to replicate Mannian verification statistics calculations. The Mannian “base case” has higher RE and CE statistics than the variation in the bottom panel, but the bottom panel variation is also “99.9% significant” according to the Mannian canon.
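For readers who want to experiment with the verification side, the standard definitions of RE and CE (which may differ in detail from the Mannian implementation, which I haven't replicated) take only a few lines. A Python sketch rather than R, purely for illustration:

```python
import numpy as np

def re_ce(obs_ver, est_ver, calib_mean):
    """Reduction of Error (RE) and Coefficient of Efficiency (CE).

    Both score the reconstruction's squared error over the verification
    period against a naive benchmark: RE against the calibration-period
    mean, CE against the verification-period mean. Positive = skill."""
    obs_ver = np.asarray(obs_ver, dtype=float)
    est_ver = np.asarray(est_ver, dtype=float)
    sse = np.sum((obs_ver - est_ver) ** 2)
    re = 1.0 - sse / np.sum((obs_ver - calib_mean) ** 2)
    ce = 1.0 - sse / np.sum((obs_ver - obs_ver.mean()) ** 2)
    return re, ce

# A perfect reconstruction scores RE = CE = 1.
obs = np.array([0.1, 0.3, -0.2, 0.4])
print(re_ce(obs, obs, calib_mean=0.0))   # (1.0, 1.0)
```

Since the verification-period mean is the best possible constant predictor, CE's benchmark is always at least as hard as RE's, so RE ≥ CE for any reconstruction.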

The puzzle of deciding between two conflicting reconstructions that are both “99.9% significant” was one that I raised at Climate of the Past Discussions (an interesting point which was ignored by the reviewers, but which remains unresolved). In likelihood terms (e.g. Brown and Sundberg calibration), I think that the right answer is that one reconstruction would be slightly more “likely” than the other and that the confidence intervals have to be wide enough to encompass both.

Now be careful in using this comparison, as it varies three things – 1) it uses “all” and not just “screened” proxies. Mannian EIV uses “all” proxies, so it’s hardly unreasonable to examine the effect of using “all” proxies under Mannian CPS. 2) it emulates Mannian CPS but without the stupid pet tricks (I realize that this latter term is not very formal, but it’s intended to cover weird programming decisions that probably don’t matter a huge amount in the total scheme of things, but which cannot be justified – sort of like the rain in Spain thing.); 3) the upside-down Tiljander sediments aren’t used. I realize that this is one extra source of variation, but I really can’t stomach doing calculations with upside-down sediments.

Allocating the amount of difference to each change is something that would be done in an engineering-quality report, something that one would surely expect of the original authors. For now, most of the difference arises from the use of screened proxies as opposed to all proxies, though I haven’t sorted out what the active ingredients are. I presume that the screening here is another embodiment of the Esper Doctrine, one which makes most statisticians scratch their heads:

this does not mean that one could not improve a chronology by reducing the number of series used if the purpose of removing samples is to enhance a desired signal. The ability to pick and choose which samples to use is an advantage unique to dendroclimatology.

I’ve provided code below to enable interested readers to experiment with their own allocations. The following two lines download a collation of all the relevant Mannian data, including relevant smooths, which don’t need to be re-smoothed over and over again. (This was collated in the script uploaded yesterday, which serves as a reference.)

# download.file("","temp.dat",mode="wb");load("temp.dat") #collection of information in utilities

raw.mbh=manniancps(k=11,criterion=passing$whole, outerlist=outerlist.mbh, lat.adjustment= -1, smoothmethod="mann",verbose="default") ;tsp(raw.mbh)

The control on proxy selection is the parameter criterion, a logical vector of length 1209 describing the inclusion or exclusion of proxies in the network. The table is a convenient basis for constructing logical vectors. The Mannian program pointlessly creates huge inventories of multiple versions of proxy subsets, which are then pointlessly re-smoothed. In fact, all that is needed is to do the smoothing once and then pick the appropriate subset. nodendro is another logical vector that can be used. Here is code for the all-proxy CPS reconstruction (excluding the 4 Tiljander sediments):

notiljander= !(is.element(1:1209,1061:1064));sum(notiljander)
raw=manniancps(k=11,criterion=notiljander, outerlist=outerlist.sensible,lat.adjustment= 0, smoothmethod="sensible",verbose="default")
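For anyone not fluent in R, the “smooth once, then subset with a logical vector” idea is language-independent. A minimal Python sketch with toy data (the Tiljander column positions are taken from the post, converted to 0-based indexing):

```python
import numpy as np

# Toy stand-in for the collated proxy matrix: rows = years, columns = the
# 1209 proxies, smoothed once up front. Subsetting is then just a boolean
# mask over columns -- no per-subset inventories, no re-smoothing.
rng = np.random.default_rng(0)
smoothed = rng.standard_normal((1200, 1209))   # fabricated numbers

notiljander = np.ones(1209, dtype=bool)
notiljander[1060:1064] = False                 # drop the 4 Tiljander columns
                                               # (0-based; positions assumed)
subset = smoothed[:, notiljander]              # one cheap slice
print(notiljander.sum(), subset.shape)         # 1205 (1200, 1205)
```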

The plot is done as follows:

text(790,.35,"MBH Screened",font=2,pos=4)
title("CPS iHAD AD800 GL")
text(780,.35,"No Tiljander",font=2,pos=4)


  1. Posted Dec 3, 2008 at 2:08 PM | Permalink

    So…after removing all questionable (and questioned!) Mannian Influences and boo-boos from your tested and verified replication of Mann et al’s work, we see a result that has no hockey stick and looks like it tracks reality quite nicely.

    What a surprise…

    Me’ ‘ats off ta’ya, guv’ner! Outstanding analytical effort!

  2. Steve McIntyre
    Posted Dec 3, 2008 at 2:11 PM | Permalink

    #1. I do not claim that any of this represents “reality”. At present, we don’t know what the “reality” was and I do not claim that tweaks of Mannian methods represent reality. I’m just experimenting with some sensitivities to see what the rest of the story is.

    For example, there’s no reason to think that the early 17th century in the SH was warmer than the 20th century. All this shows is that this counter-intuitive result can arise through a plausible implementation of Mannian methods, and one that is also not rejected by Mannian verification statistics.

    • Jedwards
      Posted Dec 3, 2008 at 2:19 PM | Permalink

      Re: Steve McIntyre (#3), On the other hand Steve, this “appears” to track well with the Carbon-14 record of Solar Activity through the same period.

  3. bitwonk
    Posted Dec 3, 2008 at 2:32 PM | Permalink

    Steve, thanks for your work.

    I don’t know R (yet), but as a guy who spends most of each day reading and writing code, I suggest, if R permits it, the liberal addition of spaces to make your code more readable. I also recommend against butting a bunch of statements together on a line and separating them with semicolons.

    Considering the ratio of the cumulative time spent by people studying your code to the time you spend writing it, and even the time you spend looking at it compared to writing it, liberal use of the spacebar at coding time would be pretty inexpensive. (Ample comments wouldn’t hurt either.) Consider some code from your plot above, formatted for readability:

    layout (array (1:2, dim = c (2, 1)), heights = c (1.1, 1.3))
    par (mar = c(0, 3, 2, 1))
    plot (year, reconstruction[[1]]$cps[,1], type = "l", xlab = "", ylab = "", axes = FALSE, ylim = ylim0)
    axis (side = 1, labels = FALSE);
    axis (side = 2, las = 1);
    box ();
    abline (h = 0, lty = 2)

  4. Steve McIntyre
    Posted Dec 3, 2008 at 2:52 PM | Permalink

    The only difference between my code and your code above is that you’ve separated the axis commands in the plot into different lines.

    There’s a practical reason why I bunch the axis commands together. I do dozens of plots with similar formats and this little package of axis commands goes together; that’s why I keep them on one line. They are one “sentence”. They are a standardized “overhead”.

    WordPress also renders the code more coarsely than Notepad, where I do my coding. Text versions are typically available in the CA/scripts/ directory. I would suggest that interested parties copy code into Notepad, where the appearance is usually cleaner than on screen.

    • James Smyth
      Posted Dec 3, 2008 at 3:53 PM | Permalink

      Re: Steve McIntyre (#7),

      There’s a practical reason why I bunch the axis commands together. I do dozens of plots with similar formats and this little package of axis commands goes together; that’s why I keep them on one line. They are one “sentence”. They are a standardized “overhead”.

      So, … refactor it into a function?

    • Posted Dec 3, 2008 at 9:00 PM | Permalink

      Re: Steve McIntyre (#7),

      First, I have to say the code density in Steve’s R makes it hard to read for an R newbie like me. However people need to understand the huge amount of work it takes to do this code and just because it slows us down a bit is no reason for Steve to change his style. I have my own style, at least in C++, and it’s full of extra formatting. Compared to the Matlab code in Mann 08, the code Steve put on his site is a work of art – very nice really. I think Mann must have a bunch of grad students writing for him or something. Just felt the need to say it: the stuff is nearly incomprehensible.

      Second, I just finished a short analysis of correlation sorting using upside down quadratic tree ring response as proposed by Dr. Loehle. I showed again how the CPS method demagnifies historic values in favor of calibration range values, how non-linear response can further reduce the historic trend and how correlation sorting straightens the non-linear signal.

      So when I see the two curves above, I believe both the first and the second pane are substantially artificially reduced in history just by the CPS math. If I am right, can you imagine how unreasonable the second pane would look if the pre-calibration history was magnified by 1.5ish times?
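Jeff Id's compression claim is easy to demonstrate on synthetic data: if a composite of noisy proxies is rescaled to match the target's variance over the calibration window only, the noise inflates the composite's calibration-period variance and the rescaling shrinks everything, including the pre-calibration signal. A hedged Python sketch with fabricated numbers (a generic CPS-style rescaling step, not Mann's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies = 1000, 20
calib = slice(900, 1000)                      # last 100 "years" = calibration

# Fabricated "truth": a large early excursion that decays to zero, plus a
# modest trend inside the calibration window.
true = np.concatenate([np.linspace(1.0, 0.0, 900), np.zeros(100)])
true[calib] += 0.01 * np.arange(100)

# Noisy pseudo-proxies and their simple composite (mean across proxies).
proxies = true[:, None] + 2.0 * rng.standard_normal((n_years, n_proxies))
composite = proxies.mean(axis=1)

# CPS-style step: rescale the composite to the mean/std of the target
# over the calibration window only.
scale = true[calib].std() / composite[calib].std()
recon = (composite - composite[calib].mean()) * scale + true[calib].mean()

# Residual noise inflates the composite's calibration-period std, so the
# rescaling shrinks the pre-calibration excursion relative to truth.
true_exc = true[:100].mean() - true[calib].mean()
recon_exc = recon[:100].mean() - true[calib].mean()
print(recon_exc < 0.8 * true_exc)             # True
```

With these toy numbers the early excursion comes back at roughly half its true amplitude; the exact shrinkage depends on the assumed noise level and proxy count.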

      • ColinD
        Posted Dec 4, 2008 at 3:41 AM | Permalink

        Re: Jeff Id (#15),
        Just to add my own opinion as a Software Engineer (a bit off topic). Writing (near) impenetrable code may be OK when working on personal projects, but it becomes a liability when the code has to be shared, reviewed and debugged by others, which is what is happening here, e.g. use meaningfully named constants instead of raw values, etc.

        people need to understand the huge amount of work it takes to do this code and just because it slows us down a bit is no reason for Steve to change his style

        There is an initial overhead in adopting new coding style, but it quickly becomes second nature, and takes very little extra time to add comments etc.

  5. Henry
    Posted Dec 3, 2008 at 3:43 PM | Permalink

    Looking at the two charts, you seem to have taken two steps at the same time: removing the Tiljander series and adding back the proxies that Mann screened out. Presumably the big southern hemisphere changes only result from the latter step.

  6. Steve McIntyre
    Posted Dec 3, 2008 at 3:52 PM | Permalink

    #8. As I said in my post, the main effect is from Mann screening in CPS (but not EIV), which is what affects the SH. Which proxies are involved, I don’t know right now. In EIV, I presume that a lot of proxies are flipped over and up/down weighted.

  7. jae
    Posted Dec 3, 2008 at 4:02 PM | Permalink

    Hmm, as expected, the reconstruction is extremely sensitive to the removal of certain proxies. Deja vu all over again.

    • John A
      Posted Dec 3, 2008 at 4:51 PM | Permalink

      Re: jae (#11),

      …and those key proxies have extremely weak relationships to temperature.

  8. srp
    Posted Dec 3, 2008 at 5:27 PM | Permalink

    I wonder if jackknife methods might be helpful here. Distribution-free and good for catching sensitivity to outliers.
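srp's suggestion is straightforward to mechanize: rerun the reconstruction with each proxy deleted in turn and look at the spread. A Python sketch, with a plain column mean standing in for the actual reconstruction step:

```python
import numpy as np

def jackknife_recons(proxies, reconstruct):
    """Leave-one-proxy-out: rerun the reconstruction with each proxy
    (column) deleted in turn. Big swings flag influential proxies."""
    n_years, n_prox = proxies.shape
    out = np.empty((n_prox, n_years))
    mask = np.ones(n_prox, dtype=bool)
    for j in range(n_prox):
        mask[j] = False                        # drop proxy j
        out[j] = reconstruct(proxies[:, mask])
        mask[j] = True                         # restore it
    return out

# Toy run: 40 "years", 10 proxies, column mean as the stand-in method.
proxies = np.random.default_rng(2).standard_normal((40, 10))
runs = jackknife_recons(proxies, lambda p: p.mean(axis=1))
spread = runs.max(axis=0) - runs.min(axis=0)   # per-year sensitivity
print(runs.shape, bool(spread.min() >= 0))     # (10, 40) True
```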

    • Andrew
      Posted Dec 4, 2008 at 8:28 AM | Permalink

      Re: srp (#13),

      Moreover, bootstrap confidence intervals for Mannian CPS could easily be constructed: Sample with replacement from the set of all proxies to get a new set of proxies the same length as the original and then compute the reconstruction based on these proxies.
      Do this 1000 times, then take the 25th and 975th highest estimates for each year in the reconstruction as the confidence intervals.

      I’d be very surprised if it didn’t produce floor to ceiling intervals.
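Andrew's recipe translates almost line-for-line into code (the 25th and 975th of 1000 sorted estimates are the 2.5% and 97.5% quantiles). A Python sketch, again with a stand-in reconstruction function rather than Mannian CPS itself:

```python
import numpy as np

def bootstrap_ci(proxies, reconstruct, n_boot=1000, seed=0):
    """Percentile bootstrap: resample proxies (columns) with replacement,
    recompute the reconstruction each time, take the 2.5%/97.5%
    quantiles year by year."""
    rng = np.random.default_rng(seed)
    n_years, n_prox = proxies.shape
    recons = np.empty((n_boot, n_years))
    for b in range(n_boot):
        cols = rng.integers(0, n_prox, size=n_prox)  # with replacement
        recons[b] = reconstruct(proxies[:, cols])
    lo, hi = np.percentile(recons, [2.5, 97.5], axis=0)
    return lo, hi

# Toy run with a plain column mean standing in for the method under test.
proxies = np.random.default_rng(3).standard_normal((50, 30))
lo, hi = bootstrap_ci(proxies, lambda p: p.mean(axis=1), n_boot=200)
print(bool(np.all(lo <= hi)))                    # True
```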

  9. Clark
    Posted Dec 3, 2008 at 6:04 PM | Permalink

    I am curious what PNAS would think of a submission presenting a fixed-up reconstruction – no stupid tricks, no upside-down proxies, no in-fills, no arbitrary proxy tossing, and no artificially truncated proxies.

  10. Steve McIntyre
    Posted Dec 3, 2008 at 9:02 PM | Permalink

    Just because something is called a “proxy” doesn’t mean it is one. All you can show with these things is that you can get different results under different cases – not that there’s a “right” answer.

  11. Manfred
    Posted Dec 4, 2008 at 2:00 AM | Permalink

    Without doubt Mr. Mann has recalculated his hockey stick as soon as he became aware of his error on the Tiljander series published on this website.

    I wonder if he then immediately informed the publisher and also the BBC that his computations were substantially wrong – what I consider to be an indisputable duty of a government-paid employee.

    • Posted Dec 4, 2008 at 3:47 AM | Permalink

      Re: Manfred (#17),

      …and in other news, Toronto was today overflown by thousands of pigs in tight formation.

  12. Ivan
    Posted Dec 4, 2008 at 3:52 AM | Permalink

    Dear Steve, you simply must publish this, together with similar sensitivity analysis of Moberg and all others you previously carried out here.

  13. Manfred
    Posted Dec 4, 2008 at 4:57 AM | Permalink

    John A:

    Are there any ethics codes installed in the US for government/NASA employees or membership in scientific organisations ?

  14. Posted Dec 4, 2008 at 6:09 AM | Permalink

    To Steve in #2:

    We can at least say that the graph absent the questionable data matches anecdotal historical records even if numbers are absent.

    For instance, historical record shows that vineyards grew in northern England in the 1500s and 1600s… your graph shows that.

    Likewise, in the 1700s the historical record shows the Thames freezing in winter… your graph shows that.

    This, along with numerous other historical records the world over, lends more credence to your graph than Mann’s. To accept Mann’s, we have to also accept that between 1400 and 1800 all the record keepers the world over were suffering the same delusion, reporting cold when no cold existed and vineyards where none could grow.

    • thefordprefect
      Posted Dec 4, 2008 at 7:49 AM | Permalink

      Re: Jryan (#22),
      A web search brings the following:
      from realclimate:

      … from the Domesday Book (1087) – an early census that the new Norman king commissioned to assess his new English dominions, including the size of farms, population etc. … Sources differ a little on how many vineyards are included in the book: Selley quotes Unwin (J. Wine Research, 1990 (subscription)) who records 46 vineyards across Southern England (42 unambiguous sites, 4 less direct), but other claims (unsourced) range up to 52. Lamb’s 1977 book has a few more from other various sources and anecdotally there are more still, and so clearly this is a minimum number.

      Of the Domesday vineyards, all appear to lie below a line from Ely (Cambridgeshire) to Gloucestershire. Since the Book covers all of England up to the river Tees (north of Yorkshire), there is therefore reason to think that there weren’t many vineyards north of that line. Lamb reports two vineyards to the north (Lincoln and Leeds, Yorkshire) at some point between 1000 and 1300 AD, and Selley even reports a Scottish vineyard operating in the 12th Century. However, it’s probably not sensible to rely too much on these single reports since they don’t necessarily come with evidence for successful or sustained wine production. Indeed, there is one lone vineyard reported in Derbyshire (further north than any Domesday vineyard) in the 16th Century when all other reports were restricted to the South-east of England.

      Wine making never completely died out in England, there were always a few die-hard viticulturists willing to give it a go, but production clearly declined after the 13th Century, had a brief resurgence in the 17th and 18th Centuries, only to decline to historic lows in the 19th Century when only 8 vineyards are recorded.

      By 1977, there were 124 reasonable-sized vineyards in production – more than at any other time over the previous millennium. This resurgence was also unremarked upon by Lamb, who wrote in that same year that the English climate (the average of 1921-1950 to be precise) remained about a degree too cold for wine production.

      Since 1977, a further 200 or so vineyards have opened (currently 400 and counting) and they cover a much more extensive area than the recorded medieval vineyards, extending out to Cornwall, and up to Lancashire and Yorkshire where the (currently) most northerly commercial vineyard sits. So with the sole exception of one ‘rather improbably’ located 12th Century Scottish vineyard (and strictly speaking that doesn’t count, it not being in England ‘n’ all…), English vineyards have almost certainly exceeded the extent of medieval cultivation.


      • Soronel Haetir
        Posted Dec 4, 2008 at 8:18 AM | Permalink

        Re: thefordprefect (#24),

        One thing to keep in mind when making that sort of comparison: cultivation methods have changed a great deal in the last century – things like giant rolls of clear plastic to form mini-greenhouses, as a simple example. This is not to say that the English climate is not more favorable to wine-making than in other periods, just that climate may not be the greatest factor.

        Also, English tastes may have changed enough that people are willing to make a go of a harder business than they might have in the past.

  15. Timo Hämeranta
    Posted Dec 4, 2008 at 7:05 AM | Permalink

    Recommendable reading:

    Riedwyl, Nadja, Marcel Küttel, Jürg Luterbacher, and Heinz Wanner, 2008. Comparison of climate field reconstruction techniques: application to Europe. Climate Dynamics, published before print April 25, 2008, online


    Riedwyl, Nadja, Jürg Luterbacher, and Heinz Wanner, 2008. An ensemble of European summer and winter temperature reconstructions back to 1500. Geophys. Res. Lett., 35, L20707, doi:10.1029/2008GL035395, October 25, 2008, online

    In the later study please see Figure 1. 30-year Gaussian filtered European summer and winter average temperature anomalies

    You’ll find quite a bathtub 1700-2000.

  16. Chris Z.
    Posted Dec 4, 2008 at 8:10 AM | Permalink

    Re: thefordprefect #24

    The statistics you quote raise the question: How many of today’s British vineyards operate using 11th-century cultivating methods and 11th-century vines? As a layman, I would expect that vine cultivation has evolved just as much as any other type of agriculture, and I’d also expect modern vine varieties to exist that are more resistant to adverse (cold) climatic conditions than their ancestors – I’ll be happily proven wrong if I’m mistaken!

    In other words: Can we be certain that the over 400 British vineyards today could operate profitably using only the vine varieties and know-how available in 1087? Or might the apparent recent boom in British wine-growing be due to “improved” vines and better understanding how to successfully grow them in suboptimal conditions, rather than being an indication of current climatic conditions making wine-growing easier than before?

  17. Steve McIntyre
    Posted Dec 4, 2008 at 8:59 AM | Permalink

    This post is about methodology.

    This post has nothing to do with vineyards. This post has nothing to do with whether a MWP exists or doesn’t exist. I am not saying that any squiggles derived from Mannian methods have any merit even if readers like one result better than another.

    Please take such discussions to another thread. I’ve deleted a couple of comments on this basis.

  18. thefordprefect
    Posted Dec 4, 2008 at 9:12 AM | Permalink

    Even Mr. McIntyre has pointed out that his newly generated plots are no more accurate than Mann’s – the same (although selected) data is used. If this is flawed for one then it is flawed for all! The talk of “Wow, look, a medieval warm period and a little ice age have now appeared” is therefore pure wishful thinking!


    Steve: I had already made the point that squiggles developed using Mannian methods have no meaning. I quite agree that people cannot seize on one meaningless squiggle because they like the result. However, I am not interested in one more debate on medieval vineyards on this thread. You say you are responding, but so are they. I already asked you to find another thread for discussing medieval vineyards. Sorry bout that.

    However, my graphs used all the ‘proxies’ except Tiljander – it was Mann’s version that “selected” them.

  19. Patrick M.
    Posted Dec 4, 2008 at 9:36 AM | Permalink

    The amplitude of the wiggles in the No Tiljander graph is much larger than the other graph. Check out the drop from ~0.4 to -0.8 around 1350! The No Tiljander graph just doesn’t look like what I would expect from a temperature proxy graph, (ignoring any long term trend).

  20. per
    Posted Dec 4, 2008 at 9:54 AM | Permalink

    PNAS allows letters within 3 months of publication of an article:
    Letters are brief online-only comments that contribute to the discussion of a PNAS research article published within the last 3 months. Letters may not include requests to cite the letter writer’s work, accusations of misconduct, or personal comments to an author. Letters are limited to 250 words and no more than five references.

    there may be an issue about prior publication…
    have fun

    Steve: PNAS allows prior commentary:

  21. Soronel Haetir
    Posted Dec 4, 2008 at 10:01 AM | Permalink


    How about one of these with the lake sediments flipped back to what the original collection team had? You’ve shown the results with them inverted and with them removed, but not afaik with them in the original orientation.

  22. Steve McIntyre
    Posted Dec 4, 2008 at 10:04 AM | Permalink

    #33. As I observed, the main point here is the Mann screening, not the Tiljander presence/absence.

    • UK John
      Posted Dec 4, 2008 at 10:46 AM | Permalink

      Re: Steve McIntyre (#34),

      Thanks Steve, as a regular lurker followed all of this without too much heartache, and see how you can, using Mann methods, get more or less any sort of squiggle you want, and all will be statistically valid (according to Mann).

      The difficult bit is actually trying to understand why people who are probably more able to understand these things than me still blindly support Mann. Gavin is certainly no fool, so why can’t he see this? Or is it us!

      So the difficult bit for me is understanding human nature, not the statistics!

  23. Posted Dec 4, 2008 at 10:05 AM | Permalink

    Sorry I instigated (#1) an unintended side trip, Steve. I didn’t intend for the comment to be “heard” as loudly as it apparently was, nor was my statement phrased as accurately as it might have been. I’ll be more careful in the future.

    Now that you have a good start on emulating Mann’s analysis technique and results – and can more-or-less at will remove unnecessary/questionable “anomalies” (shall we say?) in the original programming and philosophy – it will be quite interesting to see what will result from manipulating, removing, adding, and filtering all those supposed “proxies”.

  24. Posted Dec 4, 2008 at 10:24 AM | Permalink

    I had already made the point that squiggles developed using Mannian methods have no meaning.

    I wish people realized how clear this is. Seeing temperature in these signals is a waste of brain matter. The shapes are artificial; even if there are a few good proxies in there, I sure don’t know which they are and neither does Mann. Ninety percent of them are tree rings. So if you start by eliminating those until they are demonstrated to have a linear non-divergent response…

    Also remember the end data has a large portion of pasted-on info using RegEM interpolation, taking its input from only 55 “hand-picked” proxies. This is the uptrend of the HS before measured temp is pasted on – likely a critical step if you want a good r value after averaging. At the link to my post above you can see how the CPS method compresses historic signals by its mathematical nature.

    Steve’s work only has to do with replication of the method and a little add on about how much things change if you use a slightly more reasonable method.

  25. Manfred
    Posted Dec 4, 2008 at 6:18 PM | Permalink

    I still think Mann’s reconstruction has several learning effects.

    – first of all, he ended up with another hockey stick as a result of the combination of several errors, one of them the inclusion of the inverted Tiljander data that flattened past centuries, just as random noise would do.

    – the result with Tiljander excluded doesn’t make much sense either, so Mann has proved that his approach of cherry picking data is very poor.

    – finally, the huge change in southern hemisphere results after removing a northern hemisphere dataset proves that his data processing algorithms are flawed.

  26. Posted Dec 5, 2008 at 7:51 AM | Permalink

    Steve, I have lost count of the number of posts you have written on Mann et al 08, and I expect I am not the only one who has lost the plot. When are you going to write a summary of the key points, in clear language that people can understand? And when are you going to write your ‘comment on …’ and submit it for publication? For PNAS (as pointed out by per) it is called a ‘Letter’ and only allows 250 words, and you have 3 months from the publication of the article – which I think allows you 4 days from now! Or you could write something longer and submit elsewhere.

    Manfred and others – I don’t even think there is a hockey stick in Mann et al 08. If you ignore the misleading (and as far as I can see, stretched) instrumental record crudely pasted over the top in a thick red line in an attempt to hide the divergence, their proxy manipulations in fig 3 suggest that (a) temperatures now are not significantly higher than in medieval times, (b) the rate of temperature rise now is no higher than in the past.

    • Posted Dec 5, 2008 at 8:45 AM | Permalink

      Re: PaulM (#39),

      If you ignore the misleading (and as far as I can see, stretched) instrumental record crudely pasted over the top in a thick red line in an attempt to hide the divergence, their proxy manipulations in fig 3 suggest that (a) temperatures now are not significantly higher than in medieval times, (b) the rate of temperature rise now is no higher than in the past.

      The thick red line annoys me quite a lot as well. The line is Mannian smoothed; it guesses future temperatures. But in climate science guessing future temperatures is OK, as tamino puts it (#81 in here):

      You’ve also repudiated trends that aren’t based on a centered 30-year moving average — which would necessarily require 15 years of future data. I suppose if you were a physician, you’d insist that we can’t conclude the patient is really ill until 15 years after he’s dead. God save us from sophistry like that.

  27. Sam Urbinto
    Posted Dec 5, 2008 at 2:36 PM | Permalink

    For those not paying attention, all the anomaly for 1880-1960 or for 1991-present tells us is how those periods differ from that metric for 1961-1990. No need to tell the future; the future is here!


    Climatological Normals (CLINO) for the Period 1961-1990, WMO-No. 847
    World Meteorological Organization, 1984: Technical Regulations, Vol. I. WMO Publication No. 49. Geneva, Switzerland.
    Guttman, N.B., 1989: Statistical descriptors of climate. Bulletin of the American Meteorological Society, vol. 70, no. 6, pp. 602-607.

    Normals (“period averages computed for a uniform and relatively long period comprising at least three consecutive 10-year periods”) are computed every decade by individual countries, but the international global standard is only computed every 30 (“averages of climatological data computed for the following consecutive periods of 30 years: January 1, 1901 to December 31, 1930, January 1, 1931 to December 31, 1960, etc.”).

    So the next one is due for 1991-2020, which will tell us in 2021 how 1880-1990 differs from that period. Can I get a big “Yay!” here?
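Sam's point about anomalies is pure arithmetic: an anomaly is just a departure from the mean of the agreed base period. A Python sketch with fabricated numbers:

```python
import numpy as np

# An "anomaly" is a departure from the mean of an agreed base period.
years = np.arange(1880, 2009)
temps = 14.0 + 0.005 * (years - 1880)      # fabricated smooth series

base = (years >= 1961) & (years <= 1990)   # the current WMO normal period
normal = temps[base].mean()
anomaly = temps - normal

# By construction, base-period anomalies average to (numerically) zero;
# every other year is measured relative to that window.
print(bool(abs(anomaly[base].mean()) < 1e-9))   # True
```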

  28. Posted Dec 22, 2008 at 4:25 AM | Permalink

    The hockey stick holds up

    According to the Christian Science Monitor: “It still looks a lot like the much-battered, but still rink-ready stick of 1998. Today the handle reaches further back and it’s a bit more gnarly. But the blade at the business end tells the same story.”

    ..we can’t conclude the patient is really ill until 15 years after he’s dead

