Central Park: Will the real Slim Shady please stand up?

Today, I’d like to discuss an interesting problem raised recently by Joe d’Aleo here – has the temperature of New York City increased in the past 50 years? Figure 1 below is excerpted from their note, about which they observed:

Note the adjustment was a significant one (a cooling exceeding 6 degrees from the mid 1950s to the mid 1990s.) Then inexplicably the adjustment diminished to less than 2 degrees …The result is what was a flat trend for the past 50 years became one with an accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area occurred in the last 20 years to warrant a major adjustment to the adjustment. The park has remained the same and there has not been a population decline but a spurt in the city’s population in the 1990s.

I’ve spent some time trying to confirm their results and, as so often in climate science, it led into an interesting little rat’s nest of adjustments, including another interesting Karl adjustment that hasn’t been canvassed here yet.

Update (afternoon): I’ve been able to emulate the Karl adjustment. If one reverse engineers this adjustment to calculate the New York City population used in the USHCN urban adjustment, the results are, in Per’s words, gobsmacking, even by climate science standards.

Here is the implied New York City population required to justify Karl’s “urban warming bias” adjustments.

[Figure: newyor5.gif – the implied New York City population]
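For readers who want to see the shape of the calculation, here is a minimal sketch of the reverse-engineering idea. It is only an illustration: the power-law form and the coefficients below are assumptions of mine, not the actual USHCN regression, and the function name is made up. Given an assumed urban-warming relation delta_T = a * pop^b, the implied population is just the inverse of that relation applied to the adjustment actually made in a given year.

```python
# A sketch only: invert an assumed Karl-style urban-warming relation to get the
# population that would be needed to justify a given adjustment. The coefficients
# a and b are illustrative placeholders, not the actual USHCN values.
def implied_population(delta_t, a=0.0001, b=0.45):
    """Population implied by an urban-warming adjustment of delta_t degrees,
    under the assumed relation delta_t = a * pop**b."""
    if delta_t <= 0:
        return 0.0
    return (delta_t / a) ** (1.0 / b)

# The steepness of the inverse is the point: a 6-degree adjustment implies a
# population more than ten times larger than a 2-degree adjustment does.
ratio = implied_population(6.0) / implied_population(2.0)
print("implied population ratio, 6 deg vs 2 deg adjustment:", round(ratio, 1))
```

The exact numbers depend entirely on the regression actually used; the point of the exercise is that, whatever the relation, the population history implied by the adjustment history can be read straight off it.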

The New “IPCC Test” for Long-Term Persistence

In browsing AR4 chapter 3, I encountered something that seems very strange in Table 3.2, which reports trends and trend significance for a variety of prominent temperature series (HadCRUT, HadSST, CRUTEM). The caption states:

The Durbin Watson D-statistic (not shown) for the residuals, after allowing for first-order serial correlation, never indicates significant positive serial correlation.

The Durbin-Watson test is a test for first-order serial correlation. So what exactly does it mean to say that a test on the residuals, after allowing for first-order serial correlation, does not indicate first-order serial correlation? I have no idea. I asked a few statisticians and they had no idea either. I’ve corresponded with both Phil Jones and David Parker about this, trying both to ascertain what was involved in this test and to identify a statistical authority for it. I have been unable to locate any statistical reference for this use of the Durbin-Watson test and no reference has turned up in my correspondence to date. (My own experiments – based on guesswork as to what they did – indicate that this sort of test would be ineffective against a random walk.)
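For what it’s worth, here is a minimal sketch of the kind of experiment alluded to above. It is based on guesswork about the procedure (fit a linear trend, estimate AR(1) from the residuals, then apply the Durbin-Watson statistic to the whitened residuals) and not on any code from the IPCC authors:

```python
# Guesswork sketch: apply the Durbin-Watson statistic to AR(1)-whitened residuals
# of a trend fit to a pure random walk. After whitening, DW sits near 2, so the
# test flags no "significant positive serial correlation" even though the series
# is a random walk - i.e. it has essentially no power against this kind of
# long-term persistence.
import numpy as np

def durbin_watson(e):
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

rng = np.random.default_rng(0)
n, n_sims = 150, 1000
dw_values = []
for _ in range(n_sims):
    y = np.cumsum(rng.normal(size=n))                # random walk: no deterministic trend
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)           # OLS trend fit
    resid = y - (intercept + slope * t)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # estimated AR(1) coefficient
    whitened = resid[1:] - rho * resid[:-1]          # "allow for" first-order serial correlation
    dw_values.append(durbin_watson(whitened))

print("mean DW on whitened residuals:", round(float(np.mean(dw_values)), 2))  # typically ~2
```

Whatever the IPCC authors actually did, a DW statistic computed after AR(1) pre-whitening is more or less guaranteed to come out near 2, which is why it says nothing about longer-term persistence.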

The insertion of this comment about the Durbin-Watson test, if you track back through the First Draft, First Draft Comments, Second Draft and Second Draft Comments, was primarily in response to a comment by Ross McKitrick about the calculation of trend significance, referring to Cohn and Lins 2005. The DW test “after allowing for serial correlation” was inserted by IPCC authors as a supposed rebuttal to this comment (without providing a citation for the methodology). I’m still in the process of trying to ascertain exactly what was done and whether it does what it was supposed to do, but the trail is somewhat interesting in itself.

Bürger Comment on Osborn and Briffa 2006

Gerd Bürger has published an interesting comment in Science on cherry-picking in Osborn and Briffa 2006. A few CA readers have noticed the exchange and brought it to my attention. Eduardo Zorita (who I was glad to hear from after our little dust-up at the Nature blog) sent me the info, as did Geoff Smith. I started on a summary yesterday, but quickly got distracted into one of the many, many possible thickets. So here’s Geoff’s summary:

There’s a pretty hot exchange (at least for CA readers) in last Friday’s Science magazine. Gerd Bürger (lead chapter author and contributor for the TAR) writes about Osborn and Briffa’s 2006 hockey stick (“The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years”) commenting critically on site selection and statistics. He writes “…given the large number of candidate proxies and the relatively short temporal overlap with instrumental temperature records, statistical testing of the reported correlations is mandatory. Moreover, the reported anomalous warmth of the 20th century is at least partly based on a circularity of the method, and similar results could be obtained for any proxies, even random-based proxies. This is not reflected in the reported significance levels”.

In commenting on the proxies (most of them well known to CA readers), he says that this “method of selecting proxies by screening a potentially large number of candidates for positive correlations runs the danger of choosing a proxy by chance. This is aggravated if the time series show persistence, which reduces the degrees of freedom for calculating correlations (6) and, accordingly, enhances random fluctuations of the estimates. Persistence, in the form of strong trends, is seen in almost all temperature and many proxy time series of the instrumental period. Therefore, there is a considerable likelihood of a type I error, that is, of incorrectly accepting a proxy as being temperature sensitive”.

He goes on to say, “This effect can only be avoided, or at least mitigated, if the proxies undergo stringent significance testing before selection. Osborn and Briffa did not apply such criteria”.

Bürger indicates the more serious problem is the series screening process, which only looked at proxies with positive correlations. “The majority of those random series would not even have been considered, having failed the initial screening for positive temperature correlations. Taking this effect into account, the independence of the series shrinks for the instrumental period”. This means in Bürger’s opinion that the “results described by Osborn and Briffa are therefore at least partly an effect of the screening, and the significance levels depicted in figure 3 in (1) have to be adjusted accordingly”.

Bürger repeats the analysis with the appropriate adjustments for temperature sensitivity, and finds as a result “the ‘highly significant’ occurrences of positive anomalies during the 20th century disappear. The 99th percentile is almost never exceeded, except for the very last years for θ = 1, 2. The 95th percentile is exceeded mostly in the early 20th century, but also about the year 1000”.

There is a reply by Osborn and Briffa, which gives a number of justifications of their procedures (which some will find unconvincing) but concludes, “we agree with Bürger that the selection process should be simulated as part of the significance testing process in this and related work and that this is an interesting new avenue that has not been given sufficient attention until now”.

Progress.

Refs:
1) Gerd Bürger, Comment on “The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years”, Science, 29 June 2007, Vol. 316, no. 5833, p. 1844. DOI: 10.1126/science.1140982 (subscription required).
2) Timothy J. Osborn and Keith R. Briffa, Response to Comment on “The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years”, Science, 29 June 2007, Vol. 316, no. 5833, p. 1844b. DOI: 10.1126/science.1141446.
3) Timothy J. Osborn and Keith R. Briffa, The Spatial Extent of 20th-Century Warmth in the Context of the Past 1200 Years, Science, 10 February 2006, Vol. 311, no. 5762, pp. 841–844. DOI: 10.1126/science.1120514.

A couple of quick points. The inter-relationship of persistence and spurious correlation was discussed in an econometrics context by Ferson et al 2003 (which I’ve discussed here) and is a concept that has animated much of my thinking. David Stockwell has also been very attentive to the effects of picking from red noise based on correlations. One of the early exercises that I did was to see what happened if, like Jacoby, you picked the 10 “most temperature-sensitive” chronologies from synthetic red noise series with more than AR1 persistence and averaged them. Like Bürger, I did the exercise with persistent series and found that the Jacoby HS was not exceptional relative to red noise selections. (Jacoby only archived the 10 “most temperature-sensitive” series and failed to archive the others. He also refused to provide me the rejected series, referring, “as an ex-marine”, to a “few good men”.)

The Jacoby case was one of the few cases where one could quantify the picking activity and benchmark it against biased selection from red noise.
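For readers who have not seen this kind of exercise, here is a minimal sketch of the idea (my own illustration with made-up sizes and persistence, not Jacoby’s or Bürger’s code): generate persistent series containing no climate signal, screen them against a trending instrumental record, keep the most “temperature-sensitive” ones and average them.

```python
# Sketch: cherry-picking from red noise. All numbers (series count, length,
# AR(1) coefficient, instrumental period) are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_series, n_years, n_keep, n_inst = 50, 600, 10, 100
phi = 0.9                                        # strong persistence

# 50 persistent "chronologies" containing no climate signal at all
eps = rng.normal(size=(n_series, n_years))
proxies = np.zeros_like(eps)
for t in range(1, n_years):
    proxies[:, t] = phi * proxies[:, t - 1] + eps[:, t]

# a trending "instrumental" record over the last 100 "years"
temperature = np.linspace(0.0, 1.0, n_inst) + 0.2 * rng.normal(size=n_inst)

# screen: keep the 10 series best correlated with instrumental temperature
cors = np.array([np.corrcoef(p[-n_inst:], temperature)[0, 1] for p in proxies])
composite = proxies[np.argsort(cors)[-n_keep:]].mean(axis=0)

print("modern mean %.2f vs pre-instrumental mean %.2f"
      % (composite[-n_inst:].mean(), composite[:-n_inst].mean()))
# the screened composite typically ends with elevated "modern" values - a
# hockey-stick shape manufactured purely by selection
```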

Geoff spent more space on the Bürger comment than the Osborn and Briffa reply. Its main response is that the picking-by-correlation had a relatively minor impact on the selections at their stage because they picked 14 from a universe of supposedly only 16 series available from Mann et al 2003 (EOS), Esper et al 2002 and Mann and Jones 2003. They said:

The 14 series used in (2 – Osborn and Briffa 2006) were selected from three previous studies (3—5: Mann et al EOS 2003; Esper et al 2002; Mann and Jones 2003), although this set also encompasses almost all the proxies with high temporal resolution used in the other Northern Hemisphere temperature reconstructions cited in (2) [MBH98, MBH99, Jones et al 1998, Crowley and Lowery 2000, Briffa 2000, Briffa et al 2001, Esper et al 2002, Mann et al 2003, Mann and Jones 2003, Moberg et al 2005, Rutherford et al 2005].

This statement is untrue even if Osborn and Briffa are granted the one stated qualifier and another unstated qualifier. The “high temporal resolution” qualifier is not defined; this qualifier excludes several series from Crowley and Lowery and Moberg et al 2005 and prefers tree rings. The second (unstated) qualifier is that the series go back to 1000. This excludes the majority of the series. (However, Briffa et al 2001 has a very large population of series and a serious “divergence problem”. The Briffa et al 2001 network is one of two networks used in Rutherford et al 2005. The above statement is obviously false in respect to this population.) It is also false even for the long series used in these studies. There are many more than 14 series that cumulatively occur in these studies: there are Moroccan series used in MBH99 and a number of oddball series in Crowley and Lowery 2000. I’m in the process of making a definitive count but it is far more than 14.

Osborn and Briffa also gloss over the impact of cumulative data snooping, by which biased selections are made within the literature. CA readers are familiar with this. For example, consider Briffa’s substitution of Yamal for Polar Urals. Updated results for Polar Urals show a very elevated MWP. Even though Briffa had made his name in Nature (1995) for showing a cold MWP in the Polar Urals series, he did not publish the updated information and, in Briffa 2000, substituted Yamal (with a HS) for the Polar Urals series. This substitution was followed in all subsequent Team studies except, surprisingly, Esper et al 2002. The Polar Urals update is excluded from Osborn and Briffa 2006 on some pretext, even though they use both a foxtail and bristlecone series (Mann’s PC1) from sites about 30 miles apart – closer than Polar Urals and Yamal. These individual substitutions are not trivial, as this one substitution affects medieval-modern levels in several studies.

Osborn and Briffa observe that

“it is difficult to quantify exactly the size of the pool of potential records from which the 14 series used in (2) were selected, because there is implicit and explicit selection at various stages, from the decision to publish original data to the decision to include data in large-scale climate reconstructions.”

Quite so.

They go on to say:

in our study (2), only two series were excluded on the basis of negative correlations with their local temperature, and no further series had been explicitly excluded by the three studies from which we obtained our data. We cannot be certain that prior knowledge of temperature correlations did not influence previous selection decisions, and there are more levels in the hierarchy of work upon which our study depends at which some selection decisions may have been made on the basis of correlations between proxy records and their local temperature. However, the degree of selectivity is unlikely to be much greater than that for which we have explicit information. Simply, there is not a large number of records of millennial length that have relatively high temporal resolution and an a priori expectation of a dominant temperature signal.

They argue that Bürger has created too large a universe for comparison and that the appropriate simulation is to check cherry-picking of 14 out of 16 – and, surprise, surprise, they emerge with seemingly significant results:

The assessment of the statistical significance of the results of (2) is modified so that, rather than comparing the real proxy results with a similar analysis based on 14 random synthetic proxy series, we now generate 16 synthetic series and select for analysis the 14 that exhibit the strongest correlations with their local temperature records.
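The dispute therefore comes down to the size of the candidate pool. A minimal sketch of the two simulations (my own illustration; the persistence, lengths and the 50-series alternative pool are assumptions, and this is not Osborn and Briffa’s or Bürger’s code) shows why the answer depends so heavily on that choice:

```python
# Sketch: how much "modern" anomaly does correlation screening manufacture from
# signal-free red noise when 14 series are kept out of a pool of 16 versus a
# larger pool? All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def screened_bias(pool_size, n_keep=14, n_years=600, n_inst=100, phi=0.9, reps=100):
    """Average 'modern minus earlier' offset of a composite built by keeping the
    n_keep of pool_size signal-free AR(1) series best correlated with a trend."""
    temperature = np.linspace(0.0, 1.0, n_inst)
    biases = []
    for _ in range(reps):
        eps = rng.normal(size=(pool_size, n_years))
        proxies = np.zeros_like(eps)
        for t in range(1, n_years):
            proxies[:, t] = phi * proxies[:, t - 1] + eps[:, t]
        cors = np.array([np.corrcoef(p[-n_inst:], temperature)[0, 1] for p in proxies])
        composite = proxies[np.argsort(cors)[-n_keep:]].mean(axis=0)
        biases.append(composite[-n_inst:].mean() - composite[:-n_inst].mean())
    return float(np.mean(biases))

print("selection bias, 14 of 16:", round(screened_bias(16), 3))
print("selection bias, 14 of 50:", round(screened_bias(50), 3))
# keeping 14 of 16 discards little and biases the composite only slightly;
# keeping 14 of a larger pool manufactures a much bigger "modern" anomaly
```

So the “seemingly significant” result is baked into the choice of a 16-series pool; the question is whether 16 fairly represents the selection that actually took place.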

Nowhere in either article are bristlecones mentioned, and yet they feature prominently in the differing results. Osborn and Briffa use Mann’s discredited PC1 as one proxy and nearby foxtails as another – two out of 14 in a “random” sample! As observed here previously, these do not have a significant correlation with temperature. Under Bürger’s slightly more stringent hurdle, series 1 and 3 are excluded as having too low a correlation (these are the PC1 and foxtails, both of which are HS shaped and important to elevated 20th century results). Osborn and Briffa say that they use a low correlation hurdle for the following reason:

Our decision to use a weak criterion for selecting proxy records was intended to reduce the probability of erroneous exclusion of good proxies.

Well, one of the “good proxies” that they are working hard not to exclude is Mann’s PC1. 😈 In addition, it is obviously ludicrous that the Team should keep presenting permutations of bristlecones and foxtails as new studies, like the dead parrot in Monty Python. If, in addition, they have to lower the hurdle to get these series in, then don’t lower the hurdle. If the results are any good, they should survive the presence/absence of bristlecones/foxtails.

Their modeling of the cherry-picking process is ludicrous. It’s not even true that they only excluded 2 series. What about the Polar Urals update that was in Esper et al 2002 (uniquely)? Why wasn’t that used? Well, they had a pretext for using Yamal instead – yeah, there’s always a pretext. Briffa knows all about this substitution – he was the one that originally did it back in Briffa 2000. Instead of reporting the updated Polar Urals results (with a high MWP), as even a mining promoter would have had to do, Briffa substituted his own version of the Yamal series (which is now often attributed in Team articles to Hantemirov even though Hantemirov’s reconstruction is different from Briffa’s). This substitution has a major impact on a couple of reconstructions – altering the medieval-modern level in Briffa 2000 and D’Arrigo et al 2006. So there’s at least one more series that they excluded. The pretext is that they’ve already got a series from that area (Yamal) – but then what about the doubling up of bristlecone/foxtail series?

Osborn and Briffa falsely claim that the 14 series selected constitute “almost all the proxies with high temporal resolution” used in a range of Team studies:

The 14 series used in (2) were selected from three previous studies (3—5), although this set also encompasses almost all the proxies with high temporal resolution used in the other Northern Hemisphere temperature reconstructions cited in (2).

This claim is simply false, as any competent reviewer would have pointed out – jeez, any reader of CA could have pointed this out. The studies cited are: MBH98, MBH99, Jones et al 1998, Crowley and Lowery 2000, Esper et al 2002, Briffa 2000, Briffa et al 2001, Rutherford et al 2005, Mann and Jones 2003, Mann et al 2003.

Briffa et al 2001 contains hundreds of series and has a big “divergence problem” not shared by the cherry-picked series. Indeed, the bias of the picking is proven by the lack of divergence. Their retort would be that they meant to limit the matter to series that go back to AD1000. Fine, but then they should say that.

Second, it’s not true even for the series that go back to 1000. I need to do a count, but at a minimum there are at least double that number within the listed studies: there are a couple of Morocco series in MBH99, a French tree ring series, and several oddball series in Crowley and Lowery 2000. By the time we get to Osborn and Briffa, there has already been a lot of data snooping.

Beyond that, there is obviously data snooping before this. For example, the Mount Logan dO18 series goes back to AD1000 and has the same sort of resolution as other ice core series. Why isn’t it used? Well, it has a depressed 20th century, which is attributed to wind circulation. But then how can you say that Dunde and Guliya aren’t affected by circulation as well, even though they yield different results? The use of Dunde (via the Yang composite) and not Mount Logan is classic cherry-picking that Osborn and Briffa have totally ignored. And BTW, another year has gone by without Lonnie Thompson reporting the Bona Churchill dO18 results. I’m standing by my prediction of last year that, if and when these results are ever published, they will not have elevated 20th century dO18. (And another year has gone by without Hughes reporting the Sheep Mountain update. What a swamp this is.)

And what about series like Rein’s offshore Peru lithics with a strong MWP anomaly? Is this excluded because it is not of sufficiently high resolution? Well, the Chesapeake Mg-Ca series has resolution of no more than 10 years in the MWP portion and has a couple of weird splices. And what of Mangini’s speleothems with high MWP? Or Biondi’s Idaho tree ring reconstruction? BTW: if the temporal resolution of the Chesapeake Mg-Ca series is used as a benchmark, there are quite a few ocean sediment series that qualify (e.g. Julie Richey’s series).

I need to make a systematic catalog of series going back to the MWP with resolution at least as high as the Chesapeake Mg-Ca series, but off the cuff, I’d say that there are at least 50 series, probably more.

So when Osborn and Briffa say that the universe from which they’ve selected can be represented by selecting 14 of 16, this is completely absurd. There has probably been cherry-picking from at least 3 times that population. But aside from all that, the active ingredients in the 20th century anomaly remain the same old whores: bristlecones, foxtails, Yamal. They keep trotting them out in new costumes, but really it’s time to get them off the street.

NOAA and the Three Monkeys

In a website release earlier this year, NOAA proudly announced the extensive involvement of its officers in IPCC as lead authors, review authors and even the co-chair of IPCC WG1:

Susan Solomon, a senior scientist of the NOAA Earth System Research Laboratory in Boulder, Colo., is co-chair of Working Group 1 (WG1), the Physical Science Basis. Nine of the lead and review authors are from NOAA and 20 of the model runs were done by the NOAA Geophysical Fluid Dynamics Laboratory in Princeton, N.J. Lead authors are nominated by their governments.

NOAA authors and IPCC review editors for WG1 include Thomas Peterson, David Easterling, Thomas Karl, Sidney Levitus, Mark Eakin, Matthew Menne of the NOAA Satellite and Information Service; and Venkatachalam Ramaswamy, David Fahey, Ronald Stouffer, Isaac Held, Jim Butler, Paul Ginoux, John Ogren, Chet Koblinsky, Dian Seidel, Robert Webb, Randy Dole, Martin Hoerling of the NOAA Office of Oceanic and Atmospheric Research, and Arun Kumar of the NOAA National Weather Service.

In addition, a cadre of NOAA scientists from the laboratories and programs, including the joint and cooperative institutes, served as contributors and government reviewers of the final report, which is a state of the science based upon published peer-review literature.

Referring to this story, I submitted an FOI request to NOAA about a month ago for the review comments (now online at IPCC) and the review editors’ comments, reported at CA here. Here is an excerpt:

I request that a copy of any NOAA records (documents, memoranda, review comments, reports, internal and external correspondence or mail including e-mail correspondence and attachments to or from NOAA employees) be provided to me on the following subjects:
(1) review comments on (a) the Second Order Draft and (b) the Final Draft of the Fourth Assessment Report of the International Panel on Climate Change (IPCC) Working Group I, including, but not limited to, all expert, government and review editor comments;
(2) all annotated responses to such comments by Chapter Lead Authors.

I noted that all my email correspondence with Susan Solomon and Martin Manning had been with their noaa.gov email addresses.

A.R. Ravishankara, Director, Chemical Sciences Division, Earth Systems Research Laboratory/NOAA (where Susan Solomon and Martin Manning work) has now replied as follows:

“You have asked for copies of NOAA records concerning review comments on the second order draft and the final draft of the Fourth Assessment Report of the IPCC Working Group 1. In addition, you have asked for all annotated responses to such comments by chapter authors.

After reviewing our files, we have determined that we have no NOAA records responsive to your request. If records exist that are responsive to your request, they would be records of the IPCC and as such can be requested from the IPCC…”

Their strategy is a little different from that of Phil Jones, who claimed that his records were “exempt”. NOAA did not avail itself of any of the possible exemptions for the request – it denied that it had any records whatever. In many FOI regimes, email messages are producible. I don’t know the American law, but I imagine it’s not dissimilar from Canadian FOI law where, for example, the University of Victoria’s policy specifically says that “E-mail messages created on University computer equipment and transmitted using the University’s e-mail system are University records.” If there is no similar policy at NOAA, I’d be shocked.

A guide to FOI from senior officials says that “most records in the possession of an agency are ‘agency records’ within the meaning of FOIA.”

The 1996 FOIA amendments affirm the general policy that any record, regardless of the form in which it is stored, that is in the possession and control of a Federal agency is usually considered to be an agency record under the FOIA. Although the FOIA occasionally uses terms other than ‘record’, including ‘information’ and ‘matter’, the definition of ‘record’ made by the 1996 amendments should leave no doubt about the breadth of the policy or the interchangeability of terms.

A document that does not qualify as an ‘agency record’ may be denied because only agency records are available under the FOIA. Personal notes of agency employees may be denied on this basis. However, most records in the possession of an agency are ‘agency records’ within the meaning of the FOIA.

What NOAA is arguing, among other things, is that email to and from NOAA employees on NOAA computers about IPCC review comments is the property of IPCC, rather than NOAA. Suppose that this established a precedent. Let’s say that NOAA employees decided to act as reviewers of pornographic videos sent to them by the International Pornography Council and they were caught. Could they argue that the videos on NOAA computers were records of the International Pornography Council and outside the disciplinary scope of NOAA? Of course not. Or think of this another way: if the emails on NOAA computers are IPCC property, then IPCC should be able to exert control over the emails. If IPCC told NOAA to delete all correspondence involving them, would NOAA be obligated to follow their instructions? I doubt it. I can’t imagine a court finding that records on NOAA computers were not NOAA records.

Now the issue has, for the most part, become moot with the IPCC release of the review comments and responses (although the review editor comments, included in my request to NOAA, are still not online). As a Canadian, it’s hard for me to take much umbrage at NOAA’s actions and I’m surprised to some extent at having standing. Nonetheless, this sort of action by a government agency is annoying (but it’s bad policy to be annoyed, because you have to expect a run-around from government officials). I presume that Americans who submitted similar requests will get similar replies. It appears that there is an appeal provision and I’ll probably avail myself of that facility.

The three monkeys? See no records, hear no records, got no records.

IPCC and the Briffa Deletions

I’ve posted on several occasions on the deletion of the “inconvenient” section of the Briffa reconstruction. Now that the review comments are online, I want to reprise this, just so you can understand the IPCC process a little better. This repeats some earlier material.

As an IPCC reviewer, I commented:

Show the Briffa et al reconstruction through to its end; don’t stop in 1960. Then comment and deal with the “divergence problem” if you need to. Don’t cover up the divergence by truncating this graphic. This was done in IPCC TAR; this was misleading. (Reviewer’s comment ID #: 309-18)

In response, IPCC section authors said:

Rejected — though note ‘divergence’ issue will be discussed, still considered inappropriate to show recent section of Briffa et al. series. 👿

Once again, here’s what they were deleting and what they felt was “inappropriate” to show the public – the post-1960 decline in the Briffa index. (I’ve shown the IPCC TAR version here but the same deletion is made in AR4). By deleting the adverse segments, they enhance the rhetorical impression of the remaining series. Any mining promoter that did this would be in trouble with the securities commissions.

Alexander et al 2007

For those of you who want a thread on this paper (which I don’t have time to read right now):

http://nzclimatescience.net/images/PDFs/alexander2707.pdf

This study is based on the numerical analysis of the properties of routinely observed hydrometeorological data which in South Africa alone is collected at a rate of more than half a million station days per year, with some records approaching 100 continuous years in length.

The analysis of this data demonstrates an unequivocal synchronous linkage between these processes in South Africa and elsewhere, and solar activity. This confirms observations and reports by others in many countries during the past 150 years. It is also shown with a high degree of assurance that there is a synchronous linkage between the statistically significant, 21-year periodicity in these processes and the acceleration and deceleration of the sun as it moves through galactic space.

Despite a diligent search, no evidence could be found of trends in the data that could be attributed to human activities.

It is essential that this information be accommodated in water resource development and operation procedures in the years ahead.

IPCC Review Comments Now Online

Well, here is a small accomplishment that I think can reasonably be credited to climateaudit. As we approach the due date for the NOAA FOI responses, IPCC has now put the review comments online. Enjoy.

On to Gridded Data

Gavin Schmidt recently told Anthony Watts that worrying about station data quality was soooo last year. His position was a bit hard to follow but it seemed to be more or less as follows: that GISS didn’t use station data, but in the alternative, as defence lawyers like to say, if GISS did use station data (which they deny), de-contamination of station data would improve the fit of the GISS model. It reminds me of the textbook case where an alternative defence is not recommended: where the defendant argues that he did not kill the victim, but, if he did, it was self-defence. In such cases, picking one of the alternatives and sticking with it is considered the more prudent strategy.

In this particular case, I thought it would be interesting to plot up the relevant gridcell series from CRU and GISS and, needless to say, surprises were abundant.

Gavin Schmidt: station data "not used" in climate models

Gavin Schmidt has told Anthony Watts that the problematic station data are not used in climate models and any suggestion to the contrary is, in realclimate terminology, “just plain wrong”. If station data is not used to validate climate models, then what is?

His point seems to be that the climate models use gridded data.

But isn’t the gridded data calculated from station data? Well, yes. (And it wasn’t very hard to watch the pea under the thimble here.) So Gavin then argues that the adjustments made in calculating the gridded products have “removed the artefacts” from these poor stations:

If you are of the opinion that this station is contaminated, then you have to admit that the process designed to remove artefacts in the GISS or CRU products has in fact done so –

At this point, all we know is that the process has smoothed out the artefacts. Whether the artefacts have biased the record is a different question entirely and one that is not answered by Gavin’s rhetoric here. And while we have a list of GISS stations, there is still no list of CRU stations or CRU station data. How could one tell right now whether CRU has “removed the artefacts” or not? So on the present record Anthony doesn’t have to admit anything of the sort. Of course, if the data and code are made available and it becomes possible to confirm the truth of Gavin’s claim, this situation may change. But right now, no one can say for sure.
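To see why “smoothed” and “removed” are different things, here is a minimal sketch of a simple grid-cell construction (anomaly averaging of the stations within a cell). Neither GISS nor CRU necessarily does exactly this, and the station counts, noise and bias sizes are made up:

```python
# Sketch: averaging station anomalies into a grid cell suppresses station noise,
# but a bias shared by several stations is only diluted, not removed.
import numpy as np

rng = np.random.default_rng(3)
n_years, n_stations = 50, 10
true_trend = 0.01                                        # deg/yr "real" climate signal
true_climate = true_trend * np.arange(n_years)
noise = rng.normal(scale=0.3, size=(n_stations, n_years))
bias = np.zeros((n_stations, n_years))
bias[:4] = 0.02 * np.arange(n_years)                     # 4 of 10 stations drift warm

stations = true_climate + noise + bias
anomalies = stations - stations[:, :10].mean(axis=1, keepdims=True)  # vs early base period
cell_mean = anomalies.mean(axis=0)                       # the "gridded" value

trend = np.polyfit(np.arange(n_years), cell_mean, 1)[0]
print("gridcell trend: %.4f deg/yr vs true %.4f" % (trend, true_trend))
# the cell trend carries roughly 0.4 * 0.02 = 0.008 deg/yr of residual bias,
# even though the cell series itself looks perfectly smooth
```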

Gavin then asserts that any removal of contaminated stations would improve model fit. I’m amazed that he can make this claim without even knowing the impact of such removal.

Personally I’m still of the view that modern temperatures are warmer than the 1930s, notwithstanding the USHCN shenanigans. But suppose that weren’t the case, and all the stations in the USHCN with very big differentials turned out to be problematic while the good stations showed little change. Surely this wouldn’t improve the fit of the models. I’m not saying that this will be the impact of the verification. I think that the verification is interesting and long overdue, but I’d be surprised if it resulted in big changes.

But you don’t know that in advance and for Gavin to make such a statement seems like a “foolish and incorrect” thing to do. 😈

He urges Anthony not to ascribe “consequences to your project that clearly do not follow” but obviously feels no compunction in making such ascriptions himself. Check it out.

Stamford CT

Today’s tide at surfacestations.org brought in an eastern site, Stamford CT, courtesy of Kevin Green. A couple of interesting features, including something really weird with the GISS adjustments.