One point that intrigued me about the Muscheler vs Solanki dispute was to see what the underlying data looked like. Here’s a graph and some comments. I don’t purport to know a lot about this; I just wanted to get a feel for the data.

The main source for dC14 readings is IntCal04 here. The top panel shows a plot of the dC14 data. (I've included the R script below to show how easy the download is in R; you can parameterize the URLs, which is really nice for this sort of quick look.) The IntCal04 notes describe fitting a smooth as follows:

IntCal04 age-corrected Δ14C (‰) with a 2-standard deviation envelope showing the 1000- and 2000-yr moving averages (red and blue lines, respectively). For the old end of the data set, the moving average truncates the first 500 or 1000 yr. For the recent end, the moving average window was allowed to shrink to the number of remaining points in the data set to avoid this type of truncation.

This is a pretty weird smooth. I fitted the data with lowess using f=1/3, a more usual smooth, just to see what it looked like (top panel). I picked 1/3 to try to match some of the main features of the IntCal04 smooth. (An alternate version, not inverted, with f=2/3 is here.) Then they take the difference between the curve and the smooth, which I've shown in the bottom panel (black: versus the lowess smooth; red: the IntCal04 version). I've reversed the vertical axis to match conventional ideas of warm periods and cold periods (i.e. the LGM is cold).
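The sensitivity is easy to reproduce on made-up data. Here is a minimal R sketch (the series and every parameter in it are invented for illustration; the real data would come from the download script below):

```r
# Hypothetical stand-in for the dC14 series: slow trend + oscillation + noise.
set.seed(1)
t <- seq(0, 20000, by = 20)                  # pseudo cal BP, 20-yr spacing
x <- 100 * exp(-t / 8000) + 20 * sin(2 * pi * t / 9000) + rnorm(length(t), sd = 5)

r1 <- x - lowess(t, x, f = 1/3)$y            # residual against a tighter smooth
r2 <- x - lowess(t, x, f = 2/3)$y            # residual against a looser smooth

# The two "anomaly" series differ by amounts comparable to the features
# one would want to interpret:
summary(r1 - r2)
```

The point is not the particular numbers but that the residual, which is what gets analyzed downstream, inherits the arbitrary choice of f.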
For comparison, here is the corresponding figure clipped from IntCal04 (vertically transposed from the bottom panel above).

Figure: red – versus 2000-year smooth; blue – versus 1000-year smooth.

What does it all mean? It doesn’t seem very desirable that results should be so sensitive to fairly arbitrary differences in smooth. With a little difference in smooth, the LGM is more pronounced as is the LIA. Does it matter? I have no idea.

Here’s the R script:

```
##INTCAL
url <- "http://www.radiocarbon.org/IntCal04%20files/intcal04.14c"
g <- read.table(url, skip = 11, sep = ",")
g <- g[1:(nrow(g) - 1), ]
v <- lowess(g[, 1], g[, 4], f = 2/3)
url <- "http://www.radiocarbon.org/IntCal04%20files/resid04_2000.14c"
h <- read.table(url, skip = 11, sep = ",")
h <- h[1:(nrow(h) - 1), ]

layout(array(1:2, dim = c(2, 1)), heights = c(1.1, 1.3))
par(mar = c(0, 4, 3, 2))
plot(-g[, 1], g[, 4], type = "l", xlab = "", ylab = "dC14", axes = FALSE)
lines(-g[, 1], rev(v$y), col = "red")
axis(side = 2); axis(side = 1, labels = FALSE); box()
par(mar = c(4, 4, 0, 2))
plot(-g[, 1], g[, 4] - rev(v$y), type = "l", ylab = "dC14 anomaly", axes = FALSE, xlab = "")
abline(h = 0); axis(side = 2); axis(side = 1); box()
lines(-h[, 1], h[, 2], col = "red")
# alternate: plot the IntCal04 residual series on its own
# plot(-h[, 1], h[, 2], type = "l", xlab = "", ylab = "dC14", axes = FALSE)
```

1. John A

LGM?

2. Larry Huldén

LGM : last glaciation minimum ??

3. John Hunter

Come on Steve, you must be losing the plot. You say “this is a pretty weird smooth”. It's not weird at all. For better or worse, a simple moving-average filter has been, and is, widely used. It has a spectral response that may not be ideal (i.e. it “rings” significantly) but it is well understood and has been so for a very long time. The problem of what one does at the end of a series has also been around for a long time (it also, of course, occurs with filters such as lowess, although the solution used may well be invisible to you if you just use a “black box” routine). If you don't mind losing data, then it is probably best just to truncate the filtered series one half-width short, which is what has been done here at the “old” end of the data. If one wants a filtered value right up to the end of the record, it is common to shrink the averaging width (as has been done here at the “recent” end), which has the (well-understood) property of progressively increasing the cut-off frequency of the filter, and phase-shifting the result, as the end of the data is approached.

You seem to have discovered that filtered series depend very much on the details of the filter! But I knew that already.

4. Steve McIntyre

LGM – Last Glacial Maximum.

What I had in mind on smoothing was quickly expressed and perhaps not clear. Here they are not smoothing for the purposes of illustration – which is one of the most common uses of smoothing – but for the purpose of calculating a residual, and it is the residual which is used for further analysis. All these guys are engaged in learned controversies at Nature about interpretations of the residual curves. The -17500 BP portion is not specifically at issue in the controversy, but, if you look at -17500 BP (a pretty important date), you get wildly different results in the residual with trivially different smooths. A difference arises in the LIA as well. Can you say that one result is “right” and the other is “wrong”? A 1000- or 2000-year smooth is not physically based. For something like radiocarbon decay, I would have expected some kind of physically based fit. A 2000-year smooth is going to take out information above that scale. I would have thought that these would be fundamental issues that would be argued and presented in the original papers, but they weren't.

I can’t give much after-market support to this topic without a prolonged effort. John H, you said that you’ve known about the impact of smoothing on dC14 residual series for a long time. I didn’t notice any comments on this on your website.

5. John Davis

Oddly enough, there's this from Gavin Schmidt at RealClimate (http://www.realclimate.org/index.php?p=171) in his piece on “the lure of solar forcing”. It concerns claimed links between solar cycle length and climate; I'm not sure how solar cycle length and C14 relate, but the comments on the effect of smoothing are interesting:

the excellent correlation between solar cycle length and hemispheric mean temperature only appeared when the method of smoothing changed as one went along. The only reason for doing that is that it shows the relationship (that they ‘knew’ must be there) more clearly. And, unsurprisingly, with another cycle of data, the relationship failed to hold up.

6. Greg F

This is one of my pet peeves with the way data is handled in climate science. The frequency response of any digital filter is directly related to the sampling frequency. If you change the sample rate you change the filter response in real terms. Looking at the corrected values I see the sample period goes from 20 to 10 to 5 years. At each transition point the filter's real response will double in frequency. The frequency response of the filter at the end of the data (5-year period) will be four times higher than it was at the beginning (20-year period). This doesn't include the shortening of the filter at the end of the data, which will raise the cutoff frequency even more.
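The distinction at issue in this and the following comments (#12, #39, #42) is whether the averaging kernel is fixed in samples or fixed in years. A quick sketch of both cases (my own illustration, using the standard result that an N-point running mean at spacing dt has its first spectral zero at 1/(N*dt) cycles per year):

```r
dt1 <- 20; dt2 <- 5        # sample spacings in years
span <- 1000               # IntCal04-style kernel length in years
k1 <- span / dt1           # 50 samples at 20-yr spacing
k2 <- span / dt2           # 200 samples at 5-yr spacing

# Kernel fixed in YEARS: the first spectral zero stays at 1/1000 cycles/yr.
f0_1 <- 1 / (k1 * dt1)
f0_2 <- 1 / (k2 * dt2)

# Kernel fixed in SAMPLES (50 points) applied at 5-yr spacing: the zero
# moves to 1/250 cycles/yr, i.e. four times higher in frequency.
f0_3 <- 1 / (k1 * dt2)
```

Which of the two cases applies decides who is right in the exchange below.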

Another point that seems to be missed is that once you have filtered the data (or even if you haven't filtered it) you cannot simply interpolate between samples to produce a pseudo-continuous waveform. The data needs to be reprocessed through a reconstruction filter. To produce a reasonable pseudo-continuous waveform, the highest frequency component of the data needs to be a small fraction of the sample rate. This is usually done by up-sampling the data and then running it through a reconstruction filter.
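The up-sample-then-filter recipe can be sketched in R. This is only an illustration with invented numbers: zero-stuff by a factor of 4, then apply a windowed-sinc low-pass as the reconstruction filter:

```r
L  <- 4                                   # up-sampling factor
x  <- sin(2 * pi * (0:49) / 10)           # coarse samples of a 10-sample-period sine
xz <- rep(0, length(x) * L)               # zero-stuffed fine grid
xz[seq(1, length(xz), by = L)] <- x

# Reconstruction filter: windowed-sinc interpolation kernel with cutoff
# fs/(2L); its taps sum to ~L, compensating the zero-stuffing.
n <- -24:24
h <- sin(pi * n / L) / (pi * n / L); h[n == 0] <- 1
h <- h * (0.54 + 0.46 * cos(pi * n / 24))  # Hamming window
y <- stats::filter(xz, h, sides = 2)       # pseudo-continuous reconstruction
```

With naive linear interpolation alone, the spectral images created by up-sampling are only partly suppressed; the sinc low-pass is what removes them.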

7. Posted Aug 10, 2005 at 12:39 PM

The comment of Ilya Usoskin (#11 at RealClimate) is of interest here. Usoskin, Solanki et al. have published in different articles how they used 14C data, while Muscheler used a different way to obtain the calibration, but has not (yet) published exactly what was done.

8. John Hunter

Steve (#4): You said: “John H, you said that you’ve known about the impact of smoothing on dC14 residual series for a long time.”

No, Steve, that is not what I said at all — read it again. I was talking about linear filtering in general (which is what this is about) and not specifically about dC14 data (I didn’t even mention dC14). I’ll try again — much of the environment exhibits broad-band signals. Because these signals are broad-band, any attempt to filter them with low-pass, high-pass or band-pass filters (and all the IntCal04 people did was to construct a linear high-pass filter using a low-pass filter and a difference) will yield results that depend strongly on the details of the filter response.
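What the IntCal04 procedure amounts to, on Hunter's description, is a linear high-pass built from a low-pass and a difference. A self-contained R sketch on an invented sinusoid (the spacing and period are arbitrary choices of mine):

```r
dt <- 20                                  # years per sample
n  <- 1000 / dt                           # 50-point kernel ~ a 1000-yr running mean
x  <- sin(2 * pi * (1:500) * dt / 5000)   # synthetic 5000-yr cycle
lp <- stats::filter(x, rep(1 / n, n), sides = 2)  # low-pass: centered moving average
hp <- x - lp                              # the residual is the high-passed series

# The 5000-yr cycle passes the low-pass nearly intact, so little of it
# survives into the residual; the NAs at the ends are the truncation
# problem described in the IntCal04 caption.
max(abs(hp), na.rm = TRUE)
```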

So if you are going to loosely throw around terms like “LIA”, you had better be pretty sure about what time-scales and filtering procedures you are talking about.

And, Steve, phrases like “after-market support” don’t help a scientific discussion — they just suggest where you are coming from.

9. Steve McIntyre

Shouldn't the Intcal people discuss the specific effects of filter choices on their series, before people begin to reify the versions? Again, I would have thought that a negative exponential of some kind could be rationalized for C14 decay. I was pretty surprised to see their smooth selection without any argument or justification. Perhaps there's a reason, but it wasn't stated in the original Stuiver articles.

10. Steve McIntyre

Ferdinand,
Good spotting to see Usoskin’s response. I like the following comment:

Muscheler et al. (2005) tried to reproduce our reconstruction (Solanki et al., 2004) of solar activity from 14C but in a different way and obtained a different result. Note that the details of how the results were obtained have not been published. Accordingly, their computation cannot be repeated and verified independently.

Muscheler must be auditioning for the Hockey Team.

11. Ed Snack

John Hunter's “And, Steve, phrases like ‘after-market support’ don't help a scientific discussion — they just suggest where you are coming from.” suggests very strongly where you are coming from. Any scientist who does not understand the meaning of the phrase “after-market support” is way, way out of date. It is, after all, a piece of computerese.

12. John Hunter

Greg F. (#6): It may be one of your “pet peeves”, but to say that

“The frequency response of any digital filter is directly related to the sampling frequency. If you change the sample rate you change the filter response in real terms. Looking at the corrected values I see the sample period goes from 20 to 10 to 5 years. At each transition point the filter's real response will double in frequency. The frequency response of the filter at the end of the data (5-year period) will be four times higher than it was at the beginning (20-year period).”

is just plain wrong (well, four of the five sentences are incorrect).

The first zero of the frequency response of a 1000-year moving average filter (one of the filters used in the IntCal04 example) is at a frequency of 1 cycle per 1000 years. It has nothing to do with the sampling period.
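That figure is easy to verify. For an N-point running mean at spacing dt, the amplitude response is |sin(pi f N dt) / (N sin(pi f dt))|, whose first zero sits at f = 1/(N dt). For a 1000-yr kernel (here taken as 50 points at 20-yr spacing, an assumption for illustration) that is 1/1000 cycles per year:

```r
N <- 50; dt <- 20                # 50 points at 20-yr spacing: a 1000-yr kernel
H <- function(f) abs(sin(pi * f * N * dt) / (N * sin(pi * f * dt)))
H(1 / 1000)                      # first zero: effectively 0
H(1 / 2000)                      # 2000-yr period: partial attenuation
H(1 / 10000)                     # very long periods pass nearly unchanged
```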

This kind of discussion is crazy — it’s what makes many blogs and discussion groups a joke. This is about well-understood (and quite trivial) signal processing theory.

13. John Hunter

Steve (#9): You ask “shouldn’t the Intcal people discuss the specific effects of filter choices on their series, before people begin to reify the versions?”

No, Steve, they shouldn’t. If anyone is presented with data:

1. which they know has been filtered,

2. for which the filter parameters are well defined (as is the case with the IntCal04 data), and yet

3. they don’t understand quite elementary signal processing theory,

then they shouldn’t even contemplate doing anything with the data.

I think, in your world, you might say “caveat emptor”, which in my (admittedly sexist) dictionary is defined as “let the buyer beware (he alone is responsible if he is disappointed)”.

14. John Hunter

Ed Snack (#11): I didn’t say I do “not understand the meaning of the phrase ‘after-market support’” (why don’t people take the trouble to read what I say?). I was indicating that this is a discussion of signal processing. No “market” is involved. Unfortunately for some on this site, you can’t bring everything down to the level of “marketing”.

15. Dave Dardinger

Uh, John, I can only think you “understand” the meaning of the phrase ‘after-market support’ in a very superficial way. Yes, the phrase came originally from business, meaning “help we give to the customer even after s/he's already taken possession of our product, so that s/he will be inclined to purchase more later and to build our reputation (and possibly to fulfill contractual obligations)”, but it's taken on the more general meaning of “follow-up discussion of a subject to bring everyone up to speed”. Note that my second definition uses the subphrase “bring X up to speed”. This is an example of a phrase which you probably were able to understand without trouble because it's been around a while. But someone for whom English was a second language, and with no knowledge of motors, would probably be confused.

Chiding someone for using a common, if not universal, meaning for a phrase is not good form when you’re trying to make a favorable impression on people. Not that you seem to have that intention here, let it be said, however.

16. John Hunter

Dave (#15): You claim that the phrase “after market support” has “taken on a more general meaning of ‘follow-up discussion of a subject to bring everyone up to speed’”. O.K., well I did a Google — firstly, the most common form of the phrase is “aftermarket support” rather than “after market support” (17,500 to 7,930 hits), so I searched for that. Here are the counts for various word combinations:

“aftermarket support”: 17,500

“aftermarket support” “global warming”: 16

“aftermarket support” “paleoclimate”: 0

which certainly doesn’t convince me that the phrase “aftermarket support” has “taken on a more general meaning”, at least in the subjects in which this site is apparently engaged.

So I still contend that Steve’s use of this phrase indicates something about his “market orientation”.

17. TCO

I find the word usage odd, but not sinister. Even if it were capitalistic or business-related, is that evil? I mean, if I go “hit the head”, are you going to get all irked at my Navy phraseology? I did think, when you raised it, John, that maybe Steve was being snarky, but now that I understand the meaning (after-publication discussion?), what's so wrong with it?

18. TCO

I’m lost. Could you spare 2 10-word sentences for an executive summary? What is the “story”?

19. Steve McIntyre

I don't like it when you get two different-looking series depending on whether you fit the smooth using lowess or using 2000-year moving averages – and no justification for the smoothing choice was given in the original article. I'm obviously tuned to impacts of “standardization” and so I picked up on the standardization. The topic was in the news; it just seemed odd to me and I was hoping that someone might have an explanation. I'm not offering any conclusions or even suggesting that there's a moral to the story.

I think that purporting to make a distinction between “after market support” and “aftermarket support” will make it onto the list of the top ten comments from Hunter.

20. Dave Dardinger

Well, I did say I’d take on your trivial messages to take a load off Steve….

“Global Warming”, let alone “paleoclimate”, are not examples of ‘a more general meaning’. They’re examples of a more specialized meaning. If you really want to see what the context / meaning of a usage of “aftermarket support” is, you need to pick 100 random hits and see how it was used in each case. Even then it’s not really a test of my point. A more general usage of a term may well exist alongside a much more common specialized usage. You might, for instance find that out of 1000 usages of “Home run” 900 were in stories about baseball while only 100 referred to the more general usage meaning “scoring a knockout point”. This doesn’t mean that someone who tells me I hit a homerun against you is showing that s/he was once a baseball player.

21. TCO

Steve, I hope that response was not to me. I don’t know what a lowess is. It seems like you could still boil the gist of the story down.

22. TCO

P.s. you must be evil since you used the word aftermarket support. although the reason for evilness is actually quite lost on me. Maybe you are the Maytag man with too much time on hands?

23. Ed Snack

Oh dear, I thought, John Hunter, that your complaint could only make sense if you didn’t understand the phrase “after-market support”. Otherwise it seemed, well, kind of petty.

Note that this attack on the use of the phrase comes from someone who wrote in another thread “one of the more tiresome aspects of the contrarian canon is the resort to “logical fallacies” and the like. We do not live in a binary world, where something is either true or not true. There are many shades of truth, clouded by uncertainty. You can play your erudite “logical” games, but leave me out of it.”

I suppose that in this case the word erudite can definitely be excluded!

24. TCO

C’mon Ed. Quit picking on him. He’s outnumbered and it’s turning into a bout.

25. John Hunter

Steve, Dave, Ed (any number of postings): I think it was quite reasonable to point out, in a single sentence, an example of Steve’s quite common usage of terms that come from, and are predominantly used by, the business community. Does Steve want to deny that he uses terms like “promotion” to describe something that I would probably call an “article”? It is not surprising — he comes from the business community, and I think it is worthwhile occasionally reminding people of that (when Steve provides the stimulus). I, for one, don’t think I have ever used the term “after-market support” (which doesn’t mean that I don’t know what it means). I can’t remember ever hearing a scientist (or any of my friends) use this term. And I think I showed how few people on the Internet actually use the term in the same passage as they use the term “global warming” (of the 16 examples I found, only one used the term in a way that could be remotely summarised as “follow-up discussion of a subject to bring everyone up to speed”, and in this case it had nothing to do with global warming).

In the context of a web site devoted to issues related to the science of climate change, it was an unusual thing to say.

26. John Hunter

TCO (#24): It’s O.K., let them crow — they seem incapable of discussing the technical issues of signal processing.

27. Steve McIntyre

There are a number of business terms that I consistently use: due diligence, disclosure, and full, true and plain disclosure, to name a few. I suspect that Hunter’s comment: "I can’t remember ever hearing a scientist (or any of my friends) use this term" would probably apply to the terms "due diligence" and "full true and plain disclosure" as well.

Wouldn't the same be true of the terms "due diligence" and "full true and plain disclosure" in Hunter's next sentence ("And I think I showed how few people on the Internet actually use the term in the same passage as they use the term “global warming”"), once you subtract references derivative from us?

28. John Hunter

Steve (#27): Thank you Steve — that is the only point I was making — we come from different backgrounds and use different terminology. So, I would not use the term “due diligence” — I would probably simply say “care”. Instead of “full true and plain disclosure” I might use “honesty” or “openness”. Instead of “audit”, I would use “review” or “check”. However, I think the important point is that, just as we use different terminology in different fields of endeavour, we also do things in rather different ways. While we sometimes learn useful and better ways of doing things from other fields, it is not necessarily true that one way is better than another — the context is important. In the long run, I see no clear evidence that your attempts to weed out errors by an almost formal auditing of historic papers should produce any better science than the methods by which science has traditionally progressed — and “your” method may well be extremely inefficient (in everybody's time, not just yours), narrowly focussed and pretty destructive to the process of lateral thinking which is required for scientific progress.

29. TCO

Interesting. I think there is a place for each. In some ways, it may be very useful for science as a field to have people like Steve. I know that in crystallography there is a fisker who basically raised everyone's game by his activities. One reason that we don't see more of this is that there is less incentive for scientists to fisk each other, since what they get real props for is discovery itself. Certainly, we do need that activity to occur. But I would say someone like Steve, with a different motivation, can be in some ways useful to the field, rather than just having people with the typical grant/promotion incentives.

30. TCO

I mean you will always get more props for being Richard Leakey than for exposing Piltdown Man. But exposing Piltdown Man is still a worthwhile activity, if someone so chooses.

31. Steve McIntyre

I have not advocated formal audit systems for science.

I do advocate that paleoclimatologists should archive their data and source code on an ongoing basis if their results are to be used in IPCC studies and/or if they received U.S. federal funding. I think that the NSF should make a concerted effort to ensure compliance with past obligations which have not been met by paleoclimatologists.

I think that there is a real issue about what type of due diligence is appropriate for IPCC and whether the current system is adequate, despite huffing and puffing that entire stadiums of scientists have reviewed the report. This is a much narrower issue than science processes in general. I try to work from the particular as much as possible.

“Due diligence” is a term that is related to “care”, but has fairly specific meaning. “Full true and plain disclosure” also has specific meanings that are related to “honesty” or “openness”. There’s a reason why technical meanings have developed in law, simply because there are a wide variety of circumstances that have to be dealt with.

A particular viewpoint of mine is that the IPCC TAR should meet prospectus-like disclosure, in the sense that the protestations about how wonderful the IPCC review is should certainly imply that its disclosure standards far exceed those applicable to mining promotions. We don't have much experience in disclosure standards for international scientific prospectuses, but there is lots of social experience with business prospectuses. So it seems useful to apply these standards as a minimum.

I have set out particular issues of disclosure involving MBH because it seems to me that there is a real issue of whether their disclosure met even mining promotion standards. If it didn’t, then I can see only two alternatives: scientific disclosure standards need to be upgraded so that they are no lower than mining promoters (this could be done with minimal impact on scientific processes); or MBH failed to meet applicable scientific disclosure standards with whatever that implies.

32. TCO

Prospectus is a sales document. You can’t lie, but you can spin. Due diligence is a different beast, but even then…caveat emptor.

33. Steve McIntyre

Of course, but it imposes standards and obligations on the issuers. I had an exchange with Andrew Weaver at Journal of Climate about “full true and plain disclosure”, which is a long story, but it left me convinced that some climate scientists did not understand the difference between “full true and plain disclosure”, with its concomitant obligation of disclosure of material adverse results, and “believing that what they said was true”. This is undoubtedly more of a problem in climate science, with all its political edge, than it would be in other areas.

34. TCO

I really hope you are not holding Canadian mining prospectuses out as a standard. Mann is at least a couple of notches higher than that (despite his flaws).

35. fFreddy

Re #34
It is not so much the market for Canadian mining stocks, as the process applicable to raising finance, and the consequences for clear violations of those standards.
I assume you are implying that Canadian mining prospectuses are prone to being dodgy: if so, presumably any dodgy promoter can be sued for failing to meet those standards. This provides a risk for him to weigh in the balance against his expected return from issuing a dodgy prospectus.
One of the big problems with the climate alarmists is that there is no real downside for them in selecting virtually any example of extreme weather and citing it as an example of AGW. Given that there is unlikely to be conclusive proof one way or the other for decades, and given that there are considerable short-term benefits to making a big noise that the sky is falling, the economically rational thing to do is to join the party.
The effect is a strong force perverting the scientific process that we all learned at school. Rather worrying.

36. TCO

They are notably prone to dodginess.

37. fFreddy

Noted. You take the point about process?

38. Steve McIntyre

I will certainly put my knowledge of Canadian mining prospectuses up against any reader of this blog. You have to differentiate a “prospectus” from promotion by a stock broker or boiler house. When you issue a prospectus, you have to sign an affidavit that you’ve made full, true and plain disclosure. It’s an affidavit that promoters pay attention to because they incur legal liability. You can raise money without a prospectus through private placements; Bre-X didn’t issue a prospectus during its run. There are still ongoing obligations of full, true and plain disclosure for public companies. A full true and plain disclosure does not prevent frauds, but it at least gives a standard for going back. A former premier of Ontario has just been through a prolonged securities investigation because something was left out of a prospectus while he was a director of the company. I can think of no arguments against full, true and plain disclosure. I can think of no justification for withholding information about R2 and bristlecones under a full, true and plain disclosure regime.

39. Greg F

Re#12

… is just plain wrong (well, four of the five sentences are incorrect).

And my mistake was assuming the filter kernel length remained constant. The first two sentences are correct under any circumstances.

1) The frequency response of any digital filter is directly related to the sampling frequency.

In fact most programs used to generate filter coefficients are based on a sampling frequency of 1 and a cutoff frequency between 0 and 0.5 (of the sampling frequency). To get back to the real answer you simply scale it.

2) If you change the sample rate you change the filter response in real terms.

A filter of 250 Hz for a sampling frequency of 1000 Hz would be a normalized cutoff frequency of 0.25 (250/1000). If I use the same filter, but the sampling frequency is increased to 2000 Hz, the cutoff frequency remains 0.25 of the sampling frequency but increases to 500 Hz (0.25*2000) in real terms.

That is 3 wrong out of 5. As I said before my mistake was assuming the filter length remained constant.

3, 4, 5) Looking at the corrected values I see the sample period goes from 20 to 10 to 5 years. At each transition point the filter's real response will double in frequency. The frequency response of the filter at the end of the data (5-year period) will be four times higher than it was at the beginning (20-year period).

I have no problem admitting my mistakes, you should try it sometime John.

40. TCO

Fred, yes, yes, yes. Where did you think I didn't? I'm not THAT dumb. I was just making an additional comment of amusement. It's like, I don't know, someone talking about the importance of the rule of law in Anglo countries and referring to OJ Simpson.

Steve: (1) What I said before. My comment was more based on the general dodgy reputation of Canadian mining stocks and the connotation. It was just a little tease. You pumper-dumper. (still kidding!) (2) I think about a corporate divestiture document to private parties when I use the word prospectus. Which of course has legal ramifications if you lie, but is certainly a bit of a sales document and lacks the detail of due diligence done by the acquiring party. I mean it is a very different document than say “the team room”. And it has caveats about numbers being approximate and such. I think you have public offering in your mind, no? (and we are both right).

41. Ed Snack

TCO, sure, but a bit facile, no? Maybe one difference is that with a mining prospectus (even a Canadian one) that turns out to be a bit suspect, you have a legal channel (or channels, maybe) to seek redress, and to gain access to the data that was, say, omitted. It may be too late or no longer of use, but you do have some “rights”. With MBH, there is missing information that is perhaps critical to the acceptance of the reconstruction, but how can one get hold of it if M, B & H refuse to provide it? If not for Rep. Barton, I suspect nothing else would ever have been released. Even now, with several “smoking guns”, the mainstream is trying very hard to ignore the evidence, hoping, one suspects, that Steve will just lose interest and go away.

Or using your concept of a divestiture document, again if you wish to proceed, you have enforceable means of obtaining information, and an ability to apply sanctions should you not get it or if false or misleading information is provided. No analogy is perfect, but I for one certainly see the validity of Steve’s comparison.

42. John Hunter

Greg F (#39): Unfortunately, you seem rather muddled over the filtering problem. Of course your “mistake was assuming the filter kernel length remained constant” — in the case which this thread addresses, the kernel length IS constant (it is either 1000 or 2000 years), so the filter response DOES NOT change when the sampling interval changes. So you continue to make the same mistake by saying:

“At each transition point the filter's real response will double in frequency. The frequency response of the filter at the end of the data (5-year period) will be four times higher than it was at the beginning (20-year period).”

– that statement is WRONG.

You finish with: “I have no problem admitting my mistakes, you should try it sometime John.”

Another irritant of the contrarians is to sign off with a completely unsupported claim that the other person has been making mistakes. So, Greg, please point out one that I have made (other than the infamous “global/local” typo which I admitted to long ago!).

43. Greg F

So, Greg, please point out one that I have made (other than the infamous “global/local” typo which I admitted to long ago!).

44. John Hunter

Greg F (#42 and #43): Sorry, I thought I made myself clear — I asked you to point out one mistake I made — not a URL containing a protracted discussion.

And perhaps you could now also withdraw your claim that “At each transition point the filter's real response will double in frequency. The frequency response of the filter at the end of the data (5-year period) will be four times higher than it was at the beginning (20-year period).”

45. Steve McIntyre

John H. and Greg. F, please stick to the issue at hand rather than trolling through past errors and supposed errors. I am refraining from my own editorializing on the issue of Hunter errors.

46. Greg F

Greg F (#42 and #43): Sorry, I thought I made myself clear — I asked you to point out one mistake I made — not a URL containing a protracted discussion.

Gee Johnny … I didn’t know you were a lawyer too! Of course your mistake is front and center in that discussion. And so is your arrogance for that matter.

47. John Hunter

Steve (#45): I WAS trying to stick to the issue in hand. I simply wanted Greg to withdraw his claim (howler?) that suggested that there was something wrong in the way the IntCal04 smoothing was carried out. It would also be nice if this site refrained from propagating unsubstantiated claims of errors by other people. You were quite free to censor these claims by Greg but you apparently chose not to.

Steve: No that’s not what you were doing. I commented to both of you to stop incipient flaming.

48. John Hunter