Some Data and Scripts for BS09

The controversy between Benestad and Schmidt on the one hand and Scafetta and West on the other is a typical climate science dispute, in that neither party has provided scripts evidencing its analysis. [Note: BS09 provides URLs for the data versions used; SW do not.]

Scafetta and West provide a plausible criticism of BS09 – that they used cyclic padding in their wavelet analysis – but Schmidt says that this doesn’t “matter”. Perhaps it does, perhaps it doesn’t. The Team never admits that errors “matter” (even when they do) so third parties may be forgiven for taking Schmidt’s recent claim of not mattering with a grain of salt.

In order to arrive at an informed opinion – as opposed to being swayed by the rhetoric on either side – one has to look at the data and carry out the wavelet analysis as described by the authors. Most people aren’t interested enough in the dispute to try to figure out what they did and thus will simply agree with the rhetoric of the side that they prefer.

I requested scripts from both Schmidt and Scafetta without success. Schmidt did not respond to my email. Scafetta replied but was busy on other matters for a while.

As it happens, the wavelet package used in BS09 was the same package (waveslim) as we used in our tree ring simulations in MM2005(GRL). Indeed, Schmidt used the same wavelet ("la8") as we used. I’ve downloaded solar data in the past. So, in the absence of scripts from either party of climate scientists, I thought that I might be able to get a foothold on the data fairly quickly. It turned out not to be quite as quick as I planned, but I’ve made my foothold available here for others who may be interested in the dispute.

To simplify access to these materials, I’ve placed some tools online (www.climateaudit.org/scripts/solar) and have carried out my own wavelet analysis, obtaining a result that appears intermediate between Schmidt and Scafetta. It turns out that there is a pretty obvious criticism of the Scafetta analysis that Schmidt didn’t make – perhaps because we criticized them for this in Santer et al 2008.

First, here is the original Scafetta wavelet smooth of the solar data. Their data set splices Lean 1995 and ACRIM, applying a vertical offset to Lean so that the two series match in 1980.

The period 1900–1980 is covered by the TSI proxy reconstruction by Lean et al. [1995] that has been adjusted by means of a vertical shift to match the two TSI satellite composites in the year 1980.


Figure 1. From Scafetta and West 2006.

Here’s a script to get ACRIM, PMOD and Lean data and then splice them according to the procedure of Scafetta and West. [Update: the procedure here is that of Scafetta and West (GRL, March 2006); the procedure of Scafetta and West (GRL 2007) is a little different, centering on 1980-1991.]

#Get Solar Data (making Acrim and pmod into monthly series)
source("http://data.climateaudit.org/scripts/solar/collation.functions.solar.txt")
pmod=get.solar(dset="pmod")
acrim=get.solar(dset="acrim")
lean=get.solar(dset="lean95");tsp(lean) #1600 1999

#Make annual averages of monthly information
acrim.annual= annavg(acrim)
pmod.annual= annavg(pmod)

#Do splice a la Scafetta-West 2006
Solar=ts.union(window(lean,start=1817),acrim.annual,pmod.annual)
Solar=data.frame(year=time(Solar),Solar)
names(Solar)=c("year","lean","acrim","pmod")
(delta.acrim= Solar$acrim[1980-1816]-Solar$lean[1980-1816]) # -1.476112
(delta.pmod= Solar$pmod[1980-1816]-Solar$lean[1980-1816]) # -1.476112
#SW centers on 1980 a la SW06, slightly different centering in SW07
temp= Solar$year>=1980
Solar$acrim.splice=Solar$lean+delta.acrim
Solar$acrim.splice[temp]=Solar$acrim[temp]
Solar$pmod.splice=Solar$lean+delta.pmod
Solar$pmod.splice[temp]=Solar$pmod[temp]
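The logic of this splice – shift the proxy series vertically so that it matches the satellite composite in the splice year, then use satellite values from 1980 on – can be sketched in Python with made-up numbers. This is only an illustration of the procedure; all values below are invented, not the actual Lean or ACRIM data.

```python
import numpy as np

# Toy stand-ins for the Lean proxy and the ACRIM satellite composite.
# All values here are invented for illustration.
years = np.arange(1975, 1986)
proxy = np.array([1365.0, 1365.2, 1365.5, 1365.9, 1366.2,
                  1366.4, 1366.3, 1366.0, 1365.7, 1365.4, 1365.2])
satellite = np.full(len(years), np.nan)
satellite[years >= 1980] = [1367.9, 1367.8, 1367.5, 1367.2, 1366.9, 1366.7]

# Vertical shift so the proxy matches the satellite series in the splice year
i1980 = np.where(years == 1980)[0][0]
delta = satellite[i1980] - proxy[i1980]

splice = proxy + delta                            # shifted proxy before 1980 ...
splice[years >= 1980] = satellite[years >= 1980]  # ... satellite values from 1980 on
```

The spliced series is continuous at 1980 by construction, which is the point of the vertical-shift step.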

Next, here is an updated function that extracts the wavelet decomposition using reflection at the boundary, instead of the default periodic padding. [Aug 10: This use of the option within mra replaces an awkward patch in yesterday's version, in which I inserted a long reflection pad, used the default on the padded version and truncated back. The old script is retained in the scripts directory for people desperate to see an awkward programming decision.]

#this pads the series by reflection
wavelet.decomposition<-function(x,wf0="la8") {
N<-length(x)
#steps to interpolate missing data not relevant here
y=x
temp<-!is.na(y)
ybar<-mean(y,na.rm=TRUE)
y<-y[temp]-ybar
N1<-length(y)
J0<-trunc(log(N1,2))
mod.y<-mra(y,wf0,J0,"modwt",boundary="reflection")
names(mod.y) #[1] "D1" "D2" "D3" "D4" "D5" "D6" "D7" "D8" "S8"
test<-mod.y[[1]]
for (i in 2:(J0+1)) test<-cbind(test,mod.y[[i]])
dimnames(test)[[2]]=names(mod.y)
return(test)
}
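The difference between cyclic (periodic) padding and reflection can be seen in a toy example. This is a generic Python illustration of the two padding schemes, not the waveslim internals: cyclic padding wraps the end of a trending series back to its start, creating an artificial jump, while reflection mirrors the last values.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # a trending series

pad = 3
periodic = np.concatenate([x, x[:pad]])           # cyclic: wraps around to the start
reflected = np.concatenate([x, x[-2::-1][:pad]])  # reflection: mirrors the end

# Cyclic padding follows the last value 5.0 with 1.0 (a jump of 4),
# while reflection follows it with 4.0 (a step of 1), preserving local behaviour.
jump_periodic = abs(periodic[len(x)] - x[-1])
jump_reflected = abs(reflected[len(x)] - x[-1])
```

Any smoother whose window extends past the end of the data will mix these padded values into the endpoint estimate, which is why the boundary choice matters there.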

Now we’ll do a wavelet decomposition with padding and plot the results.

temp=(1881:2008)-Solar$year[1]+1
model=wavelet.decomposition(Solar$acrim.splice[temp],wf0="la8")
(ybar=mean(Solar$acrim.splice[temp])) # 1365.152
dim(model) #[1] 256 8
decomp=cbind(R2=apply(model[,c("D1","D2")],1,sum), D3=model[,"D3"],D4=model[,"D4"],S4=apply(model[,5:ncol(model)],1,sum) )
par(mar=c(3,4,2,1))
plot(1881:2008, Solar$acrim.splice[temp],type="l", ylab="Irradiance (wm-2)")
title("ACRIM Irradiance")
lines(1881:2008,decomp[,"S4"]+ybar,col=2)
mtext(side=1,"ACRIM Spliced with Lean 1995 per Scafetta-West 2006",cex=.7,line=1.7)

This yields the following graphic.

Figure 2. Emulation of SW06 using up-to-date data

As you can see, the wavelet smooth dips down towards the end (whereas the corresponding smooth in SW06 doesn’t) but not quite as much as BS09.

The explanation is interesting, as is the probable reason why Schmidt didn’t comment on it.

I’ve marked the year 2000 with a dotted line here. Whereas Santer (Schmidt) et al 2008 used data ending in 1999, Scafetta and West’s diagram ends in 2000, although solar data has obviously been available since then. [Lucia emailed me that the Scafetta data ends in 2002 – the point is still the same.]

Making an S4 smooth with padded values based on a 2000 (or 2002) endpoint creates an uptick, whereas using actual values through to 2008 yields a downtick, though perhaps not as big a downtick as cyclic padding produces. However, the downturn since 2000 (or 2002) has been substantial enough to mitigate much of the error introduced by the incorrect cyclic padding procedure. So some of Scafetta’s victory here may be a bit rhetorical, as the error may not actually “matter” as much as one would have thought.

Note that Rahmstorf’s linear padding or Mannian padding would have resulted in the same sort of problem. This is actually a pretty interesting example of padding impact. I’ve criticized the use of these sorts of smoothed series in regression analyses elsewhere. I haven’t waded further through the articles to see what exactly they did downstream of this point, but, if they used smoothed series in regression analysis, what’s sauce for the goose is sauce for the gander.
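The endpoint effect can be sketched with a simple centered moving average standing in for the wavelet smooth – an assumption, in that only the qualitative endpoint behaviour is meant to carry over, not the actual SW06 or Rahmstorf algorithms. Reflecting at a cycle maximum holds the smoothed endpoint high, including the subsequent decline pulls it down, and linear extrapolation of the trailing trend (a rough stand-in for Rahm-style padding) overshoots upward at a maximum.

```python
import numpy as np

def smooth_end(x, w=5):
    """Centered moving average at the last data point, padding past
    the end by reflection (a crude stand-in for a wavelet smooth)."""
    pad = x[-2::-1][: w // 2]
    xp = np.concatenate([x, pad])
    return xp[len(x) - w // 2 - 1 : len(x) + w // 2].mean()

def smooth_end_linear(x, w=5):
    """Same moving average, but pad by linear extrapolation of the
    trailing trend (a rough stand-in for Rahmstorf-style padding)."""
    slope = (x[-1] - x[-w]) / (w - 1)
    pad = x[-1] + slope * np.arange(1, w // 2 + 1)
    xp = np.concatenate([x, pad])
    return xp[len(x) - w // 2 - 1 : len(x) + w // 2].mean()

t = np.arange(40)
cycle = np.sin(2 * np.pi * t / 22)  # toy 22-"year" cycle

at_peak = cycle[:7]      # series truncated near the cycle maximum
in_decline = cycle[:12]  # series truncated while the cycle is declining

# Reflection at the maximum keeps the endpoint smooth high (an "uptick");
# including the decline drags it down; linear padding at the maximum overshoots.
```

With these toy series, the smoothed endpoint stays near the maximum when the data is cut at the peak but falls well below it once the decline is included – the same qualitative effect as moving the endpoint from 2002 to 2008.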

The Dog That Didn’t Bark
Note that it was open to Schmidt to observe that Scafetta-West had used an obsolete data set ending in 2000 (or 2002), but he didn’t do so. Why?

My guess is that we recently criticized Santer (Schmidt) et al 2008 in a comment submitted to IJC for exactly the same thing – using obsolete data ending in 1999. Obviously if Gavin argued the point here, it would be used against him in reviving our comment on Santer et al 2008. So instead, he preferred to apply an incorrect procedure. But hey, it’s climate science.

UPDATE Aug 10: There’s an easy way of doing a boundary reflection in the mra algorithm: specify boundary=”reflection”. In yesterday’s script, I added a long reflection pad while still using the default (an awkward patch reminiscent of Mann’s butterworth padding). This patched the problem, but using the right option is easier and has been implemented in the above code. Here’s the difference between the results with an endpoint in 2002 (which I believe was used in SW06) and an endpoint in 2008. Nicola Scafetta has written in comments below that reflection in 2002 is “right” and reflection in 2008 is an “error”. I haven’t reflected on these comments yet and, at this stage, merely show the difference. For comparison, I’ve also shown the effect of Rahmstorf smoothing based on a 2002 endpoint.

Figure 3. Showing impact of different end points and Rahm-smoothing.

153 Comments

  1. Steve McIntyre
    Posted Aug 9, 2009 at 10:35 AM | Permalink

    The script for the post is at http://www.climateaudit.org/scripts/solar/ca_6774.txt

  2. Steve McIntyre
    Posted Aug 9, 2009 at 11:00 AM | Permalink

    The problem that I have with regression analyses involving the 20th century temperature record is that a lot turns on things like bucket adjustments.

In fact, one of the reasons why I looked at temperature data in the first place was to try to get these adjustments into a place where they were visible and could be controlled for in such analyses. Needless to say, that effort turned into a bit of a black hole in trying to get fundamental building blocks like CRU data.

  3. Bill Drissel
    Posted Aug 9, 2009 at 11:01 AM | Permalink

    “Scafetta replied but was busy on other matters” How could anyone be too busy to send you the URL of a CVS repository? No such repository? Could any of this be the basis of important decisions?

    I put whole directory chains into my repository by typing,

    .
cvs commit -m "XXX changes"
    .

    This, in a rather informal, small company. Do any of these “climate” “scientists” use professional development techniques?

    Regards,
    Bill Drissel

Steve: Not that I can tell. I’m pretty sure that he’ll have some code sitting somewhere that hasn’t been properly documented. Gavin Schmidt will be exactly the same. They’ll both be in exactly the same situation. The problem is of course that as time passes, climate scientists lose data, code and even confidentiality agreements. And then blame whoever has the temerity to ask.

    • Posted Aug 9, 2009 at 11:14 AM | Permalink

      Re: Bill Drissel (#3),
Bill, to you (and any other professional developer) the code is the most important bit of the work; for a climate scientist it’s the publication. Only the result matters.

  4. Kenneth Fritsch
    Posted Aug 9, 2009 at 11:07 AM | Permalink

    This is actually a pretty interesting example of padding impact. I’ve criticized the use of these sorts of smoothed series in regression analyses elsewhere. I haven’t waded further through the articles to see what exactly they did downstream of this point, but, if they used smoothed series in regression analysis, what’s sauce for the goose is sauce for the gander.

I think we have seen of late more of this apparent “going a bridge too far” to obtain improved regression results, both from papers tending to show the positive effects of GHG on GW and from those that purport to show other effects. Climate science, in general and regardless of POV, would appear to lack the necessary statistical rigor in too many published papers. I find it satisfying to observe an analysis rip apart the methods in published papers regardless of where the conclusion may point.

    The point of this thread may be that climate science requires outside analyses and audits in order to avoid the conundrum noted by Steve M in the thread introduction of a self incriminating criticism.

    • Steve McIntyre
      Posted Aug 9, 2009 at 11:13 AM | Permalink

      Re: Kenneth Fritsch (#4),

      I’ve never suggested that “audits” be carried out as part of journal peer review. People don’t want to spend the time. What journals can and should do is require authors to archive code and data as used as a condition of publication so that people with an interest in any article can quickly retrace the steps of the authors, instead of creating obstacles.

      This might have the benefit of improving quality as well.

      • Kenneth Fritsch
        Posted Aug 9, 2009 at 2:18 PM | Permalink

        Re: Steve McIntyre (#5),

        I’ve never suggested that “audits” be carried out as part of journal peer review. People don’t want to spend the time. What journals can and should do is require authors to archive code and data as used as a condition of publication so that people with an interest in any article can quickly retrace the steps of the authors, instead of creating obstacles.

        This might have the benefit of improving quality as well.

        I was not suggesting that blog analyses or audits are any part of the peer review process nor should they be part of it.

        What I do see from some climate scientists is an appeal to the authority of (consensus if you will) peer-review process (RC/IPCC) and then when something gets published by the system with which they disagree they note the weaknesses in the process that will let such a piece of garbage through it.

Analysis and audits on blogs are, as I see them, a means for thinking people to learn outside of a process that otherwise can be limiting to stifling. From observations of some scientists’ performances on blogs, I can see where perhaps they would prefer the shelter of peer review.

      • steven mosher
        Posted Aug 9, 2009 at 4:17 PM | Permalink

        Re: Steve McIntyre (#5),

        This is especially true if there is a time limit on getting responses in.

  5. nicola scafetta
    Posted Aug 9, 2009 at 11:14 AM | Permalink

    Steve,

    your calculations are erroneous.

    1) the solar maximum is in 2002, not 2000.

    2) I will not tell you. Think more :)

    ciao

    nicola

    • Steve McIntyre
      Posted Aug 9, 2009 at 11:25 AM | Permalink

      Re: nicola scafetta (#7),

      Nicola, I do not purport to be an authority on solar data. I’ve objected to climate science articles by the Team for not providing source code and data and I can hardly not make the same objection when you do the same.

      If I’ve done something wrong, I apologize. I’ve provided source code to show these calculations and would be happy to correct any error.

      I’d be delighted to post your code, showing how the calculation should be done properly if I’ve erred somewhere as may well be possible.

  6. nicola scafetta
    Posted Aug 9, 2009 at 11:35 AM | Permalink

    Steve,

    I was joking. :)

    The reason why I do not want to tell you the solution is because I want to show that if somebody (you or your readers) thinks a little bit he can find the solution by himself because it is very very simple.

    So, let us take it as a summer math problem. Let us see if somebody find the solution and explain why, OK?

    ciao, ciao
    nicola

    • Steve McIntyre
      Posted Aug 9, 2009 at 11:39 AM | Permalink

      Re: nicola scafetta (#9),

      Nicola, while the problem may be very interesting to you, it’s only marginally interesting to me. I’m covering a lot of topics here and am not personally interested in such a game. Particularly if it’s got something to do with phase displacement in wavelets or something like that.

    • steven mosher
      Posted Aug 9, 2009 at 4:22 PM | Permalink

      Re: nicola scafetta (#9),

      If you posted your code, we would not have to waste time on silly games. In the time it takes you to read the post,
      make your comments, etc, you could have just posted the code.

  7. Steve McIntyre
    Posted Aug 9, 2009 at 11:35 AM | Permalink

    Here is a plot of acrim data used here. The data was allocated to months and then annual averages taken as in the script placed online.

  8. Andrew
    Posted Aug 9, 2009 at 11:39 AM | Permalink

    By the way Steve, what is the latest on your submission? Have IJC found a reviewer at least?

    • Steve McIntyre
      Posted Aug 9, 2009 at 11:40 AM | Permalink

      Re: Andrew (#11),

We haven’t resubmitted, but I spent some time updating it this weekend and have promised Ross that we’ll get it out before I go away on Aug 15.

  9. nicola scafetta
    Posted Aug 9, 2009 at 11:57 AM | Permalink

    Steve, if you are not interested in games, do not play :)

    There is a severe error in math in your calculations

    If you are not interested in finding your own errors, do not spend time trying to find errors made by others. :)

    ciao,
    nicola

    • Steve McIntyre
      Posted Aug 9, 2009 at 12:13 PM | Permalink

      Re: nicola scafetta (#14),

      I place scripts online so that people can identify any errors right away rather than wasting time trying to figure out what I did. I don’t claim to be perfect. The purpose of providing people with scripts (audit trails) is that errors can be quickly identified.

      I take considerable pride in owning up to errors and promptly dealing with them. IF there’s something wrong with the calculation, I’d be happy to correct it. I seldom use wavelets and have not used them in four years. As I said above, if I’ve made some error, I’d be happy to have someone point it out so that I can correct it. But my grandkids are coming over so I’ll be offline for most of the day.

    • Pat Frank
      Posted Aug 9, 2009 at 12:24 PM | Permalink

      Re: nicola scafetta (#14), easy to say, Nicola. Scientific integrity demands that you demonstrate your claim. If Steve made an error, it’s your responsibility to demonstrate it. Empty claims, such as yours, are mere posturing. As an experimental scientist myself, I find your coy accusations inexcusable.

    • MikeU
      Posted Aug 9, 2009 at 12:27 PM | Permalink

      Re: nicola scafetta (#14),

      When you visit someone else’s home, you don’t get to set the rules, or require that the host plays your games. If you have a valid critique to make, do so. Is your goal to promote understanding, or to just twist the knife in those who make some criticism of your analysis? Explaining the “severe error”, then posting links to your code and data would definitely help promote understanding.

  10. Antonio San
    Posted Aug 9, 2009 at 12:04 PM | Permalink

Clearly SW and BS deserve each other…

  11. benpal
    Posted Aug 9, 2009 at 12:23 PM | Permalink

    I’m rather puzzled by the hide-and-seek game some nicola scafetta (the real one?) seems to be playing. Is it just a way to cover up for not wanting to release his source code?

  12. nicola scafetta
    Posted Aug 9, 2009 at 12:23 PM | Permalink

    Ok, Steve

    let us see if somebody find the error.

    It is just a game, for me too.

    Let us let who wants to play to play a little bit, ok? :)

    have a great Sunday
    ciao,
    nicola

    • Steve McIntyre
      Posted Aug 9, 2009 at 12:29 PM | Permalink

      Re: nicola scafetta (#18),

      OK, I sometimes give puzzles to readers. So let’s give Nicola a little leeway on this on a Sunday afternoon.

      But I answer my little puzzles by the next day. And this little game had better have a short fuse as well. Meanwhile, anyone that wants to spot my “severe error” has access to turnkey scripts placed online (while you have to run the gauntlet of decoding BS09 and/or SW06a,b, 07 to do the same thing there and even then you don’t know whether you implemented things wrong or not.)

  13. Fred
    Posted Aug 9, 2009 at 12:24 PM | Permalink

    climate science games by nicola.

    How boring.

  14. Steve McIntyre
    Posted Aug 9, 2009 at 12:33 PM | Permalink

    I changed 1999 to 2000 in this post for the apparent end date of the SW data. Lucia’s emailed me to say that it was 2002 – though I can’t see where this is said. (I’ve added this in brackets.) The point remains with any of the three end dates.

  15. nicola scafetta
    Posted Aug 9, 2009 at 12:41 PM | Permalink

    Ok, Steve

    I solve the puzzle tomorrow

    ciao,
    nicola

  16. Graeme Strathdee
    Posted Aug 9, 2009 at 1:44 PM | Permalink

    If I was a referee for this current exchange, I would apply the Ice Hockey rule book, given the importance of the hockey stick to climate change competitions. What should I call, and for whom?:
    Faceoff Interference
    High sticking
    Elbowing
    Delay of Game
    Misconduct
    Game Misconduct
    Gross Misconduct

  17. Steve McIntyre
    Posted Aug 9, 2009 at 4:12 PM | Permalink

    For some prior consideration of IPCC handling of solar-temperature correlations – see http://www.climateaudit.org/?p=1079 which references George Reid studies on this topic in the late 1980s and early 1990s.

Reid’s earlier studies use SST versions before HadCRU developed the Pearl Harbor bucket adjustment. The Pearl Harbor bucket adjustment screwed up Reid’s correlations. The Pearl Harbor bucket adjustment always looked screwy to me, and Thompson et al (Nature) advocated a position on WW2 bucket adjustments that was similar to one previously stated here.

    Gradual implementation of the bucket adjustment – as opposed to an overnight implementation – moves the 1940s bump in temperature to the right, synchronizing much better with the TSI charts.

    So before people get too excited about these correlations one way or the other, they need to keep in mind that they are strongly affected by the bucket adjustment thingee.

  18. RoyFOMR
    Posted Aug 9, 2009 at 4:17 PM | Permalink

    Nicola,
    It’s nice that you came here and added your two cents worth. What’s not so admirable is that you seem to treat this as one big joke. Tell that to the guy or gal who may have to sacrifice their job because of a Science that is settled only because they are told it is by a joker with a big smile on his face.
    Shame Nicola. Shame!

  19. Posted Aug 9, 2009 at 5:13 PM | Permalink

    If it weren’t for the math I’d guess most of these people were 7.

  20. RoyFOMR
    Posted Aug 9, 2009 at 5:51 PM | Permalink

    Max
    August 9th, 2009 at 5:13 pm
    If it weren’t for the math I’d guess most of these people were 7.

    Love your honesty Max. A guess doesn’t need data or method to back it up – a bit like the Team but without your transparency! I guess that with the math- most of these people are more senior than you may have thought- but that’s my guess!

  21. David Cauthen
    Posted Aug 9, 2009 at 6:24 PM | Permalink

    Seems that Nicola is not “busy on other matters.”

  22. Posted Aug 9, 2009 at 6:31 PM | Permalink

For the layman lurker just trying to get honest answers, there is enough reading without the drama of watching the world’s minds play hide the code.

  23. nicola scafetta
    Posted Aug 9, 2009 at 7:17 PM | Permalink

    Ok,

    Once one becomes familiar with the wavelet decomposition methodology MODWT and its mathematical properties, s/he needs only to carefully follow the instructions detailed in our papers [Scafetta and West, 2005, 2006a]. This requires a reading of our papers :).

    The important ingredients are:

    1) the re-sampling of the data in such a way to center the wavelet band pass filters exactly on the 11 and 22 year solar cycles.

    2) the choice of the year when the reflection padding is applied, that is, the year 2002-3 when the sun experienced a maximum for both the 11 and 22 year cycles;

    Both information are clearly written in Scafetta and West [2005]: for example, look at the captions of all figures where the limit 2002 is clearly stated several times and in the text, and for the re-sampling instructions look at paragraph # 13 in SW05 or in paragraph #8 in SW06a. These paragraphs are quite extended and extremely clear.

    Note that the our papers do not explicitly reports about the “reflection padding” procedure because this is considered a quite standard procedure in time series analysis with wavelets. The limit in 2002-3 may be fine because the algorithm may be optimized if there is continuity in the borders both in the data and in the first derivative for the frequency of interest; this is approximately accomplished in 2002-3 because the 22 year magnetic solar cycle has a maximum during this period. By using the above procedure the end point error is minimized because both the 11 and 22 year cycles, which are the one of interest herein, would not be significantly disrupted by the reflection padding.

    But the most important point is (a): the “centering” of the wavelet band pass filter that is what Steve did not do. You can use also 2004 for the limit year, but 2008 may be problematic because the first derivative of the 22 year cycle is too disrupted, in any case in 2005 I had data up to 2004 not 2008. I hope nobody blames me for not having used in 2005 the data up to 2008!

    Steve also should adjust a little bit the position of the ACRIM data that should be moved on the right by 6 months if he wants to compare with our figure.

    Also his way to merge ACRIM with Lean may be not exactly the same than what I did, Also ACRIM data Steve used may be slightly different from what I used because they were a little bit corrected after 2006. Finally, I do not know if the program Steve uses works like mine. My program works because I tested it, I do not know about R algorithms.

    Moreover, the important thing is when the components are transformed and recomposed, some of the ambiguity at the end point disappears. So, what Steve plots is not the most important data graph.

    When Steve has some time, he may try to implement the above algorithm. Then, we can discuss his new figures better.

    In any case, the major problems with BS09 is not in the mathematical errors that are of a secondary importance, everybody can make some mistake. The major problem is in the physics of our papers that they misinterpreted and misreported, and in the tone they use against us.

    Moreover, it is very important to realize that since 2006b we have discussed the limitations of our previous two works [2005 and 2006a] and changed method also for some of the reasons reported by Steve here, it is not easy to deal with the end points with wavelets and other more important reasons. So, it is possible to get slightly different results at the end points even by adding or subtracting one year. BS09 also failed to correctly report our new approaches that do not make use of wavelets and do not have any end point problem.

    So, I would suggest to focus on our most recent papers, not the one in 2005 and 2006 that are still interesting but they are surpassed. In particular you might be interested in one on my paper that is just coming out, very soon!. The word “wavelet” is never used in it, not once!

    If somebody is interested in my research, he can spend 1:30 watching my presentation at the EPA.

    http://yosemite.epa.gov/ee/epa/wkshp.nsf/vwpsw/84E74F1E59E2D3FE852574F100669688#video

    I hope that this is of help

    nicola

    PS: the little error made by Steve is just very little compared to what BS09 did. If my errors would be always so little, I would be quite happy :)

    • Steve McIntyre
      Posted Aug 9, 2009 at 8:32 PM | Permalink

      Re: nicola scafetta (#35),

      Nicola, thanks for the comment. Again, I urge you to post code for a variety of reasons.

      I think that you’ll find that you’ve got a better chance of reaching an audience if you do so.

      R, in particular, offers the ability to reach a wide audience with near-interactive scripts. Ones that fetch the data, produce the statistics and figures. There are a surprising number of people who will spend time working through a paper if such interactive tools accompany it – and who aren’t interested enough to try to parse through the captions of figures. Street shoppers so to speak. Try it. I think that you’ll be pleased by the result.

    • steven mosher
      Posted Aug 10, 2009 at 12:22 AM | Permalink

      Re: nicola scafetta (#35),

      Let me quote Dr. S and make some points that Lukewarmers and others have been making

      But the most important point is (a): the “centering” of the wavelet band pass filter that is what Steve did not do. You can use also 2004 for the limit year, but 2008 may be problematic because the first derivative of the 22 year cycle is too disrupted, in any case in 2005 I had data up to 2004 not 2008. I hope nobody blames me for not having used in 2005 the data up to 2008!

This is WHY we ask for a turnkey package that provides the data AS USED. It seems that the guys who write these papers have yet to discover the simplest of document and source control tools. It’s a basic tenet of reproducible results.

      Steve also should adjust a little bit the position of the ACRIM data that should be moved on the right by 6 months if he wants to compare with our figure.
      Also his way to merge ACRIM with Lean may be not exactly the same than what I did, Also ACRIM data Steve used may be slightly different from what I used because they were a little bit corrected after 2006. Finally, I do not know if the program Steve uses works like mine. My program works because I tested it, I do not know about R algorithms.

      All of these issues would vanish if one posts the data AS USED and the code AS RUN. Further, I would ask for the same thing I provided when I did analysis. All of my test scripts and test results.

  24. Dennis
    Posted Aug 9, 2009 at 8:03 PM | Permalink

    Nicola-
    Post your code!

  25. Nick Stokes
    Posted Aug 9, 2009 at 8:08 PM | Permalink

    Nicola,
    Looking at SW05, you don’t say much about how the smooth S8 is obtained. B&S say that they used a fifth order polynomial fit. Does your code do something similar?

    That does not require padding, and is not affected by any padding done. So am I right in believing that the padding effect, whether cyclic or reflection, applies only to the bandpass components D8 and D7 (and lower, if used)?

    • MetMole
      Posted Aug 9, 2009 at 9:03 PM | Permalink

      Re: Nick Stokes (#37),

      B&S say that they used a fifth order polynomial fit.

      Was that as well as, or instead of, throwing the alarmists’ regular hissy one?

  26. Another Layman Lurker
    Posted Aug 9, 2009 at 8:50 PM | Permalink

    Re Max @34

    I agree.

  27. theduke
    Posted Aug 9, 2009 at 9:38 PM | Permalink

    Just a theory: mainstream climate scientists do not release their code for one reason: if they do the science may begin to progress by leaps and bounds and leave them behind in the process.

  28. nicola scafetta
    Posted Aug 9, 2009 at 9:53 PM | Permalink

    Nick,

    My smooth is obtained with the MODWT itself. I do not know why BS09 used such a polynomial fit. Probably because by adopting a periodic padding their MODWT smooth component looked horrible. The point in 2000 was merging the point in 1900!

    So, they preferred a nice polynomial fit.

    But I really do not know, please ask them.

    Steve:
    right now I do not have time to post a code, I do not use your R program. My codes with all the libraries are a mess. I will prepare a nice code and put it online, but I will not be able to do it right now. Just a few days.

You can just correct your code with a few lines, just read the paper paragraphs I indicate above where you find the exact instructions.

    ciao,

    nicola

  29. Chuck Bradley
    Posted Aug 9, 2009 at 11:19 PM | Permalink

    Nicola, the request is for the code that produced the published results, not the changed code.

  30. Nick Stokes
    Posted Aug 9, 2009 at 11:53 PM | Permalink

    As you can see, the wavelet smooth dips down towards the end (whereas the corresponding smooth in SW06 doesn’t) but not quite as much as BS09.

Steve, I couldn’t see the BS09 plot that you were comparing with here. I gather from Nicola’s response that the SW smooth is a low-pass filter result, while BS use a polynomial fit. So I’d expect a difference on that basis, but I couldn’t see a BS09 smooth dipping down at the end.

    I think Gavin may well be right about the different ending not “mattering” in BS09, since it does not affect the polynomial fitted trend component.

  31. Nick Stokes
    Posted Aug 10, 2009 at 1:31 AM | Permalink

    Steve,
    I’m puzzled about some aspects of your R code. In wavelet.decomposition(), you’ve set “pad” to be a string-valued input, but later assigned a numeric value. But the main thing is that you haven’t passed that pad=”reflect” intent to the mra() call. So mra() presumably uses the default, which seems (somewhat at variance with Nicola) to be “periodic”, according to the documentation for waveslim.

    I still haven’t gathered in complete detail how S8 is calculated, but I presume it is a low-pass filter with a taper period long relative to the bandpass D8. The centre of D8 is 22 years, so the S8 smooth must make heavy use of 30+ years of padded values, and be quite affected by “reflect” vs “periodic”. The BS09 use of detrending with polynomial fitting avoids that for S8, which seems attractive.

    Steve: As a quick patch, I modified my script to do its own reflection padding. I should have examined the mra options more closely, as they have an option boundary=”reflection” which would do this automatically. In practice, my patch inserted a long enough reflection to accomplish what I wanted, but I’ll change the script. It seems a bit odd for you to be defending polynomial fitting here: I thought you were a big Rahm-smoothing guy – linear extrapolation at the end?
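
The difference between the two boundary rules under discussion can be sketched in a few lines of Python (an illustration only – not the waveslim internals): periodic padding wraps the series back to its start, while reflection plays it backwards, so a trending series gets a large jump under the first rule and only a small one under the second.

```python
# Sketch (not the BS09/SW06 code): compare the jump each padding rule
# introduces at the series boundary. Function names are illustrative.

def periodic_pad(x, n):
    """Extend x by n values by cycling back to the start."""
    return x + [x[i % len(x)] for i in range(n)]

def reflect_pad(x, n):
    """Extend x by n values by playing it backwards from the end."""
    return x + [x[len(x) - 2 - i] for i in range(n)]

# A series with a strong trend, like TSI rising over the 20th century.
series = [float(i) for i in range(100)]          # 0, 1, ..., 99

p = periodic_pad(series, 10)
r = reflect_pad(series, 10)

print(abs(p[100] - p[99]))   # periodic: jumps from 99 back to 0, a step of 99
print(abs(r[100] - r[99]))   # reflection: steps down to 98, a step of 1
```

A filter whose support reaches past the boundary sees that step-of-99 as real signal, which is the Gibbs-effect worry raised above.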

  32. RobR
    Posted Aug 10, 2009 at 1:36 AM | Permalink

    I find it interesting that Dr N. S. has been willing to engage in some real dialogue here. This is something we don’t see from the likes of Hansen, Mann and Co. Here’s hoping the discussion continues.

    Steve: I quite agree. I met Nicola at AGU a couple of years ago and he’s as nice in person as he comes across here. I hope that he takes the code criticism in the constructive spirit in which I intend it.

  33. Hoi Polloi
    Posted Aug 10, 2009 at 2:32 AM | Permalink

    I find it interesting that Dr N. S. has been willing to engage in some real dialogue here. This is something we don’t see from the likes of Hansen, Mann and Co. Here’s hoping the discussion continues.

    Fully agree. Let’s leave the pitchforks and torches at home for a while and give Dr. S some slack.

  34. Rich
    Posted Aug 10, 2009 at 3:37 AM | Permalink

    On “peer review” and scientific publication, you may find this interesting: http://www.bmj.com/cgi/content/abstract/339/jul20_3/b2680. I came across it on Ben Goldacre’s “Bad Science” site. (http://www.badscience.net/2009/08/how-myths-are-made/)

    A quote that seems very pertinent in the context of climate science:
    Conclusion: Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.

  35. Sean Houlihane
    Posted Aug 10, 2009 at 5:29 AM | Permalink

    If I’m reading this correctly, the padding is done using the knowledge that we guess that the solar data is quasi-periodic and can reasonably be predicted by a quick cut&paste of the previous cycle – however, this process is not mathematically defined and does rely on guesswork. I believe it is a sound approach – even 2 years ago, the expectation of SC24 was not too bad to feed into the lower-significance taps of a filter where half of the input is already well known. (We now know there would have been an overestimation, and can check the significance if we care to.) The fact that it is impossible to reproduce the padding means that it is impossible for anyone to replicate and judge the skill of that estimation process. If it is too embarrassing to post the original code, step-by-step intermediate data would be a nice start.
    I do think that some people are under-estimating the difficulty of publishing code which was not expected to be published – I have no CVS repository set up at home, and rarely keep backups of old code.

    Steve: The moral is to button up code for articles at the time of publication. I can personally attest to the difficulties in retracing old code once you’ve moved on. Contrary to my own practices, I didn’t button up my code for our Huybers comment and then got asked for it. I had code that was close but didn’t quite give the same thing. I don’t know why I didn’t have the precise code, but I didn’t. I eventually set aside a few days to parse the differences and buttoned up the retraced code, but it was retraced all the same.

  36. Stephen Parrish
    Posted Aug 10, 2009 at 5:45 AM | Permalink

    My tour through the nuclear biz w/r to working on a plant’s licensing and design basis demanded recoverable and repeatable results for the life of the station. Granted these were utilities I worked for, but each had its own way to manage this information. Likewise, as Steve has often talked about, engineering calculations (in the nuclear biz I know) are subject to rigorous review and require robust configuration management controls.

    One utility had a searchable database that allowed you to click a reference in a calculation and subsequently click a reference in that calculation and so on until you got to an image file containing the handwritten calculations of the vendors used in the design of the facility circa 1968.

    You could recover that file in 5 minutes to your desktop.

  37. Steve McIntyre
    Posted Aug 10, 2009 at 5:56 AM | Permalink

    UPDATE Aug 10: There’s an easy way of doing a boundary reflection in the mra algorithm: specify boundary=”reflection”. In yesterday’s script, I added a long reflection pad while still using the default (an awkward patch reminiscent of Mann’s Butterworth padding). This patched the problem, but using the right option is easier and has been implemented in the above code. Here’s the difference between the results with an endpoint in 2002 (which I believe was used in SW06) and an endpoint in 2008. Nicola Scafetta has written in comments below that reflection in 2002 is “right” and reflection in 2008 is an “error”. I haven’t reflected on these comments yet and, at this stage, merely show the difference.

    For comparison, I’ve also shown the effect of Rahmstorf smoothing based on a 2002 endpoint.


    Figure 3. Showing impact of different end points and Rahm-smoothing.

    [note - I added in Rahm-smoothing about 1 minute after I posted this and inserted the revised version within a short window (within Lucia's 10 minute edit).]

    • Steve McIntyre
      Posted Aug 10, 2009 at 6:49 AM | Permalink

      Re: Steve McIntyre (#52),

      The trouble with statistical arguments on smoothed data – as we’ve observed on many occasions – is that it’s very difficult to arrive at any “statistical significance”.

      As both Lucia and I say over and over, readers have to be very careful not to assume that this means the opposite. For example, it’s entirely reasonable to assume that sea level rises as temperature increases, even if Rahm-smoothed regressions have no “statistical significance”. Neither does a statement saying the opposite.

    • Nick Stokes
      Posted Aug 10, 2009 at 7:34 AM | Permalink

      Re: Steve McIntyre (#52), I think what Nicola was referring to was the Gibbs effect (ringing). If the reflection causes a big discontinuity in gradient, that’s a step change interpreted as part of a triangular wave, and after smoothing, the lower frequency components of it remain and feed back into the smooth. Nicola was saying that stopping at 2002 is good because it is near a peak. The gradient is near zero, so the discontinuity on reflection is small. Actually, 2008 is another “good” stopping place, near a minimum.
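
Nick’s point can be put in numbers with a toy cosine standing in for the 11-year cycle (an illustrative assumption – the real series is only quasi-periodic): reflection reverses the slope at the boundary, so the kink there is roughly twice the local gradient, and is tiny if the series ends near a peak or trough.

```python
import math

# Illustration: size of the gradient kink created by reflecting a
# cosine "solar cycle" at different stopping points. Reflection flips
# the sign of the slope, so the kink is ~2x the local gradient.

def end_gradient(t_end, period=11.0):
    """Slope of cos(2*pi*t/period) at the reflection point t_end."""
    return -2 * math.pi / period * math.sin(2 * math.pi * t_end / period)

kink_at_peak = 2 * abs(end_gradient(11.0))           # reflect at a maximum
kink_mid_slope = 2 * abs(end_gradient(11.0 * 0.75))  # reflect mid-rise

print(kink_at_peak)     # essentially zero: a "good" stopping place
print(kink_mid_slope)   # a full-sized kink that feeds Gibbs ringing
```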

  38. Nick Stokes
    Posted Aug 10, 2009 at 6:12 AM | Permalink

    Steve,
    One other R code query. I couldn’t see where you had done the timestep selection, with linear interpolation. SW05 explains how if you want to have the D7 band, say, centred on 11 years, you have to ensure that 11 years is 192 steps, so the unit is 0.6875 months, and to get that you have to linearly interpolate. But as far as I can see, you’ve just used a year as timestep, and chosen 128 years to give the right length for mra(). But this doesn’t centre any band on 11 years. I think it does centre D3 on a 12 year cycle, which is reasonably close.

    On your response about polynomial fitting, it’s a quite different situation. This is detrending, and there is no particular emphasis on the end region. The purpose of using the right boundary formulation is to minimise Gibbs effects, not to estimate the latest trend. I think SW05 should have detrended too. For the bandpassed components, MRC might well be a better alternative to periodic or reflection, but it may not matter.

    Steve: Nick, I’m just trying to retrace the steps of the various authors for the benefit of readers interested in this dispute, rather than advocate any particular timesteps. ACRIM and PMOD are available monthly since late 1978, but the Lean reconstructions are available only annually as far as I know. I placed everything on a common scale. And yes, I picked 128 years to match mra – given that GISS for example starts in 1880 (129 years ago), this works fairly well. I don’t “know” that 192 steps matters relative to 128.

    • Nick Stokes
      Posted Aug 10, 2009 at 6:52 AM | Permalink

      Re: Nick Stokes (#53),
      Steve,
      Following up on your response here, the point of timestep selection (according to SW05, bottom p 4) is to ensure that the solar frequencies being sought are aligned with the (detail) bands. mra() produces a binary detail sequence – so for example, D7 captures waves with periods between 128 and 256 timesteps, whatever you have chosen the steps to be. If you want the midpoint of, say, D7 to be the solar 11yr cycle, you have to choose the timestep accordingly. I think BS09 would have done this too. The calculation of timestep is primary; then you choose a data interval to make up the 2^N length required.

      If you just choose 1 year as the step, then D3 would be periods from 8 to 16 years, and the midpoint is 12 years.

      It may not matter if you just want to look at the smooth. But if you want the plots of D7 and D8 to match, and to identify the solar signal, you need the right timestep.
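
The timestep arithmetic Nick describes is easy to check, under the assumption (as stated above) that detail band Dj spans periods of 2^j to 2^(j+1) timesteps in a dyadic mra()-style decomposition. A small sketch:

```python
# Sketch of the dyadic band arithmetic (not SW05's actual code).

def detail_band(j, dt):
    """Period range and arithmetic midpoint of detail band Dj for a
    dyadic decomposition with timestep dt (same units as dt)."""
    low, high = 2**j * dt, 2**(j + 1) * dt
    return low, high, (low + high) / 2

# SW05's choice: 11 years = 192 steps, i.e. dt = 11/192 yr (0.6875 months).
# D7 then spans 128-256 steps, with arithmetic midpoint 192 steps = 11 yr.
print(detail_band(7, 11 / 192))

# A plain 1-year timestep instead puts D3 at periods 8-16 yr, midpoint 12 yr.
print(detail_band(3, 1.0))
```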

    • Spence_UK
      Posted Aug 10, 2009 at 3:16 PM | Permalink

      Re: Nick Stokes (#53), Re: Nick Stokes (#37),

      Looking at SW05, you don’t say much about how the smooth S8 is obtained. B&S say that they used a fifth order polynomial fit. Does your code do something similar? That does not require padding, and is not affected by any padding done.

      This risks going a little off-topic. I’ve watched people discuss polynomial fits before. The end regions of polynomial fits can be very sensitive – not to padding, but to the order of the polynomial.

      It is pretty simple: an odd-order polynomial must go to +inf at one end, and -inf at the other. An even-order polynomial must either go to +inf at both ends, or -inf at both ends. This may occur some distance from the data, or it may occur quite visibly in the fit…

      Since the 20th century temperature shape (and, for that matter, pretty much anything that correlates to some degree with it) has a downturn near the beginning, by choosing an even order polynomial you tend to get the same at the other end (downturn), and by choosing an odd order polynomial you get the opposite (upturn). From that, you should not be surprised by the tendency of pro-AGW people to choose odd order polynomials (e.g. BS09’s 5th order) and for the anti-AGW camp to choose even order polynomials (e.g. Roy Spencer used to present a 4th order poly fit to the UAH data, a practice which some pro-AGW bloggers dismissed as bad science – I’m sure they’d be quick to condemn BS09 as well, or not, as the case may be…)
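
The parity argument here is just the leading-term sign rule, which a few lines of Python confirm (the coefficients below are arbitrary examples, not fits to any climate series):

```python
# Illustration: far from the data, a polynomial's sign is set by its
# leading term, so odd orders head to opposite infinities at the two
# ends while even orders head to the same infinity at both.

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs given highest order first."""
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

odd = [1.0, 0.0, -3.0, 0.0, 2.0, 0.0]    # x^5 - 3x^3 + 2x (5th order)
even = [1.0, 0.0, -3.0, 0.0, 2.0]        # x^4 - 3x^2 + 2  (4th order)

far = 1e6
print(poly_eval(odd, -far) < 0 < poly_eval(odd, far))          # True: opposite signs
print(poly_eval(even, -far) > 0 and poly_eval(even, far) > 0)  # True: same sign
```

Whether those tails show up inside the data window depends on the fitted coefficients, which is exactly Spence_UK’s point about sensitivity.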

      • Nick Stokes
        Posted Aug 10, 2009 at 4:17 PM | Permalink

        Re: Spence_UK (#78),
        Again, I think people are misunderstanding the purpose of polynomial fitting in BS09. It isn’t for the purpose of displaying a trend. What the S&W papers are trying to do is, through Fourier-style analysis, tease out information about solar cycles over a century (or 400 yrs). They do that with bandpass filtering. The filtering gets rid of high frequency “noise”, but there is also low frequency noise to separate. That is where detrending with polynomial fitting comes in.

        A Fourier approximation emerges in terms of sinusoids, so it is inevitably periodic. The analysis must regard your 100 yr data segment as part of a period. mra() gives two options. It can be simply periodic, so 2000-2100 is a replay of 1900-2000, or you can reflect, so 2000-2100 is 1900-2000 played backwards, but thereafter the 200 yr sequence is repeated.

        If there is a linear trend, say, and you put it into this system, as far as the analysis is concerned, it is actually a sawtooth wave, or with reflection, a triangle wave. Either has a full set of harmonics (tapering more with the smoother triangle wave). With bandpass filtering, some of these harmonics get into the band where you are trying to see the solar signal. That is why it is good to identify a trend component of the data in advance, and subtract it out. S&W should have done that.

        The issue about odd-even polynomial order doesn’t matter, for several reasons. One is that if you look at BS09, Fig 4, the higher powers aren’t actually having much influence. But more importantly, again, detrending is all about removing low frequency noise. If there were some end effect, that would largely have the effect of high frequency noise (relative to 11yr), which would be attenuated with that end of the bandpass filter.
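
The sawtooth-vs-triangle taper Nick invokes is standard Fourier-series material: a unit sawtooth’s harmonic amplitudes fall off as 1/n, while the triangle wave produced by reflection has only odd harmonics falling as 1/n². A quick check using the textbook amplitudes (an illustration, not anything computed from the solar data):

```python
import math

# Textbook Fourier harmonic amplitudes for unit-amplitude waves:
# sawtooth (periodic extension of a trend) vs triangle (reflection).

def sawtooth_amp(n):
    """nth harmonic amplitude of a unit sawtooth: falls off as 1/n."""
    return 2.0 / (math.pi * n)

def triangle_amp(n):
    """nth harmonic amplitude of a unit triangle wave: odd n only, ~1/n^2."""
    return 0.0 if n % 2 == 0 else 8.0 / (math.pi * n) ** 2

# Higher harmonics - the ones that can leak into an 11-yr or 22-yr
# band - taper much faster for the triangle wave:
for n in (1, 3, 9):
    print(n, sawtooth_amp(n), triangle_amp(n))
```

By the 9th harmonic the triangle’s leakage is an order of magnitude below the sawtooth’s, which is why reflection is the gentler choice for a trending series – and why subtracting the trend first is gentler still.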

        • Spence_UK
          Posted Aug 10, 2009 at 6:07 PM | Permalink

          Re: Nick Stokes (#82),

          Nick, I quoted you as saying that polynomial fits are not affected by end-point padding. The ends of polynomial fits can be heavily influenced through a different mechanism, the order of the polynomial. Your statement seemed to me to suggest that the end points could not be influenced by tuning in the same way as end-point padding when filtering. This is simply wrong.

          As for higher order components not being important, that’s a new one on me. The higher order components will always dominate at some stage, even if they are 10 orders of magnitude smaller. This may or may not be near to the data, as I noted in my post, but the order of a polynomial can be absolutely as significant as the padding in a filter.

          If you now want to change the argument to something else, fair enough, but your first statement was wide of the mark.

          Oh, and the high frequency component is not “noise”. With the exception of (typically iid) measurement noise, all of nature’s fluctuations are “signal”.

        • Nick Stokes
          Posted Aug 10, 2009 at 6:29 PM | Permalink

          Re: Spence_UK (#88), Concerning noise etc, I think you’re again missing the point of the S&W analysis. They are trying to use bandpass filtering to identify solar cycles in temperature data. So yes, information outside the bandpass range is noise for the purpose of the analysis, and the aim is to attenuate it, while preserving any solar cycle information present.

          The order of the polynomial is a distraction – a linear fit would have had much the same result. The aim of detrending is to take out a component of the signal which does not (hopefully) include information that you want, in the 11yr and 22yr range. A low order polynomial can do that, because it is just too stiff to take out a significant proportion of these higher frequencies. How the polynomial behaves outside the data range is of no importance, because it is not included in the Fourier analysis.
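
Nick’s stiffness claim – that a low-order polynomial cannot absorb much of an 11-yr oscillation over a century – can be tested numerically. Below is a pure-Python sketch (stand-in data, not BS09’s actual fit) that fits a quadratic by least squares and compares the residual amplitude to the original:

```python
import math

def polyfit_ls(t, y, order):
    """Least-squares polynomial fit via normal equations (fine for low
    orders; not numerically ideal for high ones). Returns coefficients
    lowest order first."""
    m = order + 1
    A = [[sum(x**(i + j) for x in t) for j in range(m)] for i in range(m)]
    b = [sum((x**i) * v for x, v in zip(t, y)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, m))) / A[r][r]
    return coef

def rms(v):
    return math.sqrt(sum(x * x for x in v) / len(v))

# An "11-yr cycle" over ~100 yr; t rescaled to [0, 1] for conditioning.
t = [i / 100 for i in range(100)]
y = [math.cos(2 * math.pi * (100 * x) / 11) for x in t]

coef = polyfit_ls(t, y, 2)
fit = [sum(c * x**i for i, c in enumerate(coef)) for x in t]
resid = [a - f for a, f in zip(y, fit)]
print(rms(resid) / rms(y))   # close to 1: the quadratic removes little
```

The residual retains nearly all of the oscillation’s amplitude, consistent with the claim that a stiff detrend leaves the 11-yr band essentially untouched.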

        • Spence_UK
          Posted Aug 11, 2009 at 2:01 AM | Permalink

          Re: Nick Stokes (#90),

          I’m well aware of what S&W are trying to do. The noise analogy isn’t a good one, no matter who is doing the analysis and to what end.

          You also seem to be unaware that a linear fit is an odd order polynomial.

          I’m not trying to make broad insights into the papers being discussed here. Just pointing out that polynomial fits do have end point effects that can be influenced (perhaps subconsciously) by the person doing the analysis, which means that your #37 is way off base. Nothing more, nothing less.

        • Nick Stokes
          Posted Aug 11, 2009 at 2:16 AM | Permalink

          Re: Spence_UK (#97),

          Just pointing out that polynomial fits do have end point effects that can be influenced (perhaps subconsciously) by the person doing the analysis, which means that your #37 is way off base.

          Actually, what I said in (#37) is perfectly true, I didn’t say there were no endpoint effects; I said a polynomial fit does not require padding and is not affected by padding done. Which part of that do you dispute?

        • Spence_UK
          Posted Aug 11, 2009 at 4:13 AM | Permalink

          Re: Nick Stokes (#98),

          LOL, OK, if you’d rather lay claim to the Roger Irrelevant award for most vacuous claim on a science blog rather than admit that your ideas were way off base, so be it. Yes, padding is not an issue for polynomial fits in the same way that hatstands are not an issue for refrigerators. I had inadvertently assumed you were trying to make a point. I promise not to make that same mistake again.

          In the meantime, please remember there is plenty of scope for subconscious experimenter bias to creep in through polynomial fits, particularly with regard to end point effects, an issue which seems depressingly common in climate science (how can so many climate studies be so sensitive to such a marginal piece of data?)

        • Nick Stokes
          Posted Aug 11, 2009 at 4:36 AM | Permalink

          Re: Spence_UK (#99),
          Well, after all that bluster, I still have no idea what it is in #37 that you say is off base. But my query to Nicola about padding and BS09 polynomial fitting is far from irrelevant, because he has asserted exactly the opposite.

          In his rebuttal at Pielke Sr, Scafetta observes that the BS09 smooth was derived by cyclic padding.

          It wasn’t.

        • Spence_UK
          Posted Aug 11, 2009 at 5:21 AM | Permalink

          Re: Nick Stokes (#100),

          Well, after all that bluster, I still have no idea what it is in #37 that you say is off base.

          Sure. I can lead a horse to water. I can’t make it drink.

          It wasn’t.

          Glad you’re so certain about that. If we had the code, I could be confident that you’re certainty is justified. But historically, team method descriptions in scientific journals aren’t all they could be. If this is the case, I’d then be fascinated to know what the effect of changing the order would be. Of course, if we had the code, it would be easy to try, wouldn’t it?

        • Spence_UK
          Posted Aug 11, 2009 at 5:26 AM | Permalink

          Re: Spence_UK (#103),

          I can’t believe I just used “you’re” instead of “your”.

          Another thought: if Nicola was just outright wrong, wouldn’t Gavin tear him a new one for this? Is it not possible that the cyclic padding was done first, followed by the polynomial fit? I’m not asserting this, just questioning it. If true, it would mean Gavin gets two bites at the end point cherry.

          Of course, I personally would raise eyebrows at any study so sensitive to treatment of end points – which rules out a remarkable number of climate studies.

        • Posted Aug 11, 2009 at 7:15 AM | Permalink

          Re: Nick Stokes (#100),

          In his rebuttal at Pielke Sr, Scafetta observes that the BS09 smooth was derived by cyclic padding.

          It wasn’t.

          Oh? Have you talked to Rasmus?

        • Nick Stokes
          Posted Aug 11, 2009 at 2:22 PM | Permalink

          Re: lucia (#108), Have I talked to Rasmus? No. I read BS09. Figure 4 shows the smooths in question. Two things:
          1. They are not periodic.
          2. The caption clearly says that they were derived by fitting a fifth order polynomial. Polynomial fitting uses no padding.

  39. Steve McIntyre
    Posted Aug 10, 2009 at 6:30 AM | Permalink

    Readers should be aware that many recent solar theorists, e.g. Leif Svalgaard, argue that the change in solar strength is much less than Lean 1995. I have not studied the bases for the various arguments and cannot offer any opinion on their merits.

    • Posted Aug 10, 2009 at 10:01 AM | Permalink

      Re: Steve McIntyre (#54),

      Much like Dr. S recommended perusing his more recent work, Dr. Lean’s more recent work is recommended by Leif and presumably Dr. Lean.

      Steve and Dr. S, may all your errors be tiny :)

  40. Joeshill
    Posted Aug 10, 2009 at 7:05 AM | Permalink

    snip – enough piling on.

  41. Bob H
    Posted Aug 10, 2009 at 7:44 AM | Permalink

    Based on Dr. Scafetta’s comments that he is using an analysis package different than “R”, I can see where posting the code could be problematic, especially if the software he is using is not widely available. Granted, some of the parameters could be extracted from the code, but the code itself may not be executable. Ultimately, it seems this would point to attaining some “consensus” on the software packages used, so that executable code could be posted. Having had experience with several different programming languages and application code input scripts, it could be difficult to transpose from one language to another.

    Ahhh, pining for the days when almost all scientific and engineering software was written using FORTRAN II or FORTRAN IV; at least everyone knew what they were looking at.

    Steve: I can parse through code in languages that I don’t run. I was able to locate Mann’s PCA error from Fortran code inadvertently left on his old server (which he subsequently deleted) even though I didn’t run the Fortran. There’s value in code even if you can’t execute it. I don’t agree with demands that Nicola immediately produce code unless people demand that Gavin Schmidt produce his concurrently – and I suggest that you try making such suggestions at realclimate. :) Having said that, I encourage Nicola to carry out the required documentation and to take the high road in archiving his code regardless of what Gavin does.

  42. Bob H
    Posted Aug 10, 2009 at 9:06 AM | Permalink

    Steve,

    I would agree that older code could provide useful information. On the other hand, I had used a product called MOSS (now MX) several years ago which had a 700+ page manual and a quick reference guide of about 150 pages, most of which one would be lost without. A short snippet of a macro is shown below.

    DESIGN,&MODEL&
    &RT&101,MT1A,&SSTR&,TGE&C4&,-0.100,,,0.000,,,6.000
    &RT&100,MT1B,&SSTR&,TGE&C4&,-0.100,,,6.000
    &RT&101,MT1C,&SSTR&,TGE&C4&,-0.100,,,6.000,,,0.000

    &STDZ&&LT&111,MT1A,&SSTR&,&ZSTR&,0.000,,,&ZD1&,,,-12.0
    &STDZ&&LT&110,MT1B,&SSTR&,&ZSTR&,0.000,,,-12.0
    &STDZ&&LT&111,MT1C,&SSTR&,&ZSTR&,0.000,,,-12.0,,,&ZD2&

    &STDZ&131,MT1A,&SSTR&,&ZSTR&,,,,&ZS1&,,,-0.167
    &STDZ&130,MT1B,&SSTR&,&ZSTR&,,,,-0.167
    &STDZ&131,MT1C,&SSTR&,&ZSTR&,,,,-0.167,,,&ZS2&

    The “<” which appears is not in the original code, but I think you can see this code is nearly incomprehensible without either the knowledge of the options or the manual.

    • steven mosher
      Posted Aug 10, 2009 at 9:59 AM | Permalink

      Re: Bob H (#60),

      Don’t be a tool. I could post the snippet of a 160-line uncommented macro written in C and nobody would get it.

  43. Carrick
    Posted Aug 10, 2009 at 10:07 AM | Permalink

    This comment by Nicola raises more issues than it answers:

    1) the re-sampling of the data in such a way to center the wavelet band pass filters exactly on the 11 and 22 year solar cycles.

    The solar cycles aren’t exactly 11 and 22 years in length, nor do they hold fixed to 11- or 22-year periods (in Lean’s data set they vary roughly from 9 to 13 years in duration).

    I haven’t seen Nicola’s code but when he says he “resamples” the data, does he mean he’s forcing the periods so that the cycles all fall on an 11-year cycle?

    Again this is where the actual code would be helpful.

    Nick Stokes:

    Nicola was saying that stopping at 2002 is good because it is near a peak. The gradient is near zero, so the discontinuity on reflection is small. Actually, 2008 is another “good” stopping place, near a minimum.

    This sounds spot on to me.

  44. Bob H
    Posted Aug 10, 2009 at 10:13 AM | Permalink

    steven mosher,
    Exactly the point.

    • steven mosher
      Posted Aug 10, 2009 at 10:40 AM | Permalink

      Re: Bob H (#64),

      Not the point. The request was not to post a code snippet and see if we could solve the puzzle. My requests are for the DATA AS USED and the CODE AS RUN. As one of the poor souls who slogged through Hansen’s Fortran after 20 years of not writing Fortran, I’m pretty sure that one of us here will “understand” what Dr. S wrote. I’m down for: ada, lisp, prolog, cobol, forth, basic, rocky mountain basic, c, c++, java, python, matlab. The full source code AS RUN will be understandable; it might be ugly and dense and stupid, but I’ve slogged through the worst messes.

  45. nicola scafetta
    Posted Aug 10, 2009 at 10:33 AM | Permalink

    I really do not understand why Steve does not want to read my paper first

    From SW06b page 2, paragraph 8 after equation 2, I wrote:

    By sampling the data with a linear interpolation
    algorithm at a time interval of Dt = 11/12 = 0.92 years, the
    smooth curve S4(t) captures the TSI secular variation at time
    scale larger than 2^5 Dt = 29.3 years. The band-pass curve
    D4(t) captures the variation at a time scale from 2^4 Dt = 14.7
    to 2^5 Dt = 29.3 year periodicities, which are centered on the
    22-year cycle. The band-pass curve D3(t) captures the
    fluctuations at a time scale from 2^3 Dt = 7.3 to 2^4 Dt = 14.7
    year periodicities, which are centered in the 11-year cycle.

    Steve should implement the above information in his algorithm; that is, he just needs a linear interpolation algorithm with a timestep of 11/12 = 0.916666666666666667, I believe that R has one. Moreover, the MODWT method does not require that the data file contain N = 2^x data points; any N is fine.

    About 2008: it is not as good as 2002-4 because a symmetric padding in 2008 would create a discontinuity in the gradient of the 22-year cycle. Moreover, a reflection in 2008 would create an artificial grand maximum in 2015 that would not be physical.

    About Leif Svalgaard theories the readers should be also aware that

    1) all experimental TSI groups and all other recent solar theorists disagree with him.
    2) about the fact that Lean 1995 may not be accurate: this is something that has been taken into account by my studies since my SW06b three years ago, when the latest Lean data were made available. By contrast, Gavin and Hansen have published in “2007” the results of the GISS modelE by using the obsolete Lean2000 data, which are quite similar to the Lean1995 data.
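
The Dt = 11/12 yr prescription quoted above can be sketched directly in pure Python (illustrative code, not Scafetta’s actual routine; the annual series below is a stand-in): resample by linear interpolation onto the 11/12-yr grid, after which the dyadic bands line up on the numbers given in SW06b.

```python
import math

def linear_resample(t, y, dt):
    """Resample (t, y) onto a uniform grid of step dt by linear
    interpolation. Assumes t is increasing; illustrative only."""
    n = int(math.floor((t[-1] - t[0]) / dt + 1e-9)) + 1
    out_t, out_y = [], []
    j = 0
    for k in range(n):
        s = t[0] + k * dt
        while j < len(t) - 2 and t[j + 1] < s:
            j += 1
        w = (s - t[j]) / (t[j + 1] - t[j])
        out_t.append(s)
        out_y.append((1 - w) * y[j] + w * y[j + 1])
    return out_t, out_y

dt = 11 / 12                                # Scafetta's timestep, in years
years = [float(i) for i in range(1900, 1913)]
vals = [float(i % 3) for i in range(13)]    # stand-in annual series
rt, rv = linear_resample(years, vals, dt)

# With this dt the dyadic bands fall where SW06b says they should:
print(2**3 * dt, 2**4 * dt)   # D3 spans ~7.3 to ~14.7 yr, centred on 11
print(2**4 * dt, 2**5 * dt)   # D4 spans ~14.7 to ~29.3 yr, centred on 22
```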

    • steven mosher
      Posted Aug 10, 2009 at 10:44 AM | Permalink

      Re: nicola scafetta (#65),

      In the time it took you to do this, you could post code. Heck, once Hansen got the point it was up in no time. Delay and rewriting it will just lead to speculation. Most of us here (the engineers at least) have spent time adapting code from the scientific community. For me it was piles of code written by NASA scientists. It wasn’t pretty. We laughed. But in the end the sharing was appreciated and vital.

      free the data; free the code; free the debate

    • Ryan O
      Posted Aug 10, 2009 at 10:45 AM | Permalink

      Re: nicola scafetta (#65),
      .
      Dr. Scafetta,
      .
      I imagine Steve did read your paper (as did I), but even after a few passes through, not everything ***appears*** clear at first. This is no fault of your own; I agree that you were very clear in your paper. However, none of us are solar experts, and most of us do this as a hobby, so it takes some time to sink in.
      .
      CA is a bit different than publishing in a paper. Steve and his guest posters do not prepare the material like one would for publication. Rather, the posts are really describing a work-in-progress. This allows readers to help contribute based on their particular insights or background. As such, mistakes and misinterpretations are likely, and that is not a bad thing at all. As we all progress through these topics together – pointing out each others’ mistakes – our mutual understanding is much greater than would otherwise be possible. Think of it more as a group brainstorming session than a critique.
      .
      I, for one, rarely fully understand something from reading a paper. Full understanding only comes through attempting to replicate it myself. Along the way, I often make a bunch of mistakes. Each mistake helps reinforce why the proper way is correct. But to get there, I have to first make the mistakes. What you are seeing here is simply that process.
      .
      By the way, I and others appreciate you dropping by to comment and explain. It is very helpful. I hope you will continue to do so, both on this topic and any other future ones that may involve your work.
      .
      P.S. I really hope you post or email Steve your code. Steve and others here have gotten quite good at transliterating even the most poorly documented and arcane code into R. Having the code usually helps reinforce the meaning of the words in the paper and helps us understand more fundamentally what you have done.

      • steven mosher
        Posted Aug 10, 2009 at 11:09 AM | Permalink

        Re: Ryan O (#68),

        Seconding Ryan O. I would almost say it is axiomatic that the written word will more often fail to “compile and run” on its reader’s brain when the code behind the text is absent.

        • Ryan O
          Posted Aug 10, 2009 at 11:14 AM | Permalink

          Re: steven mosher (#71), Indeed. Slogging through code is a pretty good way to understand what was done. My repertoire isn’t quite as extensive as yours, but I can muddle through Matlab fairly well, and do okay with FORTRAN, C, and C++.

    • Posted Aug 10, 2009 at 3:46 PM | Permalink

      Re: nicola scafetta (#65), Why are you playing hide and seek?

  46. steven mosher
    Posted Aug 10, 2009 at 10:48 AM | Permalink

    SteveMc: Sounds like a separate thread on TSI (with Leif and Nicola and others) would be cool (someday). Anyways, kudos to Nicola for showing up and taking the time. I won’t beat Gavin up. It’s so much more fun to watch Lucia do it.

  47. Posted Aug 10, 2009 at 10:52 AM | Permalink

    Nice post Steve, that’s what happens when you take a day off from blogland, you miss all the fun.

    I’ve read most of the comments now, and Dr. Scafetta’s questions and answers make several excellent points regarding wavelet analysis on this problem. One of these Dr. Schmidt actually brought up: this is not the ideal application for wavelet analysis. I agree that for meaningful calculations a centering of padding on the known signal is probably the best method, but perhaps the level of filtering is too high when we get to the point that endpoint treatment becomes a significant detail.

    Another point, which has been covered both here and at Lucia’s, is the publishing of the code. This should be a requirement in climate papers where code is used, in my opinion. I hope Dr. Scafetta will take it into consideration in the future; think of what it would have done for Dr. Schmidt’s rebuttal paper if he had had quick access to the code and data. All the detail would have been understood in the rebuttal, and we probably wouldn’t be discussing the mundane rebuttal issues but rather the more significant details of the datasets. It probably would have altered the entire tone of the paper, which was basically described as arrogant elsewhere.

    The science would progress at a faster rate and in this instance would likely have been far less contentious on the mundane and truly meaningless points of this particular rebuttal.

    Imagine how quickly we could have worked through Dr. Steig’s analysis if he had released his code. The minor problems would come out but the whole process would have been quickly understood. Nobody would need to make any stink about Matlab classes or one group trying to embarrass another.

    Help us green the planet, free the code.. :D

  48. Calvin Ball
    Posted Aug 10, 2009 at 1:55 PM | Permalink

    70, old Chinese saying: he who does half-assed job needs rebuttal. He who does full-assed job don’t need rebuttal.

  49. Carrick
    Posted Aug 10, 2009 at 2:15 PM | Permalink

    Nicola:

    Steve should implement the above information in his algorithm, that is, he just need a linear interpolation algorithm with a timestep of 11/12 = 0.916666666666666667, I believe that R has one. Moreover, the MODWT method does not need that the data file is made of a N=2^x number of data, any N is fine.

    I’m left with the question of why you are going to this trouble.

    The solar cycle isn’t constant; rather, its period varies from about 7 years up to 14 years, based on available sunspot data as well as on a shorter-duration set of luminosity data.

    What’s the point of setting up your wavelet so that the 11-year period falls in the center of a bin in this case?

    (Also, normally I wouldn’t linearly interpolate time-domain data; rather I’d use the sampling theorem, assuming the data weren’t aliased. But regardless…)
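
    For anyone who wants to experiment with the interpolation recipe quoted above, here is a minimal sketch in Python with numpy (Scafetta’s actual work is in R, where approx() plays the role of np.interp; the toy annual series and the 11-yr sine are invented for illustration):

```python
import numpy as np

# Hypothetical annual series: one value per year (a toy 11-yr sine).
t_annual = np.arange(1900.0, 1921.0)           # 21 yearly samples
x_annual = np.sin(2 * np.pi * t_annual / 11.0)

# Resample by linear interpolation onto a grid with the timestep
# quoted above, 11/12 of a year.
dt = 11.0 / 12.0
t_new = np.arange(t_annual[0], t_annual[-1] + 1e-9, dt)
x_new = np.interp(t_new, t_annual, x_annual)
```

    With this timestep, twelve steps span exactly 11 years, which is what lets the 11-yr period line up neatly within a dyadic decomposition downstream.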

    • Nick Stokes
      Posted Aug 10, 2009 at 2:45 PM | Permalink

      Re: Carrick (#74),
      With MRA, it’s like selecting a channel on a radio or TV receiver. The broadcast signal isn’t exactly periodic either. You want one of the bandpass filter regions (say D7) to contain the information about the “11 yr” cycle, and exclude the “22yr” cycle (and other unwanted cycles) – you want that to appear on D8. Same as with your radio station. So your best chance is to design the bandpass filter so the frequency you want (best estimate) is in the middle. Choosing the timestep (with sampling, or interpolation) is the only control that you have, since the bandpassing is structured with periods being powers of 2 times the sample period.
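
      To make the timestep point concrete: assuming the usual convention that the detail at level j nominally covers periods from 2^j·dt to 2^(j+1)·dt, a quick check in Python (the function name is mine) shows that dt = 11/192 yr — one of the fractions mentioned in this thread — puts 11 years at the arithmetic midpoint of the D7 band:

```python
def detail_band(dt, j):
    """Nominal period band (lo, hi) for detail level j at sample step dt,
    using the common 2^j*dt .. 2^(j+1)*dt convention."""
    return (2 ** j * dt, 2 ** (j + 1) * dt)

# With dt = 11/192 yr, D7 runs from about 7.33 to 14.67 years,
# and 11 years sits exactly halfway between the band edges.
lo, hi = detail_band(11.0 / 192.0, 7)
midpoint = (lo + hi) / 2.0
```

      With annual data (dt = 1 yr) the same convention gives a level-3 band of 8–16 years, so the 11-yr cycle still lands inside a single band, just not at its center.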

    • John S.
      Posted Aug 10, 2009 at 7:40 PM | Permalink

      Re: Carrick (#74),

      normally I wouldn’t normally linearly interpolate time-domain data, rather I’d use the sampling theorem assuming the data weren’t aliased

      Shannon’s bandlimited interpolation would certainly be superior, if interpolation was really needed. But that’s not really the case. If you want to center the bandpass on the 11yr period, this should be done in the frequency domain, not in the time domain. Linear interpolation simply weakens S&W’s treatment. And aliasing, of course, is always a problem in climate data decimated from two (or even four) daily readings to yearly averages.

      I’m still wondering why they didn’t do straightforward bandpass filtering or, better yet, cross-spectrum analysis, instead of resorting to wavelets. After all, it’s the coherence between the two series that they’re ultimately trying to establish.
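
      For reference, a minimal sketch of the Shannon (bandlimited, sinc-kernel) interpolation referred to above, in Python with numpy. The function name is mine, and as with any truncated-sinc reconstruction, accuracy degrades near the ends of a finite record:

```python
import numpy as np

def sinc_interp(x, t_samp, t_new):
    """Shannon reconstruction of uniformly sampled x (at times t_samp)
    evaluated at times t_new; assumes the samples are alias-free."""
    dt = t_samp[1] - t_samp[0]
    # x(t) = sum_n x[n] * sinc((t - n*dt)/dt); np.sinc is sin(pi*u)/(pi*u)
    return np.array([np.sum(x * np.sinc((tn - t_samp) / dt)) for tn in t_new])

# Sanity check: at the original sample times the sinc kernel reduces to
# a Kronecker delta, so the samples are recovered exactly.
t = np.arange(0.0, 16.0)
x = np.sin(2 * np.pi * t / 11.0)
x_rec = sinc_interp(x, t, t)
```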

  50. Carrick
    Posted Aug 10, 2009 at 2:20 PM | Permalink

    Here’s my histogram of periods (based on the difference in years between adjacent maxima; the 17-year solar cycle from 1796 is not included here).

    The period is in years.

  51. Robert L
    Posted Aug 10, 2009 at 2:59 PM | Permalink

    Having followed some of the solar-related issues for a few years now (maybe 35), it seems to me that there really is not an 11- and a 22-year cycle, but rather a 22-year cycle like a sine wave after an abs() function.

    i.e. we only see the half cycles as peaks, when really they are dips. Think about it: the Sun’s magnetic field reverses with each solar cycle, so we are really seeing the N–S phase, followed by the S–N phase.

    Either way, using my best statistical monitor (mark 1 eyeball), I’m quite amazed at how closely the graphs in Fig. 2 and Fig. 3 align with the C20 temperature records. Correlation is not causation, but that graph is of the output of our primary energy source.

    cheers,
    Robert

  52. Ryan O
    Posted Aug 10, 2009 at 3:59 PM | Permalink

    Guys, I think Nicola is just having a bit o’fun.

  53. VG
    Posted Aug 10, 2009 at 4:13 PM | Permalink

    OT but important.

    This is VIP: the PM of Australia has put up a climate blog. Please use it, especially Australians.

    http://www.pm.gov.au/PM_Connect/PMs_Blog/Climate_Change_Blog

    This will influence them over time.
    Is this worth a huge post here?

  54. sky
    Posted Aug 10, 2009 at 4:48 PM | Permalink

    Scafetta is clearly enjoying this one, as well he should. Meanwhile, if S&W’s results stop in 2002, I’m at a loss to understand what the subsequent downturn shown here in Steve McIntyre’s attempt at replication has to do with anything.

    • Ivan
      Posted Aug 10, 2009 at 6:01 PM | Permalink

      Re: sky (#83),

      Meanwhile, if S&W’s results stop in 2002, I’m at a loss to understand what the subsequent downturn shown here in Steve McIntyre’s attempt at replication has to do with anything.

      Exactly. The only thing that using data up to 2008 would demonstrate is that solar forcing decreased after 2002, which is quite an uncontroversial finding for someone wishing to prove a sun–climate connection, since global temperature also decreased in the same time frame. Why this is so interesting to pursue further is beyond me. Especially when one takes into account that Scafetta’s paper was published in 2006, it’s not obvious what Steve actually objects to in Scafetta’s work. Not using data up to 2008, while writing a paper in 2006?

      • Ryan O
        Posted Aug 10, 2009 at 6:17 PM | Permalink

        Re: Ivan (#87), Steve’s not objecting to anything. He’s merely showing what happens if you use data out to 2008. Now that we have data out to 2008, this may be pertinent to S&W’s results. None of us know yet. Right now, this is just an exercise to see if S&W’s method can be replicated by a bunch of bloggers.

        • Ivan
          Posted Aug 10, 2009 at 9:26 PM | Permalink

          Re: Ryan O (#89),

          Those are two distinct enterprises – on the one hand, seeing what happens when one adds new data through 2008 to Scafetta’s old series (is it really so surprising that he didn’t include data to 2008 but stopped at 2002, writing his paper in 2004 or 2005?), and, on the other hand, replicating or auditing Scafetta’s old results through 2002. This is confusing – if the problem with Scafetta is that his old results are not reliable, cannot be replicated or whatever, then explore that problem. But if the central problem here is that new solar data could call Scafetta’s previous results into question, what does that have to do with replication of the previous study?

          But suppose Steve is right. The final consequence would be that solar forcing decreased since 2001 or 2002 (and Scafetta arguably wanted to avoid that conclusion), along with global mean temperature. Am I missing something?

      • Posted Aug 11, 2009 at 7:20 AM | Permalink

        Re: Ivan (#87), Re: sky (#83),
        There is some confusion here about what Steve is criticising. I was confused initially until I read it through again. My understanding is as follows (Steve please correct me if I’m wrong!). Steve is not criticising SW05 or any other of Scafetta’s papers. Steve is criticising Scafetta’s criticism of BS09 (as posted on climatesci.org a few days ago), on the grounds that when Scafetta’s method is applied to the latest data, it seems to yield a downturn around the year 2000, as did the BS09 paper. The issue now is whether or not Steve has implemented Nicola’s method correctly.

        Also I would like to recommend that readers ignore Nick Stokes, who is mostly either wrong, confused or irrelevant (as repeatedly pointed out by an exasperated Spence_UK). BS09 DID use periodic padding, as pointed out by Scafetta – Gavin implicitly acknowledged this on the thread at Lucia’s.

        • Steve McIntyre
          Posted Aug 11, 2009 at 8:18 AM | Permalink

          Re: PaulM (#109),

          Pretty much.

          It does appear that BS09 incorrectly applied cyclic padding. While the Team routinely says that Team errors do not “matter” (the infallibility doctrine that Pielke Jr has noted), in this case it appeared to me that Gavin’s error might not matter as much as Nicola’s rebuttal first suggested, in that recent low solar values have mitigated the effect of the error. Only somewhat, since values around 1900 were quite a bit lower. Since we don’t have Gavin’s results as digital data – only a squiggle in a publication – it’s hard to say.

          And, of course, I criticized all parties for creating a typical climate science schmozzle in which the parties exchange criticisms in the PeerReviewedLitchercher without any code and without any archive of data as used, making reconciliation of the claims pointlessly difficult.

          In doing my calculations, I was only looking at the smooth. Nicola used an interpolation recipe. I’d like to understand the potential effect of such interpolation before agreeing that this is the only “right” way of doing the analysis.

          Nicola has argued that a 2008-based reflection is an “error”, but as a reader observed, 2008 also appears to be a stationary/near-stationary point and thus maybe a 2008 reflection is not an “error”.

          I’m obviously not criticizing Nicola for not using 2008 data – what an absurd interpretation by a couple of critics. However it is an entirely legitimate exercise to test claims against up-to-date data; I routinely do so and did so in this case.

        • steven mosher
          Posted Aug 11, 2009 at 10:07 AM | Permalink

          Re: Steve McIntyre (#110),

          I especially like the checking against up-to-date data. I call this regression testing the science. Where possible we should be able to regression test the science. What would Parker’s calculations yield? Peterson’s paper on TOBS (or was it Karl’s)? This is one of the great benefits of creating a repository of climate science data and code: regression testing established methods against new data.

        • Geoff Sherrington
          Posted Aug 11, 2009 at 8:59 PM | Permalink

          Re: steven mosher (#115),

          As our host would know, certain disciplines like mineral work in many countries have a legal requirement for reports to be created at regular intervals and held in government repositories. In many countries, a useful portion of the core and chips from drilling is also required to be labelled and held.

          When secrecy is involved, the operator has a pre-defined period before release, typically 3 years, or the ability to negotiate a shorter or longer period.

          While I like the idea of free-lance code polishers, surely a government requirement that the climate science data and code be submitted to public archives would be a way to go, perhaps as a minimum, perhaps as complementary to the code polishers.

        • Nick Stokes
          Posted Aug 11, 2009 at 3:15 PM | Permalink

          Re: Steve McIntyre (#110), Steve, interpolation is not the key issue in what S&W do; it’s timestep selection. The basic reason is this. When you submit arguments to mra(), you provide a time series of data values, but there’s actually no timing information. The time interval could be years or nanoseconds – mra() decomposes the data into frequency bins in just the same way, based on the number of data items. You want an 11-year cycle to land in the middle of one of those bins. The only way you can make that happen is by presenting the data in such a way that the timestep is an appropriate fraction of 11 years (e.g. 1/192 or 1/96). To achieve that, S&W use interpolation.

        • Nick Stokes
          Posted Aug 11, 2009 at 2:46 PM | Permalink

          Re: PaulM (#109), Yes, BS09 did use cyclic padding. My statement was that they did not use it for their smooth, which was derived by polynomial fitting. The effect, as I noted (by way of query) in #37, was to use the padding only for the bandpass components D7 and D8 (re SW05). These should have no trend, and cyclic padding is reasonable, although I would probably prefer reflection.

          People might like to reflect on why, if cyclic padding is such a grievous error, it is provided as the default option for mra().

          Steve: C’mon, Nick. Maybe mra() is mainly used on stationary series.

        • Nick Stokes
          Posted Aug 11, 2009 at 5:38 PM | Permalink

          Re: (#123)

          Steve: C’mon, Nick. Maybe mra() is mainly used on stationary series.

          Indeed, and as I’ve suggested above the right thing to do is to detrend, which B&S did, and S&W didn’t. Then the conditions for the periodic mra() default apply (though I would probably still prefer reflection, but it’s optional).

        • Steve McIntyre
          Posted Aug 11, 2009 at 5:51 PM | Permalink

          Re: Nick Stokes (#133),

          Indeed, and as I’ve suggested above the right thing to do is to detrend, which B&S did, and S&W didn’t.

          It gets pretty hard to keep track of opportunistic Team methodologies – but you should also recall that Schmidt’s close associates, Mann and Ammann, excoriated von Storch and Zorita for detrending. As usual with the Team, it depends on whose ox is being gored.

          With respect, I’m not all that interested in your “opinion” on what’s right (nothing personal) – I’d be far more interested in a citation to a specific reference and, if the reference is to a text, then to the page and quotation.

          Further, citations of biology and electrical engineering examples are not all that helpful for non-stationary series.

          Steve: I apologize to Nick for the middle sentences as he did provide a reference. Whether it does what it’s supposed to is another issue.

        • Nick Stokes
          Posted Aug 11, 2009 at 5:57 PM | Permalink

          Re: Steve McIntyre (#134),

          With respect, I’m not all that interested in your “opinion” on what’s right (nothing personal) – I’d be far more interested in a citation to a specific reference and, if the reference is to a text, then to the page and quotation.

          But I did exactly that (#133). A text which explains detrending and summarises the procedure.

          Steve: My apologies. I noticed that almost immediately. However, things are moving at warp speed, it seems, and even a one-minute edit window was too long in this case. I’ve added a comment noting your proper rebuttal of part of my statement. Having said that, perhaps you could turn your attention to Mann and Ammann’s vituperative criticism of detrending by von Storch and Zorita and explain why it was wrong in their application and right in the BS09 application.

        • Nick Stokes
          Posted Aug 11, 2009 at 6:43 PM | Permalink

          Re: Nick Stokes (#137), Yes, that’s OK

          Having said that, perhaps you could turn your attention to Mann and Ammann’s vituperative criticism of detrending by von Storch and Zorita and explain why it was wrong in their application and right in the BS09 application.

          but now I need a citation.

          Steve:
          See: http://www.realclimate.org/index.php/archives/2006/04/a-correction-with-repercussions/ and http://www.climateaudit.org/?p=651 and refs therein.

        • Nick Stokes
          Posted Aug 12, 2009 at 3:45 AM | Permalink

          Re: Steve M(#137),
          OK, I’ve looked into the von Storch/Zorita issue. It’s obviously a heated debate from the past that I don’t have any wish to revive. But it is a very different situation. With BS09 etc we have a straightforward Fourier analysis, where the case for detrending is clearcut in terms of properly representing features of the data that are known to be inappropriately represented by sinusoids in time. You need to augment the set of basis functions, and detrending in effect adds the set of low order polynomials.

          With VZ, MBH et al, the issues are totally different. They are not representing a time series in sinusoids. Instead they are calibrating by correlating instrumental readings with proxies. If you detrend you answer a different question – what is the correlation between the residuals after detrending? Which question is appropriate depends entirely on what you are looking for. If you believe the trends should be included in the correlation, then detrending weakens the test. I believe this was the Ammann et al view. As I understand, VZ et al believed that some of the trend in the proxies may have been unrelated to climate, so testing the correlation of residuals was safer. I have not looked into this thoroughly, and don’t have a view as to who is right. It’s just a totally different aspect of detrending.

        • Nick Stokes
          Posted Aug 12, 2009 at 3:50 AM | Permalink

          Re: Nick Stokes (#145), I said Fourier when strictly it is of course a wavelet analysis. But the issues are the same, and equally disconnected from the use of detrending in correlation.

        • sky
          Posted Aug 11, 2009 at 5:56 PM | Permalink

          Re: PaulM (#109),

          Being a working stiff, I expect to see an apples-to-apples comparison of methodology. It’s fine if someone wants to explore the effects of a longer record, as well. But then you get pears thrown into the comparison. And with some contributors here, kiwi fruit is being thrown around.

  55. Carrick
    Posted Aug 10, 2009 at 5:17 PM | Permalink

    Nick, I understand the point of binning and placing the center frequency near the center of a bin when you have a narrow-band process, but when the peak width is as broad as we are seeing here, are you really accomplishing anything other than inserting noise by doing a linear interpolation?

    I guess that’s what I’m really driving at here… I don’t a) see how it is going to help (in my experience it won’t have much effect) and b) you may be causing more harm than good by using a very noisy interpolating function.

    • Nick Stokes
      Posted Aug 10, 2009 at 5:36 PM | Permalink

      Re: Carrick (#84),
      Carrick, maybe it often won’t matter. For example, annual data has a bin from 8 to 16 years, which the solar cycle fits into. But if you used monthly data, the cycle is about 132 months, and 128 is the bin edge, so the signal you want would fall between two bins. Also the “bin” edges are not well defined – components overlap.
      I don’t think the noise of interpolation is much of a problem – it’s very high frequency. For example, when S&W subdivide monthly data, that’s noise at over 100× the solar bin frequency, and the bandpass filter will fix it.

  56. Kenneth Fritsch
    Posted Aug 10, 2009 at 5:49 PM | Permalink

    I have not yet read the Scafetta and West papers of interest to this thread, and I do not come to this party well versed in wavelet analysis, at least at this point in time.

    What I do think I recognize (and it may be OT) from reviews of the paper is that, given the lack of understanding of the underlying “amplification” effects of TSI on regional and global temperatures, an analysis such as that undertaken by Scafetta and West (2006b) in matching TSI and temperature patterns is fraught with dangers of cherry picking the selection criteria.

    Based on the choices that were apparently available to Scafetta and West (in doing the wavelet analysis alone), I would be most interested in doing some sensitivity testing based on alternative choices, or in obtaining more detailed a priori justifications for the selections.

    • Mark T
      Posted Aug 10, 2009 at 6:59 PM | Permalink

      Re: Kenneth Fritsch (#86),

      I do not come to this party well versed in wavelet analysis

      For purposes of discrete data analysis, think in terms of a spectrogram, but with variable resolution in time and frequency. In general, low frequency components are resolved with poor time localization, but excellent frequency localization, and vice versa for high frequency components. The big to do is that wavelets have compact support, i.e., there is no assumption of periodicity.

      As for the specifics of the S&W implementation, I haven’t gotten far enough to speak intelligently on the matter (yet).

      Mark

      • Kenneth Fritsch
        Posted Aug 11, 2009 at 10:21 AM | Permalink

        Re: Mark T (#91),

        The big to do is that wavelets have compact support, i.e., there is no assumption of periodicity.

        Thanks, Mark, for attempting to teach an old dog new tricks. I get the distinct feeling from some comments on this thread that wavelet analysis is not so susceptible to arbitrary inputs. I do continue to have questions about the sensitivity of the S&W results to the input data and methods applied.

        I will limit my immediate analysis to observations such as Scafetta’s Italian accent (which I found added a lot of charm to his linked oral presentation) appearing differently between his posting at CA and at Roger Pielke Sr’s. Perhaps he is intending to have some fun at CA.

        • Mark T
          Posted Aug 11, 2009 at 12:39 PM | Permalink

          Re: Kenneth Fritsch (#116),

          I get the distinct feeling from some comments at this thread that wavelet analysis is not so susceptible to arbitrary inputs.

          I’m not sure if I’d go that far. Wavelet decompositions are still going to have a problem uncovering components with periodicity that is longer than the sample record, as well as components that have periodicity that is shorter than the shortest scale. A wavelet decomposition is basically successive channelization stages. Multiresolution analysis is what we (signal processing engineers) call it. The only “magic” is that the filter banks obey certain properties.
          .
          C. S. Burrus and R. A. Gopinath did a bunch of work developing wavelet functions for MATLAB in Introduction to Wavelets and Wavelet Transforms, Computational Mathematics Laboratory and Electrical and Computer Engineering Department, Rice University, Houston, TX, April 1993.
          .
          Oh, and consider this: the simplest (and first, circa 1909) wavelet basis is the Haar basis, which has as its discrete high and lowpass pairs [1, -1]/2 and [1, 1]/2 respectively. They constitute what is known as a quadrature mirror filter pair.
          .
          Mark
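
          The Haar pair quoted above can be exercised in a few lines. A sketch in Python/numpy (the toy data are mine) of one analysis stage with the [1, 1]/2 and [1, -1]/2 filters, followed by the perfect-reconstruction step:

```python
import numpy as np

x = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 2.0])

# One Haar analysis stage: lowpass [1, 1]/2 gives pairwise averages,
# highpass [1, -1]/2 gives pairwise half-differences (both downsampled by 2).
approx = (x[0::2] + x[1::2]) / 2.0
detail = (x[0::2] - x[1::2]) / 2.0

# Synthesis: even samples = approx + detail, odd samples = approx - detail.
x_rec = np.empty_like(x)
x_rec[0::2] = approx + detail
x_rec[1::2] = approx - detail
```

          Iterating the same split on the approximation gives the successive channelization stages described above.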

        • Kenneth Fritsch
          Posted Aug 11, 2009 at 4:00 PM | Permalink

          Re: Mark T (#117),

          A wavelet decomposition is basically successive channelization stages. Multiresolution analysis is what we (signal processing engineers) call it. The only “magic” is that the filter banks obey certain properties.

          Mark, as one who has a good understanding of wavelet analysis as applied to signal processing, what is your view of its use in finding patterns in TSI and temperatures? Where does the analogy apply and where might it break down? Can one obtain spurious results with wavelet analysis if it is applied to time series where incorrect assumptions have been made about the generation of the signal?

        • Mark T
          Posted Aug 11, 2009 at 8:35 PM | Permalink

          Re: Kenneth Fritsch (#127),

          Mark, as one who has a good understanding of wavelet analysis as applied to signal processing, what is your view of its use in finding patterns in TSI and temperatures? Where does the analogy apply and where might it break down?

          It is still a signal processing problem, regardless of what the source of the data is (and a time series problem to boot.) I have no problem with it though you must be aware that any time you are not sure what it is you are actually looking for, you have to take what you find with a grain of salt. If I am looking at a communication signal, I have a good idea what the waveform looks like even after it is distorted by the channel. How do we know what to expect in TSI/temperature series? We don’t, but if similarities show up in both, I would not have a problem if someone claimed a connection. I’d expect them to describe the relationship, however, and would also be wary of any cause-effect attribution without some really, really good evidence.

          Can one obtain spurious results with wavelet analysis if it is applied to time series where incorrect assumptions have been made about the generation of the signal?

          I think “spurious” in the sense of a statistical correlation is an incorrect usage of the word here. Note, however, that filtering does represent a correlation of the data with the impulse response of the filter itself. What do you mean by “incorrect assumptions have been made about the generation of the signal?” Basically, if the data has energy at the frequencies represented by the particular wavelet stage, it will be resolved. I’ve never really bought into the wavelet shape issue since “matching” is all about bandwidth anyway.
          .
          Mark
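
          The remark above that filtering represents a correlation of the data with the filter’s impulse response can be checked directly: FIR filtering is convolution with the impulse response h, and for a symmetric (linear-phase) h this coincides exactly with sliding correlation against h itself. A toy check in Python/numpy (the data and filter are mine):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 1.0])
h = np.array([0.25, 0.5, 0.25])  # a small symmetric lowpass (linear phase)

# Convolution flips h before sliding; correlation does not.
# For a symmetric h the two operations give identical outputs.
filtered = np.convolve(x, h, mode="valid")
correlated = np.correlate(x, h, mode="valid")
```

          For an asymmetric impulse response the two would differ by a time reversal of h, which is the only caveat to the correlation picture.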

        • Kenneth Fritsch
          Posted Aug 12, 2009 at 8:44 AM | Permalink

          Re: Mark T (#141),

          Thanks again Mark for taking the time to reply to my questions. I think the point that you make in this comment is what I was looking for:

          How do we know what to expect in TSI/temperature series? We don’t, but if similarities show up in both, I would not have a problem if someone claimed a connection. I’d expect them to describe the relationship, however, and would also be wary of any cause-effect attribution without some really, really good evidence.

          From your question here I see that my original question was poorly worded:

          I think “spurious” in the sense of a statistical correlation is an incorrect usage of the word here. Note, however, that filtering does represent a correlation of the data with the impulse response of the filter itself. What do you mean by “incorrect assumptions have been made about the generation of the signal?” Basically, if the data has energy at the frequencies represented by the particular wavelet stage, it will be resolved.

          What if one had a time series from a chaotic system and applied wavelet theory to it?

          What I sometimes see in pattern matching (not necessarily using wavelet analysis) is that the data set to be matched is cherry picked, or at least has the potential for such a selection, from several available series, and then any discrepancies are more or less hand-waved away. The end result can then be a reasonably nice-looking match, with which I have trouble for the reason that you alluded to in the first excerpt above. Is there something in the wavelet analysis that avoids or mitigates this problem?

        • Mark T
          Posted Aug 12, 2009 at 10:56 AM | Permalink

          Re: Kenneth Fritsch (#148),

          What if one had a time series from a chaotic system and applied wavelet theory to it?

          That’s what you would use it for, actually. As I hinted at, however, you have to be wary when you get “hits” that don’t repeat or are not otherwise regular. The same applies for Fourier analysis, too (which bender and I have discussed at length.) Even white noise will occasionally produce a match, further complicating the analyst’s life!

          Is there something in the wavelet analysis that avoids or mitigates this problem?

          Uh, not really, but each stage of the wavelet decomposition has a specific bandwidth, so you won’t “match” to a single pattern but a range of frequencies. This is not unlike a Fourier transform, btw, which has a range of frequencies in each “bin.” The difference, of course, is that the wavelet decomposition also provides time localization (as I described above) whereas a single Fourier Transform only has one time index: everywhere in the series.
          .
          I think, btw, that you would note that even what you are doing would result in a range of frequencies/signals that provide a result, too. The correlator receiver, useful for CDMA communication systems, is similar. Consider all the effort that went into finding codes that do NOT have a range of signals that correlate well with your desired signal!
          .
          Mark

  57. Colin
    Posted Aug 10, 2009 at 10:12 PM | Permalink

    I think I am always with you guys that posting relevant code is always the right thing to do, but it seems that the analysis of these papers has been much deeper than it would have been if Dr. Scafetta had simply posted the code to begin with. In his original response on Climate Science, one of Scafetta’s points was that Benestad and Schmidt never would have written their paper had they not been so cavalier as to run a pre-packaged program with a MODWT algorithm despite not understanding the basics of the algorithm itself (note I have no idea whether Benestad and Schmidt actually understand the algorithm or not). If this is one of Scafetta’s issues with Gavin, why should he supply the code here (where a large portion of the readers will also fail to understand the actual MODWT algorithm, and where Steve made an evidently simple scientific error in the original post)?

    I understand that posting the code would have prevented Gavin’s (and later Steve’s) failure to reproduce the results. But it also would have prevented some of the deeper conversation behind the code’s use in the first place. Of course, I like the idea of posting the code, because it facilitates analysis by those who already know all of the science behind the code, but are stifled in their analysis by the inability to reproduce the results.

    • steven mosher
      Posted Aug 10, 2009 at 11:32 PM | Permalink

      Re: Colin (#94),

      This is akin to saying that if Mann had posted his code, then Steve never would have sampled bristlecone pines with Mr. Pete. BTW, I proved Fermat’s last theorem but this comment section is too small to hold the proof. When did it become an option in science to not show one’s work? What role did copyright play in this? What role did IP? What role did funding sources play? What role did journals play? What role did the publish-or-perish mentality play?

      • Geoff Sherrington
        Posted Aug 11, 2009 at 4:40 AM | Permalink

        Re: steven mosher (#95),

        I solved Fermat’s last theorem, the one after the one you solved, but my proof was marginal.

  58. Bengt
    Posted Aug 11, 2009 at 12:39 AM | Permalink

    A model for how climate scientists could report comes from economics. William Nordhaus, the premier climate economist, maintains a completely open approach: http://www.econ.yale.edu/~nordhaus/homepage/DICE2007.htm . It is somewhat remarkable that economists dominate (a part of) the climate science community along the replicability dimension. Too bad for us economists that Steve did not pursue economics. No doubt, he would have appreciated the openness of Nordhaus and other economists who want to be taken seriously.

  59. MrPete
    Posted Aug 11, 2009 at 4:51 AM | Permalink

    A comment on freeing the code:

    It seems to me that we are witnesses to (and possibly facilitators of) a sea-change in work habits for scientists such as the genial Dr. Scafetta. Let’s believe Dr. Scafetta’s own statements that he has taken the code-publishing encouragements to heart.

    Why would he be reluctant to share his “real” code? He said it himself: it is a mess. Several here have interpreted that to mean “it is such a mess you would not be able to figure it out.”

    In my experience, there is another more likely reason: the code is such a mess that it is an embarrassment.

    I’m sure that from now on, Dr. Scafetta will write code with audience in mind. But his existing code was not created with that perspective.

    My question: what can we do to make this transition easier for amenable scientists such as Dr. Scafetta?

    It seems to me we could offer a valuable (one-time-per-taker) volunteer service: confidential code-cleanup.

    Purpose: without changing any calculations, your code will be given a basic beauty-treatment so that it can be made publicly available for comment.

    How: Steve McIntyre is a trusted broker who can approve a group of code experts willing to perform this service. You send them your code; they discuss/clean the code offline and send it back to you. In turn, once you agree that your code a) still represents your calculations correctly; b) is no longer an embarrassing pile of spaghetti…you will make the code publicly available.

    I’m quite certain there are plenty of folk here with the ability to do this for any of a number of computer languages. Those of us who are professional programmers often have access to high end code cleanup tools that can partially automate the process.

    Does this make sense? Dr. Scafetta, your thoughts would be appreciated on this.

    • Steve McIntyre
      Posted Aug 11, 2009 at 6:35 AM | Permalink

      Re: MrPete (#102),

      Pete, that’s a good point. We have a large number of software-type readers and I’m sure that we’d get a volunteer to act as a sort of editor. The idea would not be to change the “voice” of the script into production software, but to maintain the voice as that of a scientist, with the volunteer acting as a good editor.

      Personally, I’ve found it interesting to see how Ryan O structured his Steig code. His code was much more thoroughly commented than mine and included a lot more commenting at the very top of a script to say what was going on. But his scripts still have the “voice” of a working science script and not production software.

      It takes more time to do it like Ryan O does it, but now that I’ve accumulated so much code over the past few years, I’m trying to copy his style more and more.

      Other times, people have interesting tricks and skills. Nicholas was very sharp on how to scrape webpages; adapting some of his code has enabled me to turnkey many scripts. And once you start turnkeying scripts, it makes the scripts themselves like a very sophisticated hyperlink and gives you a different approach to presentation.

      Jeff Id has applied this approach and this has resulted in excellent presentations at his blog. Jeff had no particular need to learn R other than to facilitate this sort of public exchange, but did so with great effect.

      I wish Lucia would do this; it would be very consistent with her approach. On many occasions, she writes good posts where I’d like to handle the data right then, but don’t want to spend an hour locating the data, remembering how to extract it and doing the analysis.

      • steven mosher
        Posted Aug 11, 2009 at 10:01 AM | Permalink

        Re: Steve McIntyre (#106),

        Agreed. Also, a long time ago I asked Tammy to post code, but haven’t kept up with the badgering.

    • MetMole
      Posted Aug 11, 2009 at 6:45 AM | Permalink

      Re: MrPete (#102),

      Oh what a superb suggestion, MrPete.

      I suspected embarrassment on the part of the scientists too. Once one is aware of coding standards and the many reasons why they are necessary, being seen not to employ them becomes the programmer’s equivalent of the writer’s being seen to have no knowledge of grammar … or spelling.
      .
      Looks left. Looks right. Look ahead. Waits…. :)

  60. MetMole
    Posted Aug 11, 2009 at 5:47 AM | Permalink

    OFF TOPIC — a question of etiquette:

    I’ve heard of Nicolo Machiavelli and even read one of his books — yeah, really, and without moving my lips.

    Now I know almost next to nothing about the Italian language and had assumed by default, so to speak, that the name Nicola was the female equivalent of Nicolo. However, with my eagle eye (I keep one in my cheek pouch), I have noticed that when most others here refer to Dr Nicola Scafetta in such a way as to imply a particular sex to the good doctor, that sex is male.

    Steve: Male

  61. Posted Aug 11, 2009 at 8:30 AM | Permalink

    OFF TOPIC – Re: MetMole (#105), …
    The Renaissance philosopher was Niccolò Machiavelli (with the accent on the last “o”).
    The equivalent female name for Nicola is Nicoletta.
    Among the few most popular names for an Italian male are Andrea and Luca, used exclusively for a male. Even Maria, a female name in its essence (English: Mary), is sometimes put as a second name to a male.

    Steve: Enough on this.

  62. Carrick
    Posted Aug 11, 2009 at 8:56 AM | Permalink

    Nick Stokes:

    In his rebuttal at Pielke Sr, Scafetta observes that the BS09 smooth was derived by cyclic padding.
    It wasn’t.

    It’s absurd to make such a claim here. We are all aware there is no way you could know this without seeing Gavin’s code.

    I fully expect that BS made this error. I also agree with Steve that it probably—by luck only—had little effect.

    • Nick Stokes
      Posted Aug 11, 2009 at 2:33 PM | Permalink

      Re: Carrick (#112),

      It’s absurd to make such a claim here. We are all aware there is no way you could know this without seeing Gavin’s code.

      No, this is absurd. It implies that we can’t know anything without seeing the underlying code. And as people here complain, we never do get to see the code. So we can’t believe anything people say about their computational results? Ever?

      • Ryan O
        Posted Aug 11, 2009 at 2:40 PM | Permalink

        Re: Nick Stokes (#121),

        It implies that we can’t know anything without seeing the underlying code. And as people here complain, we never do get to see the code. So we can’t believe anything people say about their computational results? Ever?

        .
        We have a winner!
        .
        Replication is fundamental to science. If it can’t be replicated, it didn’t happen. If the original code is required for replication, then unless the code is provided, it didn’t happen. If the original code is not required for replication, well, at the end of the replication, guess what? You still have code.

        • Nick Stokes
          Posted Aug 11, 2009 at 2:50 PM | Permalink

          Re: Ryan O (#122), I’m in favour of being able to see the code. I’m just rejecting the proposition that I can’t say anything about BS09 unless I’ve seen the code. If that were the standard, there would be a lot of silence around here.

        • Ryan O
          Posted Aug 11, 2009 at 3:03 PM | Permalink

          Re: Nick Stokes (#124), No. The very reason CA is noisy is because code is not provided.

  63. Carrick
    Posted Aug 11, 2009 at 9:01 AM | Permalink

    Let me clarify that when I said “little effect” what I was really driving at was the fact that the answer that BS09 came up with had the correct downturn at the endpoint. That isn’t a “little effect” so much as an error that didn’t produce a wildly incorrect answer. “Little effect” in that sense… I apologize for the sloppy language.

  64. steven mosher
    Posted Aug 11, 2009 at 1:04 PM | Permalink

    www-stat.stanford.edu/~wavelab/Wavelab_850/wavelab.pdf

    Dr. S and others would do well to read this, especially the section called “The Scandal”,

    excerpted below.

    À la Recherche des Paramètres Perdus: Once, one of us read a paper on wavelets
    that was very interesting. He had a vague idea of what the author of the paper was
    doing and wanted to try it out. Unfortunately, from the paper itself he couldn’t
    figure out what filter coefficients, thresholds and similar tuning parameters were
    being used. He spoke to the author of the paper, who replied, “Well, actually, the reason
    we didn’t give many details in the paper was that we forgot which parameters gave
    the nice picture you see in the published article; when we tried to reconstruct that
    figure using parameters that we thought had been used, we only got ugly looking
    results.”

  65. Mark T
    Posted Aug 11, 2009 at 1:21 PM | Permalink

    The “scandal” has nothing really to do with wavelet analysis per se. It was merely an author who failed to record how he configured his analysis. It just so happened the method was wavelet analysis. It could have been true for any experimenter working with any method. That’s why we document our progress: reproducibility!
    .
    Mark

  66. Posted Aug 11, 2009 at 4:07 PM | Permalink

    Nick–
    At this point, I’m not sure what you are defending. Scafetta’s accusation pertains to the pink and blue lines. Those appear periodic.

    I have exchanged roughly 3 emails with Rasmus, about 7 emails with Gavin and a number with Nicola.

    My understanding is that after reading what Nicola wrote at Roger Sr.’s, Rasmus realized he had done what Nicola said, tweaked his code and repeated their computations with the non-periodic conditions. I may be mistaken about the precise nature and extent to which Rasmus mis-applied Scafetta’s method.

    Anyway, G&R feel confident that Scafetta’s method remains non-robust if they use the boundary conditions he uses. However, they are deferring in-depth discussion until they have access to Nicola’s code. If I understand correctly, under the circumstances, Gavin judges it less confusing to respond applying Scafetta’s exact methodology, so as to avoid the confusion that will ensue if even their new code has slight differences in methodology.

    This means everyone is going to have to wait.

    Quite honestly, I think BS09 did show a problem with SW, but on a point that isn’t being discussed much here. I am planning to do some tweaking of Rasmus’s code to explain why information in approximately 5 sentences in BS09 probably shows that SW’s method is likely to contain a sizeable amount of uncertainty – but maybe when I fiddle I’ll change my mind. I am horrible at R so this is going to take me a while.

    • Nick Stokes
      Posted Aug 11, 2009 at 4:36 PM | Permalink

      Re: lucia (#128), Lucia, Scafetta’s accusations are unclear. But the diagram that he gives implies that B&S used cyclic padding for their smooth. This is S8, which is not actually given by a curve in Fig 4, but by the grey bands. I quoted Steve on this, because he stated explicitly what most people have been taking as Scafetta’s meaning.
      The smooth is the key issue, because it contains the physical features that people have been talking about (eg “low solar values” here). Cyclic, or even reflection, padding would be a problem there. The pink and blue curves that you refer to are the bandpassed curves, which should not have trends.
      I asked Nicola above #42. He seems to confirm there that in fact he used MODWT, which is a filtered smooth but would require padding, while B&S used polynomial fitting at this stage. He made an attempt to impute a motive, but in fact it is just orthodox detrending.

      And yes, Gavin agreed that to duplicate what S&W had done they should have used reflection conditions, so I am not surprised that Rasmus is doing so. But in fact, B&S by detrending were using a better method. I think the absence of detrending in S&W is a big problem, even if slightly mitigated by using reflection.
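
As a rough numerical sketch of why the boundary choice matters at the endpoints (my own toy example in Python/numpy, not code from BS09 or SW, and the window length is chosen purely for illustration): pad a trending series cyclically and by reflection, then apply the same moving-average smooth.

```python
import numpy as np

# Toy series: linear trend plus an ~11-point cycle, loosely echoing
# a trending TSI/temperature record. Not real data.
n = 100
t = np.arange(n)
x = 0.05 * t + np.sin(2 * np.pi * t / 11)

w = 15          # moving-average window
h = w // 2      # pad half-width

def smooth(series):
    # Simple moving average over the padded series; "valid" mode
    # returns exactly n points, one per original sample.
    return np.convolve(series, np.ones(w) / w, mode="valid")

# Cyclic padding: splices the (high) end onto the (low) start and
# vice versa, creating a sawtooth jump at each boundary.
cyc = np.concatenate([x[-h:], x, x[:h]])
# Reflection padding: mirrors the data at each boundary, no jump.
ref = np.concatenate([x[h:0:-1], x, x[-2:-h - 2:-1]])

s_cyc = smooth(cyc)
s_ref = smooth(ref)

# Cyclic padding drags the endpoint of the smooth toward the level
# at the opposite boundary; reflection stays near the local level.
print(s_cyc[-1], s_ref[-1], x[-1])
```

The distortion is confined to roughly one window-width at each end, which is consistent with both claims in play: the boundary treatment genuinely matters for endpoint behaviour, yet may leave the interior of the smooth untouched.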

  67. Posted Aug 11, 2009 at 5:01 PM | Permalink

    Nick–

    I’m really having a hard time trying to figure out just what you are trying to defend in Gavin and Rasmus’s choice of periodic boundary conditions. Gavin & Rasmus aren’t disputing that they did not reproduce Nicola’s method. Their goal was to test Nicola’s method. I think they are wise enough to know that it’s not in their interest to post endless rationalizations for why it’s ok for the method they tested to differ from Nicola’s in the one way Nicola happened to notice, when their method actually differs in two ways, one of which Nicola did not notice.

    • Nick Stokes
      Posted Aug 11, 2009 at 5:31 PM | Permalink

      Re: lucia (#130), Lucia, I’m just defending my statement that you challenged (#108). B&S have been accused of using cyclic padding for their smooth, which would have been inappropriate. They didn’t.
      They have conceded that they didn’t do things exactly as S&W did (with the reasonable defence that S&W didn’t say what they did). And to emulate them, they should have. But what Scafetta said was

      Any person expert in time series processing can teach Benestad and Schmidt that it is not appropriate to impose a cyclical periodic mode to a non stationary time series such as the temperature or TSI records that present clear upward trends from 1900 to 2000.

      And that just isn’t what B&S did. They detrended, in the orthodox way.
      In fact, I wrongly said above that S’s accusation wasn’t clear. That fragment is very clear, and wrong.

  68. Steve McIntyre
    Posted Aug 11, 2009 at 5:18 PM | Permalink

    Lucia, I obviously support the exchange of code and strongly believe that such exchanges would eliminate controversy.

    Gavin’s recent reasoning is almost verbatim from my emails to Mann in 2003. You paraphrase Gavin:

    If I understand correctly, under the circumstances, Gavin judges it less confusing to respond applying Scafetta’s exact methodology, so as to avoid the confusion that will ensue if even their new code has slight differences in methodology.

    This is highly sensible. But see here for our Nov 11, 2003 email to Mann (post MM03):

    You have claimed that we used the wrong data and the wrong computational methodology. We would like to reconcile our results to actual data and methodology used in MBH98. We would therefore appreciate copies of the computer programs you actually used to read in data (the 159 data series referred to in your recent comments) and construct the temperature index shown in Nature (1998) (“MBH98”), either through email or, preferably through public FTP or web posting.

    or the next day:

    We are making a concerted effort to reconcile our results with your results and to avoid debate which is merely at cross purposes.  To accomplish this, as requested yesterday, we would appreciate a copy of the computer programs actually used to read in the 159 series and to carry out the temperature reconstruction in MBH98. I note that you did not reply to many of our concerns, but am focussing here on matters which can easily be resolved merely by releasing some text files. 

    Instead, we got totally stonewalled.

    Are attitudes changing on the Team? Or do they want code only when their own ox is gored?

    BTW Gavin has not replied to my email asking him to request Mann to provide code for the MBH99 confidence intervals.

    • Posted Aug 11, 2009 at 8:55 PM | Permalink

      Re: Steve McIntyre (#131),
      I wasn’t necessarily referring to Gavin giving out his and Rasmus’s code immediately. I think that’s his view about posting any long discussion of how the conclusions of his paper hold up once he and Rasmus are able to replicate Scafetta’s method. He’s done intermediate calculations based on what they now think Nicola did, but even this may differ from what Scafetta really did. I advised Gavin to ask Nicola for his code and he did. But Nicola has elected to provide it as part of a comment/reply.

      So… since Gavin should have the code fairly soon, he would rather wait, replicate Scafetta’s method exactly, and then discuss how the conclusions of BS09 hold up once he uses Scafetta’s exact method. Reporting on an intermediate method might just add to confusion.

      Anyway, that’s my understanding. If Gavin were here, he might correct me.

      I suspect in the end the public will have both Scafetta’s and Gavin/Rasmus’s code.

      • Posted Aug 11, 2009 at 9:05 PM | Permalink

        Re: lucia (#142),

        I looked for additional replies from Gavin on your blog threads and didn’t see any. It doesn’t matter, but I wanted to mention that I emailed Dr. Scafetta privately to give him my opinion that this was an excellent opportunity to take the high road. Hopefully, the code will be forthcoming soon.

  69. Posted Aug 11, 2009 at 5:52 PM | Permalink

    Dr. Scafetta has emailed me a link to a presentation he gave in February. He’s discovered a link between decadal temperature fluctuations and planetary motion. It’s a long video but very interesting in that he discusses the origin of the TSI data and a variety of issues with the collection and the reasons for the disputed trend by scientists.

    http://noconsensus.wordpress.com/2009/08/11/century-to-decade-climate-change-created-by-planetary-motion/

  70. JFD
    Posted Aug 11, 2009 at 7:39 PM | Permalink

    Maintain your ground, Nick. Your responses contain logic, at a minimum. If your responses represent reality then at some point someone, perhaps lucia, will “see your point” and illustrate why it is acceptable. If you are way off base, then you yourself will recognize it and can then offer a mea culpa.

  71. Carrick
    Posted Aug 11, 2009 at 8:01 PM | Permalink

    Nick Stokes:

    No, this is absurd. It implies that we can’t know anything without seeing the underlying code. And as people here complain, we never do get to see the code. So we can’t believe anything people say about their computational results? Ever?

    Nick you made a definitive statement about what B&S did or didn’t do without having the code in their analysis available.

    That is the absurd part. I trust Gavin implicitly, but people make mistakes.

  72. John S.
    Posted Aug 12, 2009 at 8:36 AM | Permalink

    Conscientious time-series analysts have their own version of the Hippocratic oath: Do no harm to the data. Trend removal, linear or polynomial, introduces artificial signal components with a negative sign. This may be prescribed for a strongly trending series in order to avoid an artificial sawtooth in the cyclical padding inherent in any DFT analysis. It is totally unnecessary, however, in bona fide bandpass filtering, which removes the lowest frequencies without assuming any arbitrary form and never extends beyond the boundaries of the data.

    It would be nice to see Scafetta provide the frequency response function of each bandpass he employed. It is too painful, however, to watch this thread descend into an uncomprehending mangle of analytic concepts. Adios!
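
To make the sawtooth point concrete (a minimal Python/numpy sketch of my own, not anything from the papers; the series and bin choices are arbitrary illustrations): the DFT treats a record as one period of a cycle, so a strong trend acts like a sawtooth and leaks power across the spectrum, while removing the trend first suppresses that leakage without touching the genuine oscillation.

```python
import numpy as np

# Toy record: linear trend plus a sine completing exactly 10 cycles,
# so the sine itself contributes no leakage of its own.
n = 256
t = np.arange(n)
x = 0.02 * t + np.sin(2 * np.pi * 10 * t / n)

def spectrum(series):
    # Amplitude spectrum, normalized by record length.
    return np.abs(np.fft.rfft(series)) / len(series)

raw = spectrum(x)
fit = np.polyval(np.polyfit(t, x, 1), t)   # least-squares line
detrended = spectrum(x - fit)

# Pick a bin well away from the signal (k = 10): the raw spectrum
# carries trend leakage there that the detrended one lacks.
k_off = 25
print(raw[k_off], detrended[k_off])
```

The genuine oscillation at bin 10 survives detrending essentially intact; only the spurious broadband contribution of the implied sawtooth is removed, which is the orthodox argument for detrending before a cyclic-boundary analysis.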

  73. Posted Aug 12, 2009 at 9:17 AM | Permalink

    JeffId–
    I communicated with Gavin and Rasmus by email. I have a current version of Rasmus’s code with the modifications based on what they think Nicola meant in Pielke’s thread. But… I promised not to circulate that, and I described what I hoped to explore using their code. I’m interested in their tests of Nicola’s method using synthetic solar and surface temperature series modeled as white noise, and I want to see what happens if I tweak that a bit. (I also explained I am not good with R, so it may take me a while.)
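
For flavor, here is the kind of toy white-noise test I have in mind (my own Python sketch with made-up window and threshold values; Rasmus’s actual code is in R and surely differs): smooth two unrelated white-noise “solar” and “temperature” series and see how often the smoothed pair correlates strongly anyway.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, w=21):
    # Moving average; "valid" mode avoids any boundary padding issue.
    return np.convolve(x, np.ones(w) / w, mode="valid")

n, trials = 400, 500
strong = 0
for _ in range(trials):
    solar = smooth(rng.standard_normal(n))   # independent noise
    temp = smooth(rng.standard_normal(n))    # independent noise
    if abs(np.corrcoef(solar, temp)[0, 1]) > 0.4:
        strong += 1

# Smoothing inflates apparent correlation between independent series,
# so a nontrivial fraction of pure-noise pairs look like a "match".
print(strong, "of", trials, "pure-noise pairs exceed |r| = 0.4")
```

The mechanism is just reduced effective degrees of freedom: the moving average makes each series strongly autocorrelated, so chance correlations between unrelated records become far more common than the raw sample size would suggest.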

  74. Calvin Ball
    Posted Aug 12, 2009 at 8:45 PM | Permalink

    Even white noise will occasionally produce a match, further complicating the analyst’s life!

    Which is how we get Allahfish and Mary on toast.

  75. Kenneth Fritsch
    Posted Aug 13, 2009 at 10:55 AM | Permalink

    Mark T, your replies have been helpful in pointing out my confusion and lack of understanding of wavelet analysis. I read the following three links, which helped me better understand wavelet analysis, appreciate why Scafetta and West might apply this type of analysis to TSI and temperature time series and, of course, better understand the comments that you have made previously in this discussion.

    Would you have any better references for a layperson attempting to understand enough about wavelet analysis to make better sense of the Scafetta and West paper?

    Can you link me to other climate studies where wavelet analysis was used?

    Once a wavelet analysis is carried out, what is the end game? Can you give examples? It appears from reviews that I have read that Scafetta and West were doing pattern matching of TSI and temperature proxies. Is that often part of the analysis in wavelet analyses? Obviously I need to read more on wavelet analysis.

    A Fifteen Minutes Introduction of Wavelet Transform and Applications, by Paul C. Liu:

    http://www.iaa.ncku.edu.tw/~jjmiau/15Wavelet.pdf

    A book review of “An introduction to wavelet analysis”, by David F. Walnut:

    http://www.math.uiowa.edu/~jorgen/walnut.pdf

    Wavelet analysis of chaotic time series by J.S. Murguía and E. Campos-Cantón:

    http://www.ejournal.unam.mx/rmf/no522/RMF52211.pdf

  76. snowmaneasy
    Posted Aug 18, 2009 at 1:00 PM | Permalink

    Any comments on Dr. Scafetta’s latest paper in the Journal of Atmospheric and Solar-Terrestrial Physics (3 August 2009), “Empirical analysis of the solar contribution to global mean air surface temperature change”?

    http://nl.sitestat.com/elsevier/elsevier-com/s?sciencedirect&ns_type=clickout&ns_url=http://www.sciencedirect.com/science?_ob=GatewayURL&_origin=IRSSCONTENT&_method=citationSearch&_piikey=S1364682609002089&_version=1&md5=b11a528b6b4f4723612f26f9355a7f4f
