Scripts and data for Lindzen and Choi 2011 are now online at CA here together with the original article.
It will take me a little while to get to this. The scripts are in IDL. Translations to R welcomed.
53 Comments
There is also the free version, GDL:-
http://en.wikipedia.org/wiki/GNU_Data_Language
michel,
regrettably, IDL implements more functions than GDL currently does. “JULDAY” is probably only one of them.
GDL> main
% Compiled module: MAIN.
% Compiled module: OBS.
% Compiled module: FILE_LINES.
% Compiled module: STRSPLIT.
% Compiled module: UNIQ.
% Compiled module: MEAN.
% OBS: Function not found: JULDAY
% Execution halted at: OBS 45 obs.pro
% MAIN 15 main.pro
% $MAIN$
GDL>
You can get the source code for JULDAY at:
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/julday.pro
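For anyone heading straight for the R translation instead, a minimal stand-in for JULDAY is easy to write; this is a sketch of mine (not from the L&C scripts), good enough when only day-resolution dates are needed:

# R stand-in for IDL's JULDAY(month, day, year).
# 2440588 is the Julian Day Number of 1970-01-01, R's Date origin.
julday <- function(month, day, year) {
  as.numeric(as.Date(sprintf("%04d-%02d-%02d", year, month, day))) + 2440588
}

julday(1, 1, 1985)  # 2446067, matching IDL's JULDAY(1,1,1985)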
Thanks for filling that gap. However, the next hole is TS_SMOOTH, and I guess it won't be the last. GDL is clearly less well equipped in terms of its library of functions at this stage, and would probably require more coding effort than porting to R.
Unfortunately, neither approach allows identical replication of L&C, but replication by similar means in another language should confirm the same result; and if there's a significant difference (not due to porting errors), that would be significant as well.
P. Solar,
I got the L&C-11 scripts running with gdl version 0.9.1 and closely reproduced at least “Fig.jpg”. Here is a list of urls for all needed IDL library routines:
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/julday.pro
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/label_date.pro
http://idlastro.gsfc.nasa.gov/ftp/pro/plot/legend.pro
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/regress.pro
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/timegen.pro
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/ts_coef.pro
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/ts_fcast.pro
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/ts_smooth.pro
http://astro.uni-tuebingen.de/software/idl/textoidl/textoidl.pro
http://coco.atmos.washington.edu/ion_script/gamap2/graphics/screen2jpg.pro
http://idl-coyote.googlecode.com/svn-history/r293/trunk/coyote/retired/tvread.pro
Put the above files into the L&C-2011 scripts directory; cd into this directory and put the patch below into a file named, say, “LC-11.diff”. Then apply the patch using the shell command
patch -p3 < LC-11.diff
One last thing: my gdl version behaves very strangely (spurious syntax error messages) when gdl is called with main.pro on the command line. These errors did not show up when “main” was called from the gdl prompt.
diff -ur a/lindzen_choi_2011/scripts/amip.pro b/lindzen_choi_2011/scripts/amip.pro
--- a/lindzen_choi_2011/scripts/amip.pro 2011-09-25 18:40:42.000000000 +0200
+++ b/lindzen_choi_2011/scripts/amip.pro 2011-09-27 20:06:10.000000000 +0200
@@ -3,7 +3,7 @@
ObsName='ERBE&CERES'
clim_period=59
tot_period=288
-path=['./amip_olr/','./amip_osr/']
+path=['../amip_olr/','../amip_osr/']
file=['ncar_ccsm3_0_20n20s.dat',$
'mpi_echam51_20n20s.dat',$
'iap_fgoals1_0_g1_20n20s.dat',$
diff -ur a/lindzen_choi_2011/scripts/legend.pro b/lindzen_choi_2011/scripts/legend.pro
--- a/lindzen_choi_2011/scripts/legend.pro 2011-08-21 16:37:24.000000000 +0200
+++ b/lindzen_choi_2011/scripts/legend.pro 2011-09-27 21:09:00.000000000 +0200
@@ -442,7 +442,7 @@
if vectorfont[i] ne '' then begin
; if (num eq 1) and vertical then xp = x + xt/2 ; IF 1, CENTERED.
xyouts,xp,yp,vectorfont[i],width=width,color=colors[i] $
- ,charsize=charsize,align=xalign,charthick = charthick,/norm,font=font
+ ,charsize=charsize,align=xalign,charthick = charthick,/norm;,font=font
xt = xt > width
xp = xp + width/2.
endif else begin
@@ -456,11 +456,11 @@
TEXT_ONLY:
if vertical and (vectorfont[i] eq '') and symline and (linestyle[i] eq -99) then x=x0 + xspacing
xyouts,x,y,delimiter,width=width,/norm,color=textcolors[i], $
- charsize=charsize,align=xalign,charthick = charthick,font=font
+ charsize=charsize,align=xalign,charthick = charthick;,font=font
x = x + width*xsign
if width ne 0 then x = x + 0.5*xspacing
xyouts,x,y,items[i],width=width,/norm,color=textcolors[i],charsize=charsize, $
- align=xalign,charthick=charthick,font=font
+ align=xalign,charthick=charthick;,font=font
x = x + width*xsign
if not vertical and (i lt (n-1)) then x = x+2*xspacing; ADD INTER-ITEM SPACE
xfinal = (x + xspacing*margin)
diff -ur a/lindzen_choi_2011/scripts/main.pro b/lindzen_choi_2011/scripts/main.pro
--- a/lindzen_choi_2011/scripts/main.pro 2011-09-25 18:40:40.000000000 +0200
+++ b/lindzen_choi_2011/scripts/main.pro 2011-09-27 22:50:22.000000000 +0200
@@ -1,4 +1,5 @@
pro main
+myctblack=0
; -----------------------------------------------------------------------------
; This is the main program calling all procedures, calculating feedbacks, and
@@ -24,7 +25,7 @@
; endfor
; ---------------
-; Display 1 - Timeseries
+print,"Display 1 - Timeseries"
; ---------------
window,1,xs=600,ys=500
loadct,5
@@ -32,19 +33,19 @@
!p.charsize=1.3
!P.background=255
dummy=label_date(date_format=('%y'))
- plot,jul,sst,color=!myct.black,$
+ plot,jul,sst,color=myctblack,$
xtitle='Year',ytitle='T (K)',$
xtickunits=['time'],xtickformat='label_date',xstyle=1,xminor=5
- plot,jul,replicate(0,258),color=!myct.black,yr=[-4,8],$
+ plot,jul,replicate(0,258),color=myctblack,yr=[-4,8],$
xtitle='Year',ytitle=textoidl('R (Wm^{-2})'),$
xtickunits=['time'],xtickformat='label_date',xstyle=1,xminor=5
oplot,jul,olr,color=112 ;red
oplot,jul,osr,color=47 ;blue
- legend,label=['LW','SW'],line=[0,0],lcolor=[112,47],$
- charsize=1.3,boxcolor=255,spacing=1.3
+ legend,['LW','SW'],line=[0,0],colors=[112,47],$
+ charsize=1.3,outline_color=255,spacing=1.3
; ---------------
-; Feedback analysis for ERBE and CERES;
+print, "Feedback analysis for ERBE and CERES"
; Call lc11.pro procedure, which include both LC11 method and simple regression.
; ---------------
; Call procedure for calculating climate feedback
@@ -55,8 +56,12 @@
LagDim,SlopDim,ConstDim,CorrDim,SlopDim2,ConstDim2,CorrDim2
; ---------------
-; Display 2 - Timeseries
- goto,jump
+print, "Display 2 - Timeseries"
+; goto,jump
+; goto not available in GDL outside of procedures, hence:
+if 0 ne 0 then begin
+
+
; ---------------
; SST timeseries graph with warming and cooling intervals
window,2,xs=600,ys=300
@@ -78,7 +83,7 @@
endfor
;
; ---------------
-; Display 3 - Feedback analysis for ERBE and CERES
+print,"Display 3 - Feedback analysis for ERBE and CERES"
; ---------------
window,3,xs=600,ys=600
!p.multi=[0,2,2,0,0]
@@ -90,12 +95,12 @@
xtitle='T (K)',ytitle='R (W/m2)' ;textoidl('R (Wm^{-2})')
oplot,findgen(10)-5,SlopDim2(5)*(findgen(10)-5)+ConstDim2(5)
if ConstDim2(5) ge 0 then begin
- legend,label=['y='+string(SlopDim2(5),format='(f4.1)')+'x+'$
- +strtrim(string(ConstDim2(5),format='(f4.1)'),1)],boxcolor=-5,charsize=1.3
+ legend,['y='+string(SlopDim2(5),format='(f4.1)')+'x+'$
+ +strtrim(string(ConstDim2(5),format='(f4.1)'),1)],outline_color=-5,charsize=1.3
endif
if ConstDim2(5) lt 0 then begin
- legend,label=['y='+string(SlopDim2(5),format='(f4.1)')+'x'$
- +strtrim(string(ConstDim2(5),format='(f4.1)'),1)],boxcolor=-5,charsize=1.3
+ legend,['y='+string(SlopDim2(5),format='(f4.1)')+'x'$
+ +strtrim(string(ConstDim2(5),format='(f4.1)'),1)],outline_color=-5,charsize=1.3
endif
f_sim= SlopDim2(5) ; Feedback from simple regression
; Scatter plots for Lindzen-Choi Method
@@ -105,14 +110,14 @@
xtitle='T (K)',ytitle='R (W/m2)';textoidl('R (Wm^{-2})')
oplot,findgen(10)-5,SlopDim(pos(0))*(findgen(10)-5)+ConstDim(pos(0))
if ConstDim(pos(0)) ge 0 then begin
- legend,label=['y='+string(SlopDim(pos(0)),format='(f4.1)')+'x+'$
+ legend,['y='+string(SlopDim(pos(0)),format='(f4.1)')+'x+'$
+strtrim(string(ConstDim(pos(0)),format='(f4.1)'),1)],$
- boxcolor=-5,charsize=1.3
+ outline_color=-5,charsize=1.3
endif
if ConstDim(pos(0)) lt 0 then begin
- legend,label=['y='+string(SlopDim(pos(0)),format='(f4.1)')+'x'$
+ legend,['y='+string(SlopDim(pos(0)),format='(f4.1)')+'x'$
+strtrim(string(ConstDim(pos(0)),format='(f4.1)'),1)],$
- boxcolor=-5,charsize=1.3
+ outline_color=-5,charsize=1.3
endif
f_lc=SlopDim(pos(0)) ; Feedback from LC11
; Lead-lag slopes
@@ -120,26 +125,28 @@
xtitle='Lagged month',xminor=2, $
ytitle='R/T (W/m2/K)';textoidl('R/T (Wm^{-2}K^{-1})')
oplot,LagDim,SlopDim2,linestyle=1
- legend,label=['LC','Simple'],line=[0,1],boxcolor=-5,charsize=1.3,spacing=2
+ legend,['LC','Simple'],line=[0,1],outline_color=-5,charsize=1.3,spacing=2
; Lead-lag correlations
plot,LagDim,CorrDim,xr=[-5,5],xs=1,yr=[-1,1],$
title='Lag Correlation',xtitle='Lagged month',xminor=2
oplot,LagDim,CorrDim2,linestyle=1
- legend,label=['LC','Simple'],line=[0,1],boxcolor=-5,charsize=1.3
+ legend,['LC','Simple'],line=[0,1],outline_color=-5,charsize=1.3
- jump: print,'Skip Display 2 and 3'
+;jump:
+ endif
+ print,'Skip Display 2 and 3'
; ---------------
-; Open AMIP data
+print, "Open AMIP data"
; Call amip.pro procedure, which include 11 AMIP model outputs
; ---------------
-; Call proceedure for reading AMIP model outputs
+print,"Call proceedure for reading AMIP model outputs"
amip,olr_amip,osr_amip
-; Generate julian day corresponding to AMIP outputs
+print,"Generate julian day corresponding to AMIP outputs"
julmo=timegen(start=julday(1,1,1985),final=julday(12,1,2008),units='M')
; ---------------
-; Display 4 - Timeseries for AMIP data
+print, "Display 4 - Timeseries for AMIP data -1"
; ---------------
title=['CCSM3','ECHAM5/MIP-OM','FGOALS-g1.0',$
'GFDL-CM2.1','GISS-ER','INM-CM3.0','IPSL-CM4','MRI-CGCM2.3.2',$
@@ -148,76 +155,86 @@
window,4,xs=800,ys=800
loadct,5
!p.multi=[0,2,7,0,0]
+
+ print, "Display 4 - Timeseries for AMIP data -2"
+
dummy=label_date(date_format=('%y'))
- plot,julmo,replicate(0,288),color=!myct.black,yr=[-8,8],$
+ plot,julmo,replicate(0,288),color=myctblack,yr=[-8,8],$
ytitle='R (W/m2)',$ ;textoidl('R (Wm^{-2})'),$
xticklen=0.1,xtickunits=['time'],xtickformat='label_date',$
xstyle=1,xminor=5,charsize=2,xmargin=[6,2],$
ystyle=1,yminor=2,yticks=2,ymargin=[1.9,0.3]
- legend,label=['ERBE&CERES'],$
- boxcolor=-5,charsize=1.3,spacing=1.4,halign=0.03,valign=0.91
+ legend,['ERBE&CERES'],$
+ outline_color=-5,charsize=1.3,spacing=1.4 ;,halign=0.03,valign=0.91
+
+print, "Display 4 - Timeseries for AMIP data -3"
oplot,jul,olr,color=112 ;red
oplot,jul,osr,color=47 ;blue
- legend,label=['LW','SW'],line=[0,0],lcolor=[112,47],$
- charsize=1,boxcolor=255,spacing=1
+ legend,['LW','SW'],line=[0,0],colors=[112,47],$
+ charsize=1,outline_color=255,spacing=1
+ print, "Display 4 - Timeseries for AMIP data -4"
for m=0,10 do begin
- plot,julmo,replicate(0,288),color=!myct.black,yr=[-8,8],$
+ print, "Display 4 - Timeseries for AMIP data -5"
+ plot,julmo,replicate(0,288),color=myctblack,yr=[-8,8],$
ytitle='R (W/m2)',$ ;textoidl('R (Wm^{-2})'),$
xticklen=0.1,xtickunits=['time'],xtickformat='label_date',$
xstyle=1,xminor=5,charsize=2,xmargin=[6,2],$
ystyle=1,yminor=2,yticks=2,ymargin=[1.9,0.3]
- legend,label=[title(m)],$
- boxcolor=-5,charsize=1.3,spacing=1.4,halign=0.03,valign=0.91
+ legend,[title(m)],$
+ outline_color=-5,charsize=1.3,spacing=1.4 ;,halign=0.03,valign=0.91
oplot,julmo,ts_smooth(olr_amip(*,m),3),color=112 ;red
oplot,julmo,ts_smooth(osr_amip(*,m),3),color=47 ;blue
endfor
; ---------------
-; Display 5 - Feedback analysis for AMIP 11 models
+print, "Display 5 - Feedback analysis for AMIP 11 models"
; ---------------
window,5,xs=800,ys=600
!p.multi=[0,4,3,0,0]
!x.margin=[7,2]
!y.margin=[3,2]
-; ERBE&CERES results
+print, "ERBE&CERES results"
+
pos=where(CorrDim eq max(CorrDim(5:10)))
plot,LagDim,SlopDim,xr=[-5,5],xs=1,yr=[-4,6],title ='ERBE&CERES',$
xtitle='Lagged month',xminor=2,charsize=2, $
ytitle='R/T (W/m2/K)';textoidl('R/T (Wm^{-2}K^{-1})')
oplot,LagDim,SlopDim2,linestyle=1
- legend,label=['LC','Simple'],line=[0,1],boxcolor=-5,charsize=1.3,spacing=1.4
- legend,label=['F_LC ='+string(SlopDim(pos(0)),format='(f4.1)'),$
+ legend,['LC','Simple'],line=[0,1],outline_color=-5,charsize=1.3,spacing=1.4
+ legend,['F_LC ='+string(SlopDim(pos(0)),format='(f4.1)'),$
'F_sim='+string(SlopDim2(5),format='(f4.1)')],$
- boxcolor=-5,charsize=1.3,spacing=1.4,halign=0.05,valign=0.95
-; 11 AMIP model results
+ outline_color=-5,charsize=1.3,spacing=1.4
+ ;,halign=0.05,valign=0.95
+
+print, '11 AMIP model results'
for m=0,10 do begin ; 11 models
olrmo=olr_amip(*,m) ; AMIP monthly LW flux from 1985
osrmo=osr_amip(*,m) ; AMIP monthly SW flux from 1985
-; Smoothing and scaling
+print, "Smoothing and scaling"
sstmo=ts_smooth(sst*0.78,3)
olrmo=ts_smooth(olrmo,3)
osrmo=ts_smooth(osrmo,3)
-; Call LC11.pro procedure
+print, "Call LC11.pro procedure"
; put 'olrmo' for LW, put 'osrmo' for SW, put 'olrmo+osrmo' for Net
fluxmo=olrmo
lc11,julmo,sstmo,fluxmo,int1,int2,dT,dTmax,dR,dRmax,$
LagDim,SlopDim,ConstDim,CorrDim,SlopDim2,ConstDim2,CorrDim2
;
- ; Lead-lag slopes
+print, "Lead-lag slopes"
pos=where(CorrDim eq max(CorrDim(5:10)))
plot,LagDim,SlopDim,xr=[-5,5],xs=1,yr=[-4,6],title =title(m),$
xtitle='Lagged month',xminor=2,charsize=2, $
ytitle='R/T (W/m2/K)';textoidl('R/T (Wm^{-2}K^{-1})')
oplot,LagDim,SlopDim2,linestyle=1
- ;legend,label=['LC','Simple'],line=[0,1],boxcolor=-5,charsize=1.3,spacing=1.4
- legend,label=['F_LC ='+string(SlopDim(pos(0)),format='(f4.1)'),$
+ ;legend,['LC','Simple'],line=[0,1],outline_color=-5,charsize=1.3,spacing=1.4
+ legend,['F_LC ='+string(SlopDim(pos(0)),format='(f4.1)'),$
'F_sim='+string(SlopDim2(5),format='(f4.1)')],$
- boxcolor=-5,charsize=1.3,spacing=1.4,halign=0.05,valign=0.95
+ outline_color=-5,charsize=1.3,spacing=1.4;,halign=0.05,valign=0.95
endfor
- screen2jpg,'Fig.jpg'
+ screen2jpg,'Fig-gdl.jpg'
end
diff -ur a/lindzen_choi_2011/scripts/obs.pro b/lindzen_choi_2011/scripts/obs.pro
--- a/lindzen_choi_2011/scripts/obs.pro 2011-09-25 18:40:42.000000000 +0200
+++ b/lindzen_choi_2011/scripts/obs.pro 2011-09-27 20:05:35.000000000 +0200
@@ -5,7 +5,7 @@
pro obs,jul,sst,sstmo,olr,osr
-path1='./obs/'
+path1='../obs/'
file1='oisst_36_20n20s_1985_1999.txt'
file2='amipsst_mon_20n20s_1985_2008.txt'
file3='erbe_s10n_36_20n20s_1985_1999.txt'
The last missing function is regress(). The L&C-2011 script seems to use an IDL-7.x version of regress(), which isn’t easily available on the net, but is contained as IDL source code within IDL installations. All other missing functions are available:
label_date.pro
legend.pro
textoidl.pro
ts_fcast.pro
julday.pro
ts_coef.pro
ts_smooth.pro
Since these files are mostly from older IDL versions, some minor editing of the L&C scripts is needed (such as adapting named parameters to the older definitions or replacing color="!myct.black" by color=0). If anyone understands the regress() call with respect to the parameters “Correlation” and “Const” well enough that the call could be adapted to use the older version available on the net, then a GDL version of the L&C scripts could become fully functional. It may also make it easier to develop and test the R version.
Current 7.x version of regress():
Result = REGRESS( X, Y, [, CHISQ=variable] [, CONST=variable] [, CORRELATION=variable] [, /DOUBLE] [, FTEST=variable] [, MCORRELATION=variable] [, MEASURE_ERRORS=vector] [, SIGMA=variable] [, STATUS=variable] [, YFIT=variable] )
http://star.pst.qub.ac.uk/idl/REGRESS.html
Obsoleted 5.4 Version of regress()
REGRESS(X, Y, Weights, Yfit, Const, Sigma, Ftest, R, Rmul, Chisq)
http://star.pst.qub.ac.uk/idl/Obs_in_54.html
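For an R port this particular obstacle largely disappears; a sketch (my assumption of how the single-predictor case maps to base R, not taken from lc11.pro) covering the CONST and CORRELATION outputs discussed above:

# Rough R equivalent of Result = REGRESS(x, y, CONST=const, CORRELATION=corr)
# for one independent variable.
regress1 <- function(x, y) {
  fit <- lm(y ~ x)  # ordinary least squares
  list(slope       = unname(coef(fit)["x"]),
       const       = unname(coef(fit)["(Intercept)"]),
       correlation = cor(x, y, use = "pairwise.complete.obs"))
}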
Here is the IDL 8 version, which should be compatible with IDL 7x
; $Id: //depot/idl/releases/IDL_80/idldir/lib/regress.pro#1 $ ; ; Copyright (c) 1982-2010, ITT Visual Information Solutions. All ; rights reserved. Unauthorized reproduction is prohibited. ;+ ; NAME: ; REGRESS ; ; PURPOSE: ; Perform a multiple linear regression fit. ; ; REGRESS fits the function: ; Y[i] = Const + A[0]*X[0,i] + A[1]*X[1,i] + ... + ; A[Nterms-1]*X[Nterms-1,i] ; ; CATEGORY: ; G2 - Correlation and regression analysis. ; ; CALLING SEQUENCE: ; ; Result = REGRESS(X, Y) ; ; INPUTS: ; ; X: The array of independent variable data. X must ; be dimensioned as an array of Nterms by Npoints, where ; there are Nterms coefficients (independent variables) to be ; found and Npoints of samples. ; ; Y: The vector of dependent variable points. Y must have Npoints ; elements. ; ; OUTPUTS: ; ; REGRESS returns a column vector of coefficients that has Nterms ; elements. ; ; KEYWORDS: ; ; CHISQ: Sum of squared errors divided by MEASURE_ERRORS if specified. ; ; CONST: Constant term. (A0) ; ; CORRELATION: Vector of linear correlation coefficients. ; ; COVAR: Covariance matrix of the coefficients. ; ; DOUBLE: if set, force computations to be in double precision. ; ; FTEST: The value of F for goodness-of-fit test. ; ; MCORRELATION: The multiple linear correlation coefficient. ; ; MEASURE_ERRORS: Set this keyword to a vector containing standard ; measurement errors for each point Y[i]. This vector must be the same ; length as X and Y. ; ; Note - For Gaussian errors (e.g. instrumental uncertainties), ; MEASURE_ERRORS should be set to the standard ; deviations of each point in Y. For Poisson or statistical weighting ; MEASURE_ERRORS should be set to sqrt(Y). ; ; SIGMA: The 1-sigma error estimates of the returned parameters. ; ; Note: if MEASURE_ERRORS is omitted, then you are assuming that the ; regression is the correct model. In this case, ; SIGMA is multiplied by SQRT(CHISQ/(N-M)), where N is the ; number of points in X. See section 15.2 of Numerical Recipes ; in C (Second Edition) for details. ; ; STATUS = Set this keyword to a named variable to receive the status ; of the operation. Possible status values are: ; 0 for successful completion, 1 for a singular array (which ; indicates that the inversion is invalid), and 2 which is a ; warning that a small pivot element was used and that significant ; accuracy was probably lost. ; ; Note: if STATUS is not specified then any error messages will be output ; to the screen. ; ; YFIT: Vector of calculated Y's. ; ; PROCEDURE: ; Adapted from the program REGRES, Page 172, ; Bevington, Data Reduction and Error Analysis for the ; Physical Sciences, 1969. ; ; MODIFICATION HISTORY: ; Written, DMS, RSI, September, 1982. ; Added RELATIVE_WEIGHT keyword W. Landsman August 1991 ; Fixed bug in invert Bobby Candey 1991 April 22 ; Added STATUS argument. GGS, RSI, August 1996 ; CT, RSI, March 2000: Fixed status flag. Cleaned up help. ; CT, RSI, July 2000: Change arguments to keywords. ; Add MEASURE_ERRORS [equivalent to 1/sqrt(Weights)], ; removes need for RELATIVE_WEIGHT. 
;- ; FUNCTION REGRESS,X,Y, $ weights_old,yfit_old,const_old,sigma_old, $ ; obsolete arguments ftest_old,r_old,rmul_old,chisq_old,status_old, $ ; obsolete arguments RELATIVE_WEIGHT=relative_weight, $ ; obsolete keyword CHISQ=chisq, $ CONST=const, $ DOUBLE=double, $ FTEST=ftest, $ CORRELATION=correlation, $ MCORRELATION=mcorrelation, $ MEASURE_ERRORS=measure_errors, $ SIGMA=sigma, $ STATUS=status, $ YFIT=yfit COMPILE_OPT idl2 ON_ERROR,2 ;Return to caller if an error occurs sy = SIZE(Y) ;Get dimensions of x and y. sx = SIZE(X) ndimX = sx[0] nterm = (ndimX EQ 1) ? 1 : sx[1] ; # of terms (coefficients) nptsX = sx[ndimX] ;# of observations (samples) npts = sy[1] ;# of observations (samples) IF (nptsX NE npts) THEN MESSAGE, $ 'X and Y have incompatible dimensions.' IF (ndimX EQ 1) THEN x = REFORM(x, 1, npts, /OVER) ; change X to a 2D array double = (N_ELEMENTS(double) GT 0) ? KEYWORD_SET(double) : $ ((SIZE(x,/TNAME) EQ 'DOUBLE') OR (SIZE(y,/TNAME) EQ 'DOUBLE')) one = double ? 1d : 1.0 two = double ? 2d : 2.0 IF (N_PARAMS() GT 2) THEN BEGIN ; old-style REGRESS (with arguments) IF (N_ELEMENTS(weights_old) NE npts) THEN MESSAGE, $ 'Weight has been replaced by MEASURE_ERRORS. See Online Help.' no_weights = KEYWORD_SET(relative_weight) weights = no_weights ? one : weights_old ; note there is no ^2 ENDIF ELSE BEGIN ; new style (with keywords) IF KEYWORD_SET(relative_weight) THEN $ MESSAGE,/INFO,'Keyword RELATIVE_WEIGHT is obsolete.' no_weights = N_ELEMENTS(measure_errors) NE npts ; note different meaning from Weights argument weights = no_weights ? one : 1/measure_errors^two ENDELSE sw = no_weights ? npts : TOTAL(weights, DOUBLE=double) ;sum of weights ymean = TOTAL(y*weights, DOUBLE=double)/sw ;y mean wgt = no_weights ? one : REBIN(TRANSPOSE(weights),nterm,npts) xmean = TOTAL(wgt*x,2, DOUBLE=double)/sw wmean = sw/npts wgt = TEMPORARY(wgt)/wmean ww = weights/wmean nfree = npts-1 ;degs of freedom sigmay = SQRT(TOTAL(ww * (y-ymean)^2)/nfree) ;weights*(y(i)-ymean) xx = x - REBIN(xmean,nterm,npts) ;x(j,i) - xmean(i) wx = TEMPORARY(wgt) * xx ;weights(i)*(x(j,i)-xmean(i)) sigmax = SQRT(TOTAL(xx*wx,2)/nfree) ;weights(i)*(x(j,i)-xm)*(x(k,i)-xm) array = (wx # TRANSPOSE(xx))/(nfree * sigmax #sigmax) array = INVERT(array, status) IF (status EQ 1L && ~ARG_PRESENT(status)) THEN BEGIN IF (ndimX EQ 1) THEN x = REFORM(x, /OVER) ; change X back to a vector MESSAGE, "Inversion failed due to singular array." END IF (status EQ 2L && ~ARG_PRESENT(status)) THEN BEGIN MESSAGE, /INFO, $ "Inversion used a small pivot element. Results may be inaccurate." endif correlation = (TEMPORARY(wx) # (y - ymean)) / (sigmax * sigmay * nfree) a = (correlation # array)*(sigmay/sigmax) ;get coefficients yfit = a # x ;compute fit const = ymean - TOTAL(a*xmean) ;constant term yfit = yfit + const ;add it in freen = npts-nterm-1 > 1 ;degs of freedom, at least 1. chisq = TOTAL(ww*(y - yfit)^2)*wmean ;weighted chi squared ; correction factor for no weights (see Num.Rec. sec 15-2) varnce = no_weights ? chisq/freen : one/wmean ; error terms sigma = SQRT(array[LINDGEN(nterm)*(nterm+1)]*varnce/(nfree*sigmax^2)) mult_covariance = TOTAL(a*correlation*sigmax/sigmay) ftest = (mult_covariance GE 1.) ? 
    1e6 : $
    mult_covariance/nterm / ((1.-mult_covariance)/freen)
  mcorrelation = SQRT(mult_covariance)
  IF (ndimX EQ 1) THEN x = REFORM(x, /OVER) ; change X back to a vector
  ; fill in obsolete arguments, if necessary
  IF (N_PARAMS() GT 3) THEN BEGIN
    status_old = status
    chisq_old = chisq/freen ; reduced chi-square
    rmul_old = mcorrelation
    r_old = correlation
    ftest_old = ftest
    sigma_old = sigma
    const_old = const
    yfit_old = yfit
  ENDIF
  RETURN, a
END

Nice. I haven’t had to learn a new language this week, I was starting to get bored 😉
Good work on getting all this. You certainly seem to have influence.
And thanks of course to L&C for the innovative work and for making this available.
Two questions:
1.)How readily does R work with complex variables?
Because the EE’s need to keep track of magnitude and phase of our non-constant signals, we tend to default to phasor notation to keep both in a single (complex) variable.
2.) Is it correct to say that ARMA processes have delayed attenuated echoes of prior time series values? Is this what they refer to with the keyword ‘autocorrelated’ ?
I’m not seeing how that would be a great tool for diagnosing behaviour where the ‘feedback’ changes a lot depending on the frequency of the signal.
I am not a very mathy person, and so my worldview is not overly cluttered with equations, and flawless jargon to go with them.
Hopefully, something herein is sufficiently WRONGO to elicit a didactic response.
TIA
RR
You might find this summary of value.
geek
accusation or confession?
HMMM could that be said for your last name too 🙂
both
Is the paper/article meant to be there as well? LC11.pdf looks to be the readme converted to a pdf.
Translation to Mathematica is also welcome. 😉
If someone wants a zipped version of the folder, including the LC11 paper itself, you get the link (and a few comments) at:
http://motls.blogspot.com/2011/09/lindzen-choi-2011-paper-data-scripts.html
My small contribution:
Note that array subscripts in IDL start at 0, while in R they start at 1. I’ve just added 1 to all array subscripts. This may be confusing for large pieces of code; if you have a better suggestion, please let me know. My comments in the code start with three hash marks (###).
I have used "=" instead of "<-"; it works the same and you have to type less.
amip.pro
# pro amip,olr,osr
ObsName='ERBE&CERES'
clim_period=59
tot_period=288
### you may want to give the full path here
# path=c('./amip_olr/','./amip_osr/') ### original script
path=c('../amip_olr/','../amip_osr/') ### for Linux, Unix, Mac
file=c('ncar_ccsm3_0_20n20s.dat',
       'mpi_echam51_20n20s.dat',
       'iap_fgoals1_0_g1_20n20s.dat',
       'gfdl_cm2_1_20n20s.dat',
       'giss_model_e_r1_20n20s.dat',
       'inmcm3_0_20n20s.dat',
       'ipsl_cm41_20n20s.dat',
       'mri_cgcm2_3_2a_20n20s.dat',
       'miroc3_2_hires_20n20s.dat',
       'miroc3_2_medres1_20n20s.dat',
       'ukmo_hadgem1_20n20s.dat')
# ------------------------------------------------------------------------
# Read flux
# ------------------------------------------------------------------------
olr=array(NaN,dim=c(tot_period,11))
osr=array(NaN,dim=c(tot_period,11))
# ---------------------------------
for (p in 1:2){ # 1:LW, 2:SW ### in IDl array subscripts start at 0, in R at 1
  for (m in 1:11){ # AMIP models
    # ---------------------------------
    ### lun in IDL and /g = /get_lun is a logical unit associated to the file
    ### for this case read.table is simpler and more efficient
    fpath=paste(path[p],file[m],sep='')
    fdata = read.table(fpath)
    dat = array(dim=c(length(fdata[,1]),5))
    dat[,1] = fdata[,1]             # Order
    dat[,2] = substr(fdata[,2],1,4) # year
    dat[,3] = substr(fdata[,2],5,6) # Month
    dat[,4] = 1                     # Day
    dat[,5] = as.numeric(fdata[,3])
    month = as.numeric(dat[,3])
    if (p == 1) olrmo=as.numeric(dat[,5])
    if (p == 2) osrmo=as.numeric(dat[,5])
    # Calculate anomaly
    # Note:
    # Lindzen and Choi (2011) references the anomalies to the monthly means
    # for 1985-1989.
    for (j in 1:12) {
      pos=which(month == j+1)
      pos2=which(month[1:60] == j+1)
      if (p == 1) olrmo[pos]=olrmo[pos]-mean(olrmo[pos2],na.rm=T)
      if (p == 2) osrmo[pos]=osrmo[pos]-mean(osrmo[pos2],na.rm=T)
    }
    if (p == 1) olr[1:length(olrmo),m]=olrmo
    if (p == 2) osr[1:length(osrmo),m]=osrmo
    # ---------------------------------
  }
}
# ---------------------------------

Nice.. run it through formatR
Oops,
here start the errors. In the last for loop, j is used only to get the month number (j+1), not for subscripting, so I should have left it as it was: (j in 0:11)
Note: if you hover over the code, in Firefox at least, you can copy it or see the code without line numbers.
corrected:
# pro amip,olr,osr
ObsName='ERBE&CERES'
clim_period=59
tot_period=288
### you may want to give the full path here
# path=c('./amip_olr/','./amip_osr/') ### original script
path=c('../amip_olr/','../amip_osr/') ### for Linux, Unix, Mac
file=c('ncar_ccsm3_0_20n20s.dat',
       'mpi_echam51_20n20s.dat',
       'iap_fgoals1_0_g1_20n20s.dat',
       'gfdl_cm2_1_20n20s.dat',
       'giss_model_e_r1_20n20s.dat',
       'inmcm3_0_20n20s.dat',
       'ipsl_cm41_20n20s.dat',
       'mri_cgcm2_3_2a_20n20s.dat',
       'miroc3_2_hires_20n20s.dat',
       'miroc3_2_medres1_20n20s.dat',
       'ukmo_hadgem1_20n20s.dat')
# ------------------------------------------------------------------------
# Read flux
# ------------------------------------------------------------------------
olr=array(NaN,dim=c(tot_period,11))
osr=array(NaN,dim=c(tot_period,11))
# ---------------------------------
for (p in 1:2){ # 1:LW, 2:SW ### in IDl array subscripts start at 0, in R at 1
  for (m in 1:11){ # AMIP models
    # ---------------------------------
    ### lun in IDL and /g = /get_lun is a logical unit associated to the file
    ### for this case read.table is simpler and more efficient
    fpath=paste(path[p],file[m],sep='')
    fdata = read.table(fpath)
    dat = array(dim=c(length(fdata[,1]),5))
    dat[,1] = fdata[,1]             # Order
    dat[,2] = substr(fdata[,2],1,4) # year
    dat[,3] = substr(fdata[,2],5,6) # Month
    dat[,4] = 1                     # Day
    dat[,5] = as.numeric(fdata[,3])
    month = as.numeric(dat[,3])
    if (p == 1) olrmo=as.numeric(dat[,5])
    if (p == 2) osrmo=as.numeric(dat[,5])
    # Calculate anomaly
    # Note:
    # Lindzen and Choi (2011) references the anomalies to the monthly means
    # for 1985-1989.
    for (j in 0:11) {
      pos=which(month == j+1)
      pos2=which(month[1:60] == j+1)
      if (p == 1) olrmo[pos]=olrmo[pos]-mean(olrmo[pos2],na.rm=T)
      if (p == 2) osrmo[pos]=osrmo[pos]-mean(osrmo[pos2],na.rm=T)
    }
    if (p == 1) olr[1:length(olrmo),m]=olrmo
    if (p == 2) osr[1:length(osrmo),m]=osrmo
    # ---------------------------------
  }
}
# ---------------------------------

The above code porting passes the parser but probably will not work as expected:
R> help ("==")
> x1 <- 0.5 - 0.3
> x2 <- 0.3 - 0.1
> x1 == x2
[1] FALSE
> x1
[1] 0.2
> x2
[1] 0.2
> x1 == x2
[1] FALSE
Yeah, figure that one out. 0.2 is not the same as 0.2 😕
Probably better to use
if (p<2) {
…
}else{
…
}
Sorry wordpress mangles that a bit.
R> help ("==")
> x1 <- 0.5 - 0.3
> x2 <- 0.3 - 0.1
> x1 == x2
[1] FALSE
> x1
[1] 0.2
> x2
[1] 0.2
> x1 == x2
[1] FALSE

Yeah, figure that one out. 0.2 is not the same as 0.2 😕
Probably better to use

if (p<2) {
...
}else{
...
}

OK, WordPress still mangles it when it's in a protected block. You've been warned; check out the help on relational operators for the full story.
It works fine here:
“R version 2.12.2 (2011-02-25)”
If wordpress doesn’t mangle it, this should allow grabbing the whole lot in one command.
Careful with the options or else you’ll get loads of other stuff too.
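Whatever the mangled one-liner was, the same can be done from within R; a sketch (my own, not the eaten command), using the list of library-routine URLs posted above:

# Fetch the needed IDL library routines into the current (scripts) directory.
urls <- c(
  "http://idlastro.gsfc.nasa.gov/ftp/ittvislib/julday.pro",
  "http://idlastro.gsfc.nasa.gov/ftp/ittvislib/ts_smooth.pro"
  # ... and the rest of the URLs from the list above
)
for (u in urls) download.file(u, destfile = basename(u), mode = "wb")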
You got me a bit lost. I think it works fine for that case and for non-integers as well:
(use [ sourcecode ] [ / sourcecode ] tags for the code)
> for (p in 1:2) {
+ print(paste('p =',p))
+ print('p==1?')
+ print(p==1)
+ print(paste('p =',p))
+ print('p==2?')
+ print(p==2)
+ }
[1] "p = 1"
[1] "p==1?"
[1] TRUE
[1] "p = 1"
[1] "p==2?"
[1] FALSE
[1] "p = 2"
[1] "p==1?"
[1] FALSE
[1] "p = 2"
[1] "p==2?"
[1] TRUE

> for (p in seq(0,1,0.5)) {
+ print(paste('p =',p))
+ print('p==0.5?')
+ print(p==0.5)
+ }
[1] "p = 0"
[1] "p==0.5?"
[1] FALSE
[1] "p = 0.5"
[1] "p==0.5?"
[1] TRUE
[1] "p = 1"
[1] "p==0.5?"
[1] FALSE

I mean [ sourcecode ] [ / sourcecode ] without spaces
sourcecode between squared brackets to open, same with slash before “s” to close
see the example and warning shown in the help:
help ("==") > x1 <- 0.5 - 0.3 > x2 <- 0.3 - 0.1 > x1 == x2 [1] FALSE > x2 <- 3-1 > x1 <- 5-3 > x1 == x2 [1] TRUEI don’t understand the subtle difference in what this assignment is actually assigning that makes the result false.
Neither do I think that having the code path change as a result of the value of data makes for writing reliable code.
It seems to be one of those R specialities like the else clause parsing.
}else{ ## this works OK
else { # this is not the else clause !
}

These are things that are likely to cause serious (and subtle) problems for those who have not flown some considerable air miles with R.
Apparently this is a “feature”.
It seems behaviour is stable for integers, so the “if p==1” etc should be OK. Apologies for the false alert.
For floating point equality, use all.equal(), as explained here.
Integer comparison does not present the same problem.
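A short illustration (a sketch of mine, not from the linked page):

x1 <- 0.5 - 0.3
x2 <- 0.3 - 0.1
x1 == x2                  # FALSE: the binary representations differ in the last bits
isTRUE(all.equal(x1, x2)) # TRUE: equal within the default numerical tolerance
# Comparisons against integer loop indices, like p == 1 above, are exact.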
That’s quite true. Thanks for the advice.
It should be changed to
etc.
That criticism should be directed to Lindzen and Choi, I tried to keep the translation as close as possible to the original code.
I wrote too much in a hurry
It seems that the correct form should be
a bit long-winded R in this case, I believe.
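The snippet was eaten again; presumably (my guess at the intended form, not the original) it was along the lines of:

# instead of
if (p == 1) olrmo = as.numeric(dat[,5])
# the tolerance-based, admittedly long-winded, form
if (isTRUE(all.equal(p, 1))) olrmo = as.numeric(dat[,5])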
Solar ” Neither do I think that having the code path change as a result of the value of data makes for writing reliable code.”
jgc
“That criticism should be directed to Lindzen and Choi, I tried to keep the translation as close as possible to the original code.”
No that is an issue for R developers. It is a serious defect in a language if it shows that kind of instability wrt the values of the data. The pedantic run-around that is necessary to get stable results seems like a sticking plaster workaround.
L&C did not use R so they are not concerned.
See my earlier post. I think your original code was safe since p is only 1 or 2 and I think the problem only exists for (some) real values.
Having drawn attention to this potentially serious problem in porting this to R , I suggest we don’t get bogged down in discussing this further.
My bottom line is that your code was correct.
IDL actually should have the same issues when run on the same hardware. Any language will because the limitation is in the hardware. Not seeing the problems doesn’t necessarily mean they aren’t there.
NASA has a page devoted to IDL and floating point issues:
http://idlastro.gsfc.nasa.gov/idl_html_help/Accuracy_and_Floating_Point_Operations.html
It includes code for checking accuracy and errors.
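In R, the corresponding machine limits are exposed directly; for example (my aside, not from the NASA page):

.Machine$double.eps       # about 2.22e-16: double precision near 1
(0.5 - 0.3) - (0.3 - 0.1) # a few 1e-17: both values are "0.2" to within that precision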
Replacing TS_SMOOTH, or examining what it does, would seem critical here since it uses some autoregression magic to end-fill the data window to avoid shortening the periods.
I’m instantly wary of this sort of thing, especially here where the data periods are relatively short due to the selection process.
How much of the “data” being analysed is end-fill projections, and what do those projections look like in relation to the true data?
The full description of the routine is here:
http://idlastro.gsfc.nasa.gov/ftp/ittvislib/ts_smooth.pro
; This function computes central, backward or forward moving-averages
; of an n-element time-series (X). Autoregressive forecasting and
; backcasting is used to extrapolate the time-series and compute a
; moving-average for each point of the time-series. The result is an
; n-element vector whose type is identical to X.
Thanks, I’ve seen what it does.
L&C are only smoothing over three points, so there is one made-up point at each end. Trouble is, some of their data segments are only a few months long.
My approach with this is: why use artificial data? If it does not make any significant difference, why use it? If it does make a difference, DON’T use it.
At the very least it will affect the R2 of the result in an artificial way.
Unfortunately I don’t have access to IDL so until it is simplified for GDL or ported to R I can’t investigate further.
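To see exactly which points a 3-point smooth has to invent, a quick check in R helps (a sketch, not the IDL routine); stats::filter simply leaves the end points as NA instead of forecasting them:

x <- c(1, 2, 4, 3, 5)                     # toy segment
stats::filter(x, rep(1/3, 3), sides = 2)  # NA 2.33 3.00 4.00 NA
# The two NAs are the points TS_SMOOTH fills by autoregressive forecasting and
# backcasting; on a segment only a few months long that is a sizeable fraction.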
Please see my reply above (https://climateaudit.org/2011/09/25/lindzen-choi-2011/#comment-305404)
Best
Thanks, sourcing those routines has been a huge help.
However, I’m having trouble running the patch.
When it was in the same dir, patch -p3 could not find the files (typical patch runaround). I took the diff file up one level and used -p2, and it now finds them. However, the patch fails straight away.
patching file scripts/amip.pro
patch: **** malformed patch at line 6: ObsName=’ERBE&CERES’
My guess is the diff options used to create the file.
I started to do it by hand but there’s quite a lot in there and you’ve already done the hard work so it seems silly to repeat it.
Can you see what is needed?
TIA.
OK, this seems to be spaces getting stripped out by WP at the beginning of lines.
I tried correcting by hand but I’m still getting errors. This is not a reliable way of doing it.
could you repost and wrap it with [ sourcecode ] [ / sourcecode ] tags (without spaces).
Thanks.
OK, first testing sourcecode tags
a b c d
e f g h
i j k l m >
Sourcecode tags don’t work either; leading spaces still appear to be stripped.
Try this link instead: http://www.pastebin.ch/6669/txt
Hmm, Pastebin.ch adds an additional carriage return to each line, even in the plain text version. If this should become a problem with patch, check https://gist.github.com/1247625, and download it either as a tar.gz archive or use git clone git://gist.github.com/1247625.git. Besides, it would be very nice to have an upload directory at climateaudit.info, and even nicer if Mr. Pete could set up a git repository for such collaborative efforts.
Well I’m getting somewhere now. I still had a dozen or so hunks that failed in main.pro that I did by hand from the rejects. That looks good.
Some remaining issues in textoidl.
thanks for all your efforts.
BTW the tags worked on the last post there and no space gets lost.
😉
P. Solar,
seems like you have a stale version of textoidl which uses textable. Maybe wget produced something along the lines of “textoidl.pro.1” because another textoidl.pro version already existed on your disk? Anyway, simply remove all calls to textoidl and replace them with plain strings (like ‘W/m^2’), since textoidl doesn’t work as expected anyway.
The rejects appear a bit strange. I tested the patch locally before uploading it. Did you use the patch from gist.github.com?
Re sourcecode tags, probably because I used angle brackets instead of squared brackets.
Btw, I’ve much beter version ready, which does also account for a severe GDL bug ( “PLOTS destroys plot layout with !P.MULTI – ID: 3323178”), fixes the color scheme and also prints the currently skipped plots. I’ll post this as soon as I’ve figured out why XTICKFORMAT isn’t working as expected.
Fine, I got clean copy of everything including github patch and it applied perfectly. Many thanks.
If you suss the other probs please post when you update.
P.Solar, I’ve posted a turnkey shell script, yet it landed in the moderation queue due to the URLs contained therein.
Well Steve, I don’t have any idea what your life victory conditions are, but you have started to get people to deposit their data and mathematics in public databases.
That is one hell of a victory.
A day later, my shell script still does not show up here. Until it does, just use the copy at https://gist.github.com/1252987 instead.
I’ve ported the obs() procedure to R. obs() now returns a list of two time series, because sst,olr,osr on the one hand and sstmo on the other are of different length. When comparing these time serires to the LC11 GDL output, only small difference show up. The function can be found at h ttps://github.com/hm2/lc11/R. You may also use git clone git://github.com/hm2/lc11 to get the complete environment in a few seconds; the GDL port is also contained therein.
##################################################################### # Ported IDL procedure: obs,jul,sst,sstmo,olr,osr # Read 36-day mean and monthly mean observed data, and make outputs: # jul: Julian day, sst: SST, olr: LW flux, osr: SW flux # # See https://climateaudit.org/2011/09/25/lindzen-choi-2011/ # # # R function obs() # # Parameters: none # # Return: # The R - version of "obs()" returns a list containing two # irregularily spaced time series named ts1 and ts2. Both # cover the same time span, but ts2 has more datapoints in # between. # # $ts1 : sst,olr and osr, indexed by "jul" of type zoo # $ts2 : only sstmo, indexed by "julmo" of type zoo # # Usage example: # devAskNewPage(T) # data <- obs() # plot(data$ts1) # plot(data$ts2) # # Bugs : # - Since the original version does not return julmo, but does return # sstmo, there might be a problem with the original script? # - Spectral darkening for single days might be due to a typo? # - Spectral darkening corrections do not cover the whole time range # # Author: # Hugo M # # License: # GPL as far as the port of the original IDL scripts to R is concerned. # Original copyrights may belong to Lindzen and Choi # #####################################################################+ obs <- function() { julday <- function(month,day,year) { julian(ISOdate(year=year, month=month,day=day,hour=0,tz="GMT")) } ref <- c(1,37,73,109,145,181,217,253,289,325) # DayOfYear for 36-day mean # --------------- # Read 36-day mean SST for 1985-1999 # 147 lines # --------------- # Read data file # 0=DayOfYear, 1=year, 2=Month, 3=Day, 4=SST dat <- read.table('../obs/oisst_36_20n20s_1985_1999.txt', col.names=c('DayOfYear','Year','Month','Day','SST')) sst36 <- dat$SST # Calculate anomaly # Note: Lindzen and Choi (2011) references the anomalies to the # 36-day means for 1985-1989. for ( r in ref ) { idx <- dat$DayOfYear == r midx <- idx & dat$Year %in% 1985:1989 sst36[idx] <- dat$SST[idx] - mean(dat$SST[midx],na.rm=T) } jul36 <- julday(dat$Month,dat$Day,dat$Year) # --------------- # Read monthly mean SST for 1985-2008 # --------------- # data example # 1 198501 26.767 # 2 198502 26.999 # 3 198503 27.270 dat <- read.fwf('../obs/amipsst_mon_20n20s_1985_2008.txt', col.names=c('Order','Year','Month','SST'), widths=c(4,-2,4,2,-4,6)) # Calculate anomaly. Note: # Lindzen and Choi (2011) references the anomalies to # the monthly means for 1985-1989. sstmo <- dat$SST for ( m in 1:12 ) { idx <- dat$Month == m midx <- idx & dat$Order %in% 1:60 sstmo[idx] <- dat$SST[idx] - mean(dat$SST[midx],na.rm=T) } julmo <- as.numeric(julday(dat$Month,1,dat$Year)) stopifnot(length(sst36) == 147) stopifnot(length(jul36) == 147) stopifnot(length(sstmo) >= 288) stopifnot(length(julmo) >= 288) # Combined 36-day and monthly mean for 1985-2008 # sst36: 1985.1-1999.12 (150 36-d mon) + 2000.1-2008.12 (108 mon) # sstmo: Monthly mean for 1999.10-2008.12 sst <- c(sst36,sstmo[178:288]) # Combined Julian day # jul36: 36-day mean for 1985.1-1999.9 # julmo: Monthly mean for 1999.10-2008.12 jul <- c(jul36,julmo[178:288]) # --------------- # Read 36-day mean ERBE flux for 1985-1999 # Nan values exist, 147 lines # --------------- # Read data file dat <- read.table('../obs/erbe_s10n_36_20n20s_1985_1999.txt', col.names=c('DayOfYear','Year','Month','Day','LW','SW'), na.strings='-NaN') # Calculate anomaly # Lindzen and Choi (2011) references the anomalies to the 36-day means # for 1985-1989. 
olr36 <- dat$LW osr36 <- dat$SW for ( r in ref ) { idx <- dat$DayOfYear == r midx <- idx & dat$Year %in% 1985:1989 olr36[idx] <- dat$LW[idx] - mean( dat$LW[midx],na.rm=T ) osr36[idx] <- dat$SW[idx] - mean( dat$SW[midx],na.rm=T ) } # --------------- # Read monthly mean CERES flux for 2000-2008 # -999 values exist # --------------- # data example: # 183 200003 254.748 89.582 # 184 200004 253.510 88.463 # 185 200005 257.127 87.050 # 186 200006 258.042 89.873 dat <- read.fwf('../obs/ceres_es4_mon_20n20s_2000_2008.txt', widths=c(4,-2,4,2,-3,7,-3,7), col.names=c('Order','Year','Month','LW','SW')) dat$jd <- julday(dat$Month,1,dat$Year) dat$SW[dat$SW == -999] <- NA dat$LW[dat$LW == -999] <- NA # CERES shortwave flux scaling due to spectral darkening # (Mattews et al. 2005 SPIE) warning("please check if setting values for spectral darkening for single days") warning("Spectral darkening corrections do not cover the whole time range") idx <- dat$jd == julday(03,01,2000) dat$SW[idx] <- dat$SW[idx] * 1.003 idx <- dat$jd >= julday(05,01,2000) & dat$jd <= julday(09,01,2000) dat$SW[idx] <- dat$SW[idx] * 1.001 idx <- dat$jd == julday(10,01,2000) dat$SW[idx] <- dat$SW[idx] * 1.002 idx <- dat$jd >= julday(11,01,2000) & dat$jd <= julday(02,01,2001) dat$SW[idx] <- dat$SW[idx] * 1.003 idx <- dat$jd == julday(04,01,2001) dat$SW[idx] <- dat$SW[idx] * 1.006 idx <- dat$jd >= julday(05,01,2001) & dat$jd <= julday(08,01,2001) dat$SW[idx] <- dat$SW[idx] * 1.007 idx <- dat$jd == julday(09,01,2001) dat$SW[idx] <- dat$SW[idx] * 1.008 idx <- dat$jd >= julday(10,01,2001) & dat$jd <= julday(02,01,2002) dat$SW[idx] <- dat$SW[idx] * 1.007 idx <- dat$jd == julday(03,01,2002) dat$SW[idx] <- dat$SW[idx] * 1.009 idx <- dat$jd >= julday(04,01,2002) dat$SW[idx] <- dat$SW[idx] * 1.011 # Calculate anomaly # Lindzen and Choi (2011) references the anomalies to the monthly means # for 2000-2008. (??? /hm) warning("dat[1:60] does not span 2000-2008, but mar 2000 - feb 2005") olrmo <- dat$LW osrmo <- dat$SW for ( m in 1:12 ) { idx <- dat$Month == m midx <- idx & dat$Year %in% 2000:2005 & dat$Order <= 242 # 03/2000 - 02/2005 olrmo[idx] <- dat$LW[idx] - mean( dat$LW[midx],na.rm=T ) osrmo[idx] <- dat$SW[idx] - mean( dat$SW[midx],na.rm=T ) } # Combined 36-day and monthly mean for 1985-2008 stopifnot(length(olr36) == 147) stopifnot(length(olrmo) == 258-153+1) # 1985.1-1999.12 (150 36-d mon) + 2000.1-2008.12 (108 mon) olr <- replicate(258,as.numeric(NA)) olr[1:147] <- olr36 # 36-day mean for 1985.1-1999.8 olr[153:258] <- olrmo # Monthly mean SST for 2000.3-2008.12 # 1985.1-1999.12 (150 36-d mon) + 2000.1-2008.12 (108 mon) osr <- replicate(258,as.numeric(NA)) osr[1:147] <- osr36 # 36-day mean for 1985.1-1999.8 osr[153:258] <- osrmo # Monthly mean SST for 2000.3-2008.12 require(zoo) warning("length(sstmo) == ",length(sstmo), " is greater than length(jul) == ", length(jul)) as.date <- function(jd) as.Date(jd, origin=as.POSIXct("1970-01-01", tz="GMT")) list(ts1 = zoo(data.frame(sst=sst,olr=olr,osr=osr),as.date(jul)), ts2 = zoo(data.frame(sstmo=sstmo),as.date(julmo))) }I’ve ported lc11.pro to R too. Rests to find an R equivalent for TS_SMOOTH(), or port ts_smooth.pro. See h ttps://github.com/hm2/lc11
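Until ts_smooth.pro itself is ported, a crude stand-in may be enough to exercise the rest of the pipeline. A sketch (my own; it pads by repeating the end values rather than by IDL's autoregressive forecast/backcast, so results near the ends will differ):

# 3-point centred moving average with simple end-value padding
ts_smooth3 <- function(x) {
  n <- length(x)
  padded <- c(x[1], x, x[n])
  as.numeric(stats::filter(padded, rep(1/3, 3), sides = 2))[2:(n + 1)]
}

ts_smooth3(c(1, 2, 4, 3, 5))  # 1.33 2.33 3.00 4.00 4.33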
Dear Steve,
Did you ever get around to examining the LC11 method? If it helps you may also be interested in Frankignoul, C. 1999: A Cautionary Note on the Use of Statistical Atmospheric Models in the Middle Latitudes: Comments on “Decadal Variability in the North Pacific as Simulated by a Hybrid Coupled Model”, J. Clim., 12:1871-1872. Also, papers that cite Frankignoul 1999.
One Trackback
[…] Steve just announced via Climate Audit that he was given all the files by Richard and his colleague and you may download the data files […]