Multivariate Calibration

In a calibration problem we have accurately known data values (X) and responses to those values (Y). The responses are scaled and contaminated by noise (E), but are easier to obtain. Given the calibration data (X, Y), we want to estimate new data values (X’) when we observe a response Y’. Using Brown’s (Brown 1982) notation, we have the model

[tex]Y=\textbf{1}\alpha ^T + XB + E[/tex] (1)

[tex]Y'=\alpha ^T + X'^T B + E'[/tex] (2)


where the sizes of the matrices are Y (nXq), E (nXq), B (pXq), Y’ (1Xq), E’ (1Xq), X (nXp) and X’ (pX1). [tex]\textbf{1}[/tex] is a column vector of ones (nX1). This is a bit less general than Brown’s model (only one response vector for each X’). n is the length of the calibration data, q the length of the response vector, and p the length of the unknown X’. For example, if Y contains proxy responses to global temperature X, p is one and q is the number of proxy records.

In the following, it is assumed that the columns of E are zero-mean, normally distributed vectors, and that the rows of E are uncorrelated. (This assumption would be contradicted by red proxy noise.) The (qXq) covariance matrix of the noise is denoted by G. In addition, the columns of X are centered and have average sum of squares one.
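As a concrete illustration, model (1) and the assumptions above can be simulated in a few lines of numpy. The sizes, the noise level and the parameter values below are my own arbitrary choices for the sketch, not taken from any real proxy data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 79, 1, 8                  # hypothetical sizes

alpha = rng.normal(size=q)          # true intercepts (hypothetical)
B = rng.normal(size=(p, q))         # true slopes (hypothetical)

# Columns of X centered, with average sum of squares one (as assumed above)
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))

# Noise E: rows uncorrelated, columns zero-mean normal; here G = 0.25 * I
E = 0.5 * rng.normal(size=(n, q))

# Model (1): Y = 1 alpha^T + X B + E
Y = np.ones((n, 1)) * alpha + X @ B + E
```

Red (autocorrelated) proxy noise would be simulated by filtering E along the rows, which is exactly what the independence assumption above rules out.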

Classical and Inverse Calibration Estimators

The classical estimator of X’ (CCE (Williams 69), indirect regression (Sundberg 99), inverse regression (Juckes 06)) is obtained by deriving the ML estimator with known [tex]B[/tex] and [tex]G[/tex] and then replacing [tex]B[/tex] by [tex]\hat{B}[/tex] and [tex]G[/tex] by [tex]\hat{G}[/tex], where

[tex]\hat{B}=(X^TX)^{-1}X^TY[/tex] (3a)

[tex]\hat{\alpha}^T=(\textbf{1}^T \textbf{1})^{-1}\textbf{1}^TY[/tex] (3b)


[tex]\hat{G}=(Y_c^TY_c-\hat{B}^TX^TY_c)/(n-p-q)[/tex] (4)

([tex]Y_c=Y-\textbf{1}\hat{\alpha}^T[/tex], i.e. centered Y), yielding the CCE estimator

[tex]\hat{X}'=(\hat{B} S^{-1}\hat{B}^T)^{-1}\hat{B}S^{-1}(Y'^T-\hat{\alpha})[/tex] (5)


[tex]S=Y_c^TY_c-\hat{B}^TX^TY_c[/tex] (6)
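A numpy sketch of the estimation equations (3a)-(6) and the CCE estimate (5), applied to simulated data (sizes, noise level and the true X’ are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 79, 1, 8
alpha = rng.normal(size=q)
B = rng.normal(size=(p, q))
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))
Y = np.ones((n, 1)) * alpha + X @ B + 0.5 * rng.normal(size=(n, q))

alpha_hat = Y.mean(axis=0)                      # eq (3b)
Yc = Y - alpha_hat                              # centered Y
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)       # eq (3a)
S = Yc.T @ Yc - B_hat.T @ (X.T @ Yc)            # eq (6)
G_hat = S / (n - p - q)                         # eq (4)

# A new response Y' generated at a (hypothetical) true X'
X_true = np.array([1.5])
Y_new = alpha + X_true @ B + 0.5 * rng.normal(size=q)

# eq (5): CCE estimate of X'
Sinv = np.linalg.inv(S)
X_cce = np.linalg.solve(B_hat @ Sinv @ B_hat.T,
                        B_hat @ Sinv @ (Y_new - alpha_hat))
```

Since the columns of X are centered, regressing on Y or on the centered Y_c in (3a) gives the same [tex]\hat{B}[/tex].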

Another way to go is ICE (inverse calibration estimator (Krutchkoff 67), direct regression (Sundberg 99)): regress X directly on Y,

[tex]\hat{\hat{X}}'^T=(Y'-\hat{\alpha}^T)(Y_c^TY_c)^{-1}Y_c^TX[/tex] (7)
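The corresponding sketch for ICE (7), on the same kind of simulated data (again with hypothetical sizes and noise level):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 79, 1, 8
alpha = rng.normal(size=q)
B = rng.normal(size=(p, q))
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))
Y = np.ones((n, 1)) * alpha + X @ B + 0.5 * rng.normal(size=(n, q))

alpha_hat = Y.mean(axis=0)
Yc = Y - alpha_hat
Y_new = alpha + np.array([1.5]) @ B + 0.5 * rng.normal(size=q)

# eq (7): regress X on Y in the calibration period,
# then apply the fitted regression to the new response Y'
X_ice = (Y_new - alpha_hat) @ np.linalg.solve(Yc.T @ Yc, Yc.T @ X)
```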

Note that nobody has yet said that these estimators are optimal in any sense. It turns out that if we have special prior knowledge of X’ (Xs and Ys sampled from a normal population), ICE is optimal.

An important note (yet without proof here) is that with ICE the sample variance of the reconstruction in the calibration period will be smaller than the sample variance of the calibration X, and with CCE larger. In the absence of noise, ICE and CCE (naturally) yield the same result. Update: see Gerd’s link, and also note that ICE is a matrix weighted average between CCE and the zero matrix (Brown 82, Eq 2.21).
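This variance behaviour is easy to check numerically. In the sketch below (simulated data, hypothetical sizes), the in-sample ICE fit is a projection of X onto the column space of Y_c, so its sum of squares can never exceed that of X, while for p=1 the in-sample CCE fit always comes out larger:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 79, 1, 8
alpha = rng.normal(size=q)
B = rng.normal(size=(p, q))
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))
Y = np.ones((n, 1)) * alpha + X @ B + 0.5 * rng.normal(size=(n, q))

alpha_hat = Y.mean(axis=0)
Yc = Y - alpha_hat
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)
S = Yc.T @ Yc - B_hat.T @ (X.T @ Yc)
Sinv = np.linalg.inv(S)

# In-sample ICE fit: projection of X onto col(Yc) -> variance shrinks
X_fit_ice = Yc @ np.linalg.solve(Yc.T @ Yc, Yc.T @ X)

# In-sample CCE fit: eq (5) applied to each calibration-period row of Y
M = B_hat @ Sinv @ B_hat.T
X_fit_cce = Yc @ Sinv @ B_hat.T @ np.linalg.inv(M)

print((X_fit_ice ** 2).sum(), (X ** 2).sum(), (X_fit_cce ** 2).sum())
```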

Confidence Region for X’

Following Brown, we have a [tex]100(1-\gamma)[/tex] per cent confidence region: all X’ such that

[tex](Y'^T-\hat{\alpha}-\hat{B}^TX')^TS^{-1}(Y'^T-\hat{\alpha}-\hat{B}^TX')/\sigma ^2(X')\leq (q/v)F(\gamma)[/tex] (8)

where [tex]F(\gamma)[/tex] is the upper [tex]100\gamma[/tex] per cent point of the standard F-distribution on q and v=(n-p-q) degrees of freedom, and

[tex]\sigma ^2(X')=1+1/n+X'^T(X^TX)^{-1}X'[/tex] (9)
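A sketch of evaluating the membership condition (8)-(9) for a candidate X’, using scipy’s F quantile. I assume γ=0.05 (a 95 per cent region); the data and sizes are again hypothetical:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n, p, q = 79, 1, 8
alpha = rng.normal(size=q)
B = rng.normal(size=(p, q))
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))
Y = np.ones((n, 1)) * alpha + X @ B + 0.5 * rng.normal(size=(n, q))

alpha_hat = Y.mean(axis=0)
Yc = Y - alpha_hat
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)
S = Yc.T @ Yc - B_hat.T @ (X.T @ Yc)
Y_new = alpha + np.array([1.5]) @ B + 0.5 * rng.normal(size=q)

v = n - p - q
gamma = 0.05                            # 95 per cent region (assumed)
F_crit = f_dist.ppf(1 - gamma, q, v)    # upper 100*gamma per cent point

def region_stat(x_prime):
    """Left-hand side of (8), using sigma^2(X') from (9)."""
    r = Y_new - alpha_hat - B_hat.T @ x_prime
    sigma2 = 1 + 1 / n + x_prime @ np.linalg.solve(X.T @ X, x_prime)
    return (r @ np.linalg.solve(S, r)) / sigma2

# X' is inside the region when region_stat(X') <= (q/v) * F_crit
print(region_stat(np.array([1.5])) <= (q / v) * F_crit)
```

The region is whatever set of X’ satisfies the inequality; as discussed below, it need not be a bounded interval.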

The form of this confidence region is very interesting, and it is important to note that letting [tex]\gamma[/tex] approach one the region degenerates to the CCE estimate [tex]\hat{X}'[/tex]. Update 2: the central point of the region is NOT (AFAIK for now ;) ) the ML estimate, and the relation of the central point to CCE is, as per Brown,

[tex]C^{-1}D[/tex] (10), where

[tex]C=\hat{B}S^{-1}\hat{B}^T-(q/v)F(\gamma)(X^TX)^{-1}[/tex] (11)


[tex]D=\hat{B}S^{-1}(Y'^T-\hat{\alpha})[/tex] (12).
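The centre (10)-(12) can be computed directly; note that setting F(γ) to zero in (11) collapses the centre exactly to the CCE estimate (5), which is easy to verify numerically (simulated data, hypothetical sizes):

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n, p, q = 79, 1, 8
alpha = rng.normal(size=q)
B = rng.normal(size=(p, q))
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)
X = X / np.sqrt((X ** 2).mean(axis=0))
Y = np.ones((n, 1)) * alpha + X @ B + 0.5 * rng.normal(size=(n, q))

alpha_hat = Y.mean(axis=0)
Yc = Y - alpha_hat
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)
S = Yc.T @ Yc - B_hat.T @ (X.T @ Yc)
Sinv = np.linalg.inv(S)
Y_new = alpha + np.array([1.5]) @ B + 0.5 * rng.normal(size=q)

v = n - p - q
F_crit = f_dist.ppf(0.95, q, v)

D = B_hat @ Sinv @ (Y_new - alpha_hat)                                   # eq (12)
C = B_hat @ Sinv @ B_hat.T - (q / v) * F_crit * np.linalg.inv(X.T @ X)   # eq (11)
center = np.linalg.solve(C, D)                                           # eq (10)

# With F(gamma) -> 0, C reduces to B_hat S^-1 B_hat^T,
# so the centre coincides with the CCE estimate (5)
X_cce = np.linalg.solve(B_hat @ Sinv @ B_hat.T, D)
```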

Often calibration residuals are used to generate CIs for proxy reconstructions. We’ll see what is missing in that case:
I simulated proxy vs. temperature cases with q=40, n=79, and SNR=1 and SNR=0.01. With SNR=1 we get nice CIs (which agree quite well with the calibration residuals), but when the SNR gets lower the confidence region grows rapidly, quite soon becoming open from the upper side! Yet, in the latter case the calibration residuals indicate relatively low noise. The dangerous situation is when the true X’ is greater than the calibration X (the very thing hockey sticks are trying to prove wrong).
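The region-opening behaviour can be read off the leading coefficient of the quadratic in X’ defined by (8): for p=1 the region is a bounded interval only when C of (11) is positive, and at low SNR C typically goes negative, leaving the region open. A sketch (my own simulation setup, not the post’s exact one; SNR here means the ratio of signal to noise standard deviation per proxy, with γ=0.05 assumed):

```python
import numpy as np
from scipy.stats import f as f_dist

def leading_coef(snr, seed=0):
    """Leading coefficient C (eq 11) for a simulated calibration with
    q=40 proxies, n=79 calibration points, p=1, unit noise variance."""
    rng = np.random.default_rng(seed)
    n, p, q = 79, 1, 40
    X = rng.normal(size=(n, p))
    X = X - X.mean(axis=0)
    X = X / np.sqrt((X ** 2).mean(axis=0))
    B = np.full((p, q), snr)                 # slope = SNR, noise sd = 1
    Y = X @ B + rng.normal(size=(n, q))      # alpha = 0 for simplicity
    Yc = Y - Y.mean(axis=0)
    B_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    S = Yc.T @ Yc - B_hat.T @ (X.T @ Yc)
    v = n - p - q
    F_crit = f_dist.ppf(0.95, q, v)
    C = B_hat @ np.linalg.solve(S, B_hat.T) \
        - (q / v) * F_crit * np.linalg.inv(X.T @ X)
    return C.item()

c_hi, c_lo = leading_coef(1.0), leading_coef(0.01)
print(c_hi, c_lo)   # c_lo typically negative: region open from above
```

When C is not positive, the inequality (8) is satisfied by arbitrarily large X’, i.e. the region is unbounded; this is the “open from the upper side” behaviour described above, and nothing in the calibration residuals warns about it.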

Above: SNR=1, Below: SNR=0.01



  1. Direct usage of calibration residuals for estimating confidence intervals is a quite dangerous procedure.
  2. The assumptions of ICE simply do not hold in proxy reconstructions.


Brown 82: Multivariate Calibration, Journal of the Royal Statistical Society, Ser. B, Vol. 44, No. 3, pp. 287-321

Williams 69: Regression methods in calibration problems. Bull. ISI., 43, 17-28

Krutchkoff 67: Classical and inverse regression methods of calibration. Technometrics, 9, 425-439

Sundberg 99: Multivariate Calibration – Direct and Indirect Regression Methodology

Juckes 06: Millennial temperature reconstruction intercomparison and evaluation

