I think I see a mistake in your algebra. dim(X) = (n,m). The idea of truncation is that only a few of the eigenvalues and corresponding eigenvectors are retained. In the matrix algebra, this means that the inner sums are truncated, but not the outer dimensions. Therefore dim(Vhat^T) = (k,m), or equivalently dim(Vhat) = (m,k), not (k,k).
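A quick numerical check of those shapes (a sketch with numpy; n, m, k are the dimensions used above, the variable names are my own):

```python
import numpy as np

n, m, k = 50, 20, 5          # n samples, m variables, k retained modes
X = np.random.default_rng(0).standard_normal((n, m))

# Full SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Truncation cuts the inner sums: keep only the first k singular values/vectors
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

print(Vtk.shape)    # (k, m) -- i.e. dim(Vhat^T) = (k, m)
print(Vtk.T.shape)  # (m, k) -- i.e. dim(Vhat)   = (m, k)

# The truncated reconstruction keeps the outer dimensions (n, m)
Xk = Uk @ np.diag(sk) @ Vtk
print(Xk.shape)     # (n, m)
```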

The next point I am not 100% sure about, but here goes. I think the rotation matrix R has dim(R) = (m,m). Remember that the eigenvectors get rotated in such a way that they stay orthogonal. That means that two eigenvectors, multiplied componentwise and summed across all m values, give zero for any two different values of k, the eigenvalue and eigenvector index. The only way this can hold is if dim(R) = (m,m).
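That orthogonality argument can be checked numerically. A sketch, where R is a random (m,m) orthogonal matrix built via QR (the construction is illustrative, not taken from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 20
X = rng.standard_normal((n, m))

# Eigenvectors of the (m,m) matrix X^T X; columns of V are orthonormal
_, V = np.linalg.eigh(X.T @ X)

# A random (m,m) orthogonal matrix: Q from the QR factorization
R, _ = np.linalg.qr(rng.standard_normal((m, m)))

W = R @ V  # rotated eigenvectors, still columns of an (m,m) matrix

# Multiplied and summed across all m values: still zero for k1 != k2
k1, k2 = 3, 7
print(abs(np.sum(W[:, k1] * W[:, k2])))  # ~0 up to rounding
```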

The reason I am not 100% sure is that I have not yet found a paper where these things are written down mathematically. I agree with some of the previous comments that a mathematically detailed description has the best chance of being unambiguous. Like it or not, the computer IS doing math. If we don’t understand these things mathematically, then we don’t understand what the computer is doing. I applaud you for trying to bring math to the discussion.

Are the authors not allowed to put equations in these papers? I find verbal descriptions of algorithms not very helpful. Just complaining.

http://www.uwlax.edu/faculty/will/svd/compression/index.html would help a lot. And those relevant papers should be made freely available for everyone. Something the IPCC should do instead of having press conferences on a monthly basis..

I remember taking the detection theory class a few years ago (Van Trees, Detection, Estimation, and Modulation Theory, Vol. 1) and, as I recall, there’s a problem at the end of Chapter 4 that was a back-door introduction to the standard Kalman filter. I had already taken a class that introduced it years earlier, but we worked the problem and Ziemer asked, “Do you know what you just solved?” We were clueless. After he told us, I asked, “All this does is find the time-varying mean, right?” Yup… I was just glad to be done with that class anyway, as it was tough.

Mark

The equation is simply:

So the more eigenvalues are retained, the closer the reconstruction approximates an OLS reconstruction – and the more low-frequency variation is lost.
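That trade-off in the reconstruction error is easy to demonstrate on synthetic data. A sketch (the data and variable names are made up for illustration): error falls monotonically as more eigenvalues are retained, and at k = m the truncation recovers the matrix exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 10
X = rng.standard_normal((n, m)) @ rng.standard_normal((m, m))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Reconstruction error shrinks as k grows; at k = m it is ~0
errs = []
for k in range(1, m + 1):
    Xk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    errs.append(np.linalg.norm(X - Xk))

assert all(e1 >= e2 for e1, e2 in zip(errs, errs[1:]))
print(f"k=1 error {errs[0]:.2f}, k={m} error {errs[-1]:.2e}")
```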

http://www.cse.ogi.edu/PacSoft/projects/sec/wan01b.ps

Unscented Kalman filter would be my number one candidate if I had to work with that data.

Which is, btw, an online (adaptively tracking) method of signal extraction. 🙂

Some form of Kalman filter is what I’m probably going to implement in my current radar problem. We’re only worried about detection at the moment, however, so I haven’t looked into it. I spent a little time researching the KF and EKF a few years ago, but never paid much attention to the UKF (more complex than I needed at the time).
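For what it’s worth, the “time-varying mean” view from the earlier comment is easy to see in a minimal scalar Kalman filter. A sketch, not the UKF, and not any particular radar implementation; the signal model and noise variances q and r are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# True state: a slowly drifting mean; observations are noisy samples of it
T = 200
truth = np.cumsum(0.05 * rng.standard_normal(T))
z = truth + 0.5 * rng.standard_normal(T)

q, r = 0.05**2, 0.5**2   # process / measurement noise variances (assumed known)
x, p = 0.0, 1.0          # state estimate and its variance
est = []
for zk in z:
    # Predict: random-walk model, so the mean carries over and variance grows
    p += q
    # Update: blend prediction and measurement via the Kalman gain
    kg = p / (p + r)
    x += kg * (zk - x)
    p *= (1 - kg)
    est.append(x)

est = np.array(est)
# The filtered track sits closer to the truth than the raw observations
print(np.mean((est - truth)**2), np.mean((z - truth)**2))
```

Running it online, one sample at a time as above, is exactly the adaptively-tracking behavior mentioned for the UKF link.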

Mark

Mark
