Truncated SVD and Borehole Reconstructions

In recent discussions of Steig’s Antarctic reconstruction, one of the interesting statistical issues is how many principal components to retain. As so often with Team studies, Steig provided no principled reasoning for his selection of 3 PCs, his statements about their supposed physical interpretation were untrue and, from some perspectives, the choice of 3 seems opportunistic.

As we’ve discussed previously, the issue of PC retention was a battleground MBH issue and was never resolved in a principled way. After the fact, Mann adduced a retention strategy that was not described in the text and could not be evidenced in other networks. Wahl and Ammann invented an ad hoc argument that the MBH result “converged” (as long as the bristlecones were in); this “convergence” argument is not one that appears in any literature by known statisticians. Although Wegman described this method as having “no statistical integrity”, that did not prevent its sanctification by IPCC AR4 through their benediction of Wahl and Ammann 2007. I won’t review this long and sordid history as we’ve discussed it on many occasions in the past.

Reading some of the interesting posts by the two Jeffs and Ryan O at Jeff Id’s blog on truncated total least squares, I recalled that borehole inversion used truncated SVD. PhilB had done some work on emulating borehole inversion calculations a couple of years ago. I corresponded with him about this at the time, but never figured out how the calculations actually worked; I did recall that the results were very sensitive to the number of retained PCs.

I thought that it would be interesting to re-visit this topic, in part to get another perspective on retention issues in a completely different context and in part to try to get a better handle on borehole reconstructions which we’ve barely touched on over the past few years.

Unfortunately it took me a while to figure out how these calculations worked. While I think that I can replicate some examples, I’m not 100% sure that I’ve got it right. As usual, I’ve placed materials online, in case readers can improve on what I’ve set out here.

There is an excellent data set of borehole readings at WDCP-Paleo indexed here. Data listings can be accessed from links at the bottom of the page, which in turn link to the data in a somewhat annoying html format, though the data can be scraped from the html pages. I’ve placed online a read function keyed by borehole id, which will be illustrated below. It is set up for North American listings, but can easily be modified for other locations (for other locations, you’ll need to check that the calls are lowercase).

Many of the borehole locations are “holes of opportunity” from hard rock mineral exploration in mining camps in northern Ontario and Quebec; many sites are familiar names to me. I might even have visited some of the sites at some point in my life.

Tamino, who stoutly defended NASA’s inability to locate post-1991 data from stations in Peru and Bolivia, had a post on boreholes a while ago, in which his type example was borehole CA-001-0 in Ottawa. This proved an odd choice as a type case, since boreholes are theoretically supposed to be away from cities. For the benefit of Tamino and readers unfamiliar with Canadian geography, Ottawa is the capital of Canada. Being from Toronto, I concede urban status to Ottawa somewhat reluctantly, but must do so. Like other urban centers in North America, it’s not an obvious location for mineral exploration, and Tamino surprised me a little here. However, Tamino’s readers need not fear that the Canadian federal government is going to be displaced by a mine: I presume that either this borehole was put down for some purpose other than mineral exploration or the location in the database is wrong.

Borehole inversion methods have a long history. The researcher hopes to reconstruct the surface temperature history from a set of depth-temperature measurements using the heat equation. The problem turns out to be very “ill-conditioned” (the literature derives from numerical analysis, rather than statistics though the issues are similar). Therefore, various regularization methods have been adopted over the years, of which the most common appears to be truncated SVD.

Hugo Beltrami’s website has many papers on the topic, most of which seem to describe the same methodology in more or less similar terms. Beltrami provides equations and examples, but it still took a while to replicate his methodology and I’m not 100% sure that I’ve got it right, though I think I’ve got much of it. As a benchmark case, I eventually settled on borehole ca-8813 at Belleterre, Quebec, illustrated in Figure 6 of Beltrami et al 1992.

The idea behind borehole inversion (expressed here in matrix terms rather than the continuous expressions of the papers) is:
T_z = T_0 + q_0 z/\lambda + E \tau
where
T_z is the downhole temperature profile,
T_0 is the long-term surface temperature, q_0 the surface heat flow density and \lambda the thermal conductivity,
E is a matrix derived from the heat equation (see Beltrami’s articles for the derivation) and
\tau is the surface temperature history, expressed typically as a vector (step function).
E[j,k] = \mathrm{erfc}\left( \frac{z_j}{2\sqrt{\kappa\, t_k}} \right) - \mathrm{erfc}\left( \frac{z_j}{2\sqrt{\kappa\, t_{k+1}}} \right)
where
erfc is the complementary error function,
\kappa is the thermal diffusivity (here converted to years) and
t is the sequence of time intervals (the k-th column of E corresponding to the interval from t_k to t_{k+1}).

The idea is to estimate the history \tau from T_z by matrix inversions, the mechanics of which are perhaps best seen in the script below.
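
Anticipating the script below (and using my notation, not Beltrami’s), the truncated-SVD estimate of the coefficients with r retained singular vectors can be written

\hat{\beta}_r = \sum_{i=1}^{r} \frac{u_i^T Q}{d_i}\, v_i , \qquad \hat{Q}_r = A \hat{\beta}_r

where A is the design matrix assembled below (a column of ones, the heat-flow term and the matrix E), Q is the measured downhole profile, u_i, v_i and d_i are the singular vectors and singular values of A, and the trailing entries of \hat{\beta}_r are the estimated history \tau.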

First, download the data for ca-8813 from the WDCP-Paleo site using the tools placed online.

source("http://www.climateaudit.info/scripts/borehole/functions.borehole.txt")
belle=read_borehole(id="ca-8813")
Data=belle$bore
z=Data$z ;range(z) # 25.98 389.17
Q=T_z=Data$T_z

In this paper, Beltrami used 20-year steps and the period 1000-2000. There has been discussion in the literature over resolution and, in later papers, Beltrami typically uses only 100-year steps and a 500-year interval. However, for today’s purposes, which are about truncation principles as much as anything, the denser net is advantageous. Thus:

delta=20
t=seq(0,1000,delta);K=length(t); #51 #time per page 175 Beltrami

Next we make the matrix from the heat conduction equation. This could be done in one line using the outer function, but I’ve sacrificed a little elegance for transparency:

M=length(z);K=length(t)
E=array(1,dim=c(M,K-1) )
for( k in 1:(K-1)){
for(j in 1:M) {
E[j,k]= erfc(z[j]/(2*sqrt(Kdiff*t[k]))) - erfc(z[j]/(2*sqrt(Kdiff*t[k+1])))
}}
dim(E) #[1] 44 50
dimnames(E)[[2]]=t[2:K]-delta/2;dimnames(E)[[1]]=z
E[1:3,1:5]
# 10 30 50 70 90
#25.98 -0.4646354 -0.1404778 -0.06778118 -0.04175596 -0.02900359
#34.64 -0.3295700 -0.1609920 -0.08293746 -0.05241155 -0.03690798
#43.3 -0.2229484 -0.1658662 -0.09284768 -0.06062173 -0.04344844
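
For what it’s worth, here is a sketch of the one-line outer() version mentioned above; it assumes the same erfc and Kdiff as in the loop (both from functions.borehole.txt) and should reproduce the same matrix:

f=function(z,t) erfc(z/(2*sqrt(Kdiff*t)))  #heat-equation kernel
E2= outer(z,t[1:(K-1)],f) - outer(z,t[2:K],f)  #one-line version of the loop above
range(E-E2)  #should be (essentially) zero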

The second term in Beltrami’s equation includes a term q_0 for surface heat flow density. I was unable to locate any information on what value was used, and in any case the value appears to get washed out in the regression coefficient. However, I located an interesting map of surface heat flow density (http://www.heatflow.und.edu/agus05.ppt) and used a value of q_0 = 0.04 W m-2 from the map coloring as an approximation.

q_0=0.04 #from inspection of map of surface heat flow density
R=q_0*z/Kcond

I did a couple of quick experiments with regression and ridge regression, both of which yielded warnings due to singularity:

A=data.frame(Q,R,E); dim(A) #[1] 44 52
fm=lm(Q~.,data=A);summary(fm) #OLS fails
# prediction from a rank-deficient fit may be misleading
library(MASS)
fm.ridge=lm.ridge(Q~.,data=A);summary(fm.ridge)
#prediction from a rank-deficient fit may be misleading

The regression fails because of problems in inverting the near-singular matrix. Beltrami regularizes this by retaining only a few PCs: he reported retaining all PCs with eigenvalues above 0.025 (page 173), which, applied to the singular values in this example, proves to be 6.

MinEigen=.025 #page 173 right column
one=rep(1,M)
A=cbind(one,R,E) #this is the matrix described on page 173 left column
svd.bore=svd(A)
U=svd.bore$u;d=svd.bore$d;V=svd.bore$v
round(d,5)
#[1] 30.18039 3.04874 0.54588 0.21885 0.07824 0.02507 0.00692
r0=sum(d>MinEigen);r0 #6
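
As a quick scree-style look (my addition, not something in Beltrami), the singular values fall off steeply, which is exactly where the retention question bites:

plot(1:10,log10(d[1:10]),type="b",xlab="index",ylab="log10 singular value")
abline(h=log10(MinEigen),lty=2)  #the 0.025 cutoff reported on page 173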

In keeping with my original objective, I did some experiments with different numbers of retained PCs.

beta=list();Qhat=list()
for(r in 1:8) {
if(r==1) Sr=as.matrix(1/d[1:r]) else Sr=diag(1/d[1:r])
beta[[r]]=as.matrix(V[,1:r])%*%Sr %*%t(U[,1:r]) %*% Q;
Qhat[[r]]=c(A%*%beta[[r]]); # 5.779109 5.665154 5.569599 5.492003 5.429110
}
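
As a quick check on the fits (again my addition, not something reported by Beltrami), the RMS misfit to the measured profile can be tabulated for each truncation level; consistent with the plot below, the fit is already tight with relatively few PCs:

rms=sapply(1:8, function(r) sqrt(mean( (Q-Qhat[[r]])^2 )) )
round(rms,4)  #misfit (deg C) for r=1..8 retained PCs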

Here is the target reconstruction, excerpted from Beltrami et al 1992, Figure 6.

Here is my emulation of Beltrami’s methodology. On the left, I’ve plotted the original data (black points) together with the fits from the first few recons; they all fit more or less perfectly with relatively few PCs. On the right, I’ve plotted the various reconstructions. The recon with 6 PCs looks pretty close to the Beltrami example, but I don’t guarantee it.

A few comments. As more PCs are added, the results start diverging wildly. I’ve shown up to 8 here, but they get more and more extreme.
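
For readers who want to poke at this themselves, here is a minimal sketch of how the right-hand panel can be reproduced from the objects above; I read the t grid as years before present, and the first two coefficients of beta are the intercept and heat-flow terms, the remainder being the recovered history:

tmid=t[2:K]-delta/2
sel=c(3,4,6,8)  #truncation levels to compare
matplot(tmid, sapply(sel,function(r) beta[[r]][-(1:2)]), type="s",lty=1,
  xlab="years before present",ylab="recovered temperature history")
legend("topleft",legend=paste(sel,"PCs"),col=1:length(sel),lty=1)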

Keep in mind that the underlying temperature measurements are smooth. It appears to me that the notches on the right hand side of the reconstruction are artifacts of a 1-dimensional Chladni-type situation, sort of like what we saw in a totally different context with the Stahle/SWM tree ring network eigenvectors. There’s a reason for this: the coefficients are obtained from the eigenvectors.

Keep in mind here the important and interesting sign change property of eigenvectors in spatially autocorrelated regions that we’ve discussed in the past. The first eigenvector has no sign change; the 2nd has one sign change and so on. I can’t see any reason for using 6 eigenvectors in this case.
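
To see the sign-change property directly in this example, one can plot the time-step portion of the right singular vectors (a sketch, using V, t, K and delta as computed above; the first two rows of V correspond to the intercept and heat-flow columns):

tmid=t[2:K]-delta/2
matplot(tmid, V[-(1:2),2:7], type="l",lty=1,
  xlab="years before present",ylab="loading")  #2nd to 7th right singular vectors
abline(h=0,lty=3)  #each successive vector picks up roughly one more sign change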

At this time, I draw no moral on what any of this means. I simply was interested in seeing how the borehole recons worked and in seeing what principles, if any, were used in SVD truncation. It’s a fairly large topic, but not unconnected with issues that we’ve been discussing, and may perhaps give an alternative perspective on some of them.

I’m very far from a clear understanding of how much weight can be placed on these borehole reconstructions, but we’re in a much better position to experiment with these matters. If nothing else, the tools may prove helpful to some readers.

References:
Beltrami, H., A. M. Jessop, and J. C. Mareschal. 1992. Ground temperature histories in eastern and central Canada from geothermal measurements: Evidence of climate change. Global Planet. Change 98, no. 2-4: 167–184.

PDF: Beltrami_et_al_1992_GPC.pdf

Beltrami website: http://esrc.stfx.ca/pdf/

41 Comments

  1. RomanM
    Posted May 3, 2009 at 3:29 PM | Permalink

    Looks like I have some reading to do.

    I have always had a great deal of skepticism for the ability of boreholes to produce any reasonable information about past temperatures. The assumptions about substrate uniformity and the changes introduced in the process of drilling the borehole appear to me as pretty large obstacles to deal with. The results they end up with do not resemble those of any other paleo method.

    … and now we have matrices deficient in rank!

  2. Alan S. Blue
    Posted May 3, 2009 at 3:37 PM | Permalink

    I think the formatting and equations around this text: “is the downhole temperature, E is a matrix derived from the heat equation and” needs a little help. (I don’t think the /tau should be visible.) In particular, there’s no discussion (here) of the entire term containing lambda.

    While I’m nitpicking, the text “struggled a little with Canadian geography in a ” is next to a apparently malformed link. And in the text “At this time, I draw no morale” would seem to prefer the word moral than the word morale.

    Delete at will, the only real comment is on the equation portion’s clarity.

  3. Ian M
    Posted May 3, 2009 at 5:03 PM | Permalink

    Interesting (and beyond me) as always. Just a quick point on the location used by Tamino: it looks like he relied on Google Maps. If you input the coordinates in Google Maps, it takes you to about the corner of Fairmont and Young Street, just off Hwy 417, on the southwestern outskirts of Ottawa. I assume you are suggesting that the point should be landing somewhere else…

    Cheers.

    • Steve McIntyre
      Posted May 3, 2009 at 6:03 PM | Permalink

      Re: Ian M (#3),

      Oops, my error. Teach me not to snark. I wonder how on earth a borehole in Ottawa is supposed to be free of anthropogenic influence.

    • Posted May 3, 2009 at 10:34 PM | Permalink

      Re: Ian M (#3), Also just off Hwy 417 is the University of Ottawa, about 4 km to the NE of the borehole’s putative location. Do UOttawa geology students like to drill boreholes in their odd moments?

  4. Posted May 3, 2009 at 10:28 PM | Permalink

    The resolution gets pretty bad in the past; the heat equation is a female dog. Don’t forget the story of how Huang, Pollack and Shen were re-engineered in order not to contradict the emerging fad of the hockey stick:

    http://motls.blogspot.com/2007/03/borehole-paleoclimate-reconstructions.html

  5. Dave Dardinger
    Posted May 3, 2009 at 11:01 PM | Permalink

    Steve,

    If I recall from the last time we discussed boreholes here, there were problems put forward concerning things like how to treat water tables and the like. Is there metadata with the temperature readings concerning either present or past water flow through the system being studied?

    • Geoff Sherrington
      Posted May 4, 2009 at 6:46 AM | Permalink

      Re: Dave Dardinger (#8),

      Yes, it was discussed before as an ill-posed mathematical question. There was also mention of the sideways percolation of groundwaters of potentially different and unknown temperature. I think I suggested a read of the large text on the Kola superdeep borehole in Russia, where the vertical temperature variation is anything but simple to a depth of 12 km. In some rocks there are contemporary reactions happening, like oxidation and hydration or dehydration, which can be exothermic or endothermic. From one location to another the natural vertical thermal gradient can vary over about an order of magnitude in the top few hundred metres of drill hole. Then there is the problem of accurately relating rock age to drill depth or ice age to depth if drilled in ice. BTW, it is common practice in some places to drill holes for geotechnical data where towns are about to be built. There are also some cases with ore deposits where the town or processing plant had to be relocated because paydirt was found to extend beneath the urban sprawl.

  6. joshv
    Posted May 4, 2009 at 9:39 AM | Permalink

    At first blush this sort of analysis seems implausible at best. The idea that somehow the temperature from a decade ago, let alone centuries ago, is somehow encoded in bore hole temperature gradients seems highly unlikely. Even if it were, the mathematical inversion required to produce a surface temperature time series appears, as Steve has found here, to be highly sensitive to tiny changes in input temperature.

    I’d think it would be relatively easy to conduct some computer simulations to see if this sort of inversion is feasible under even the most perfect of conditions – for example, model an entirely consistent stack of bedrock, with a uniform heat capacity, and constant heat flux on one end (ground temperature) and a varying flux at the other end (representing the time evolution of atmospheric and solar forcing) and see how the temperature gradient in the stack varies over time. How long does it take a surface signal to become lost in noise? How deeply do surface signals penetrate? How large does a signal have to be to survive for a year? A decade? Do standard inversion algorithms reliably reproduce the simulated forcings?

    Has anybody done this?

    • Steve McIntyre
      Posted May 4, 2009 at 10:04 AM | Permalink

      Re: joshv (#11),

      there’s quite a bit of literature on the topic, some of which is mathematically complicated. There’s less literature on mundane things like groundwater, but as someone who’s been underground in northern Ontario, this is hardly an issue that can be ignored. I’m not entirely sure how it’s handled, but the lack of discussion is not encouraging.

  7. Phil B.
    Posted May 4, 2009 at 1:30 PM | Permalink

    Steve, q0 and T0 are also part of the solution along with tau. I will be home around 7 PST and email you some more comments.

  8. Posted May 4, 2009 at 2:55 PM | Permalink

    I’ve read a couple of times now and have too many questions I need to answer for myself.

    In the meantime, does anyone know if borehole sites are chosen to avoid porous areas where water can flow? Heat flows a lot better if you carry it with you 😉

    I’m going to have to run the scripts to see the results to understand.

    • Steve McIntyre
      Posted May 4, 2009 at 4:11 PM | Permalink

      Re: Jeff Id (#14),

      they say so, but I don’t know how they could establish it. It’s something that concerns me though I can’t say right now whether it’s a large or small problem.

  9. Posted May 4, 2009 at 5:25 PM | Permalink

    In the equation the first term throws a div/zero error at t[0]

    E[j,k]= erfc(z[j]/(2*sqrt(Kdiff*t[k]))) - erfc(z[j]/(2*sqrt(Kdiff*t[k+1])))

    I used the NORMT3 library. By adjusting t=seq( 0.001,1000,delta); I get a close result, but is there a different library which handles the error better?

    Steve: I did the same thing with t at 0 in my script – unfortunately I omitted this in the transcription to the thread. Sorry bout that.

  10. Posted May 4, 2009 at 5:50 PM | Permalink

    Google ran into this post before it found another R library. The NORMT3 also returns a complex result.

    Re: Steve McIntyre (#15),

    That’s not a small issue. It’s probably the main issue. Also, these boreholes all have similar perturbation recovery patterns which make me suspicious of the math.

  11. Posted May 4, 2009 at 7:31 PM | Permalink

    The singular value cutoff restricts the resolution but improves the stability of the solution

    I’m too lazy to go back to RC and look up gavin’s response on the Steig paper but it leaves me to question– Did these guys all go to the same school?

    It’s not a paper wrecker or anything but I sure can understand why you are trying higher values. They should add a 20 credit metrology lab to the PhD curriculum.

  12. Posted May 4, 2009 at 9:10 PM | Permalink

    Sorry to take up so much of this thread.

    After reading for a few hours, I’m getting it now. PCA is a reduced variable solution to linear equations. The thing that I’m understanding now is that after plotting the individual columns in E the shape of each has a piece of downslope from the beginning of the measurements. Since this downslope (or a piece of it) is present in every solution E the SVD will give a strong response to that.

    The data in this case is uncentered so we aren’t really looking at PC’s right?

    PC1 – high sharp edge downslope.
    PC2 – down up
    PC3 – down up down
    PC4 – up down up

    The addition of the PC’s is reasonable until the equations become less defined. — too many DOF. This means stability of solution across multiple boreholes is reasonable and my comment in #18 is misplaced skepticism.

    The problem I can’t figure out is if the maximum variance (by the equations) is almost always near the surface, you’re guaranteed a high response in that area. High positive or negative depending on the temp gradient near the surface. IMO not a small point because it guarantees we can’t easily have large scale anything in the older record. The historic trend can creep above or below the initial huge spike but a tweak of the thermal coefficients probably rectifies that. (haven’t tried it) Either way, you will always get the spike and recovery.

    Now every well I’ve ever drank from has 50ish degree water. Some of these wells are several hundred feet deep. In the warm climate of Illinois this guarantees a first few hundred feet of apparent warming, always. In a dry well there wouldn’t be this effect but not too many places have zero ground water. Perhaps California in 10 years.

    I don’t like PCA boreholes anymore. The math is sensitive to conduction parameters and can’t be trusted. I still don’t understand the reason for putting 1’s in the first column of the matrix.

    Steve, since you’re the one who did this analysis and much more experienced with PCA, am I missing something?

    BTW, nice work. This wasn’t terribly trivial to figure out I bet.

    • Geoff Sherrington
      Posted May 5, 2009 at 5:51 AM | Permalink

      Re: Jeff Id (#19),

      Some of your questions like variance near the surface can be answered by the engineering of drill holes. Techniques can vary widely, but here are some complications.

      The top of the hole can be bored wider than lower down. It is often cased with steel pipe held in place by a concrete collar, with a safety cap on the top to keep it unfouled if it needs to be revisited. This non-geological material might affect shallow heat flow.

      The hole usually fills with water to near the surface. There is thus a perturbing influence, perhaps like a heat pipe that equalises the temperature lower down before measurements are made. Commonly, the water level moves up and down daily and so produces a column of air at the top whose properties are not uniform over the test time.

      The very act of drilling creates considerable local heat. It is customary to leave holes alone for many months/years before doing temperature measurements. Thus there is often time for groundwater contamination of original temperatures.

      It’s a bit like some say is a consequence of Heisenberg, that the act of measurement disturbs the measurement value.

      The geothermal gradient can vary by an order of magnitude from place to place. It will generally show a warming with depth, though there are exceptions. This heat flows to the surface at rates dependent in part on the rock types.

      Rock strata have rather different coefficients of thermal conductivity and if you hit a horizontal insulator like a coal seam it would seem to wreck the whole idea.

      Mathematically, the winter/summer temperature variation is much larger than year/year. If you can’t extract the seasonal variation from the data, what hope for the annual?

  13. Steve McIntyre
    Posted May 4, 2009 at 10:30 PM | Permalink

    You’re right; it took me quite a bit of time to be able to replicate the above graphic, though the algorithm is pretty simple.

    In terms of the pattern development, I think that you’d find it helpful to re-read the posts on Stahle/SWM and one-dimensional spatial autocorrelation, e.g. http://www.climateaudit.org/?p=2898. That post has an interesting reference to a paper by Diaconis (a math big shot) on “horseshoe patterns” that is a related phenomenon.

    You get the sequence in which each eigenvector has one additional change of sign. Given that the original data is extremely smooth, this leads to quirky artifacts that in these examples seem to play out as end effects. There’s no reason in the original data for the notches at the right end of this particular reconstruction; in my judgement, they are totally an artifact of the method.

    But as I mentioned, the more recent studies smooth everything out more. There is so little detail in them that they are very hard to use as benchmarks – it took me a while to find these older plots, ones with enough texture to work up a benchmark method.

    In this application, all the E-columns are very smooth, so you’re in effect regressing one smooth series against a bunch of other very smooth series.

    • Steve McIntyre
      Posted May 4, 2009 at 10:41 PM | Permalink

      Re: Steve McIntyre (#20),

      Jeff, the references in the post cited above are well worth re-reading in the context of the Antarctic situation.

      Diaconis and de Leeuw in separate articles both observe that PC methods applied to biological situations can lead to incorrect “ordination” making things that aren’t neighbors seem like neighbors.

      That seems to be one of the problems with Steig’s 3-PC Antarctica. Things are mushed into too few holes and false neighborhoods are created, perhaps along the lines of the ordination problem discussed in the refs.

      • Ryan O
        Posted May 5, 2009 at 7:51 AM | Permalink

        Re: Steve McIntyre (#20), In Steig’s case, it’s exacerbated by RegEM. Due to the truncation in RegEM, the PCs end up interacting with each other (since they don’t remain orthogonal in Xmis), which produces some odd effects. To try to minimize the effects, you need to run at higher regpars – which you can’t do because the initial infilling of zeros starts to affect the results (you can prove that by infilling with something other than zeros). So you can’t run at very high regpars to do the reconstruction without getting some clearly unphysical and wildly divergent results – but running at low regpars allows the PCs to interact during the infilling. I’ve gotten r as high as 0.4 between the first 2 PCs in Xmis.

        I’m trying to put together a post for tAV on it.

        • Posted May 5, 2009 at 8:16 AM | Permalink

          Re: Ryan O (#25),

          Ok you got my interest. If the pre-1982 pc’s aren’t orthogonal, it’s pretty hard to imagine redistributing the PC’s according to the original AVHRR 3PC spatial weighting. Uh oh.

  14. Posted May 4, 2009 at 11:52 PM | Permalink

    so you’re in effect regressing one smooth series against a bunch of other very smooth series.

    Not too many DOF, especially in the older values.

    I just spent an hour on the Diaconis paper. Although I didn’t try to go through his derivations, the conclusion was very clear. I have a good understanding of the sign change even in non-spatially correlated data such as fig 6 would represent in two D. Your Chladni post really brought it home, so without belittling the incredibly complex math in the Diaconis paper, the horseshoes are predictable. What’s so amazing now is the number of studies which apparently suffer from these artifacts.

    I need to stop learning for a while and take a nap. Thanks much.

  15. bender
    Posted May 5, 2009 at 6:40 AM | Permalink

    OP:

    I can’t see any reason for using 6 eigenvectors in this case.

    Nor can I. It sure would be nice to have Beltrami comment. His choice to retain all eigenvectors with eigenvalues of 0.025 seems as arbitrary as choosing 6 eigenvectors. Why not 0.020? Or 0.030?

  16. Posted May 5, 2009 at 6:43 AM | Permalink

    #22 Thanks Geoff,

    Steve’s going to kick me off this thread soon for TCO’in his blog.

    Commonly, the water level moves up and down daily and so produces a column of air at the top whose properties are not uniform over the test time.

    I have to ask, why does the water move? I mean, water moving would trash even a ten year temp signal wouldn’t it? This is just as bad as tree rings.

    I had thought they were using very specific drilling sites and making holes with documented strata. I mean that, that’s what I thought. I should have learned, what an idiot! The thermometer tree thing was a surprise last August but I was naive enough to think boreholes were potentially better, after all we’re measuring temp directly. Pick a site which is known to be dry/solid rock drill a hole, record strata to make sure you’re ok and measure temps when things settle down from drilling.

    We’re making huge multi-trillion dollar decisions and can’t bother to calibrate a tree. For boreholes we are expected to believe a priori that the water doesn’t move in a borehole enough to affect temps yet there is no proof or study to demonstrate that… WTF. I know enough about the water table to know water flows from higher altitudes to low underground, how the heck can it not be cooler a couple hundred feet down. The heat’s been carried deeper for a billion years by cool water. Seriously surface water is cooler than the surface by evaporation, haven’t these guys ever swam in a lake? Please tell me I’m missing something.

    —-
    Are you saying nobody checks for changes in strata either? Are they expecting, the math to see the change?

    • Steve McIntyre
      Posted May 5, 2009 at 8:23 AM | Permalink

      Re: Jeff Id (#24),

      Jeff, nearly all the “boreholes” are drill holes from mineral exploration and are “holes of opportunity”. Many were drilled many years ago. A lot of the holes are in Canada. A lot of exploration is for gold; gold occurs geologically in “shear zones”. The surface of northern Ontario is swamp and lakes. When you go underground in a mine in the Canadian shield, it’s not the least bit dry. There’s water dripping everywhere; keeping the mine pumped out is a regular operating cost.

      The older literature was devoted to trying to detect an Ice Age signature; the literature is not uninteresting but they are building a lot of superstructure on thin structures.

      • Posted May 5, 2009 at 10:45 AM | Permalink

        Re: Steve McIntyre (#27),

        Wow. So we are expected to believe that the simple conduction equations solved through questionable methods can give an answer to tenths of a degree in water saturated ground. I remember a geologist friend telling me that something like 90% of a river flows underground. Makes sense to me, rivers don’t have plastic bottoms after all. It must be zero heat capacity water the underground rivers use.

        I thought I was a new skeptic about bore holes when I woke up, silly me.

    • Steve McIntyre
      Posted May 5, 2009 at 8:28 AM | Permalink

      Re: Jeff Id (#24),

      I’ve browsed some groundwater literature and ironically I’ve read a couple that use borehole temperature profiles to test for convection. 🙂 I’ll do a post on this some time. What a jungle.

      • Ryan O
        Posted May 5, 2009 at 8:32 AM | Permalink

        Re: Steve McIntyre (#28), Not that I know much about boreholes, but that would seem to bury the concept of using them for temperature reconstructions at the bottom of a very deep . . . um . . . borehole.

    • Geoff Sherrington
      Posted May 7, 2009 at 5:29 AM | Permalink

      Re: Jeff Id (#24),

      Please note my caveat in #22 that conditions can vary widely.

      It would be possible to select sites that are far less troublesome than others. Maybe some would be so good as to allow inversion maths to be tested on them. That is partly the reason for doing this work on ice, the other part being time calibration from ice bands.

      To answer one question, depending on location, groundwaters can be influenced like tides. They are probably everywhere influenced by seasonal evaporation/precipitation patterns.

  17. bernie
    Posted May 5, 2009 at 10:01 AM | Permalink

    Interesting discussion though the math is largely beyond me.
    On a more practical side, I have just been exploring the installation of a geothermal system here in New England. The engineers indicated that how deep you had to drill and whether you could use an open or closed system re. extracting heat from the water in the boreholes varied significantly by both gross (long/lat) and micro (position relative to water flow) locations, but within pretty well defined limits: roughly 45 +/- 5F. We are only talking approx. 1200 feet depths so perhaps this is OT. Still it does seem like there are too many variables for any hope of a clean solution.

  18. SteveBrooklineMA
    Posted May 5, 2009 at 8:40 PM | Permalink

    I highly recommend that interested readers have a look at the fundamental paper: Arthur H. Lachenbruch and B. Vaughn Marshall. “Changing Climate: Geothermal Evidence from Permafrost in the Alaskan Arctic,” Science, New Series, Vol. 234, No. 4777 (Nov. 7, 1986), pp. 689-696. If you assume the surface temperature has risen linearly with time, then the resulting temperature as a function of depth after a certain time has a closed form. Then all you have to do is fit the parameters of that form to the data. They have some higher order models as well. The approach is pretty convincing, and I’m usually pretty skeptical. I suspect that more general inversion as described in this post is really not going to add much to the analysis, and is numerically dubious anyway.

    • Willis Eschenbach
      Posted May 13, 2009 at 9:31 PM | Permalink

      Re: SteveBrooklineMA (#32), there are problems with boreholes not discussed in the paper you cite. See the thread at CA, in particular my comments here and here, for some of the problems.

      The main problem is that two boreholes only a few miles apart can show very divergent temperature “histories”. How can that be? Here’s an example from Utah:

      These two boreholes are only 5 miles apart … hardly reassuring regarding the inherent accuracy of the borehole thermometer.

      The two main problems are inhomogeneity and water. The inhomogeneous nature of the subsurface means that the heat conduction is far from uniform. Water also greatly complicates the calculation problems.

      Drilling in the permafrost eliminates the water problem … at least in the upper layers. But even in the report you cite, there are a number of boreholes which show warming, while a smaller number show cooling … he handwaves away the discrepancy, but I’d like to understand it.

      w.

  19. Alexander Harvey
    Posted May 9, 2009 at 8:35 AM | Permalink

    Steve,

    I would not be at all surprised if you found that the PCs corresponded to functions of the form COS(ln(t)w+a) where “w” acts like an angular frequency (albeit dimensionless) and “a” is a phase angle. If so you may find that w increases in steps derived from a dimensionless “aperture” given by 2*ln(389.17/17.32) =~ 6.22.

    Indicating steps of 2*pi/6.22 =~ 1, giving 0, 1, 2 crossings etc.

    Simply throwing the graph into ln(t) should show if such sinusoids are present.

    More generally functions of the form t^z (z a complex constant) tend to dominate in such reconstructions irrespective of method.

    It may be of interest that functions of this form t^z map to x^(2*z) * Gamma(1/2 - z)/Sqrt(pi()); the sharp decline of Gamma(1/2 - z) with increasing w (imaginary part of z) gives the magnification commonly seen in the higher order components of such reconstructions.

    *****

    If no one has mentioned it, you seem to have used flux in the graphic rather than temperature.

    *****

    The flux parameter you are looking for is indicated in the borehole data. From the temperature gradient (/1000 to get metres) * the conductivity we get ~0.021 w/m^2.

    Alexander Harvey

  20. bernie
    Posted May 9, 2009 at 5:37 PM | Permalink

    Alexander:
    Can you explain in laypersonspeak?

    • Alexander Harvey
      Posted May 13, 2009 at 12:19 PM | Permalink

      Re: bernie (#35),

      I will give it a go at the end, but to recap, I would say that I would be surprised if the PCs that Steve produced are not related to functions of the form t^(a+iw), as his PCs are the eigenvectors of the equivalent covariance matrix whereas t^(a+iw) are similarly eigenfunctions of the equivalent continuous linear operator.

      In the continuous case you take the partial derivative of erfc(x/Sqrt(4dt)) with respect to t (call this B) and create the following operator, which is the continuous equivalent of the covariance matrix:

      (A) = (Integral(B*Integral(B*.)dt)dx) where . is a place holder for the function to be operated on, then

      (A)t^(a+iw) = (1/sin((a+iw)*pi))t^(a+iw) that is

      functions of the form t^(a+iw) are eigenfunctions of the operator (A) with eigenvalues given by 1/(sin((a+iw)*pi)).

      Functions of the form t^(a+iw) can be rewritten as t^a*e^(iw*ln(t)), the real part of which generalises to t^a*cos(w*ln(t)+a) where a is a phase angle. That is, they are periodic in ln(t).

      Now the measurement positions down the hole are limited to the range (17.32,389.17) so one might expect only the eigenfunctions corresponding to harmonics on the dimensionless “aperture” ln(389.17/17.32) to be candidate PCs. Much like harmonics on a string. The fundamental (first harmonic) has no crossings, the 2nd harmonic has two crossings etc.

      That is if you regard the borehole as a string then the PCs are equivalent to its harmonics but the middle of the string would be the geometric mean of the end points not their arithmetic mean.

      Now as always, nothing is as simple as that: not only is the borehole limited, but Steve’s choice of time-slots is also bounded, which gives rise to another potentially conflicting “aperture”; also the use of a small number of discrete datapoints and time-slots will have an effect.

      Alexander Harvey

      • Steve McIntyre
        Posted May 13, 2009 at 9:40 PM | Permalink

        Re: Alexander Harvey (#37),

        I think that you’re talking about the same thing that I had in mind when mentioning the Chladni functions, which are cos-driven as well. But I’ll have to think about this.

  21. Geoff Sherrington
    Posted May 10, 2009 at 1:28 AM | Permalink

    It’s only one paper, but …..

    http://vzj.scijournals.org/cgi/content/full/6/3/591

    Combined Effects of Urbanization and Global Warming on Subsurface Temperature in Four Asian Cities. Makoto Taniguchi, Takeshi Uemura and Karen Jago-on

    Therefore, no thermal free convection is expected (Taniguchi et al., 1999). The logged boreholes were drilled and cased mostly before the 1980s. Thus, the water temperatures in boreholes represent the temperature of groundwater surrounding the boreholes.

    (My bold)

    There is huge variability among individual temperature profiles in each city.

    and on UHI,

    Nevertheless, the averaged temperature–depth profiles in all cities show strong evidence of surface warming. The differences between (old) surface temperature extrapolated from geothermal gradients and (new) surface temperature extrapolated from the average of observed subsurface temperature are estimated to be 2.8°C in Tokyo (Fig. 2), 2.2°C in Osaka (Fig. 3), 2.5°C in Seoul (Fig. 4), and 1.8°C in Bangkok

  22. Ryan O
    Posted May 15, 2009 at 10:03 AM | Permalink

    Steve (or anyone),
    .
    Somewhat related to this . . . do you know of any R implementations for bootstrapped eigenvalue/eigenvector analysis or broken stick analysis for PC retention? The only thing I can find is a scree test and I’d rather not write my own functions if there are other available ones. Thanks!

  23. Ryan O
    Posted May 15, 2009 at 2:32 PM | Permalink

    Found a broken stick implementation in package ‘vegan’, but still no luck on the bootstrap methods. 😦

  24. Alexander Harvey
    Posted Sep 30, 2009 at 4:28 PM | Permalink

    Hi, elsewhere on this site there is a discussion on Yamal etc which I do not wish to go OT on.

    I would just like to say that there is a very good reason why the last 500 year borehole reconstructions should not be added to longer reconstructions, and it is in the design of the borehole reconstruction and nothing to do with its other well known problems.

    In order to make the reconstruction a design decision had to be made concerning the period prior to the 16th century and that decision was that nothing happened.

    The reconstruction could be called: what does the data tell us if the temperature prior to 1500 was the same as in the year 1500?

    If you re-worked the reconstruction with say a MWP in the prior period you would get a different reconstruction.

    To me at least this precludes tacking it on to a longer spaghetti graph.

    If the period prior to 1500 was marked by a continuous straight line that would be more acceptable but it would still be rather questionable.

    Anyway I have said what I need to. To me it seems a ridiculous thing to do, but as it does not seem to bother anyone else I will try to move on.

    Alex
