## Borehole Inversion Calculations

Phil B writes:

My day job does include parameter and state estimation using Least Squares and Kalman filtering.

I have replicated several of the non-ice borehole temperature reconstructions and I’ll “share” my observations. The inversion problem boils down to finding the solution to the inconsistent set of linear equations Ax~b, where A is a skinny matrix (mxn) whose columns are generated from the solution of the heat conduction equation, x is an nx1 solution vector whose first two elements are a slope and intercept with the remaining elements being the temperature reconstruction, and b is the mx1 vector of the borehole temperature profile. The solution is calculated using the pseudo inverse based on the singular value decomposition (svd), where A = U*S*V’ and where U (mxm) and V (nxn) are complete orthonormal basis sets. The A matrix is ill-conditioned, with a ratio of max to min singular value on the order of 1e6 to 1e7 for the boreholes I replicated. In the older literature only singular values greater than 0.3 are used in the pseudo inverse, with recent literature using a ridge regression that trades off the norm of the residual against the norm of the solution. If you keep the singular values that are less than 0.3, the reconstruction is physically unreasonable, i.e. pulses on the order of 20-40 deg K. For a 500 year reconstruction with 100 year steps, the 3 smallest singular values out of the seven total aren’t used in the pseudoinverse.
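The setup above can be sketched numerically. The matrix below is a synthetic stand-in (random orthonormal factors with hand-picked singular values spanning roughly 1e7), not an actual borehole design matrix, but it shows why the truncated-SVD pseudo inverse is used:

```python
import numpy as np

# Synthetic stand-in for the borehole design matrix: random orthonormal
# factors with hand-picked singular values (illustrative numbers only --
# the real columns come from the heat conduction equation).
rng = np.random.default_rng(0)
m, n = 200, 7
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([3.0, 1.5, 0.9, 0.5, 1e-2, 1e-3, 3e-7])  # max/min ~ 1e7
A = (U0 * s) @ V0.T                       # A = U0 diag(s) V0'
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(m)   # noisy "profile"

def tsvd_solve(A, b, cutoff=0.3):
    """Pseudo inverse keeping only singular values above `cutoff`."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    keep = S > cutoff
    w = (U[:, keep].T @ b) / S[keep]      # weights w(i) = u(i)'b / s(i)
    return Vt[keep, :].T @ w              # x = sum w(i) v(i)

x_full = np.linalg.pinv(A) @ b            # uses all seven singular values
x_trunc = tsvd_solve(A, b)                # drops the three below 0.3
# x_full is blown up by noise divided through the tiny singular values;
# x_trunc stays tame but has lost the components along the dropped v(i).
```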

So what is the problem??? The ill-conditioning of A!! For instance, if A is rank deficient, i.e. rank = n-1, then one has a single null vector z such that A*z = 0_mx1 and an infinite number of solutions for x. For our A, there are 3 “almost” null vectors, which are the last three columns of V. So A*v(5), A*v(6), A*v(7) ~ 0_mx1. Let x_est be the pseudo inverse solution using only singular values greater than 0.3, and let’s create a new solution x_new = x_est + 2*v(7). The individual residual changes are on the order of millikelvins. x_new “looks” substantially different than x_est, but its values are reasonable. The point is that there are many x estimates, and many reasonable temperature reconstructions, whose residuals are almost identical, with differences that are less than the temperature sensor noise level.
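The “almost null vector” point can be demonstrated on the same kind of synthetic ill-conditioned matrix (again, illustrative numbers, not a real borehole): adding 2*v(7) changes the solution a lot but the residual hardly at all.

```python
import numpy as np

# Synthetic ill-conditioned A (hand-picked singular values, illustrative).
rng = np.random.default_rng(1)
m, n = 200, 7
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([3.0, 1.5, 0.9, 0.5, 1e-2, 1e-3, 3e-7])
A = (U0 * s) @ V0.T
b = A @ rng.standard_normal(n)

_, S, Vt = np.linalg.svd(A, full_matrices=False)
x_est = np.linalg.lstsq(A, b, rcond=0.3 / S[0])[0]  # truncate below 0.3
x_new = x_est + 2 * Vt[6]                           # add 2*v(7)

gap = np.linalg.norm(x_new - x_est)                 # exactly 2: big change
resid_change = abs(np.linalg.norm(A @ x_new - b)
                   - np.linalg.norm(A @ x_est - b)) # at most 2*s[6] = 6e-7
```

Two visibly different “reconstructions”, fits indistinguishable at any realistic sensor noise level.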

Summarizing, the columns of the ill-conditioned A matrix are created using solutions to the heat conduction equation. x_est is one of many possible temperature histories.


## 82 Comments

You mean, as they say in Maine, Mann can’t get there from here?

A recipe for cherry pie?

==============

Apologies, would it be possible to translate this post into English? Whilst I’m sure it’s very interesting, I honestly can’t make head nor tail of it…

It probably doesn’t help that I was rubbish at matrices in school…

Re: Ade (#3),

It probably helps that I’ve been studying matrices in the past year so I can understand this.

The take-home message is that if you take a set of n linear equations with n unknowns which are NOT linearly dependent (ie where one of the equations isn’t a multiple of another) then there should be a unique solution.

If you only have n-1 equations with n unknowns then there are an infinite number of solutions (which is the same as saying the nth equation is 0).

What Phil is saying is that there are several equations which are very nearly zero AND it looks like some of the equations do not have the same solutions as the others.

This means that the estimation of the roots of the equations becomes an exercise in choosing which equations need to be adjusted in order to get reasonable answers. But who says what is reasonable? Phil says that there are a large number of possible solutions which will produce the same result.

The result is that the error bars blow up, rendering whatever trend is in there as meaningless. But meaningless trends with wide error bars don’t appear to be a bar to using them in “robust” temperature reconstructions by Mann and the rest of the Hockey Team – nothing fazes them.

I never got the borehole thing. It seems to assume that heat goes one way downward. I’m glad to see that the mathematics for reconstructing a past temperature from temperatures in a borehole is extremely complicated.

Hank, me neither. I am a ground water guy and I realize temp recon is a whole ‘nother thing, but I’ve never quite believed relic temperature differentials could remain after centuries. Also, every recon I’ve ever seen looked the same (+/-).

If anyone can show I’m wrong, please do.

“I have replicated several of the non-ice borehole temperature reconstructions …”

What’s an example of a non-ice borehole? I thought they were all taken in ice caps, eg Greenland.

Steve: The vast majority are in rock. They re-cycle drill holes from mining exploration. Knowing a bit about mines, I don’t understand how they adjust for water, which is everywhere underground. A lot of drill holes are in shear zones, because that’s where you look for gold as well, adding to the water problems.

Re: Hu McCulloch (#6),

Non-ice borehole is where you drill a dry hole in the ground where you hope the rock is thermally homogeneous, and then you measure the temperature of the hole at various depths.

No I’m not convinced either.

Re: Hu McCulloch (#6),

Every dry oil well is a potential borehole, IIRC.

If one were drilling through a homogeneous medium with a heat source (the core) on one side and a heat sink (the atmosphere) on the other, with the temperature of the sink being time dependent but the source constant, the problem would not be as difficult. Although, depending on the precision of the temperature readings in the hole, the vertical spacing between readings, and the accuracy to which one knows the depth of each reading, it still might be ill-conditioned. But of course the medium isn’t homogeneous, and that’s just the first problem.

I’m curious about the effect of the drilling process itself. Do you wait a few years after drilling to allow the system to re-equilibrate? Do you fill the hole with water or keep it dry?

Steve: The holes are 99% of the time from hard rock exploration as far as I can tell. They are “holes of opportunity” and not done for the inversion. In Canada, they would all fill up with water in no time.

Ade, I normally lurk and wasn’t planning on front page coverage, so I didn’t write a stand alone post. You need a strong background in linear algebra to understand the above arguments. At best, the borehole temperature reconstructions are a filtered version of the local temperature record.

Hu, google beltrami, pollack, huang and borehole to find the literature. Hugo Beltrami has a nice web site with quite a number of papers. Huang has an archive of borehole temperature profiles at the University of Michigan, but there is very little metadata, like what temperature sensor was used, calibration history and other measurement details.

Hank, actually the temperature goes up the further down one goes. That brings up the issue of the semi-infinite slab heat conduction model that is assumed in solving the heat conduction equations. I didn’t mention that, as the ill-conditioning and linear algebra were enough of an issue. I have graduate engineering training in heat transfer and gas and fluid dynamics, but I would really like to see a mechanical/aero engineer who specializes in heat transfer take a look at the modeling. Since the bottom of the borehole is hotter than the surface, it seems there should be convective heat flow out of the borehole. Does it matter if the borehole is kept closed or open? What is the temperature sensor really measuring? Perhaps lucia might pick up the flag?

I guess I’d never realized that people believe you can measure past temperatures by digging a hole and measuring the temperature profile of the rock. Can anyone clue me in to why anyone would believe this? It seems ludicrous on the face of it.

Also, are these results reproducible, i.e. do two boreholes in nearby locations produce the same results? I’d suspect these folks have done nothing but create a really fancy statistical method for amplifying the noise in their temperature sensors.

BTW I’ve been trying for a couple of years to get the original uninverted data for an Antarctic borehole, that seems to underpin NAS panel statements on Antarctica. Although the sampling was done in 1995, it isn’t published. That didn’t stop Kurt Cuffey from relying on it in the NAS report, but it’s USGS data and he can’t provide it. Kurt Cuffey is a decent guy and I have a very good impression of him, but his inability to produce the borehole data is very frustrating. At least he’s pleasant about it.

Totally OT. I was playing with Alexa and I came up with a pretty funny picture. If you compare RealClimate with CA, you get two general trends. RC’s looks like the long downward trend of ice in the arctic. CA looks like the reverse. They intersect in July 2008.

Of course, RC makes a blog post entry every 5 or 6 days. The downward trend was inevitable.

Phil B:

Was I correct in my summarization of your matrix argument? I’m not so cocksure of my own abilities at math that I don’t wonder if I’ve made a mistake…

Re: John A (#14), Sorry, I missed your question in your #9. I would change ‘NOT linearly dependent’ to ‘linearly independent’. Kind of a double negative.

‘If you only have n-1 equations with n unknowns then there are an infinite number of solutions’. A good example is the equation for a line, a*x + b*y = c, where a, b, c are constants. There are infinitely many x and y pairs that fit the equation. It gives a fat A, with a rank of 1. The svd pseudo inverse would give the x, y pair that has the minimum norm.
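That line example takes only a few lines of NumPy (the constants 3, 4, 10 are arbitrary):

```python
import numpy as np

# One equation, two unknowns: 3x + 4y = 10.  A is "fat" (1x2) with rank 1,
# so there are infinitely many solutions; pinv picks the minimum-norm one.
A = np.array([[3.0, 4.0]])
c = np.array([10.0])
x = np.linalg.pinv(A) @ c            # the point on the line closest to the origin
assert np.allclose(A @ x, c)

# Adding any multiple of the null vector (4, -3) still fits the equation...
x2 = x + 5 * np.array([4.0, -3.0])
assert np.allclose(A @ x2, c)
# ...but the pinv answer has the smaller norm:
assert np.linalg.norm(x) < np.linalg.norm(x2)
```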

John, your discussion is coming from classical linear algebra where one determines rank of the A matrix (from Ax=b) by row reduction. Good for understanding, but I would let the svd do the numeric heavy lifting.

Normally I consider roots of an equation to be the roots of a polynomial or of f(x) = 0, and don’t associate that term with Ax=b, although if A (mxn) is rank deficient then there is at least one null vector z (nx1) such that A*z = 0_mx1.

I don’t believe you can talk about error bars in this context. In the literature they were forced to use a Truncated Least Squares method due to noise and ill-conditioning of the A matrix. By not using the three smallest singular values (or a portion of them), they have left out part of the true answer, and it is the high frequency portion. The size of the error depends on the real temperature history.

Re: Phil B. (#35),

Phil, I’ve just been spending my time learning matrix algebra this year as part of a mathematical modelling course. It’s obvious that I’m behind the curve in terms of using these things every day. I was just surprised that I was able to (mostly) understand what was going on as a result.

Re: John A (#43),

John A, I have gone back and attempted to understand enough of the matrix algebra to have a fairly reasonable understanding of some of the discussions here at CA. I did the reading at my own pace, and still I am so impatient that I was mumbling, Ken, you have to read faster to get to the good stuff. It is threads like this one that encourage me to get back to the books.

Also, John, am I correct that part of your motivation on linear algebra was the fact that your favorite source of information, Wikipedia, explains many mathematical concepts using matrix algebra?

There is a 2008 review paper covering related aspects –

The authors reference some difficulties

One might assume that these factors (and more not mentioned) would so confuse the tiny signal that the mathematical opportunity for ill-conditioning exists.

Geoff, the quote sounds like code for “in your dreams bore holes show temperature history”.

Data speak with forked tongue.

The depressing thought is that a number of students with long term persistence will try to show that this method is valid. Then we are off on another exercise like the validity of the hockey stick. Or dendrothermometry. Or whatever.

We need more validation police.

I have the greatest disrespect for Wikipedia, but this article managed to miss being vandalized by the ignorant:

Therefore looking at what Phil has written, it seems that ill-conditioning refers to the fact that some of the linear equations being solved are so close to being multiples of other equations that there are a near infinity of possible solutions.

Although Phil’s terminology is different from mine, the matrix operations presented here appear to be “homomorphic” (in a math sense) to linear regression operations (and he mentions ridge regression as a way of dealing with singularity). You get quite different language for the same things in engineering and statistical applications. It would be interesting to see how the formalism transposes into statistical language.

Phil B:

With a background in aerodynamics, I do not believe that there will be significant airflow in the borehole:

1. As the air heats up and tries to rise, it creates a low pressure area below itself, preventing the rise.

2. As colder air above and warmer air below try to “sneak” by each other in the narrow bore hole, they create turbulence which mixes them and removes the local thermal gradient.

3. If you did manage to generate a velocity within a narrow tube, standing waves would begin to inhibit transport.

4. Finally, the specific heat of air is so much less than the surrounding solid material that it will not have a great effect on heat transfer.

In my current work, we heat long (30′, 9m) relatively small diameter tubes (say 4-8″, 100mm – 200mm), arranged horizontally, in a relatively high velocity (1000fpm, 5m/sec) convection oven. Airflow is parallel to the tubes, but we assume convection heat transfer on the outside of the tubes only, as there is negligible airflow within the tube. Without a significant pressure gradient, air doesn’t “volunteer” to go though the tubes. And this is only turbulence and standing-wave induced, as the tubes are horizontal. We don’t have a significant issue of cold air wanting to fall and warm air wanting to rise within the tube.

Re: David Jay (#22),

David, It is not uncommon for old drill holes to fill with water rather than air, depending on the depth to the current water table. Because the water table itself will rise and fall for various reasons (weather, rainfall, tidal effects, etc) the drill hole is subject to noise and smearing of signal. Movement of the water would move the air column above, but as you note, it has small heat capacity.

A more fundamental problem is that there is heat flow from radiation at the surface meeting upwards heat flow of the geothermal gradient. The analyst has to decide which heat flow dominates where and where the effects boundaries meet. The geothermal gradient is not constant either, being modified by groundwater flow, endothermic and exothermic mineral reactions at various depths, etc.

Finally, sounding a little like the pop science version of Heisenberg, the very acts of drilling and later temperature measurement have the capacity to disturb the signal measurement process. There are some workarounds potentially available, but they possibly lack confidence that they are free from quantifiable error.

Re: David Jay (#22), Thanks for your thoughts, David J. Probably depends also on the diameter to depth ratio.

Steve, my terminology is taken from my linear algebra texts. My favorite is “Linear Algebra and Applications” by Gilbert Strang. In my post, I talked about but didn’t provide the math description of the pseudo inverse using the svd of A. If you will indulge me, I will post that up along with the math description of what happens when one drops singular values in the pseudo inverse. Later this afternoon.

John A, I certainly agree with the wiki description. I discretized the heat equation so I could create a set of linear difference equations and treat the problem as a “controllability” problem, as my specialty is control system engineering. There were issues with matching the continuous time solution, but it did provide insight into the issues.

It is not necessary to pick which equations to use when the number of equations is not equal to the number of unknowns. There are methods that will yield the unique solution if there is one and a least error approximation if the equations are inconsistent. Of course, one might want to be able to select the most useful equations.

Re: Chuck Bradley (#24), Chuck, I agree. For an inconsistent (or consistent) set of linear equations Ax~b (Ax=b), the modern technique is to use the pseudo inverse based on the singular value decomposition (svd) of A, i.e. where A = U*S*V’.

The pseudo inverse answer is x = w(1)*v(1) + w(2)*v(2) + … + w(k)*v(k) + w(k+1)*v(k+1) + … + w(j)*v(j), where v(i) is the ith column of V, w(i) is a scalar weight, and j = rank of A. The weights are w(i) = (u(i)’*b)/s(i), where u(i) is the ith column of U and s(i) is the ith singular value. It is important to note that the singular values are in the denominator.
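The weighted-sum form can be checked directly against NumPy’s built-in pseudo inverse (random full-rank test matrix, nothing borehole-specific):

```python
import numpy as np

# Verify x = sum_i (u(i)'b / s(i)) * v(i) equals the pinv solution.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))       # random, full rank
b = rng.standard_normal(20)

U, S, Vt = np.linalg.svd(A, full_matrices=False)
x_sum = sum(((U[:, i] @ b) / S[i]) * Vt[i] for i in range(len(S)))
x_pinv = np.linalg.pinv(A) @ b
assert np.allclose(x_sum, x_pinv)      # same answer, two routes
```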

That is the problem: the borehole reconstruction A matrix is ill-conditioned, such that the ratio of max to min singular values is on the order of 1e6. Due to noise on b, the last three vectors, w(k+1)*v(k+1) to w(j)*v(j), dominated the answer in a physically unrealistic manner due to their small singular values (1e-2 to 1e-3). Their solution was to use the Truncated Least Squares (TLS) method, which keeps only the first k vectors, those with singular values greater than 0.3. They have thrown away the last three vectors, and those vectors contain the high frequency components (eyeballing). Note that V is a complete orthonormal basis set.

A hockey-stick temperature reconstruction is obtained. Using the TLS is a practical solution to their noise and ill-conditioning. What is absolutely wrong is to claim that they have recovered the historical temperature history. At best, they have a filtered version. I’m not even sure of that, as I don’t believe their use of the semi-infinite slab model is correct: it doesn’t include a heat source in the slab, as the real earth has.

Note, I normally use r = rank, but one gets this v(r)

I haven’t checked, but my guess is that he is recycling the boreholes which were discussed in “Underground Problems with Mann-Holes“. I’m not at all impressed with borehole temperature reconstructions.

w.

Ok, for the layman: ill-conditioned matrices are matrices that are *almost* singular (non-invertible). Since most of the methods for determining the roots of a system of equations rely on an invertible matrix, any little perturbation (like round off error) will result in the calculations blowing up. It is also an indicator that the original data is very sensitive to sampling errors.

Re: Jaye (#28), Jaye, the classic method is to use the normal equations, i.e. x = inv(A’*A)*A’*b. If A is rank deficient or poorly conditioned, this solution can have its issues. The modern method is to use the svd (A=U*S*V’) pseudo inverse method that I have described, which handles these rank deficient, ill-conditioned, and fat A matrix issues. For a rank deficient or fat A matrix, the x solution has the minimum norm. The minimum norm result is due to the fact that V (nxn) is a complete orthonormal basis set.
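One reason the classic normal-equation route is avoided here: forming A’*A squares the condition number, so a matrix conditioned at 1e6 becomes conditioned at 1e12. A quick illustration with made-up singular values:

```python
import numpy as np

# cond(A'A) = cond(A)^2: hand-picked singular values, illustrative only.
rng = np.random.default_rng(3)
m, n = 50, 4
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U0 * np.array([1.0, 1e-2, 1e-4, 1e-6])) @ V0.T

print(np.linalg.cond(A))         # ~1e6 by construction
print(np.linalg.cond(A.T @ A))   # ~1e12: squared, near the limits of doubles
```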

Jaye, I am just pulling the linear algebra results from Gilbert Strang’s Linear Algebra and Applications text book.

Re: Phil B. (#34),

Yea, I know about generalized inverses and svd. The comment was for the layman… what ill-conditioned means in plain language and why it matters.

If I remember my matrix theory correctly, I think you can “fix” an ill-conditioned matrix by adding small random numbers to the diagonal elements.
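The standard version of that trick, the ridge regression mentioned in the head post, adds a small constant (rather than random numbers) to the diagonal of A’*A. In SVD terms it replaces each weight 1/s(i) with s(i)/(s(i)^2 + lambda), tapering the troublesome small-singular-value terms instead of truncating them. A sketch, with an arbitrary test matrix and lambda:

```python
import numpy as np

# Ridge (Tikhonov) regularization: x = inv(A'A + lam*I) A'b.
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 5)) @ np.diag([1.0, 0.5, 0.1, 1e-3, 1e-5])
b = rng.standard_normal(30)
lam = 1e-2

x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)

# Same answer via the filtered SVD weights s(i)/(s(i)^2 + lam):
U, S, Vt = np.linalg.svd(A, full_matrices=False)
x_filtered = Vt.T @ ((S / (S**2 + lam)) * (U.T @ b))
assert np.allclose(x_ridge, x_filtered)

# Each filtered weight is strictly smaller than 1/s(i), so the ridge
# solution is shrunk relative to the plain pseudo inverse solution:
assert np.linalg.norm(x_ridge) < np.linalg.norm(np.linalg.pinv(A) @ b)
```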

Is there something really amiss in these borehole proxies? Granted that a borehole will have a temperature variation that can be measured and graphed, are we making a leap that physics doesn’t allow? Are we wishing that boreholes will give us insights into past temperatures and overlooking other explanations? I suppose the thing to look for would be whether there is a borehole proxy that matches up with some other nearby proxy such as tree rings or varves.

An elastic wave propagates in a way that is very different from heat: an elastic wave is a contained pulse of energy which attenuates only slowly, while any pulse of heat one could imagine would be uncontained and would attenuate very rapidly. In fact I would assert that the energy of an elastic wave must eventually attenuate itself into the great sea of heat that we find everywhere around us.

As I recall, one of the basic properties of heat is that it moves from regions of high temperature to low. Conversely, there should be *no* motion of cool regions toward warm. This is unlike the situation we have in an elastic wave, where a pulse passes, but once it has passed the medium returns to the state it was previously in. Once any pulse of heat that you could imagine passes, there is nothing to spring the medium back to coolness. The heat just remains. Also, any pulse of heat traveling downwards in a borehole would also travel backwards, so that any following regions of cooler temperature would tend to be washed over by the heat that preceded it.

I do not believe that pulses of heat could be contained enough to give a durable sign of what ancient temperatures might be. The tendency to disorder must work too fast for that to be possible.

Re: Hank (#31), The columns of A associated with the temperature reconstruction are the borehole temperature responses to a 100 year, 1 deg C unit pulse. The borehole temperature response is derived from the solution of the heat conduction equation at the pulse time, i.e. if you did a 500 year reconstruction, one would have five such columns plus two more for the slope and intercept terms, for a total of seven columns.
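For concreteness, here is a hedged sketch (not the published code) of how such an A might be assembled from the semi-infinite half-space conduction solution, in which a 1 K surface pulse that was “on” from t1 to t2 years before present leaves an anomaly erfc(z/(2*sqrt(kappa*t1))) - erfc(z/(2*sqrt(kappa*t2))) at depth z. The diffusivity, depths, and step boundaries below are illustrative assumptions, not taken from any particular study:

```python
import numpy as np
from math import erfc, sqrt

kappa = 1e-6 * 3.15e7             # ~31.5 m^2/yr (typical rock, m^2/s -> m^2/yr)
z = np.arange(20.0, 520.0, 10.0)  # 50 assumed measurement depths, metres

def pulse_column(z, t1, t2):
    """Response at depths z to a 1 K surface pulse from t1 to t2 yr BP."""
    on = np.array([erfc(zi / (2.0 * sqrt(kappa * t1))) for zi in z])
    if t2 <= 0:
        return on
    off = np.array([erfc(zi / (2.0 * sqrt(kappa * t2))) for zi in z])
    return on - off

# Five 100-year steps of a 500-year reconstruction, plus intercept and slope.
steps = [(500, 400), (400, 300), (300, 200), (200, 100), (100, 0)]
A = np.column_stack([np.ones_like(z), z] +
                    [pulse_column(z, t1, t2) for t1, t2 in steps])

# A is skinny (50x7), and its singular values fall off sharply because the
# older pulse columns are nearly collinear -- the ill-conditioning above.
sv = np.linalg.svd(A, compute_uv=False)
```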

Steve Mc, how well do you know Dr. Beltrami? I would love to have him come to CA and comment. He did comment at Open Mind when Tamino covered this topic. I came late to that thread and engaged Tamino on the linear algebra issues. Tamino said he would think about it and reply back, but he didn’t. Perhaps he got busy.

Steve: I had a very pleasant dinner with him at AGU one year. I’ll try later in the week.

The borehole temperature recon problem is a classic case where people seem unable to find a way to check their answers. While the theory of heat transfer is well-known, a correct solution depends on knowledge of the initial and boundary conditions (strata heat conduction rates, ground water flow, ad nauseam), which in these cases are rather guessed at. One could set up a block of material in a lab and test whether a time-varying temperature at the surface leaves a proper trace in the profile, such that temperature can be estimated back in time; I am unaware if anyone has done this. Without such a test, you cannot verify that the problem is well-posed. My guess is not.

Dr. Ben, please. It is Mr. Ken to you. Your message to those that have been listening and have reread Mann et al. (2008) several times has been loud and clear. As Mr. Ken to Dr. Ben, let me respectfully suggest a good model for what you are attempting to communicate here about what the Mann paper is actually saying and how it is saying it.

We have a concurrent thread at CA on borehole temperature reconstructions where the participants talk about ill conditioned problems (in linear algebra) where small changes in the initial inputs can change the answer very significantly.

http://www.climateaudit.org/?p=3586#comment-295344

What is so remarkable about the Mann paper is that what you take away from it can vary dramatically (due, in no small part, to the flexibility the authors provide) depending on the reader’s starting points. It is a creation of the need to impart the strict scientific result, with all the uncertainty that that historically implies, and at the same time allow a Gerry North or any other AGW advocate to take away a message in line with their advocacy. Putting this more specifically, the authors are allowed to scientifically present evidence that the HS is broken, and at the same time allow others, who so choose, to interpret the paper such that policy implications are not changed. Brilliant. Just brilliant.

Dr. Ben, your alluding to 5th graders is no less brilliant, in that you impart the true and remarkable extent of the interpretations available from this paper, in this case the 5th grader version. Heck, I would think there could be a 1st grader version also. I know they are good at getting the points of fairy tales.

So what problems with this paper remain and are well-posed? I would suggest that it is the methodologies that Steve M has been analyzing. I would only respectfully ask you, Dr. Ben: are you up to discussing those issues, either from your perch on your tall horse or at ground level?

I apologize that my post above was mistakenly posted here.

Dr. Ben with Al Gore scouring the country side looking for Manpigbear.

Re: Jaye (#40), What – no cheesy YouTube video?

Ken, which thread should it be in?

Re: John A (#42), He moved it already (to MWP non-dendro proxies #2).

Re Steve’s post, John A #7, DeWitt Payne #7,

Are we to conclude, then, that non-ice boreholes are hopelessly uninformative about past temperatures? In ice boreholes, at least the new snowfall is blanketing, and therefore stratifying and to some extent insulating, the lower-horizon temperatures, while the ice underburden is insulating everything above from the underlying geothermal uranium decay activity, etc. None of these factors is present for non-ice boreholes.

Per Steve’s remark #8, if the non-ice borehole does fill with water, doesn’t convection imply that the observed temperatures will necessarily either be uniform, or else stratified with coldest on the bottom? Isn’t convection a problem even in air-filled boreholes, if the reading is taken from the air inside the hole?

Re: Hu McCulloch (#46),

Does anyone measure actual temperature in an ice bore hole? It would seem that isotope ratios of the core would be both better dated and more representative even with the confounding effect of evaporation/precipitation. Then there is the plasticity of ice at high pressure. The hole at any depth won’t remain open long after drilling and removing the core unless it’s lined. The usual liner being high (compared to ice) thermal conductivity metal, any temperature signal would seem to be lost right there.

Hu: The ice boreholes collect samples from the ice and measure chemistry. The ice is laid down through the time period you are measuring, such as collecting a sample from 10,000-year old ice to determine the temperature from that time.

Temperature is measured in the non-ice borehole within rock that is much older than the temperature record you are trying to measure, for instance a 10,000-year temperature reconstruction from a borehole drilled through rock that is 30 million years old. In any event, fluid temperatures in boreholes are complicated by the heterogeneous nature of permeability on small and large scales, and how that translates into preferential pathways of heat distribution.

In my experience of borehole logging of temperature in geothermal and water supply boreholes, the purpose is to use that information to evaluate production zones and groundwater circulation patterns. It boggles my mind that someone can claim to filter out these very strong heterogeneous signals and the local vertical crustal heat flux in such a manner as to determine not only the temperature influence of the surface, but to accurately relate these micro-temperature signals to specific times in the past. This reminds me a bit of a lyric by Jimi Hendrix: “Will the wind ever remember the names it has blown in the past?”

I have not made an extensive review of the literature, but none of the papers cited recently on CA contain any information on how the local geology and hydrogeology are accounted for in teasing out such a faint and diffuse signal to create these reconstructions. In addition, I have not seen a discussion about how the holes were drilled, what was done to the hole prior to temperature logging, and how exactly the temperature was measured. My SWAG is that there are very few sites where the conditions are such that a reasonable temperature reconstruction can occur. If Steve is correct that the boreholes were selected because they were drilled for other purposes (oil, water, mineral exploration, etc.), I would bet that the boreholes would be relatively poor candidates for temperature reconstruction. The best candidate would be a relatively homogeneous and isotropic low permeability saturated sediment. The problem is, there is little reason, other than to set structural piers, to drill holes in such an environment.

I know that Dahl-Jensen et al. analyzed both the isotopic composition of Greenland ice cores and the temperature profiles of the resulting bore holes. She got similar results for both, with a notable MWP and Holocene optimum.

Abstract and link to paper here:

http://www.sciencemag.org/cgi/content/abstract/282/5387/268?ck=nck

I’m not necessarily defending this work, just hoping to provoke further discussion.

Thanks Curt, I could not look at the full text. I also did not realize ice cores were used for borehole temperature inversion calculations. I can see how a relatively stable portion of a massive, thick ice sheet might be a suitable material for borehole temperature reconstruction.

Hu, I’m not experienced with coring ice, but would imagine an ice core boring would stay open long enough to run wireline logs. Boreholes collapse generally due to loose fractured rock, unconsolidated granular materials, or clays (gumbo) that swell on contact with water. Hole squeeze can also occur at depth in soft sediments.

I’m sure I’ve gone far enough OT.

Last year I was able to find and download the full text of the Dahl-Jensen paper (legally) without membership or payment. I don’t remember how I did it, and I don’t have the paper at hand now. But it probably would only take a few minutes of Googling around to find it.

There is little point in comparing residual temperature profiles in holes in hard rocks with holes in ice caps as per Dahl-Jensen (abstract). Abundant literature nails groundwater convection as a killer and this is present in the first case but not necessarily in the second.

Re: Hu McCulloch (#46),

Hu, an ice layer should not be a barrier to the outward conduction of internal global heat to the surface. The heat flow is persistent and has to find a path. Some say it melts the base of some Antarctic ice and forms lakes such as postulated below the Vostok hole (a mechanism not relevant to this discussion, which is about the top few tens of metres of a hole). But ice, like rock, is capable of thermal conduction, with the difference that the latent heat of melting might enter some equations.

General question. There has been discussion of the work of Jonathan Drake on the Ice-gas difference of ages from some ice drill holes, see e.g.

http://homepage.ntlworld.com/jdrake/Questioning_Climate/userfiles/Ice-core_corrections_report_1.pdf

Much discussion of his graph showing CO2 concentration versus IGD (with his qualified projection to 300-325 ppm as a past atmospheric level) has not produced a fully satisfactory mechanism. Given the complexity of the IGD problem, at depths in the top few tens of metres at Vostok, I wonder how much complexity spills over to reconstructions of surface Earth temperatures on ice caps.

DeWitt Payne #48 writes,

It’s my (limited) understanding that people do both: Dahl-Jensen et al (GISP, DYE3) just measure temperature in a “borehole” and back out what the historical temperature must have been, while others (Lonnie Thompson et al) take “ice cores” and measure the dO18 ratio etc. from the ice to estimate temperature at the time the ice (snowfall) was formed. Both approaches can be informative, but the methodology is completely different.

But non-ice boreholes, the topic of the present thread, present much greater computational problems, as noted in the post by Steve, so I’m wondering if they’re ever informative.

Hugo Beltrami told me that ice boreholes were more problematic than rock boreholes, but I don’t remember exactly why. Might have been due to glacier movements.

The continuous plastic deformation of the ice must generate heat. As always, the question is how much.

Hu, you say:

This paper rests on a fundamental misconception. First, here is their description:

The flaw in the paper is hidden in the way that the historical ice/gas age difference is calculated. First, a bit of theory.

1) The more snow you get every year, the faster the pores in the snow at depth close off. The difference between the ice and gas ages becomes smaller.

2) The warmer the water, the more evaporation, the more snow.

So the warmer the water, the smaller the ice/gas age difference, and vice versa. In warmer times, the pores close faster because of heavier snow.

The important thing to note here is that the ice/gas age difference IGD can be estimated as some function of temperature T

IGD = f(T)

And of course they’ve figured out the formula using modern conditions and temperatures, and checked it against what we know, so it’s probably a decent approximation at least.

Now, how do we apply that to the past? Well, first, we figure out what the temperature T was at the time, based on the delta O18 proxy. Feed that into the formula IGD = f(T), and presto! We can estimate the ice/gas age difference for 46,000 years ago, or any time we want. It is this calculated difference that is listed in the datasets.

Now, you say people are mystified about why the IGD, the age difference, is related to CO2. It is because, as you know, CO2 and temperature have gone up and down in pretty good lockstep over the past half million years or so. Because of this close relationship, we can express temperature T as some function of CO2

T = g(CO2)

But substituting this back into our earlier formula gives us

IGD = f(g(CO2)) = h(CO2)

Ice/gas age difference IGD can thus be calculated as an indirect function of CO2. So of course they are related.

Now consider what he is doing. He is removing the ice/gas age difference signal from the CO2 signal, in order to estimate what the CO2 signal would be if the IGD difference were zero. Having removed that signal, he calls what remains the “compensated CO2”.

He then notes:

Now, I drove right past the problem just above, just to see who noticed. It was where I said that he removes the ice/gas age difference signal from the CO2 signal. But remember that IGD = h(CO2). So he is actually removing the CO2 signal from the CO2 signal … you see the problem?

Thus, it is absolutely no surprise that there is no longer any significant relationship between “compensated CO2” and temperature. CO2 is correlated with temperature in the record. So when you remove the IGD signal h(CO2) from the CO2 signal, what’s left will no longer be correlated with T.

That’s the fatal flaw in the paper, he didn’t realize that IGD = h(CO2), and that, sadly, renders it worthless.

All the best,

w.

Re: Willis Eschenbach (#55),

I’m not quite sure I agree here.

If the CO2 ice/gas age difference is a function of the temperature, then adjusting the CO2 concentration based on isotope ratios should only restore the actual physical situation for when the ice was formed. It shouldn’t be based on the CO2 concentration itself. Therefore the CO2 to temperature correlation shouldn’t be altered but merely enhanced.

However I’m open to being shown wrong.

Re: Dave Dardinger (#57),

The problem I see with the paper is that it doesn’t address the mechanism by which the age changes anyway. Before the gases are sealed off into the ice they’re free to diffuse and therefore the concentration distribution depends on molecular mass, this will change the [O18] (and also the CO2).

Re: Phil. (#58),

My first reaction is that you’re wrong here. It’s true there could be some separation via molecular weight, but it’s too small to matter. It’s essentially a second-order effect, and since we’re talking parts per million or less in isotopic change, the distortion of the signal via smearing from diffusion would be way too small to measure.

However, if you’d care to come up with a back-of-the-envelope calculation, I’d be glad to look at it. Assume, for instance, a 5-10 deg C temperature change, calculate what change in ice O18 concentration this would result in and what increase in atmospheric CO2 would be caused, and see what the results would be. Bear in mind that there’s a difference between ice diffusion and CO2 diffusion, so we need to check what is actually done in this paper. I haven’t read it, so I don’t want to commit myself.

Re: Willis Eschenbach (#55),

Willis, Remember that in the gas bubbles, the vast majority of oxygen with its isotopic variation is in air, with just a few hundred ppm of CO2. So we are working with where the air came from, not with where the CO2 came from.

In the ice, the oxygen with its isotopic variation is mostly in frozen water, so we are working with where the snow came from. It’s not all that easy to think through this apples-oranges equation. I raised it again because it’s a tricky challenge that has rather severe implications for past temperature reconstructions over hundreds of thousands of years.

You write –

While this is accepted as conventional wisdom, it also involves an element of circular logic. I am far from certain that the quantitative relations that are used can be validly derived from a qualitative description of the mechanism. The main confirmation for the assumption seems to be that glaciers/ice caps drilled in various parts of the world show CO2 changes of roughly the same pattern, which are then adjusted to align in time. Remember that one cannot see or count banding at depth in the Vostok core, so no direct calibration as to age is available. There is, admittedly, some more data from isotopes of other elements.

Jonathan Drake was quite careful with his reservations about projecting to a Y-axis intercept of 300-320 ppm CO2, which is the correct approach. Like Dave Dardinger, I suspect that there is more to this story than meets the eye. Willis, you should not be bugged. Jonathan took some published figures, rearranged them and speculated with reservations. I’m ok with that.

Sorry for the off-topic post, but that paper bugs me.

w.

Dave D, thanks for the post. You say:

Remember that the IGD is not a measured quantity. It is a calculated quantity, calculated from the estimated temperature at the time the ice was forming.

But because of the close tie between temperature and CO2, we can also calculate the IGD as a function of CO2, IGD = h(CO2).

He says that his method is:

In other words, he’s doing a linear regression of the form y = m * x + b, where m and b are constants. This gives us

IGD = m * CO2 + b

To see what is happening at zero ice gas differences, we set the IGD to zero. This gives us:

“compensated CO2” = -b/m

But m and b are constants … which means we have successfully used a function of the signal h(CO2) to remove the signal from the signal.

In a perfect world, his procedure produces a straight horizontal line. So why doesn’t he get a straight line?

What remains in the real world after this curious procedure is the inaccuracy in the function g(CO2).

Remember that IGD is calculated as some known function of T:

IGD = f(T)

T, on the other hand, is estimated as a linear function of log2(CO2), of the form

T = g(CO2) = m * log2(CO2) + b

So the inaccuracy in this estimation remains after applying his method. That is the reason he doesn’t get a straight line.

Let me explain this a different way. He says “I noticed a curious correlation between CO2 and IGD. I used it to estimate CO2 when IGD is zero”.

But he could just as easily have said “I noticed a curious correlation between temperature and IGD. I used it to estimate T when IGD is zero”.

In that case, of course, since IGD is a calculated function of T, he would have gotten a perfectly straight line.
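Willis’s argument is easy to check numerically. Here is a toy sketch in Python with invented coefficients (nothing below comes from Vostok; the functions f and g and every number are made up purely for illustration): when IGD is an exact linear function of CO2, fitting IGD against CO2 and then “compensating” collapses every CO2 value to the single constant -b/m.

```python
import random

# A toy, purely synthetic demonstration (invented numbers, not Vostok data)
# of the circularity described above: if IGD is itself a calculated
# function of CO2, regressing the IGD signal out of CO2 removes the CO2
# signal from itself and leaves a constant.

random.seed(1)
T = [random.gauss(0.0, 3.0) for _ in range(500)]   # temperature anomaly, deg C
CO2 = [280.0 + 8.0 * t for t in T]                 # CO2 in lockstep with T
IGD = [2000.0 - 40.0 * t for t in T]               # IGD = f(T), hence h(CO2)

def linfit(x, y):
    """Ordinary least squares: return slope m and intercept b of y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return m, my - m * mx

m, b = linfit(CO2, IGD)                            # recovers IGD = m*CO2 + b
# "Compensate" CO2 by removing the fitted IGD signal:
compensated = [c - igd / m for c, igd in zip(CO2, IGD)]
# Every compensated value collapses to the same constant, -b/m.
spread = max(compensated) - min(compensated)
```

With noisy real data the collapse is not perfect, which is exactly the point: what survives the procedure is the scatter in the fit, not a climate signal.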

I don’t want to hijack this thread, so I’ll leave it there.

—————–

I have been thinking more about borehole inversion calculations. My conclusion is that the error bars get huge.

In a thread here, I show some graphs of the actual borehole temperatures themselves. It is worth taking a look at this raw data, so that you can understand the magnitude of the task.

Given the discontinuous nature of that original data, which is typical borehole data, even a perfect algorithm would result in a muddy signal, with wide confidence intervals.

And as Phil B points out in the head post, even using perfect data, the fact that the algorithm is ill-conditioned results in a muddy signal as well.

But an analysis of discontinuous data using an ill-conditioned system? I did a test of temperature reconstructions of borehole pairs, holes within a couple of km of each other. The results are shown here.

So rather than GIGO, we have GIGAGO

Garbage In, Garbage Algorithm, Garbage Out.

I also did an average of all of the boreholes within 400 km of Oxford, the home of the Central England Temperature record. I found no significant common signal of any kind. Results here.

Regards to all,

w.

Geoff, look, I’m not talking theory. I was intrigued by the paper, so I took the Vostok data and did the math. I researched the way the IGD is CALCULATED, not measured but calculated. I replicated his calculations. I used his method to remove the IGD signal from T, as well as removing it from CO2 as he did.

I’m telling you flat out, you can’t do what he did. He is using the CO2 signal to remove the CO2 signal from the CO2 signal. You may be “OK with that”, as you say … I’m not. Can’t do dat.

Finally, the CO2 and the temperature move in lockstep because of the well-known and well-understood relationship between water temperature and dissolved CO2. The relationship does not depend on the exact age assigned to any given layer. The point is that the T and CO2 for a particular layer are correlated, regardless of the age of that layer.

Drake’s problem is that he thinks the ice/gas age difference IGD is something that we calculate using two sets of data given in the Vostok results — the ice age, and the gas age. He thinks the IGD is the difference between these two datasets.

What he doesn’t realize is that the IGD is calculated without reference to anything but the delta O18 level in that same layer. Using the delta O18, the temperature is calculated. Then using the temperature, the IGD is calculated.

But it is not calculated as Drake thinks, as the difference of two datasets. Quite the opposite: the IGD is subtracted from the ice age to give us the gas age.

I encourage you to get the Vostok data and do the math yourself, it may become clearer why what he has done is totally incorrect.

w.

Re: Willis Eschenbach (#62),

Willis, this does not yet address your main points,

but you state –

Since CO2 and temperature do not march in lockstep through the instrumental thermometry era, why should they do so before then?

Phase diagrams for CO2 levels in water show sensitivity to other influences like calcium activity and biomass effects, now and in the unmeasurable past. Very little dissolved CO2 is present in equilibrium with the ocean waters of today.

I’m trying to explain the Drake fig 1 correlation coefficient of 0.8 (high for any natural set of observations) by dropping temperature out of the picture. It’s a non-essential intermediate. We have oxygen isotopes from air in gas bubbles and oxygen isotopes from water in ice. Their ratios are different. Fractionation has happened. Until I understand the provenance of the oxygen isotopes, I cannot answer your question properly. Drake simply notes in his fig 1 that the extent of oxygen isotope fractionation correlated with the CO2 content measured in gas bubbles. Why? Let’s solve that problem before we get onto your matter of subtracting CO2 out of the later equations, as a first logical exercise.

Re: Geoff Sherrington (#63),

But their source is different. Why would you expect the ratios to be the same? Fractionation for water happens primarily when the water evaporates, IIRC. Oxygen in the air, OTOH, undergoes no such fractionation or at least its fractionation processes in the biosphere are very different.

Re: DeWitt Payne (#64),

Thank you DeWitt Payne, but what are the sources? For all I know the snow at the South Pole comes from water sublimated from ice, isotopic composition unknown.

It does not seem satisfactory to say (qualitatively) that during seawater evaporation the heavy isotopes are left behind; and then (quantitatively) that a change of x per mill in the isotope ratio corresponds to a deposition temperature of y degrees C. Where I am leading is back to the subject of this thread – are all pre-instrumental temperature reconstructions from drill holes in ice ill-conditioned, not just the shallow ones?

Re: Geoff Sherrington (#65),

I’m not saying that there aren’t problems with d18O in ice cores as a temperature proxy. All proxies have problems because they’re the map, not the territory. I’m saying that those problems are different from and independent of d18O in trapped air bubbles.

I’m pretty sure that calculating temperature from d18O does not involve the same sort of inverse solution that leads to ill-conditioned or ill-posed problems with converting bore hole temperatures to surface temperatures.

Geoff, I think you are conflating the inversion of borehole temperatures from non-ice boreholes, with the determination of temperature from the -delta O18 extracted from ice cores … I think.

The first one, according to Phil B (thanks for the post) is or can be ill conditioned. I don’t think the -delta O18 calculations suffer from the same problem.

Phil B, if you are still reading this thread, you had said:

Could you explain that a bit more clearly for the matrixially challenged? (Or if anyone else would care to …)

My best to everyone,

w.

Re: Willis Eschenbach (#66),

Willis, right with you re question to Phil B. Example from another discipline. In modelling the complexity of chemical species found in natural groundwaters, we sometimes found that we had more equations to solve than we had measured variables for; or that sometimes when we felt we had enough variables, some were not independent and their relationship was complex. While this does not really fit the description of “ill-conditioned” as used above, there are similarities. It was quite hard to solve the multiple species concentrations in groundwaters in contact with rocks of various compositions because of this.

Let me make my question a little clearer. Phil B had said:

What throws me is the “if” in the sentence. As far as I know, all borehole temperature records are the same – a string of N numbers representing depths, and a string of N numbers representing temperatures at each depth. Typically, N is on the order of a hundred to a thousand or so.

So my question relates to the “if”. What would make the calculations for one borehole ill-conditioned, but not another borehole? What am I missing here?

w.

Willis, I have to work today, but will get back this evening with more discussion. The several boreholes that I have looked at have an ill-conditioned A matrix. All the borehole literature that I have read uses only the larger singular values, which suggests that the ill-conditioning issue is universal. The ill-conditioned A matrix is derived from the solution of the heat conduction equation. See my post #33. My interpretation is that a 100 year temperature pulse five hundred years ago just isn’t observable in a borehole temperature measurement today.

One can obtain an exact reconstruction even with an ill-conditioned matrix, but only if the noise is zero or small. In the several borehole reconstructions I did, the noise coupled with the ill-conditioning created physically impossible answers if I kept all the singular values in the pseudo inverse (see #30). Using only the larger singular values in the pseudo inverse gives a reasonable answer, but you can’t argue that you have the ‘true temperature reconstruction’.
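For readers who would like to see the mechanics, here is a small sketch (Python with NumPy) of the truncated-SVD pseudo inverse Phil B describes. The matrix, its singular values and the noise level are invented for illustration, not taken from any actual borehole; only the 0.3 cutoff echoes the older literature mentioned in the head post.

```python
import numpy as np

# A made-up ill-conditioned system standing in for the heat-conduction
# design matrix A in the head post (20 depths, 7 unknowns, cond ~ 3e7).
rng = np.random.default_rng(0)
U0, _ = np.linalg.qr(rng.standard_normal((20, 20)))
V0, _ = np.linalg.qr(rng.standard_normal((7, 7)))
s_true = np.array([3.0, 1.5, 0.9, 0.5, 1e-3, 1e-5, 1e-7])
A = (U0[:, :7] * s_true) @ V0.T

x_true = np.array([0.01, 8.0, -0.3, 0.2, -0.1, 0.4, 0.1])  # slope, intercept, steps
b = A @ x_true + 0.005 * rng.standard_normal(20)           # mK-scale sensor noise

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tsvd_solve(U, s, Vt, b, cutoff=0.3):
    """Pseudo inverse keeping only singular values above `cutoff`."""
    keep = s > cutoff
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

x_trunc = tsvd_solve(U, s, Vt, b)               # the "regularised" answer
x_full = np.linalg.lstsq(A, b, rcond=None)[0]   # keeps all singular values

# Adding a multiple of an "almost null" right singular vector barely
# changes the residual, so many distinct reconstructions fit equally well.
x_new = x_trunc + 2.0 * Vt[-1]
r_trunc = np.linalg.norm(A @ x_trunc - b)
r_new = np.linalg.norm(A @ x_new - b)
```

The last few lines make the head post’s point concrete: x_new differs from x_trunc by a visible amount, yet its residual is the same to within a small fraction of a millikelvin.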

Phil B, thank you kindly. I am interested in any further information on the subject.

In particular, how much does the ill-conditioning affect the final results? I understand that there are thousands of possible solutions. However, if those solutions differ by thousandths of a degree, it’s not much concern. On the other hand, if they vary by full degrees, it is a big concern.

Finally, my own experiments with synthetic boreholes show me that for anything other than very recent temperature changes, the down-hole variation is tiny. It seems to me that with a tiny signal, an unknown subsurface geology, and an ill-conditioned matrix, we’re pissing into the wind … but I’d be interested in your take on that.

Hugo Beltrami came and posted at my previous CA thread on the topic, but didn’t stay to answer questions. I’ve been hoping he might come back to give the mainstream view some support.

All the best,

w.

Well, I got some more information on boreholes in my usual fashion … “Suck it and see”, as my Aussie mates say. I set up a simplified borehole dataset in Excel whose characteristics match those of the earth as given in Beltrami. That is, heat propagates 16 m in one year, 50 m in ten years, 160 m in one hundred years, 500 m in a millennium.

Then I put in a signal consisting of alternating century-long temperature deviations of ±5°C, ±10°C, and ±15°C. By that I mean I held one surface record at +5°C for an entire century, then I switched it to -5°C for a century, then back the other way for a century, for a total of 1000 years. For the second and third records I did the same at ±10°C and ±15°C.

Here is the result:

Some points, in no particular order.

First, Phil, I’m starting to see what you mean by “ill-conditioned”. There is certainly nothing in that record that screams “square wave” to me. And I suspect that a triangle wave, or even no wave at all but an early slow change plus recent variations, could all give a record that is virtually indistinguishable from the one shown. So for the reason of invertibility alone, I’d say you can’t use boreholes.

Second, it is also interesting that for hundreds of years, it retains the direction it originally moved (positive, +5, +10, +15). Despite going negative every other century, it still holds (and presumably varies about in some sense) the direction it started.

Third, the blue line is going plus and then minus 5° at the surface for a century at a time. After a century, we have a signal of about a tenth of one degree, and after two centuries, half of that. The signal is too small, the noise is larger than that … and that’s less than a tenth of a degree for a swing of 10°C!

Fourth, all of this is calculated from a theoretically perfect borehole with absolutely homogeneous geology. In the real world, with its constantly changing subsurface conditions, varying layer characteristics mixed with fractures containing water (or not), a tenth of a degree? Get real.

Finally, the signal is so smeared temporally that the century long square wave signal disappears in about 200 years … even the thirty degree swings don’t show up as anything but the tiniest wiggle.

So … my conclusion?

Boreholes are useless as temperature proxies in anything but the most general sense. Century-long pulses disappear in a few hundred years, rapidly replaced by tenth-of-a-degree variations around some unknown and changing centre …

METHOD

I simulated the borehole using the “Iteration” capability of Excel. I created a column of cells that started and ended with a fixed value of zero. All of the cells in between were the same: each was the average of the cell above and the cell below. Then I set the preferences of Excel to manual calculation and iteration.

Then I set the rate of decay equal to the rate in the earth as specified above. I set one recalculation arbitrarily at one month, and set Excel for 12 recalculations (one year). Then I ran it for 12 iterations, and looked to see how many rows it took to decay to 5% of its starting value, which turned out to be 12 rows. 10 years was 33 rows, 100 years was 120 rows, 1000 years was 333 rows.

I wanted to compare temperature to years, and not depths, so I calculated the coefficients for the linear relationship between log(rows) and log(age). That let me calculate the age corresponding to each row, to use as my y-axis.

Finally, I duplicated the column twice. I started with all zeros top to bottom. Then I put 5, 10, and 15 into the top cell of the three columns respectively. I ran it for 100 years (1200 recalculations). Then I switched the top cells to -5, -10, and -15, and ran another hundred years. After a thousand years, you get the result shown above.
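For anyone who would rather not fight Excel’s iteration settings, the same neighbour-averaging scheme can be sketched in plain Python. The grid depth, the twelve steps per year and the square-wave forcing below are illustrative choices of mine, not calibrated to the 16 m-per-year figures above.

```python
# A stdlib-only sketch of the "average of the cell above and below"
# iteration described in the METHOD section. Neighbour averaging is an
# explicit diffusion step; all sizes here are arbitrary illustrations.

def simulate(surface_series, depth_cells=300, steps_per_year=12):
    """Drive the top cell with one surface value per year and diffuse
    downward by repeated neighbour averaging."""
    col = [0.0] * depth_cells
    for t_surface in surface_series:
        for _ in range(steps_per_year):
            new = col[:]
            for i in range(1, depth_cells - 1):
                new[i] = 0.5 * (col[i - 1] + col[i + 1])
            new[0] = t_surface          # fixed surface boundary
            new[-1] = 0.0               # fixed deep boundary
            col = new
    return col

# Century-long square wave: +/-5 C held for 100 years at a time, 1000 years.
surface = [5.0 if (year // 100) % 2 == 0 else -5.0 for year in range(1000)]
profile = simulate(surface)
```

With this setup the ±5°C century-long square wave survives at the surface but is reduced to a few hundredths of a degree halfway down the column, consistent with the observation above that the signal smears away within a couple of centuries.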

Best to all,

w.

Re: Willis Eschenbach (#72),

I suspect your method is more optimistic than reality. If you use the actual heat equation, which isn’t all that difficult to implement even in Excel, you have to worry about heat flux at the surface too. It is likely not infinite. That is, when you change the ambient temperature, the surface temperature won’t immediately become equal to it because heat transfer from the atmosphere to the surface is finite and proportional to the temperature difference. Then again, maybe the flux is so small because both the thermal conductivity and heat capacity of rock are low, and it doesn’t matter.

If your calculation is correct, the temperature profile with depth is described by erfc(K*sqrt(t)) (the complementary error function).

My apologies, the graphic with my post didn’t … post, that is. Problem’s at my end, the server I store stuff on seems to have gone into a snit.

DeWitt, while the problems you refer to with the air-ground interface are real, I was just looking only at the ground part of the puzzle.

Thanks for the hint about the form of the profile, I’ll see what I can find.

w.

Re: Willis Eschenbach (#74),

I didn’t read my equation correctly and left out distance. For a step change in temperature it’s deltaT(x,t) = deltaT0*erfc(x/(2*sqrt(k*t))), where x is distance, t is time and k is the thermal diffusivity in m2/sec.

k, the thermal diffusivity, is thermal conductivity/(density*heat capacity), all in SI units, so that the units of k are m2/sec.
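The step response is easy to evaluate numerically. The sketch below uses the textbook form with the factor of two in the denominator, and a diffusivity that is only a typical order-of-magnitude value for crustal rock, not a measured one.

```python
import math

# Half-space step response: deltaT(x,t) = deltaT0 * erfc(x / (2*sqrt(k*t))).
# KAPPA below is a typical order-of-magnitude diffusivity for rock,
# assumed for illustration, not taken from any borehole log.

KAPPA = 1.0e-6                    # thermal diffusivity, m^2/s
YEAR = 365.25 * 24 * 3600.0       # seconds per year

def step_response(x_m, t_s, dT0=1.0, k=KAPPA):
    """Temperature change at depth x_m, t_s seconds after a surface step dT0."""
    return dT0 * math.erfc(x_m / (2.0 * math.sqrt(k * t_s)))

# How much of a 1 K surface step survives at depth after 100 years?
profile_100y = {x: step_response(x, 100 * YEAR) for x in (10, 50, 100, 200, 500)}
```

At these values roughly 90% of a surface step survives at 10 m after a century, about half at 50 m, and effectively nothing below a few hundred metres, which is why only slow, persistent surface changes are visible at depth.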

I did some work on this a few years back using a simplification by change of coordinates from linear ones to their natural logarithms. This made the mathematics a lot simpler, particularly as a whole class of functions including the sinusoids mapped backwards and forwards.

So the Fourier components of the borehole record could be mapped back to the Fourier components of the surface temperature record. There is of course a multiplicative coefficient which is given by either the complete or incomplete Gamma Function (e.g. Gamma(1/2 + iw) as I recall).

Now not only does this make the inversion simpler to perform, but it allows one to reason about such things as resolving power (which turns out to be pretty minimal; one certainly cannot resolve the 1600s from the 1500s) and, more generally, the effects of noise in the data on the result in a quantifiable way.

Now I thought this might be of use, so I wrote to anyone I thought might be interested. I started this in the first half of 2006 and got nowhere, so I thought I might take the opportunity to write this here to see if it can raise the interest that I think it deserves.

FWIW

I believe (and think I can show) that the methodology used in the reconstructions is badly flawed. Also that many of the borehole temperature records included in the reconstructions tell one more about their uneven geology than about the surface temperature record, and I believe one could show this from spectral analysis. I have also let people know this, but sadly I rarely receive as much as a reply.

Best Wishes

Alexander Harvey

Re: Alexander Harvey (#77),

You should write that up and submit it for publication in a suitable journal. That’s the only way it’s going to get recognition. So the next question would be: what are the suitable journals? Anyone?

Re: Alexander Harvey (#77),

Somebody mention rocks?

Alexander, try the EAGE’s First Break magazine

A simplification occurs under a change in co-ordinates.

T = Ln(t); where t is time before the moment of observation

X = Ln(x^2); where x is depth into sample (depth below ground).

Under this change:

Sinusoids in T map to sinusoids in X.

The amplitude and phase under the mapping depend only on the angular frequency w (in T).

The amplitude and phase are given by the modulus and argument of the complete Gamma function at (1/2 + i*w), normalised by division by Sqrt(Pi), where w is the angular frequency in T and i is Sqrt(-1).

So the mapping function (from T to X) is complex multiplication by

G(w) = Gamma(1/2 + i*w)/Sqrt(Pi).

To make this clear, a temperature history of the form Sin(w*Ln(t)) with constant amplitude will, at the moment of observation, produce a borehole temperature record of the form Sin(w*Ln(x^2)) with an amplitude that is constant with depth.

This allows an observation of temperature with depth to be decomposed into its Fourier components; each component is divided by the mapping function G(w) and recomposed to give the temperature record in T (and hence in t) directly.

In this way the inversion problem can be simplified. It is as easy to map from X to T as it is to map from T to X. No iterative relaxation is required to reclaim the temperature record. Fourier analysis and complex multiplication/division is all that is required.

Importantly one can construct an orthogonal basis in T that maps to an orthogonal basis in X and vice versa.
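The modulus of Alexander’s mapping coefficient can be evaluated without any complex arithmetic, using the standard identity |Gamma(1/2 + i*w)|^2 = pi/cosh(pi*w). The sketch below (an editorial addition, not Alexander’s code) shows how quickly the attenuation grows with frequency.

```python
import math

# Modulus of the mapping coefficient G(w) = Gamma(1/2 + i*w)/sqrt(pi).
# By the identity |Gamma(1/2 + i*w)|^2 = pi / cosh(pi*w), the modulus
# reduces to 1/sqrt(cosh(pi*w)): no complex Gamma evaluation is needed.

def attenuation(w):
    """|G(w)|: amplitude surviving the map from log-time to log-depth."""
    return 1.0 / math.sqrt(math.cosh(math.pi * w))

# Inversion divides each Fourier component by G(w), so noise at angular
# frequency w is amplified by the reciprocal of the attenuation.
gains = {w: 1.0 / attenuation(w) for w in (0.0, 0.5, 1.0, 2.0, 4.0)}
```

Since inversion divides by G(w), the noise amplification at w = 4 in log-time coordinates is already a factor of several hundred, which quantifies the “pretty minimal” resolving power mentioned above.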

I did a lot of work on this a couple of years ago and could not raise any interest. If anyone would like more details please let me know.

Best Wishes

Alexander Harvey

Dang, my post got lost in the intarwebs … what I had said was: Alexander, that is all fascinating and has my head totally twisted around the Gamma function. My question for you or Phil B., which I am certifiably unqualified to answer, is: how does your transformation

T = Ln(t); where t is time before the moment of observation

X = Ln(x^2); where x is depth into sample (depth below ground).

affect the ill-conditioned nature of the matrix inversion? Is it still ill-conditioned after the transform?

Very interesting.

w.

I haven’t seen this paper discussed here yet. I found the clarifications with respect to their previous papers interesting.