http://rpubs.com/rbellarmine127/166033

Yes, but the t here is the radiocarbon date, not the calendar age. Jeffreys’ prior is uniform in RC date, but it becomes highly nonuniform, as per my green curve, when changing the variable from RC date to calendar age, because it has to be multiplied by the Jacobian factor involved.

Using the likelihood of all 10 observations at once from a uniform prior will give exactly the same posterior as the 2-stage Bayesian learning procedure I described that first computes a posterior from the first 9 observations and then uses that as the informed prior for the 10th observation, so we’re on the same page there.

In the calibration case, the analogous one-step procedure is to compute the likelihood for the combined calibration and sample reading for each t. This will be the normal density evaluated at the difference of the two readings, with mean zero and a variance equal to the sum of the sample and calibration variances. Assuming that the calibration variance is constant across t (which it nearly is after smoothing, at least for periods spanning only a millennium or so), this is the same distribution for all t, and therefore the Fisher information and Jeffreys prior are constant for all t. (Or at least for all t less than or equal to the present!)
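As an entirely hypothetical sketch of this one-step procedure (the linear calibration segment, the numbers, and the variable names are all made up for illustration): the combined likelihood is the normal density of the difference of the two readings, and with a constant calibration variance the Fisher information, and hence the Jeffreys prior, is flat in t:

```python
import numpy as np

# Toy setup (all numbers hypothetical): a grid of candidate calendar ages t,
# observed calibration readings y_c(t) with sd sigma_c, and one sample
# RC reading y_s with sd sigma_s.
t = np.arange(800, 1200)                 # candidate calendar ages
y_c = 1000.0 - 0.5 * (t - 800)           # toy calibration readings (RC years)
sigma_c, sigma_s = 20.0, 60.0
y_s = 900.0                              # observed sample RC reading

# One-step combined likelihood: normal density of the difference of the two
# readings, mean zero, variance = sum of sample and calibration variances.
var = sigma_s**2 + sigma_c**2
lik = np.exp(-(y_s - y_c)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# With sigma_c constant across t, the difference has the same distribution
# at every t, so the Fisher information (and the Jeffreys prior) is flat:
fisher = np.full_like(t, 1.0 / var, dtype=float)

# Posterior under that flat prior is just the normalised likelihood.
post = lik / lik.sum()
```

The posterior peaks where the calibration reading matches the sample reading, as one would expect.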

I’ll take a look at your new CA post and arXiv paper.

“The calibration data either tells us that historical events are more likely to have occurred during steep portions of the calibration curve, or that historical C14 samples are more likely to represent flat portions of the calibration curve. The latter seems more plausible to me.”

Maybe. But, as I have shown, that plausibility does not generally lead to accurate uncertainty intervals, whereas using a uniform prior in C14 data (and hence a very nonuniform Jeffreys’ prior in calendar age) does – and those intervals are if anything narrower.

A prior that is noninformative for one observation from an experiment remains noninformative as additional measurements are added, since the experimental characteristics (data likelihood function, etc.) remain the same. So if a uniform prior is noninformative for the first 9 measurements, there is no reason to change it for the tenth. But it is IMO misleading to think of the posterior generated after the first nine measurements as being a prior for the tenth measurement, where there is a fixed but unknown parameter. Rather, one should think of inference being based on the combined likelihood function for all the observational data, using a prior that is noninformative in relation thereto. See my latest CA post, on incorporating prior information, or read my arXiv paper.

Apologies if I had any doubts.

(The yoga needed to understand it is considerable, though. Phi and its inverse have reciprocal slopes, by the way; I had that wrong.)

The Keenan method and the “obj prior” should yield the same result.

The reason they DON’T is that the sigmoid calibration curve used is too flat.

It only drops 2 RC years (999-1000).

There is no chance for the orange-red curve, the observation, to intersect it and leave any probability in the “danger zone”. A window of 2 RC years for a normal with sd = 60? That’s only on the order of 1% in the danger zone; the rest is elsewhere on the bends.

This is the reason for the discrepancy: either make the observation narrower (2-3 RC years, sd = 1), or make the calibration curve drop 50 years.

Making the curve drop is more realistic, as it then looks more like Fig. 1.
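A back-of-the-envelope check of the figure above, using only the standard library (placing the 2-RC-year window at the mode is a best-case assumption of mine; any off-centre placement captures even less mass):

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sd=1.0):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))

# How much probability mass of a normal observation with sd = 60 RC years
# can a window only 2 RC years wide capture? Centre the window on the mode
# to get the maximum possible mass.
sd = 60.0
mass = normal_cdf(1.0, sd=sd) - normal_cdf(-1.0, sd=sd)
# mass is about 0.013, i.e. roughly 1.3% even in this best case.
```

So at most about 1% of the observation’s probability ends up in the danger zone, consistent with the discrepancy described above.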

What is the arXiv paper to which you refer? Was this your paper relating to the climate forcing issue? If so, what is the reference? (I remember you sent me drafts of that article but I wasn’t able to follow that debate at the time.)

The calibration data either tells us that historical events are more likely to have occurred during steep portions of the calibration curve, or that historical C14 samples are more likely to represent flat portions of the calibration curve. The latter seems more plausible to me.

Suppose we were just trying to estimate the C14 content of today’s air, already had 9 measurements, and were about to add a 10th measurement. Starting from a uniform prior for the first 9, the first 9 observations would give us a concentrated Gaussian posterior, which can be taken as the prior for the 10th observation. Its posterior will then be an even more concentrated Gaussian distribution.
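A minimal numerical sketch of that sequential updating (all numbers are made up; the conjugate-Gaussian update formula is standard): starting from a flat prior, updating with the first 9 observations and then using that posterior as the prior for the 10th gives exactly the one-step answer from all 10 at once.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5.0                                # known measurement sd (hypothetical)
obs = rng.normal(100.0, sigma, size=10)    # 10 simulated C14 measurements

def update(mu, var, y, sigma):
    """Conjugate update of a N(mu, var) prior with one N(theta, sigma^2) obs."""
    post_var = 1.0 / (1.0 / var + 1.0 / sigma**2)
    post_mu = post_var * (mu / var + y / sigma**2)
    return post_mu, post_var

# Two-stage: flat prior -> posterior from the first 9 -> prior for the 10th.
mu, var = obs[0], sigma**2                 # first obs under a flat prior
for y in obs[1:9]:
    mu, var = update(mu, var, y, sigma)
mu10, var10 = update(mu, var, obs[9], sigma)

# One-step: all 10 observations at once under a flat prior.
mu_all, var_all = obs.mean(), sigma**2 / 10
# mu10 == mu_all and var10 == var_all (up to floating point).
```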

Or, if we insist on a uniform prior for the 10th observation, the results of the first 9 observations can be used to back out a non-uniform pseudo-prior for the first 9 which gives us a uniform “posterior” when combined with the data on the first 9. But surely it is not legitimate to use some of the data to generate the prior for that data itself!

I have no issue with these sorts of articles here; they are excellent food for thought. (I would spin them out more and present them more like literate programming, instead of sometimes oversaturating us with graphs.)

Bayes theory is very, very difficult, it seems; or at least it looks that way in most classic texts.

I hope, for Oxford University’s sake, that I am wrong in my hunch, which is that Keenan is correct and this integration by substitution should yield exactly the same results.

They have a week to repent there, LOL

Don’t get me wrong: I think there are well over a hundred people in Oxford who could, and would just love to, attack this department, but they all know it’s “bad for their careers”.