## The Loehle Network plus Moberg Trees

Loehle’s introduction emphasized the absence of tree ring chronologies as an important aspect of his network. I think he’s placed too much emphasis on this issue, as I’ll show below. I previously noted the substantial overlap between the Loehle network and the Moberg low-frequency network.

I thought that it would be an interesting exercise to consider Loehle’s network as a variation of the Moberg network as follows:

• expand and amend Moberg’s low-frequency network (11 series) to the larger Loehle network (18 series);
• keep the Moberg tree ring network;
• use my emulation of Moberg’s wavelet methodology, based on the discrete wavelet transform. This is not exactly the same as Moberg’s method, for which there is no source code, making exact emulation very difficult.

The results are shown in the Figure below. Obviously this method maintains the general “topography” of the Loehle network as to the medieval-modern relationship, while using a Team method (or at least a plausible emulation of the Moberg method).

The difference between Moberg’s results (in which the modern warm period was a knife-edge warmer than medieval) and these results rests entirely with proxy selection. The 11 series in the Moberg low-frequency network are increased to 18, primarily through the addition of ocean SST reconstructions (two Stott recons in the Pacific Warm Pool, Kim in the north Pacific, Calvo in the Norwegian Sea, de Menocal offshore Mauritania and Farmer in the Benguela upwelling), while excluding the uncalibrated Arabian Sea G. bulloides series – a proxy much criticized at CA. The other additions are the Mangini and Holmgren speleothems (the Lauritzen speleothem in Norway was excluded for reasons I’m not clear about, but its re-insertion would not change things much), plus the Ge phenology reconstruction and the Viau pollen reconstruction.

So the underlying issue accounting for the difference is not the inclusion or exclusion of tree ring series (in a Moberg style reconstruction) but mainly the inclusion of several new ocean SST and speleothem temperature reconstructions, combined with the removal of uncalibrated proxies.
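The Moberg-style combination described above takes low-frequency variability from the low-resolution proxies and high-frequency variability from the tree ring network. A minimal sketch of such a frequency-split composite, using a centred moving average as a crude stand-in for the discrete wavelet transform (all series, counts, and the smoothing window below are synthetic illustrations, not the actual networks):

```python
import numpy as np

def frequency_split(series, window=81):
    """Split a series into low- and high-frequency components using a
    centred moving average (a crude stand-in for the discrete wavelet
    transform used in the emulation). low + high reproduces the input."""
    kernel = np.ones(window) / window
    low = np.convolve(series, kernel, mode="same")
    return low, series - low

rng = np.random.default_rng(0)
years = np.arange(1, 1981)                            # AD 1-1980, illustrative
low_res_proxies = rng.normal(size=(18, years.size))   # e.g. SST, speleothems
tree_rings = rng.normal(size=(7, years.size))         # high-resolution network

# Composite: low frequency from the non-tree-ring network,
# high frequency from the tree ring network.
low, _ = frequency_split(low_res_proxies.mean(axis=0))
_, high = frequency_split(tree_rings.mean(axis=0))
reconstruction = low + high
```

A faithful emulation would replace `frequency_split` with a multi-level wavelet decomposition and recombine scales according to each proxy's temporal resolution, but the split-and-recombine structure is the same.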

1. Jimmy
Posted Nov 20, 2007 at 9:36 PM | Permalink

while you’re at it, you should add in Vostok temps

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/deutnat.txt

2. Sylvain
Posted Nov 20, 2007 at 10:19 PM | Permalink

With the latest results, it seems there is a lot more uncertainty than is acknowledged by the warmers’ team, given that different proxy selections end up with much different results.

It would be interesting (I don’t know if it is possible) to see what a reconstruction using all calibrated proxies would look like.

I have a few questions:

How confident can we be that scientists don’t select proxies until they reach the expected result?

What do the different standards shown in the selection of what is published mean?

Do these publishers hold too much power over the outcome of science?

Why aren’t there more scientists with the level of professionalism shown by Mr. Loehle?

I have been following this debate for a little more than a year and a half. I’ve been really turned off by how the science is conducted, surprised to learn how rarely publications get audited, and even more surprised by how much obstruction exists so that they don’t get audited.

Congratulations, Mr. McIntyre! I have the perception that many people see you as an IRS auditor, which is a good thing since auditors catch a lot of cheaters. Too sad that you don’t have their power to acquire data the way they can acquire our transaction records.

3. Jimmy
Posted Nov 20, 2007 at 11:41 PM | Permalink

alternatively, one could use EPICA Dome C instead of Vostok. Or maybe an average of both to represent Antarctica?

ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc3deuttemp2007.txt

Also, what happens if you use Dye 3 instead of GRIP? (Dahl-Jensen et al., 1998)

There are so many other datasets out there that could make the reconstruction “more global”….

4. John M.
Posted Nov 20, 2007 at 11:47 PM | Permalink

Bear in mind that paleoclimate studies are not really the core of what the theory of AGW is based on; computer modeling of future climate is, in fact, what primarily drove the Kyoto Protocol. Worth bearing in mind also that Craig Loehle used data from peer-reviewed scientific papers. What I think is slowly emerging now is that properly conducted scientific research refuting Mann’s hockey stick is already readily available in the literature, so people who have been making sweeping comments about scientists on here have been throwing the baby out with the bathwater.

5. PHE
Posted Nov 21, 2007 at 1:09 AM | Permalink

Can I ask what is being done to get some of this extensive work into peer-reviewed publication? While the fallibilities of ‘peer review’ are understood, this is the constant mantra of the hockey team and promoters of IPCC conclusions. Any viewpoints that are not ‘peer reviewed’ are too easily dismissed.

6. henry
Posted Nov 21, 2007 at 6:04 AM | Permalink

PHE says:

Can I ask what is being done to get some of this extensive work into peer-reviewed publication? While the fallibilities of peer review are understood, this is the constant mantra of the hockey team and promoters of IPCC conclusions. Any viewpoints that are not peer reviewed are too easily dismissed.

But that’s the beauty of this study – ALL the proxies he’s using are taken from peer-reviewed (and published) articles.

It will be hard for critics to refute information they’ve already accepted.

They might critique his methods – or his choice of proxies – but the science has been accepted.

7. bender
Posted Nov 21, 2007 at 7:12 AM | Permalink

#7 You’re conflating the science of the canonical/source studies, A, with the science of the derivative/synthetic study, B. Acceptance of A does not imply acceptability of B. In fact, the statistics in B in this case (Loehle 2007) are sketchy. That is apparently being fixed, but it will be some time before an improved version is available. At that time – not sooner – we will see whether Loehle’s conclusions are robust. If the post above is any indication, that may be the case. OTOH I’m not sure Moberg adequately dealt with the problem of confidence estimation either.

8. BrianMcL
Posted Nov 21, 2007 at 7:39 AM | Permalink

Does anyone know how hard it would be to work out what would happen to the graph if the Ababneh trees were included?

9. Posted Nov 21, 2007 at 7:50 AM | Permalink

#9: “How do they test climate computer models that predict the future climate?”

I’m not sure how they test those models, but if the methods are similar to how GISS produces surface air temperature (SAT) maps, it’s an art form, not science:

Q. If SATs cannot be measured, how are SAT maps created ?
A. This can only be done with the help of computer models, the same models that are used to create the daily weather forecasts. We may start out the model with the few observed data that are available and fill in the rest with guesses (also called extrapolations) and then let the model run long enough so that the initial guesses no longer matter, but not too long in order to avoid that the inaccuracies of the model become relevant. This may be done starting from conditions from many years, so that the average (called a ‘climatology’) hopefully represents a typical map for the particular month or day of the year.

10. Philip_B
Posted Nov 21, 2007 at 8:11 AM | Permalink

How do they test climate computer models that predict the future climate?

They generate simulated climate data. In simple terms, they make up data that fits their predictions (preconceptions) of future climate and test the models against that data. If the model gives the result the modelers expect, then that is a successful test.

11. bender
Posted Nov 21, 2007 at 8:16 AM | Permalink

#10

How hard would it be to work out what would happen to the graph if the Ababneh trees were included?

Currently impossible because the individual tree data are not available. All we have is Erren’s digitization of the mean chronology. Hence Steve M’s efforts to have researchers archive their data promptly.

12. bender
Posted Nov 21, 2007 at 8:20 AM | Permalink

Whoops, ignore #13. You don’t really need the individual tree data if you are going to ignore the error around the mean chronology.

13. bender
Posted Nov 21, 2007 at 8:28 AM | Permalink

#10 It would be relatively easy once you’ve got the data in hand.

14. Stan Palmer
Posted Nov 21, 2007 at 8:33 AM | Permalink

#10 It would be relatively easy once youve got the data in hand.

Is the Ababneh data linked to temperature?

15. pk
Posted Nov 21, 2007 at 8:56 AM | Permalink

#16
“Is the Ababneh data linked to temperature?”

Not really. In appendix II of her thesis she attempted to correlate the four tree ring series to local temperature and precipitation. The best correlation she got was with the Sheep Mountain whole bark trees, although it was not a strong correlation.

16. bender
Posted Nov 21, 2007 at 9:12 AM | Permalink

#17 Exactly why JEG said that the proxies should always be weighted according to the amount of instrumental variation they explain. If Ababneh bcp explains zero, its weight in the recon would be zero. If it’s 0.1, its weighting would be 0.1. Better proxies get more weight because they’re more credible.
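The weighting rule described here can be sketched numerically. The series below are synthetic stand-ins, and this is only an illustration of the explained-variance idea, not JEG’s actual implementation:

```python
import numpy as np

def variance_weighted_composite(proxies, instrumental):
    """Weight each proxy by the fraction of instrumental variance it
    explains (r^2) over the calibration period, then average.
    A proxy with zero correlation gets zero weight."""
    weights = np.array([np.corrcoef(p, instrumental)[0, 1] ** 2
                        for p in proxies])
    weights = weights / weights.sum()
    return weights @ np.asarray(proxies)

rng = np.random.default_rng(1)
temp = rng.normal(size=100)                  # instrumental record (synthetic)
good = temp + 0.5 * rng.normal(size=100)     # proxy that tracks temperature
noise = rng.normal(size=100)                 # proxy with no temperature signal
composite = variance_weighted_composite([good, noise], temp)
```

The informative series dominates the composite; an uncorrelated series is down-weighted toward zero rather than excluded outright.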

17. Posted Nov 21, 2007 at 9:12 AM | Permalink

So the underlying issue accounting for the difference is not the inclusion or exclusion of tree ring series (in a Moberg style reconstruction) but mainly the inclusion of several new ocean SST and speleothem temperature reconstructions, combined with the removal of uncalibrated proxies.

Just from an eyeball approach, none of the Moberg proxies seems to end in 1810.

Perhaps the simplest explanation is the use of data reaching (sort of) up to 2000?

Steve:
While the point is worth raising, practically, I don’t think that this is relevant to the difference. (And BTW the Moberg Arabian Sea series is a splice of one series ending in 1500 with another series, and the splice is very hairy, as I observed elsewhere.) My own sense is that the difference comes from the addition of SST estimates and the removal of the two uncalibrated series (which were very non-normal and perhaps overly influential).

18. Larry
Posted Nov 21, 2007 at 9:37 AM | Permalink

Better proxies get more weight because theyre more credible.

How did we get from showing a stronger temperature signal to “better” to “more credible”?

19. bender
Posted Nov 21, 2007 at 9:42 AM | Permalink

Larry, I’m not making a factual assertion, I’m simply recounting someone else’s logic.

20. Steve McIntyre
Posted Nov 21, 2007 at 10:24 AM | Permalink

#16. bender you said:

Exactly why JEG said that the proxies should always be weighted according to the amount of instrumental variation they explain. If Ababneh bcp explains zero, its weight in the recon would be zero. If it’s 0.1, its weighting would be 0.1. Better proxies get more weight because they’re more credible.

You have to be careful here with what you’re saying. You don’t know in advance what proportion is explained and have to estimate this. If JEG is proposing that proxies be weighted according to their correlation to the MBH PC1 or NH temperature or something like that, then you’re doing a (Partial Least Squares) inverse regression. The coefficients are proportional to $X^T y$. In OLS multiple regression, these coefficients are multiplied by the matrix $(X^T X)^{-1}$ to yield new coefficients. If you have a very noisy network, then the transformation matrix (in coefficient space) $(X^T X)^{-1}$ is “near” orthogonal in some sense, so that the PLS regression (about which we don’t know very much) is approximated to some degree by OLS regression, about which we know a lot and have applicable instincts.

One applicable instinct is that if you do an OLS regression of a series that is 79 years long on 72 poorly correlated series (near orthogonal), you will get a sensational fit in the calibration period regardless of what you’re working with. That’s why Mannian methods work just as well when the networks are mostly white noise as they do with actual proxies – if you have one active ingredient.

My own instinct in this is that there is a real case for very simple averaging methods if you have poor knowledge of which proxies are good and which ones aren’t. I think that there is even a mathematical basis for this although I’m not sure that I can demonstrate it.
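The calibration-overfitting instinct above is easy to demonstrate with purely synthetic data: regress a 79-point white-noise “temperature” series on 72 independent white-noise “proxies” and the in-sample fit is sensational, even though there is no signal at all:

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_proxies = 79, 72
X = rng.normal(size=(n_years, n_proxies))   # white-noise "proxy network"
y = rng.normal(size=n_years)                # white-noise "temperature" target

# Ordinary least squares fit of y on all 72 noise series.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
# r2 is close to 1 despite there being no real relationship:
# with 72 near-orthogonal predictors and only 79 observations,
# the calibration-period fit is essentially guaranteed.
```

The expected in-sample $R^2$ under pure noise is roughly $p/(n-1) = 72/78 \approx 0.92$, which is why a spectacular calibration fit says nothing about out-of-sample skill.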

21. Posted Nov 21, 2007 at 11:06 AM | Permalink

#20 Steve, Bender and JEG, I believe, were arguing for weighting based on the amount of “local” instrumental temperature variation they explain. A reasonable refinement. Another very useful calculation would be to attempt to create a global temperature reconstruction by separately (or further) calibrating and weighting the proxies prior to averaging based on the covariance between the local temperatures at the proxy sites (not the calibrated proxy data themselves) and the global temperature over the instrument record. This would push the Schweingruber vs. Fritsch methodology dichotomy to its limit: a legitimate (testable and defensible) reconstruction of global temperature using direct temperature relationships as much as possible.
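That two-stage idea might be sketched as follows. The site count, noise levels, and all series below are entirely synthetic illustrations, not any published network:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years, n_sites = 120, 6
global_temp = rng.normal(size=n_years).cumsum() * 0.05  # synthetic global mean

# Local instrumental temperatures: each site tracks the global
# series with site-specific noise.
local_temps = global_temp + rng.normal(scale=0.3, size=(n_sites, n_years))

# Proxies assumed already calibrated to local temperature (stand-ins).
proxies = local_temps + rng.normal(scale=0.2, size=(n_sites, n_years))

# Weight each proxy by the covariance between its site's local
# temperature (not the proxy itself) and the global mean over the
# instrumental record; negative covariances get zero weight.
cov = np.array([np.cov(lt, global_temp)[0, 1] for lt in local_temps])
weights = np.clip(cov, 0, None)
weights /= weights.sum()
global_recon = weights @ proxies
```

The point of weighting on the local-temperature/global-temperature covariance, rather than on the proxy/global correlation, is that the weights then rest on instrumental relationships that can be tested directly.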

22. Mike B
Posted Nov 21, 2007 at 11:23 AM | Permalink

Steve:

My own instinct in this is that there is a real case for very simple averaging methods if you have poor knowledge of which proxies are good and which ones aren’t. I think that there is even a mathematical basis for this although I’m not sure that I can demonstrate it.

I’ve been fiddling a bit with the Loehle proxies, and I believe the best framework for evaluating multiproxy reconstructions is via robustness (i.e. how does the composite change when individual proxies or groups of proxies are removed).

As a first test, I looked at a simple average of each of the 18 possible combinations of 17 proxies. The composite is remarkably robust to the removal of any individual proxy.

As a tougher test, I looked at composites of 11 proxies. There are about 32,000 possible combinations, so I obviously haven’t looked at all of them. But I have randomly selected 25 composites of 11 proxies (in a bootstrapish fashion), and again am amazed at the robustness of this network. Every one of the composites shows a MWP and a LIA.

Given my utter failures at posting graphs and data previously, if anyone else is interested in pursuing this approach, they could probably get something for others to look at more quickly than I could.

Steve: You upload the image somewhere and then use the IMG button to insert the url. This results in WordPress code.
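The robustness check described above (leave-one-out composites plus random 11-proxy draws) can be sketched with synthetic proxies standing in for the 18 Loehle series:

```python
import numpy as np

rng = np.random.default_rng(7)
n_proxies, n_years = 18, 2000
# Shared "climate" signal plus independent proxy noise (synthetic).
signal = np.sin(np.linspace(0, 4 * np.pi, n_years))
proxies = signal + rng.normal(scale=0.5, size=(n_proxies, n_years))

# Leave-one-out: 18 composites of 17 proxies each.
loo = [np.delete(proxies, i, axis=0).mean(axis=0) for i in range(n_proxies)]

# 25 random 11-proxy composites out of the ~31,824 possible combinations.
subsets = [rng.choice(n_proxies, size=11, replace=False) for _ in range(25)]
random_composites = [proxies[idx].mean(axis=0) for idx in subsets]

# Robustness metric: worst-case deviation of any composite from the
# full-network mean at any year.
full = proxies.mean(axis=0)
max_dev = max(np.abs(c - full).max() for c in loo + random_composites)
```

If the proxies share a common signal, `max_dev` stays small relative to the signal amplitude, which is the sense in which the shape of the composite (here, the sine wave standing in for MWP/LIA structure) is robust to proxy selection.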

23. steven mosher
Posted Nov 21, 2007 at 12:09 PM | Permalink

RE 22. Go to http://www.imageshack.us.

Upload your stuff. Post. Easier than statistics. Or dendrochronology.

24. Jon
Posted Nov 21, 2007 at 1:23 PM | Permalink

#17 Exactly why JEG said that the proxies should always be weighted according to the amount of instrumental variation they explain. If Ababneh bcp explains zero, its weight in the recon would be zero. If it’s 0.1, its weighting would be 0.1. Better proxies get more weight because they’re more credible.

The question is why are measurements of xyz a proxy for temperature–pick your favorite xyz. Your statement does not directly address the importance of distinguishing local variance from global variance.

A “teleconnection” to global variance requires intermediate physical processes, which draw into question the linearity assumptions of the technique. Conversely, a correlation with local variance is meaningful insofar as your knowledge of the local temperature is accurate.

Loehle claims no specialized knowledge of whether timeseries xyz is a proxy for temperature. He draws from peer-reviewed studies where others have claimed such specialized knowledge to identify and to calibrate the timeseries.

Consequently the variance-weighting approach JEG mentions has no direct meaning within the logic of Loehle’s approach which takes the calibrated timeseries as an a priori given. Thus, JEG’s suggestion is a gross misunderstanding both of what is taking place in the Loehle approach and in what is taking place in the MBH approach.

25. Peter D. Tillman
Posted Nov 21, 2007 at 1:40 PM | Permalink

bender says in #12

Whoops, ignore #13…

Steve, not to nag, but it would make the blog more intelligible for your readers if you preserved the original post numbers when you moderate…

Steve:
I’m sorry about that, but the mechanics of snipping and leaving a placeholder are sufficiently more time-consuming than deleting, and I’m already swamped enough, that I’m going to do the quickest thing on many occasions. Sorry about that. If you suggest to WordPress that they add a Clear button beside their Delete button, I’d use it and preserve the order.

26. Mike B
Posted Nov 21, 2007 at 2:36 PM | Permalink

Upload your stuff. Post. Easier than statistics. Or dendrochronology.

So says you. :) I kept getting a thing with a little red x in it. I’ll try again.

27. PHE
Posted Nov 21, 2007 at 3:03 PM | Permalink

Re 26 (Peter D. Tillman). This point keeps coming up. Just make sure you include the no. AND the name. Then if the no. slips, we can still work out who you’re having a go at. As for no. 28 (PHE), you don’t know what you’re saying – just talking in circles.

28. Posted Nov 21, 2007 at 3:41 PM | Permalink

I think that the similarity to the IPCC’s 1990 figure below is striking. Some of the smaller peaks and valleys seem to match as well.

29. Bob Koss
Posted Nov 21, 2007 at 3:59 PM | Permalink

Mike B.

Might I suggest the reason for you getting a red X?

Network filtering software is the likely culprit. Sites that are mainly for images are likely being blocked.

30. Posted Nov 21, 2007 at 4:09 PM | Permalink

#29: Jason L, how dare you post that “cartoon.” It was just a “schematic” of popular opinion in the olden days and not based on any real proxies that were actually published or anything. 🙂

Steve M or anybody: what causes the Moberg-style graph in this post to have a huge uptick at the end and Loehle 2007 to have a downtick? Loehle 2007 seems to better represent the 1970s cooling, though the warm 1930s don’t seem to make an appearance.

31. olram
Posted Nov 21, 2007 at 5:32 PM | Permalink

#31: The x-axes in the plots in the Loehle Proxies #2 thread have greater resolution; the warm 30’s can be seen.

32. Phil.
Posted Dec 1, 2007 at 11:16 AM | Permalink

Joel #31
“Steve M or anybody: what causes the Moberg-style graph in this post to have a huge uptick at the end and Loehle 2007 to have a downtick? Loehle 2007 seems to better represent the 1970s cooling, though the warm 1930s don’t seem to make an appearance.”

The way he treats the endpoint of his dataset; the mean of his proxies peaks in 1966 and drops rapidly to the end in 1980 (see below):

1964 0.100322801
1965 0.142321563
1966 0.170178093
1967 0.157816547
1968 0.143066997
1969 0.130265684
1970 0.138856536
1971 0.151107518
1972 0.12857254
1973 0.113799023
1974 0.089954119
1975 0.05341691
1976 0.015333634
1977 0.003300072
1978 0.010647498
1979 -0.008464229
1980 -0.009645872

Hadcrut3 shows an increase of ~0.2°C over the same timeframe (and ~0.6°C to the present day).

The 30’s appear not to be very warm in comparison:

1932 -0.012193721
1933 -0.011362548
1934 -0.011035132
1935 0.035464209
1936 0.021091592
1937 0.01574397
1938 0.021998722
1939 0.017378256
1940 0.009740916

Based on that data I would say that Loehle represents the period 1930-1980 extremely poorly!