In numerous ancient Climate Audit posts, I observed that all MBH98 operations were linear and that the step reconstructions were therefore linear combinations of proxies, the coefficients of which could be calculated directly from the matrix algebra (described in a series of articles). Soderqvist’s identification of the actual proxies enables calculation of the AD1400 weights by regression of the two “glimpses” of the AD1400 step (1400-1449 in the spliced reconstruction and 1902-1980 in the Dirty Laundry data) against the proxy network. The regression information is shown in an Appendix at the end of this post.
The figure below shows the weights for (scaled) proxies as follows: left – weights from my previous (ancient) calculations from “first principles”; right – from regression of reconstruction “glimpses” against Soderqvist identification network.
I haven’t yet tried to retrace my linear algebra using the new identification. The linear algebra used in the diagram at left also reconciles to five nines with the Wahl-Ammann calculation, so the left panel can safely be construed as showing the weights for the AD1400 network as listed in the Nature SI, but not for the actual MBH98 network, whose weights are shown on the right.

Within the overall similarity, there are some interesting differences in weights arising from the use of four lower order NOAMER (pseudo-) PCs rather than four tree ring series from Morocco and France. The problematic Gaspe series (what Mark Steyn referred to in his deposition as the “lone pine”) receives nearly double the weighting in the MBH98 data as actually used, as opposed to the incorrect listing at Nature. Also, the NOAMER PC6 is almost as heavily weighted as the notorious Mannian PC1. It will be interesting to see how heavily the Graybill stripbark bristlecones and other data that Mann had analysed in his CENSORED directory feature in this other heavily weighted PC. My guess is that the combination of principal components and inverse regression will show the heavy weighting of stripbark bristlecones and the downweighting of other data that we pointed out almost 20 years ago.
The contribution of North American individual species to the MBH AD1400 reconstruction can be calculated from the eigenvectors. In the Mannian PC1, nearly all of the sites (and thus species) have positive coefficients, though, as discussed many years ago, the stripbark species (bristlecones PILO and PIAR; foxtail PIBS) are the most heavily weighted. When six PCs are used in the MBH98 algorithm, Douglas fir (PSME) is flipped to a negative orientation.
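The species bookkeeping can be sketched in a few lines. This is an illustrative sketch only: the site IDs, species assignments and loadings below are invented for the example, not Mann's actual eigenvector.

```python
# Illustrative sketch only: sum (hypothetical) eigenvector loadings by
# ITRDB species code to get each species' net orientation in a PC.
# Site IDs, species tags and coefficients below are invented.
loadings = [
    ("ca529", "PILO",  0.12),  # bristlecone (hypothetical loading)
    ("ca530", "PILO",  0.15),
    ("nv512", "PIAR",  0.11),  # bristlecone
    ("ca534", "PIBS",  0.10),  # foxtail
    ("co545", "PSME", -0.04),  # flipped to negative in this toy example
    ("az510", "PSME", -0.02),
]

species_net = {}
for site, species, coef in loadings:
    species_net[species] = species_net.get(species, 0.0) + coef

# Stripbark species dominate; PSME nets out negative here.
print(species_net)
```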

Appendix
Below is the output from a simple regression of the MBH98 AD1400 “glimpses” (AD1400-1449 from the splice and AD1902-1980 from the Dirty Laundry data) against Soderqvist’s identification of the actual network. The R^2 of the reverse engineering is 0.9999, with significance less than 2e-16 for all but one proxy (seprecip-nc). There is a small bit of untidiness with seprecip-nc, but it is de minimis.
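For readers who want to try this at home, the reverse engineering amounts to an ordinary least-squares regression of the reconstruction glimpses on the proxy matrix. A minimal sketch on synthetic data; the variable names and dimensions merely echo the 1400-1449 plus 1902-1980 glimpse years and a 22-proxy network, and are not the actual MBH98 matrices:

```python
# Hedged sketch of the reverse engineering: regress the visible "glimpses"
# of a reconstruction on the candidate proxy matrix and read the per-proxy
# weights off the coefficients. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 129, 22
proxies = rng.standard_normal((n_years, n_proxies))
true_w = rng.standard_normal(n_proxies)
recon = proxies @ true_w                      # purely linear reconstruction

w_hat, *_ = np.linalg.lstsq(proxies, recon, rcond=None)
ss_res = np.sum((proxies @ w_hat - recon) ** 2)
ss_tot = np.sum((recon - recon.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                      # ~1 when the recon is truly linear
print(round(r2, 6))
```

When the reconstruction really is a linear combination of the candidate proxies, the recovered coefficients match the true weights and R^2 is indistinguishable from 1, which is exactly the signature reported above.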

Also, for reference, here is a diagram of AD1400 proxy weights from late 2007 (link). I discussed these weights in a couple of subsequent presentations.



58 Comments
Very nice!
This is for data divided by 1400-1980 standard deviation, correct? If you divide by the calibration-period standard deviation (or detrended standard deviation as Mann did), PC6 actually trumps PC1.
good point. I’ve got data and scripts to analyze the eigenvectors. Will do so soon.
Trivia Question: Can anyone name the relatively recent movie in which the word “eigenvector” was used?
good trivia question.
good trivia question.
I give up. Courtesy of google, I found two movies in which the term eigenvalue was used: No Way Out (Kevin Costner is in it) and End Game. But no idea on eigenvector.
It was End Game. I mistakenly said eigenvector; I meant eigenvalue.
I also get a tiny weight on seprecip-nc, but it was definitely used.
HS,
Readers should appreciate your mathematical skills. The reverse engineering that you did here is not easy, but the result is outstanding. We need more like you to demonstrate how shoddy some work is. Like Steve’s past demonstrations of how PAGES authors used slick methods to combine many data series, most without recent upticks, to produce a big hockey-stick-style uptick. One merely needs a rudimentary eyeball to see they will not combine that way, but the definitive analysis and rebuttal needs accurate math, free of preconception. Thank you. Geoff S
I noticed that at the time Steve presented them. How do you get that shape from all those that do NOT have that shape?
Thanks, Geoff. As with PAGES 2k, most proxies in the MBH98 AD 1400 network indicate normal conditions in the 20th century.
It’s interesting that if you squint you can see a slight dip in the orange data through the Little Ice Age (1550-1720), suggesting a chance there was actual sensitivity to short summers.
The Gaspe plot is erratic, which is perhaps why the other data was needed, to smooth the shaft of the hockey stick. For example, I notice the two data sets’ excursions cancel each other in the 1800s.
Compare to this graphic in March 2006 https://climateaudit.org/2006/03/31/a-new-spaghetti-graph/

Figure: Spaghetti graph showing top – absolute contribution to MBH98 reconstruction (1400-1980 for AD1400 step proxies) by the following groups: Asian tree rings; Australia tree rings; European ice core; Bristlecones (and Gaspé); Greenland ice core; non-bristlecone North American tree rings; South American ice core; South American tree rings. Bottom – all 9 contributors standardized.
in the directory UVA/TREE/MANNETAL/ is a file northertreeline-pc1.xmgr which compares the AD1000 PC1 and Gaspe series.

Jean S did some heroic work years ago figuring out Mann’s bodge of AD1000 PC1 but I don’t recall any discussion of the Gaspe series in his analysis.
Congratulations Steve and hs (Soderqvist, I presume).
I am hoping very much you will summarize the degree and significance of MBH’s deceptions in terms that I, Mark Steyn, and the Mann v Steyn judge can understand. I am hoping the trial is video recorded. It has the potential to be historic.
Ron, unfortunately there will be no video or photos. It’s not allowed in US Federal Courts. But some state courts do allow it, such as in Georgia, where Trump’s team has not removed his case to Federal Court, as several co-defendants have done, because he wants the corrupt and fragile US voting system “on trial” for the public to see.
This certainly has publication potential.
Will it get things moving regarding the shaky scientistic scaffold of multiproxy temperature reconstructions using TRW (among others)?
I’ve added the following to the text (including a new diagram):
“The contribution of North American individual species to the MBH AD1400 reconstruction can be calculated from the eigenvectors. In the Mannian PC1, nearly all of the sites (and thus species) have positive coefficients, though, as discussed many years ago, the stripbark species (bristlecones PILO and PIAR; foxtail PIBS) are the most heavily weighted. When six PCs are used in the MBH98 algorithm, Douglas fir (PSME) is flipped to a negative orientation.”
Of the heavy weights, Tasmania is up there. In calculating the calibration between tree ring factors and local temperatures, there was no good-quality temperature record near where the trees grew – none in a similar climate zone.
The actual temperature change used in the original paper would plausibly be between half and twice the “real” temperature change. Geoff S
Long time, no chat. Cheers.
Steve,
Hoping mostly that you are well, I am flagging a bit. Hoping for more CA articles from you and the old team comments.
I beaver away writing truisms that few read because they get offended. Sending a long article on UHI to WUWT soon.
At its start, I elected not to join in social media, for various personal preferences. That has limited our chat.
You appear to have been close to access for law matters, me not so any more. There used to be a case law here in Libel, where an actor was paid to insert a small animal leg into a pie he was eating at a famous Melbourne café, then fuss about it. It worked, the café lost business. Then came more laws to curb badmouthing intended to lead to economic loss. I view some of the Mannian remarks as designed for economic loss to hydrocarbon fuel industry, but I have not seen any laws being invoked.
Many interesting matters seem to have disappeared into the never-never. Was there an end to this, for example? https://climateaudit.org/2015/09/28/shuklas-gold/
Cheers Geoff
I read them, Geoff. Keep on keepin’ on, as they say.
EXCELLENT, Steve.
This seems a timely revelation which ought to weigh substantially in the just beginning Mark Steyn v. Mann libel trial in the DC Federal District Court.
“Mann made” & co tried everything to torpedo Stephen McIntyre, from ignoring him via shadow-banning on Twitter to outliving him, but nothing worked! The Truth boomerang is coming back: Karma is a b*tch.
Mann v Steyn and Simberg is tentatively scheduled for Jan. I might just try to pop in.
https://www.steynonline.com/13931/if-at-first-you-dont-succeed-trial-trial-again
You’d think that Mann himself would “weigh” in – or at least some of his disciples.
Congratulations on getting the details sorted!
Bruce
What does “weight” mean in the column charts? I would think they ought to sum to 1.0 (100%), but obviously they don’t.
The “weights” or coefficients tell us how much each proxy contributes to the temperature reconstruction. These are uncalibrated proxies so a weighted average of them would also be uncalibrated.
reconstruction = coefficient1*proxy1 + coefficient2*proxy2 + …
Makes sense, that’s what I think of casually as the definition of word “weight”.
So why don’t the weights add to 100%? Don’t you want to be able to say X% of the reconstruction is from Gaspe? (I would, if describing an analogous complex phenomenon in medicine, manufacturing, chemistry, mining, refining, business etc.)
Even with the negative weights added in, just integrating by eye, it seems like the weights are not adding to 1.0.
Look at the left panel of figure 1. The first five proxies are all above 1/4 weighting.
proxy        weight (by eye)   percent
USA PC1      0.55              55
Tasmania     0.37              37
Tornetrask   0.37              37
Gaspe        0.36              36
USA PC2      0.27              27
-------------------------------------
Total        1.92              192
And yeah there are some negative bars on the right. But not enough to compensate for too high positive weighting. Just integrating by eye, it looks less than the remaining smaller positive factors. I mean you have 6 negative factors, all absolute magnitude less than 1/4 (2 of them magnitude more than 1/8). And in comparison the remaining positive factors (additional to above) are 10 more (4 of them more than 1/8). [Also, you have one zero, by eye, factor in North Carolina.]
Forming a temperature reconstruction as an average (weighted or not) of *calibrated* proxies would be a pretty reasonable thing to do. Then the coefficients would all be positive and add to one. But here the proxies are standardized to have unit variance, so the calibration is implicit in the coefficients.
To get weights that add to one, one could divide the coefficients by their sum and scale the proxies correspondingly. Negative weights should be avoided by first inverting proxies that are negatively related to temperature, otherwise individual proxies could account for more than ±100% of the reconstruction.
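That normalization can be sketched in a few lines. The coefficients below are made-up examples, not the actual AD1400 weights, and "coral-x" is a hypothetical negatively oriented proxy:

```python
# Sketch of the suggestion above: flip negatively oriented proxies, then
# divide by the coefficient sum so weights read as percent contributions.
# Coefficients are invented; "coral-x" is a hypothetical flipped proxy.
coefs = {"NOAMER PC1": 0.55, "Tasmania": 0.37, "Gaspe": 0.36, "coral-x": -0.10}

flipped = {name: abs(c) for name, c in coefs.items()}  # invert negative proxies
total = sum(flipped.values())
percent = {name: 100 * c / total for name, c in flipped.items()}

print({k: round(v, 1) for k, v in percent.items()})    # sums to 100
```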
So, uh, why didn’t we do that?
How can I evaluate “how much each proxy contributes” (your words) if you don’t use weights to mean the garden variety meaning of the term?
Presumably within a graphic, the “weight” y axis is the same, so I can at least say that USA PC1 is about twice as important as USA PC1 (left side of first figure). But I still don’t have an intuitive feeling for “how much [PC1] contributes”. Is it 25% of the total? 33%? What. I can’t tell…because we have this bizarre use of weight.
AND I can’t tell for sure that the “weight” (y axis) means the same thing from figure to figure. Like is a 0.25 the same thing on left and right graphs within figure 1? How about between figure 1 and figure 2. Or the two panels within figure 2?
Agreed, I might avoid the concept of a negative “weight”. If we want to bash Mann for flipping trained proxies, fine. Bash away. But I would mathematically express the overall equation in terms of how much each thing contributes (even if it’s the flipped thing contributing). But even if we keep negative weights, the total STILL doesn’t add to 100%.
It’s been 15+ years and a gazillion posts. And I still don’t have a pie chart! We got autocorrelation and monkey videos and fractional calculus. (I kid you not, we had that!) But I still don’t see a simple expression of the “how much each proxy contributes”.
And this isn’t personal, isn’t political. Take it out of left/right climate debate. Just think about if you were analyzing an ore processing plant. Wouldn’t you want to show how much each proxy (issue) contributes to the reconstruction (production shortfalls)? Otherwise, how do I know where to spend my time on upgrades?
What I mean by “weights” in this context is that the reconstruction is a linear combination of the proxies. (This was an original observation that I made early on and contradicted a then widespread belief that it was impossible to allocate contribution of individual proxies.)
In this usage, there are negative weights as well as positive weights.
In cases where I’ve tried to standardize the scale of coefficients, I’ve typically used sqrt of sum of squares, rather than summing the values.
I’ve shown maps with absolute value of weights represented by the size of the dot and sign by color.
I did a graphic years ago in which I showed the contributions year-by-year of each class of proxy, with stripbark bristlecones+Gaspe as one class of proxy. The other proxies are just noise and cancel out.
I agree that it would benefit from an insightful graphic.
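The two standardizations mentioned in this thread can be compared directly; the coefficients below are toy numbers:

```python
# Toy comparison of the two standardizations: dividing by the plain sum of
# coefficients versus by the sqrt of the sum of squares (the latter is what
# leaves an eigenvector with ssq = 1). Coefficients are arbitrary.
import math

w = [0.6, 0.3, -0.1]
s = sum(w)
l1 = [x / s for x in w]                        # sums to 1; unstable if s ~ 0
norm = math.sqrt(sum(x * x for x in w))
l2 = [x / norm for x in w]                     # ssq of result is exactly 1

print(round(sum(l1), 6), round(sum(x * x for x in l2), 6))
```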
One of the topics in which the Climate Audit blog advanced well beyond our early papers was the articulation of the linear algebra of MBH98, especially the overfitting of Mannian inverse regression. The topic was important in CA posts in 2007-2008 but got lost in the Climategate furore and then in the new set of question marks re PAGES2K. One of the most important of those questions was the surprising HS-ness of so many tree ring chronologies – which Soderqvist solved so cleverly recently.
the idea of “contribution” is fairly subtle. My understanding of what’s going on is that most of the “proxies” are just noise and that they cancel out under Central Limit Theorem. Within the data are a subset of cherry picked HS-shaped series (stripbark bristlecones), which are not necessarily temperature proxies, but which are not cancelled out, because they are all oriented in relation to 20th century trend.
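The cancellation argument can be illustrated with a toy simulation; everything below is synthetic and is not a claim about any particular proxy network:

```python
# Toy simulation of the point above: averaging many pure-noise "proxies"
# with a few series oriented to a 20th-century-style trend. The noise
# shrinks like 1/sqrt(N) while the oriented trend survives. All synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_noise, n_hs = 500, 50, 5
trend = np.linspace(0, 2, n_years)             # stylized modern uptick

noise = rng.standard_normal((n_noise, n_years))
hs = trend + 0.3 * rng.standard_normal((n_hs, n_years))

mean_all = np.vstack([noise, hs]).mean(axis=0)
late_minus_early = mean_all[-50:].mean() - mean_all[:50].mean()
print(round(late_minus_early, 3))              # clearly positive
```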
This method also works well for identifying what proxy data went into the “penultimate” reconstruction shown in the top panel of Figure 7. The 1820-1979 portion is a linear combination of the archived network of 112 proxies (listed in PCS/multiproxy.inf) plus (again) four unarchived proxies: TREE/MANNETAL97/urals.dat and the three series in INSTR/SLP/. 59 of these 116 proxies were used for this version of the reconstruction.
Typo correction to the first sentence, third para of my comment above.
Should say “…USA PC1 is about twice as important as USA PC2…”, not “…USA PC1 is about twice as important as USA PC1…”
hs,
1. For what it’s worth, I do think your detective work was excellent and appreciate it. Good work.
2. Also, as I’ve stated often before, the Mann work is a mess in how complex the algorithm is (always a concern when something very elaborate is needed to prove a phenomenon). And the explication within MBH of the method (there was no original data collection; it was just a calculation that was reported) was awful.
3. Different topic, but (given you have a perfect replication) is it possible for you to now (please) calculate the r2 metrics that Steve complains were missing from MBH?
Wahl and Ammann provided a table of (catastrophically bad) r2 results in their 2007 paper. (After protesting.) Their results reconcile to ours. My issue with the results being withheld in MBH98 was that the bad results should have been disclosed and that the omission of bad results was a material omission relevant to the representation of the research.
Yeah, yeah. I know.
Just for precision, since neither you nor W&A did a perfect replication, before, can we now get the actual r2s? Probably same basic story, but still interesting to check and do the math right, given having the (hopefully) perfect replication now.
Like if we look at Gaspe, there was a material difference pre and post hs. Went from 37% to 55%. Except the “weights” aren’t really weights, so I don’t know if the numbers compare left and right panels…but even just looking at rank order, Gaspe went from fourth to first. So, it did change. So, maybe the r2s move around some also.
Yes, for sure. We can get actual r2’s now. But, as I recall, there’s an additional layer of weirdness in Mann using dense and sparse temperature versions, neither of which quite reconcile with ordinary CRU NH data. As usual, what ought to be about a 5 minute job usually takes hours with Mann.
I will add the sparse NH reconstruction to the GitHub page.
By the way, speaking of weights and validation, did anyone ever look in the source code to see how the “multivariate spatiotemporal” (MULT) RE was calculated? That’s the RE score for the gridded reconstruction.
The computation involves a double sum where the outer summation (adding gridpoint sums to the total) is carried out over and over inside the inner loop, not the outer loop. In other words each partial gridpoint sum is added to the total. This implicitly weights the data in a peculiar way. The last value is added once, the second to last is added twice, the third to last is added three times and so on.
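A minimal reproduction of that accumulation pattern, using toy data rather than Mann's actual code:

```python
# Minimal reproduction of the accumulation bug described above: adding the
# *running* inner sum to the total on every inner iteration, instead of
# adding it once after the inner loop finishes. Toy data.
data = [[1.0, 2.0, 3.0], [4.0, 5.0]]   # two "gridpoints" of squared residuals

correct = buggy = 0.0
for gp in data:
    partial = 0.0
    for x in gp:
        partial += x
        buggy += partial          # partial sum added every inner step (bug)
    correct += partial            # gridpoint sum added once (intended)

# The bug counts the first value n times, ..., the last value once.
print(correct, buggy)
```

With the toy data, the intended total is 15 but the buggy accumulation gives 23, because earlier terms are multiply counted, exactly the implicit weighting described above.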

I never got to that calculation. MBH98 is a Little Shop of Horrors, that’s for sure. New monsters or grubs whenever one turns over a stone.
your replication of multivariate statistics is another impressive piece of work. to my knowledge, no one ever previously attempted to figure this out. Can you explain this new peculiarity in some more detail? Can you explain how each of the loops works? Also, from your description, it sounds like some data is being multiply counted. What would the results look like if the loops were done correctly (in some sense of correctly)?
Thanks. The multivariate RE statistic is 1-amsecaltot/varrawtot, where varrawtot and amsecaltot are supposed to be sums of squares of the spatiotemporal instrumental target and the reconstruction residuals, respectively.
In Mann’s code, the outer loop iterates over the grid and the inner loop computes sums of squares varrawgp and amsecalgp. These grid-point sums should be added to the totals in the outer loop. Alternatively, individual terms could be added to the totals in the inner loop.
The effect is small but I still noticed that my values didn’t match Mann’s.
I’m unable to reproduce the verification r^2 shown in Figure 3.
https://i.imgur.com/ZxPMht8.png
The RE values are reproducible.

Weights, dethreading.
1. I just wouldn’t use the term “weight” if you don’t mean the casual definition. It’s not a neener-neener wording criticism. It’s analytically confusing to say that it’s just a bunch of simple linear combinations and we can assign weights…and then…we don’t. It’s either (1) not really that simple or (2) we aren’t being as analytically clear as we could be. Maybe some of each. We could weight it, I guess. 😉
2. I expect “weights” to add to 100%. If I think about a 5ppm gold ore that has .5 ppm free gold, 1 ppm cyanide-leachable gold, 3 ppm sulfide-bound gold, and 0.5 ppm “gangue” gold, then my overall 5ppm ore is 10% free, 20% leachable, 60% POX-able, and 10% unrecoverable (but detectable since the offsite analytical lab will ultramill, dissolve with aqua regia and HF, use a mass spec, etc. etc. blablabla).
3. Again, this is not just a neener-neener. But if we are using some other definition of “weight”, then I can’t be clear how much a proxy contributes, on a percentage basis. Also, it becomes unclear if the “weight” number means the same thing in different graphics. Is a 0.25 weight the same percentage (let’s say it’s really 12.5%, not 25%) in the left panel Fig 1 as in the right panel Fig 2? Or some later analysis we do? I would at least say “relative contribution” or something. And I would avoid using numbers that look so similar to real weights but don’t add to 100%. But I really don’t see why you can’t actually give a percent contribution.
4. Sure, there may be series that cancel each other. But one thing at a time. Do the first order examination first. Start by just showing the mathematical weights. After doing that, the next analysis can look at anticorrelated series, etc. E.g. is Tiljander really just two series? She has two phenomena: thickness and X-ray absorption (really transmittance, but absorption is the log transform of percent transmittance). The 4 series are really built from two signals. But before diving deeper and discussing that, might as well start with just the simple picture of the 100% weight add-up. And yes, if two signals cancel each other, you might say they are meaningless. But let’s say we are talking trees and have two physically different but partially anticorrelated series (not the Tilj cockup). If you just remove one of those anticorrelated series, the end result will be quite different. So I think starting by assigning weights makes sense. Then get into the other issue, next.
5. Similarly, I would avoid the use of negative weights. If I’m doing a regression on how a drill press works or something, I could have some factor (hydraulic oil level or the like) that can be expressed as a positive or a negative. But I want to know how important the different phenomena are. Are we talking a 50% factor or a 5% factor? So don’t confuse it with a “negative weight” based on which frame of reference I use for measuring the factor. [And again, I’m fine with the debate on aphysical upside down thermometers. But that’s a separate topic, more related to training and proxy selection. But in terms of factor importance, I want to know if “negative cavesickle” is a 5% factor or a 1% factor or a 50% factor. For one thing, if it’s a 1% factor, then its aphysical negativeness may still be silly, but fixing it…excluding the proxy doesn’t materially change the answer. And I’m not 100% saying to exclude upside down crap. Like in sampling, you may have noise and thus have some by chance upside down stuff and not want to exclude them (like with thermometry)…but again, my point is just not to argue everything at once. To start with, let’s just know the 100% buildup. Then debate the other issues, next.]
6. I’m not sure that everything is completely linear in the step reconstruction. Are parts of some of the proxy series in the steps also hiding within the PCs? I’m not asserting there’s a non-linearity–I’m not a mathematician. Just I would want to have one check. I mean maybe if Gaspe is 20% of the step and then also 10% of PC1 and “negative of Gaspe” (God help us) is 50% of PC2 and PC 1 is 10% of the step and PC2 is 5% of the step, can I still linearize it all out, like a tail-swallowing corporate stock ownership and understand what total percent Gaspe is of the recon? Maybe. But I do need to be clear if I’m really talking “free Gaspe” or “all Gaspe, even within the PCs”, when I think about excluding the series. [If I’m totally dorking this up and the proxies don’t hide inside the PCs…”never mind”.]
7. I think it’s also important to be clear about “percent of what”. Are we talking a step? Or the overall recon? Or the HSI? (Maybe the last is what both “sides” “care” about.) I would think the step probably can be considered to be a set of linear combinations and we can talk about weights. But for the overall reconstruction, you can’t really. Within a step, every year is just the same linear combination of weighted factors. But not across steps. Similarly, it’s not obvious that HSI will respond linearly as an output function the way that temp anomaly within a step does.
7.5. If you do have nonlinearities from 6 or 7, there’s probably still ways to look at “contribution”. Maybe you can’t get a linear combination of factors weighting to 100% if there’s interactions (nonlinearities) driving HSI. But you could still examine each factor in isolation. Like take out Gaspe and how much does HSI drop. Etc. And then you can have a chart of how much each exclusion changes the HSI. Show them in rank order and make a comment on them not adding to 100% because of non-linearities.
8. Off topic, but I thought there was also some temperature geographic gridding going on. That it’s not just a Y function of world temp anomalies? But I donno…reaching back into foggy memories. But how does that affect linear weight view?
9. “Within the data are a subset of cherry picked HS-shaped series (stripbark bristlecones), which are not necessarily temperature proxies, but which are not cancelled out, because they are all oriented in relation to 20th century trend”
Again, I would be careful about arguing everything at once. To start with, let’s just know how big a factor Gaspe is. 1%? 10%, 50%? I’m sympathetic to the argument of overtraining and low degrees of freedom. Essentially the BCPs (and maybe even worse the 51st state “cherries”) are selected because of two trends matching, but they don’t actually show “wiggle-matching” that a better physical proxy like coral does. You also have the whole teleconnection and global signal concern. But again, we have to disaggregate. To start with, just what are the weights. Whether Gaspe is right (Jesus Christ gave it magical teleconnection powers) or wrong (cherry pie filling), what weight is it? Settle that as a mathematical question. Then, after showing it’s 30% of the step, thus important, we can as a separate topic examine it. Maybe in more detail than some 3% driver. But to start with, since everything is simple…what’s the percent contribution to the step? (Knuckle-dragger use of term “weight”.)
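The exclusion approach floated in point 7.5 above can be sketched as follows; the proxy names, weights and series here are invented for illustration, not the actual AD1400 network:

```python
# Sketch of the exclusion idea: drop each proxy in turn, recompute a crude
# "index" (here, the late-period mean of the weighted sum), and rank
# proxies by how much the index moves. Names, weights and series invented.
import numpy as np

rng = np.random.default_rng(1)
names = ["Gaspe", "NOAMER PC1", "Tornetrask", "noise-a", "noise-b"]
weights = np.array([0.36, 0.55, 0.37, 0.05, -0.04])
series = rng.standard_normal((5, 200))
series[0, -50:] += 2.0                 # give "Gaspe" a late uptick
series[1, -50:] += 2.5                 # and "NOAMER PC1" a bigger one

def index(w, s):
    return (w @ s)[-50:].mean()        # crude stand-in for an HS index

base = index(weights, series)
impact = {}
for i, name in enumerate(names):
    keep = [j for j in range(len(names)) if j != i]
    impact[name] = base - index(weights[keep], series[keep])

ranked = sorted(impact, key=lambda k: -abs(impact[k]))
print(ranked)                          # the two uptick series lead
```

As the comment notes, these leave-one-out impacts won't add to 100% when there are interactions, but the rank order still shows where the index comes from.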
Negative weights are a distinctive feature of lower order principal components. The first eigenvector in “sensible” networks tends to consist entirely of positive coefficients, but, in all lower eigenvectors, about half the coefficients are negative. To the extent that the network is supposed to consist of oriented temperature proxies, inclusion of lower order principal components subtracts from the validity of the resulting reconstruction. I didn’t fully appreciate this until blog articles in 2006-2008. The Mann/Wahl-Ammann hyperventilation about the “right” number of principal components was ill-conceived, because they didn’t really understand what was happening in the lower order PCs – except the heavy weighting of bristlecones.
The calculation of “weights” for the final reconstruction was explained in 2006-2007 posts on the linear algebra of MBH98. Those were excellent posts and should be read by anyone interested in this issue. Eigenvectors are “weights” in the sense that I use the term in these posts. By definition the ssq of each eigenvector is 1. This is a different set-up than simple addition, but is well-established in linear algebra and highly useful.
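Both properties are easy to check on synthetic data: eigenvectors of a correlation matrix come out with unit sum of squares, and for a positively correlated network the leading eigenvector is single-signed while lower-order eigenvectors mix signs. A sketch (the data and seed are arbitrary):

```python
# Synthetic check of the two claims above about eigenvector structure.
import numpy as np

rng = np.random.default_rng(7)
common = rng.standard_normal(300)                    # shared "signal"
X = np.column_stack(
    [common + 0.8 * rng.standard_normal(300) for _ in range(10)]
)

corr = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)              # ascending eigenvalues
ev1 = eigvecs[:, -1]                                 # leading eigenvector
ev2 = eigvecs[:, -2]                                 # a lower-order one

ssq1 = float(np.sum(ev1 ** 2))                       # 1 by construction
single_signed = bool(np.all(ev1 > 0) or np.all(ev1 < 0))
mixed_signed = bool(np.any(ev2 > 0) and np.any(ev2 < 0))
print(round(ssq1, 6), single_signed, mixed_signed)
```

The mixed signs in lower-order eigenvectors are forced by orthogonality to the single-signed leading eigenvector, which is why their inclusion necessarily injects negative weights.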
“To get weights that add to one, one could divide the coefficients by their sum and scale the proxies correspondingly. Negative weights should be avoided by first inverting proxies that are negatively related to temperature, otherwise individual proxies could account for more than ±100% of the reconstruction.” -hs
Makes total sense to invert proxies known to have negative orientations e.g. corals. But these account for only a small proportion of the negative weights. Having said that, the figure would be improved by taking care of this ahead of time.
There are two other issues.
As I mentioned before, lower order PCs assign negative weights by the intrinsic properties of the principal component algorithm, even if all the proxies are “well behaved” temperature proxies (which they aren’t). In Mann world, negative weights (in my sense) get attributed overall to ordinary tree ring chronologies even if they have positive coefficients in the PC1.
The other issue is that a lot of the proxies are simply no good. Some proxies of the same type may have slight positive correlations and some slight negative correlations. If you flip them all ex post, you are building a recipe for an arbitrary HS.
So it’s important to assign orientation ex ante. It’s a good idea to invert known classes of negatively oriented proxies, but this is not a relevant issue in the AD1400 network illustrated here, except for a couple of series.
No worries and good work, both.
I may go radio silent again. Internet can be an addiction.
Playing with the spatiotemporal data. Here’s 1816, which was highlighted in MBH98. The “enhanced cold in the eastern United States and Europe” is somewhat reduced when the upside-down error is corrected (and the same eigenvectors are retained).
which upside-down error is in play here? I seem to have missed something previous.
The inversion of anomalies below -10 degrees.
d’oh.
They replayed that graph from the mid-1700s through the late 1900s in the Mann v Steyn trial during Bradley’s testimony. (I am typing this approx. 45 minutes after that line of testimony.)
1816 was shown as very cold, which was blamed on the Mount Tambora volcanic eruption.
Interesting to note: the same graph for 1814 and 1815 showed similar cold across the world (a mistake to hover over those prior years). Not sure if the defense caught it, but it will be interesting if they address it during cross-examination.
Wyner showed Mann’s incorrect AD 1400 proxy network to the jury, with Gaspé and NOAMER PC1 highlighted.