UWA Vice Chancellor Johnson Refuses Data Again

Barry Woods has been trying to get Lewandowsky’s data, inclusive of any metadata on referring blogs, since August 2012 (before anyone had even heard of Fury). Woods has made multiple requests, many of which have not even been acknowledged. Woods has expressed concern about Hoax to Eric Eich, editor of Psychological Science, who suggested that Woods submit a comment.

The UWA’s Code of Conduct for the Responsible Practice of Research states clearly:

3.8 Research data related to publications must be available for discussion with other researchers.

The Australian Code of Conduct for the Responsible Practice of Research (to which the University of Western Australia claims to adhere) states:

2.5.2 Research data should be made available for use by other researchers unless this is prevented by ethical, privacy or confidentiality matters.

Nonetheless, Vice Chancellor Johnson flatly and unequivocally denied data to Woods for the purpose of submitting a comment to the journal, stating that “it is not the University’s practice to accede to such requests”.

From: Paul Johnson
Sent: Friday, March 28, 2014 8:08 AM
To: Barry Woods
Cc: Murray Maybery; Kimberley Heitman
Subject: request for access to data

Mr B. Woods

Dear Mr Woods,

I refer to your emails of the 11th and 25th March directed to Professor Maybery, which repeat a request you made by email dated the 5th September 2013 to Professor Lewandowsky (copied to numerous recipients) in which you request access to Professor Lewandowsky’s data for the purpose of submitting a comment to the Journal of Psychological Science.

It is not the University’s practice to accede to such requests.

Yours faithfully,
Professor Paul Johnson,
Vice-Chancellor

It seems highly doubtful to me that it is indeed the “University’s practice” to refuse access to data to other researchers. Such a practice, if generally applied, would be a flagrant violation of the Australian Code of Conduct and would surely have come to light before now. But whether the refusal of data to other researchers is the general “practice” of the University or merely applied opportunistically in this particular case, it is a violation of the Australian Code of Conduct for Responsible Research and the “practice” should cease.

UWA Vice-Chancellor Refuses Lewandowsky Data

Over the past 15 months, I’ve made repeated requests to the University of Western Australia for a complete copy of Lewandowsky’s Hoax data in order to analyse it for fraudulent and/or scammed responses. Until now, none of my previous requests had even been acknowledged.

I was recently prompted to reiterate my longstanding request by the retraction of Lewandowsky’s Fury. This time, my request was flatly and permanently denied by the Vice Chancellor of the University himself, Paul Johnson, who grounded his refusal not on principles set out in university or national policy, but on the fact that the University administration’s feelings were hurt by my recent blog post describing the “investigation” by the University administration into the amendment of Lewandowsky’s ethics application.


Lewandowsky Ghost-wrote Conclusions of UWA Ethics Investigation into “Hoax”

Following the retraction of Lewandowsky’s Fury, the validity of University of Western Australia ethics “investigations” is again in the news. At present, we have negligible information on the University’s investigation into Fury, but we do have considerable (previously unanalysed) information on their earlier and illusory “investigation” into prior complaints about the ethics application for Moon Landing Hoax (“Hoax”).

This earlier “investigation” (recently cited at desmog here and Hot Whopper here) supposedly found that the issues that I had raised in October 2012 were “baseless” and that the research in Hoax was “conducted in compliance with all applicable ethical guidelines”.

However, these conclusions were not written by a university investigator or other university official, but by Lewandowsky himself, and were simply transferred to university letterhead by UWA Deputy Vice Chancellor Robyn Owens within minutes after Lewandowsky had sent her language that was acceptable to him.

In today’s post, I’ll set out a detailed chronology of these remarkable events.

Lewandowsky’s Fury

Moon Hoax author Stephan Lewandowsky is furious that Frontiers in Psychology has retracted his follow-up article, Recursive Fury. See also Retraction Watch here.

Accompanying the news, Graham Readfearn at desmog has released FOI documents from the University of Western Australia that include correspondence between the university and the Frontiers editorial office up to May 2013. (Lewandowsky did not take exception to an FOI request from desmog.)

One of the last emails in the dossier is a request from the Frontiers editorial office to the University of Western Australia in early May 2013, acknowledging receipt of the University’s statement that Lewandowsky had been investigated and cleared of various misconduct allegations.

The University’s investigation had been swift, to say the least, given that some of the complaints had been made as recently as April 5, 2013, and that, at a minimum, the falseness of Lewandowsky’s SKS claim had been unequivocally confirmed by SKS editor Tom Curtis.

The Frontiers editorial office sought particulars of the procedures of the UWA investigation (see list below), telling UWA that they had appointed a team of senior academics to examine the incident and hoped that “the team’s report could state that they have seen UWA’s decision and the background documents and are happy to be able to rely on that as a solid and well-founded decision (assuming that to be the case)”. They also stated that they not only wanted the evaluation to be “robust, even-handed and objective” but for the process to be perceived as such:

I am therefore writing to ask if it would be possible for the team evaluating the complaints to have a little more information in the process adopted by UWA in assessing these issues. The sole purpose of any such access would be to assist the evaluation team in its work. We are striving to ensure that the evaluation is robust, even-handed and objective and this information would be helpful not only to facilitate this but also to allow it to seem to be so. The idea would be that the team’s report could state that they have seen UWA’s decision and the background documents and are happy to be able to rely on that as a solid and well-founded decision (assuming that to be the case.)

We are well aware of the sensitivity of whole question…

If UWA felt able to share any of the following types of information it would be helpful:

1. The specific complaints made
2. The articles of the code of conduct which were considered relevant for the assessment
3. Whether any codes of conduct relating specifically to psychology were considered relevant and if so, which ones
4. The aspects of factors considered by UWA in its investigation
5. The reasoning adopted to support the findings of the preliminary investigation
6. Whether the recommendations referred to in UWA’s letter concerning dealing with conflicts of interest means that UWA considers that conflicts of interest were present in this case
7. Confirmation by UWA that those who assessed these allegations were independent of each of the authors and had no conflicts of interest or similar challenges in carrying out this task (note that we are not asking for details or evidence, just UWA’s confirmation)
8. Finally, from UWA’s letter we understand that the conclusion is that there was neither any breach nor any research misconduct as defined by the Australian Code for the Responsible Conduct of Research. Is this correct?

A few days later, the UWA appears to have sent a “more detailed report” (according to an acknowledgement by Frontiers on May 6, 2013.)

Lewandowsky’s blog article contains the following short statement, which has now been issued by Frontiers:

In the light of a small number of complaints received following publication of the original research article cited above, Frontiers carried out a detailed investigation of the academic, ethical and legal aspects of the work. This investigation did not identify any issues with the academic and ethical aspects of the study. It did, however, determine that the legal context is insufficiently clear and therefore Frontiers wishes to retract the published article. The authors understand this decision, while they stand by their article and regret the limitations on academic freedom which can be caused by legal factors.

The statement conspicuously does not contain the planned statement that they had “seen UWA’s decision and the background documents and are happy to be able to rely on that as a solid and well-founded decision”, from which one can surmise that they were unable to make such a statement.

Does “Inhomogeneous forcing and transient climate sensitivity” by Drew Shindell make sense?

A guest post by Nic Lewis

This new Nature Climate Change paper[i] by Drew Shindell claims that the lowest end of transient climate response (TCR) – below 1.3°C – in CMIP5 models is very unlikely, and that this suggests the lowest end of model equilibrium climate sensitivity estimates – modestly above 2°C – is also unlikely. The reason is that CMIP5 models display substantially greater transient climate sensitivity to forcing from aerosols and ozone than to forcing from CO2. Allowing for this, Shindell estimates that TCR is 1.7°C, very close to the CMIP5 multimodel mean of ~1.8°C. Accordingly, he sees no reason to doubt the models. In this connection, I would note (without criticising it) that Drew Shindell is arguing against the findings of the Otto et al (2013) study,[ii] of which he and I were two of the authors.

As with most papers by establishment climate scientists, no data or computer code appears to be archived in relation to the paper. Nor are the six models/model-averages shown on the graphs identified there. However, useful model-by-model information is given in the Supplementary Information. I was rather surprised that the first piece of data I looked at – the WM-GHG (well-mixed greenhouse gas) global forcing for the average of the MIROC, MRI and NorESM climate models, in Table S2 –  is given as 1.91 W/m², when the three individual model values obviously don’t average that. They actually average 2.05 W/m². Whether this is a simple typo or an error affecting the analysis I cannot tell, but the apparent lack of care it shows reinforces the view that little confidence should be placed in studies that do not archive data and full computer code – and so cannot be properly checked.

The extensive adjustments made by Shindell to the data he uses are a source of concern. One of those adjustments is to add +0.3 W/m² to the figures used for model aerosol forcing to bring the estimated model aerosol forcing into line with the AR5 best estimate of -0.9 W/m². He notes that the study’s main results are very sensitive to the magnitude of this adjustment. If it were removed, the estimated mean TCR would increase by 0.7°C.  If it were increased by 0.15 W/m², presumably the mean TCR estimate of 1.7°C would fall to 1.35°C – in line with the Otto et al (2013) estimate. Now, so far as I know, model aerosol forcing values are generally for the change from  the 1850s, or thereabouts, to ~2000, not – as is the AR5 estimate – for the change from 1750. Since the AR5 aerosol forcing best estimate for the 1850s was -0.19 W/m², the adjustment required to bring the aerosol forcing estimates for the models into line with the AR5 best estimate is ~0.49 W/m², not ~0.3 W/m². On the face of it, using that adjustment would bring Shindell’s TCR estimate down to around 1.26°C.
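For readers who want to check the arithmetic, here is a back-of-envelope sketch in R. It assumes, as the paper’s own sensitivity examples imply, that the TCR estimate responds approximately linearly to the size of the aerosol adjustment; the numbers are those quoted above.

# Sensitivity implied by Shindell's own example: removing the +0.3 W/m2
# adjustment raises the TCR estimate by 0.7 C, i.e. ~2.33 C per W/m2 (assumed linear)
sens <- 0.7 / 0.3
tcr_base <- 1.7                      # central TCR estimate with the +0.3 W/m2 adjustment

# Cross-check against the paper's other example: increasing the adjustment by 0.15 W/m2
tcr_base - sens * 0.15               # ~1.35 C, the Otto et al (2013) value

# Adjustment needed if model aerosol forcing is measured from the 1850s, not 1750:
# the original 0.3 W/m2 plus the -0.19 W/m2 of aerosol forcing already present by the 1850s
adj_1850s <- 0.3 + 0.19              # ~0.49 W/m2
tcr_base - sens * (adj_1850s - 0.3)  # ~1.26 C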

Additionally, the estimates of aerosol forcing in the models that Shindell uses to derive the 0.3 W/m² adjustment are themselves quite uncertain. He gives a figure of -0.98 W/m² for the NorESM1-M model, but the estimate by the modelling team[iii] appears to be -1.29 W/m². Likewise, Shindell’s figure of -1.44 W/m² for the GFDL-CM3 model appears to be contradicted by the estimate of -1.59 W/m² (or -1.68 W/m², depending on version) by the team involved with the model’s development.[iv] Substituting these two estimates for those used by Shindell would bring his TCR estimate down even further.

In any event, since the AR5 uncertainty range for aerosol forcing is very wide (5–95% range: -1.9 to -0.1 W/m²), the sensitivity of Shindell’s TCR estimate to the aerosol forcing bias adjustment is such that the true uncertainty of Shindell’s TCR range must be huge  – so large as to make his estimate worthless.

I’ll set aside further consideration of the detailed methodology Shindell used and the adjustments and assumptions he made. In the rest of this analysis I deal with the question of to what extent the model simulations used by Shindell can be regarded as providing reliable information about how the real climate system responds to forcing from aerosols, ozone and other forcing components.

First, it is generally accepted that global forcing from aerosols has changed little over the well-observed period since 1980. And most of the uncertainty in aerosol forcing relates to changes from preindustrial (1750) to 1980. So, if TCR values in CMIP5 models are on average correct, as Shindell claims, one would expect global warming simulated by those models to be, on average, in line with reality. But as Steve McIntyre showed, here, that is far from being the case. On average, CMIP5 models overestimate the warming trend between 1979 and 2013 by 50%. See Figure 1, below.


Figure 1: Modelled versus observed decadal global surface temperature trend 1979–2013

Temperature trends in °C/decade. Virtually all model climates warmed much faster than the real climate over the last 35 years. Source: http://climateaudit.org/2013/09/24/two-minutes-to-midnight/. Models with multiple runs have separate boxplots; models with single runs are grouped together in the boxplot marked ‘singleton’. The orange boxplot at the right combines all model runs together. The default settings in the R boxplot function have been used; the ends of the boxes represent the 25th and 75th percentiles. The red dotted line shows the actual trend in global surface temperature over the same period per the HadCRUT4 observational dataset. The 1979–2013 observed global temperature trends from the three datasets used in AR5 are very similar; the HadCRUT4 trend shown is the middle of the three.
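For what it’s worth, a boxplot comparison of this kind can be generated along the following lines. This is a minimal sketch, not the script used for Figure 1 (which is available via the link above); it assumes a hypothetical data frame trends with one row per model run (columns model and trend, in °C/decade) and an observed HadCRUT4 trend obs_trend.

# Minimal sketch of a Figure 1-style comparison (not the original Climate Audit script).
# 'trends' is a hypothetical data frame with columns 'model' and 'trend' (deg C/decade);
# 'obs_trend' is the observed HadCRUT4 trend over the same period.
draw_trend_boxplot <- function(trends, obs_trend) {
  runs   <- table(trends$model)
  multi  <- trends[trends$model %in% names(runs)[runs > 1], ]
  single <- trends[trends$model %in% names(runs)[runs == 1], ]

  groups <- c(split(multi$trend, droplevels(factor(multi$model))),  # one box per multi-run model
              list(singleton = single$trend),                       # single-run models grouped
              list("all runs" = trends$trend))                      # all runs combined

  boxplot(groups, las = 2, ylab = "Trend (deg C/decade)")           # default boxplot settings
  abline(h = obs_trend, col = "red", lty = 3)                       # observed trend as dotted line
}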

Secondly, the paper relies on the simulation of the response of the CMIP5 models to aerosol, ozone and land use changes being realistic, and not overstated. Those components dominate the change in total non-greenhouse gas anthropogenic forcing over the 1850-2000 period considered in the paper. Aerosol forcing changes are most important by a wide margin, and land use changes (which Shindell excludes in some analyses) are of relatively little significance.

For its flagship 90% and 95% certainty attribution statements, AR5 relies on the ‘gold standard’ of detection and attribution studies. In order to separate out the effects of greenhouse gases (GHG), these analyses typically regress time series of  many observational variables – including latitudinally and/or otherwise spatially distinguished surface temperatures – on model-simulated changes arising not only from separate greenhouse gas and natural forcings but also from other separate non-GHG anthropogenic forcings. The resulting regression coefficients – ‘scaling factors’ – indicate to what extent the changes simulated by the model(s) concerned have to be scaled up or down to match observations. There is a large literature on this approach and the associated statistical optimal fingerprint methodology. The IPCC, and the climate science community as a whole, evidently considers this observationally-based-scaling approach to be a more robust way of identifying the influence of aerosols and other inhomogeneous forcings than the almost purely climate-model-simulations-based approach used by Shindell. I agree with that view.
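Stripped of the weighting by internal-variability covariance that a full optimal-fingerprint analysis applies, the basic idea is ordinary multiple regression without an intercept. A minimal R illustration, using hypothetical vectors of observed and model-simulated temperature changes on a common spatio-temporal grid (the variable names are mine, not taken from any of the studies):

# Minimal illustration of the scaling-factor idea: ordinary least squares,
# omitting the noise-covariance weighting used in real optimal-fingerprint studies.
# 'obs', 'sim_ghg', 'sim_nat' and 'sim_oanth' are hypothetical numeric vectors of
# observed and model-simulated temperature changes on the same spatio-temporal grid.
estimate_scaling_factors <- function(obs, sim_ghg, sim_nat, sim_oanth) {
  fit <- lm(obs ~ 0 + sim_ghg + sim_nat + sim_oanth)  # no intercept: pure scaling
  coef(fit)  # a coefficient of ~1 means the simulated response matches observations
}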

Figure 10.4 of AR5, reproduced as Figure 2 below, shows in panel (b) estimated scaling factors for three forcing components: natural (blue bars), GHG (green bars) and  ‘other anthropogenic’ – largely aerosols, ozone and land use change (yellow bars). The bars show 5–95% confidence intervals from separate studies based on 1861–2010, 1901–2010 and 1951–2010 periods. Best estimates from the studies using those three periods are shown respectively by triangles, squares and diamonds. Previous research (Gillett et al, 2012)[v] has shown that scaling factors based on a 1901 start date are more sensitive to end date than those starting in the middle of the 19th century, with temperatures in the first two decades of the 20th century having been anomalously low, so the 1861–2010 estimates are probably more reliable than the 1901–2010 ones.

Multimodel estimates are given for the 1861–2010 and 1951–2010 periods (“multi”, at the top of the figure). The best estimate scaling factors for ‘other anthropogenic’ over those periods are respectively 0.58 and 0.61. The consistency of the two best estimates is encouraging, suggesting that the choice between these two periods does not greatly affect results. The average of the two scaling factors implies that the CMIP5 models analysed on average exaggerate the response to aerosols, ozone and other non-greenhouse gas anthropogenic forcings by almost 70%. However, the ‘other anthropogenic’ scaling factors for both periods have wide ranges. The possibility that the true scaling factor is zero is not ruled out at a 95% confidence level (although zero is almost ruled out using 1951–2010 data alone).
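The ‘almost 70%’ figure follows directly from taking the reciprocal of the average scaling factor; the same arithmetic applies to the individual-model scaling factors discussed below:

# A scaling factor below one means the simulated response must be scaled down to
# match observations; the implied exaggeration factor is its reciprocal.
scaling <- c(0.58, 0.61)   # multimodel 'other anthropogenic' best estimates, 1861-2010 and 1951-2010
1 / mean(scaling)          # ~1.68, i.e. responses exaggerated by almost 70% on average
1 / 0.49                   # ~2.0: e.g. the IPSL-CM5A-LR scaling factor discussed below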

Figure 2: Reproduction of Figure 10.4 of IPCC AR5 WGI report


The individual results for models used by Shindell are of particular interest.

The first of the five individual CMIP5 models included in Shindell’s analysis, CanESM2, shows negative scaling factors for ‘other anthropogenic’ over all three periods – strongly negative over 1901–2010. The best estimates for its GHG scaling factor are also far below one over both 1861–2010 and 1951–2010.  So it would be inappropriate to place any weight on simulations by this model. In Figure 1, it is CanESM2 that shows the greatest overestimate of 1979-2013 warming.

The second CMIP5 model in Shindell’s analysis, CSIRO-Mk3-6-0, shows completely unconstrained scaling factors using 1901–2010 data, and extremely high scaling factors for both GHG and ‘other anthropogenic’ over both 1861–2010 and 1951–2010 – so much so that the GHG scaling factor is inconsistent with unity at better than 95% confidence for the longer period, and at almost 95% for the shorter period. This indicates that the model should be rejected as a representation of the real world, and no confidence put in its simulated responses to aerosols, ozone or any other forcings.

The third of Shindell’s models, GFDL-CM3, is not included in AR5 Figure 10.4.

The fourth of Shindell’s models, HadGEM2, shows scaling factors for ‘other anthropogenic’ averaging 0.44, with all but the 1901–2010 analyses being inconsistent with unity at a 95% confidence level. The best defined scaling factor, using 1861–2010 data, is only 0.31, with a 95% bound of 0.58. So HadGEM2 appears to have a vastly exaggerated response to aerosol, ozone etc. forcing.

The fifth and last of Shindell’s separate models, IPSL-CM5A-LR, is included in Figure 10.4 in respect of 1861–2010 and 1901–2010. The scaling factors using 1861-2010 data are much the better defined. They are inconsistent with unity for all three forcing components, as are those over 1901-2010 for natural and GHG components. That indicates no confidence should be put in the model as a representation of the real climate system. The best estimate scaling factor for ‘other anthropogenic’ for the 1861-2010 period is 0.49, indicating that the model exaggerates the response to aerosols, ozone etc. by a factor of two.

Shindell also includes the average of the MIROC-CHEM, MRI-CGCM3 and NorESM1-M models. Only one of those, NorESM1-M, is included in AR5 Figure 10.4.

To summarise, four out of six models/model-averages used by Shindell are included in the detection and attribution analyses whose results are summarised in AR5 Figure 10.4. Leaving aside the generally less well constrained results using the 1901–2010 period that started with two anomalously cold decades, none of these show scaling factors for ‘other anthropogenic’ – predominantly aerosol and to a lesser extent ozone, with minor contributions from land use and other factors – that are consistent with unity at a 95% confidence level. In a nutshell, these models at least do not realistically simulate the response of surface temperatures and other variables to these factors.

A recent open-access paper in GRL by Chylek et al,[vi] here, throws further light on the behaviour of three of the models used by Shindell. The authors conclude from an inverse structural analysis that the CanESM2, GFDL-CM3 and HadGEM2-ES models all strongly overestimate GHG warming and compensate with a very strongly overestimated aerosol cooling, which simulates AMO-like behaviour with the correct timing – something that would not occur if the models were generating true AMO behaviour from natural internal variability. Interestingly, the paper also estimates that only about two-thirds of the post-1975 global warming is due to anthropogenic effects, with the other one-third being due to the positive phase of the AMO.

In the light of the characteristics of the models used by Shindell, as outlined above, combined with the evidence that his aerosol forcing bias-adjustment is very likely understated and that his results’ sensitivity to it makes his TCR estimate far more uncertain than claimed, it is difficult to see that any weight should be put on his findings.



[i]  Drew T. Shindell, 2014, Inhomogeneous forcing and transient climate sensitivity. Nature Climate Change. doi:10.1038/nclimate2136, available here (pay-walled).

[ii] Otto, A., F. E. L. Otto, O. Boucher, J. Church, G. Hegerl, P. M. Forster, N. P. Gillett, J. Gregory, G. C. Johnson, R. Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens and M. R. Allen, 2013. Energy budget constraints on climate response. Nature Geosci., 6: 415–416.

[iii] A. Kirkevag et al, 2013, Aerosol-climate interactions in the Norwegian Earth System Model – NorESM1-M. GMD.

[iv] M. Salzmann et al, 2010: Two-moment bulk stratiform cloud microphysics in the GFDL AM3 GCM: description, evaluation, and sensitivity tests. ACP.

[v]  Gillett N.P., V. K. Arora, G. M. Flato, J. F. Scinocca, K. von Salzen, 2012: Improved constraints on 21st-century warming derived using 160 years of temperature observations. Geophys. Res. Lett, 39, L01704, doi:10.1029/2011GL050226

[vi] P. Chylek et al., 2014. The Atlantic Multidecadal Oscillation as a dominant factor of oceanic influence on climate. GRL.

Data and Corrections for Rosenthal et al 2013

Last year, I wrote a blog post covering Rosenthal et al 2013 – see here. It reported on interesting Mg/Ca measurements on the foraminifera H. balthica from western Pacific ocean cores, which are believed to be a proxy for “intermediate water temperatures”.

The press release stated that the middle depths were warming “15 times faster” than ever before:

In a reconstruction of Pacific Ocean temperatures in the last 10,000 years, researchers have found that its middle depths have warmed 15 times faster in the last 60 years than they did during apparent natural warming cycles in the previous 10,000.

However, the situation was much less dramatic if one parsed the actual data, as shown in the graphic below (taken from my earlier post) redrawn from Rosenthal’s information. Rather than the modern period being “unprecedented”, on this scale, it looks well within historical ranges.

Figure 2. From Rosenthal et al 2013 Figure 2C. Red – temperature anomaly converted from NOAA Pacific Ocean 0–700m ocean heat content. Cyan – Rosenthal Figure 3B reconstruction (my digitization). Orange – trend line showing the third comparison from Rosenthal SI, taken from the first row in Table S3.

In the course of doing the analysis, I observed that Rosenthal’s Table S3 seemed to be screwed up in multiple ways, as I noted in my post.

In addition, Rosenthal had not archived his data (though he has a pretty good track record of archiving data from previous studies.) I asked him for the data and got fobbed off a number of times. I notified Sciencemag of the problem but got no response. The other day, I noticed that Rosenthal had issued a revised SI at Sciencemag and that the requested data had been filed there. Rosenthal discourteously failed to notify me that he had done so.

Rosenthal’s revised SI also included substantial changes to the Table S3 that I had previously criticized, but no Corrigendum notice was issued. He said that the “errors have no bearing on the main conclusions of the paper”. In making these corrections to Table S3 (which still has some puzzles), Rosenthal did not acknowledge Climate Audit’s criticism of this table.

Rather than reviewing the analysis, I’ve posted up the revisions as an update to the earlier post here.

Mann Misrepresents NOAA OIG

In today’s post, I’ll consider a fifth investigation – by the NOAA Office of the Inspector General (OIG), here – and show that, like the other four considered so far, Mann’s claims that it “investigated” and “exonerated” Mann himself were untrue. In addition, I’ll show that Mann’s pleadings misrepresented the findings of this investigation both through grossly selective quotation and mis-statement. Finally, the OIG report re-opened questions about Mann’s role in Eugene Wahl’s destruction of emails as requested by Phil Jones. In Mann’s pleadings, Mann claimed that each of the investigation reports was “commented upon in the national and international media”. But, in this case, much of the coverage focused on renewed criticism of the apparent obtuseness of the Penn State inquiry committee. The episode even included accusations of libel by Mann against CEI’s Chris Horner, as well as a wild and unjustified accusation of “dishonesty” by Mann against me.

Mann Misrepresents the UK Department of Energy and Climate Change

Next in the list of misrepresentations by Mann and his lawyers is their inclusion of the Government Response to the House of Commons Science and Technology Committee as an investigation that “investigated” and “exonerated” Mann personally. This takes the total of such misrepresented investigations to four (out of the four that I’ve thus far examined). In Mann’s pleadings, Mann additionally attributed findings of the Muir Russell Review to a separate investigation by the “government of the United Kingdom”, in turn, wildly inflating the supposed findings. As a secondary issue, Mann’s claim that this “investigation” was widely covered (or covered at all) in international media is also untrue, a point that Joe Romm complained about at the time.

Mann Misrepresents the UK Commons Committee

Mann’s inclusion of the UK House of Commons Science and Technology Committee (“Commons Committee”) among the investigations that supposedly “investigated” and “exonerated” Mann personally is as invalid as his similar claims regarding the Oxburgh and Muir Russell inquiries or his claim to have won a Nobel prize.

The Commons Committee (see report here) did not conduct any investigation into Mann’s conduct nor did it make any findings whatever in connection with Mann’s conduct. I’ll demonstrate this in today’s post, which is the third in the present series (previously I discussed the Oxburgh inquiry here and the Muir Russell inquiry here).

In addition, I’ll also discuss an important discrepancy between the findings of the Muir Russell panel and the report of the Commons Committee concerning the notorious email concerning Jones’ construction of the 1999 WMO diagram (the “trick … to hide the decline”). Whereas the report of the Commons Committee had concluded in respect of this incident that Jones had “no case to answer”, the Muir Russell panel found that the diagram omitted data and was “misleading”. In combination, these constitute the elements of the offence of “falsification” as defined in academic misconduct codes.

Mann’s pleadings also emphasize and rely on international media coverage as an essential element of the defendants’ knowledge of the reports, but media coverage of Jones’ appearance before the Commons Committee was savage.

More SKS in the Mann Pleadings

I am in the process of writing a post showing that Mann’s claim that he had personally been exonerated by the UK House of Commons Science and Technology Committee (report here) of a wide range of counts was also untrue. It’s so untrue that it’s hard to even make an interesting post of it.

In the process of writing the post, I noticed the following quotation from Mann’s Reply Memorandum, which has some extra interest given the evidence that the doctored quotation in Mann’s pleadings came from the SKS blog, rather than original documents. Mann and/or Cozen O’Connor wrote:

In March 2010, the United Kingdom’s House of Commons Science and Technology Committee published a report finding that the skeptics’ criticisms of the CRU were misplaced, and that its actions “were in line with common practice in the climate science community.”

The first sentence is completely untrue: the Committee Report said nothing of the sort. The assertion that “criticisms of the CRU were misplaced” is neither made nor supported in the Committee Report. This phrase originated instead with SKS, who, once again, altered the language, though, in this case, not going so far as to fabricate a quotation.
