New Acquisitions: On the Wisdom of Noah Smith

I generally try to avoid having Twitter disputes spill onto the blog. Generally, what happens on Twitter is best left on Twitter and in some cases not even that. However, this past week I was pulled into a Twitter debate with Noah Smith about the validity of the way historians offer our knowledge into public debate; Smith then opted to continue that debate in a long-form blog post tackling my work in particular, which in turn seemed to demand a long-form response rather than something in 280 characters or fewer.

Noah’s initial tweet declared:

Carlos Morel responded with some confusion, noting that historians tend to be quite wary of what he termed ‘theorizing,’ and perhaps foolishly I waded in, because I felt Morel had made a valid point, one somewhat confused by the fact that Smith seemed unaware that the word ‘theory’ is used differently in different fields.1 And so, to compress quite a few tweets into a short statement, I offered that this was in fact a real difference: historians do not generally aim to construct laws of general applicability (quite unlike social scientists, who do), but instead to study and furnish relevant exemplars as tools (but not predictions) for thinking about current problems.

To which Smith responded that he’d be putting together a post in which my public writings would “form the core,” showing how, contrary to what I said, “academic historians make strong theoretical claims that cannot be evaluated empirically.” And here I want to ask you to please put a pin in the word ’empirically,’ now used twice, because we’re going to come back to it.

In any event he did write that post and it is here and you can read it; unfortunately I cannot recommend it. We’re going to talk about why but it is going to end up being rather involved because to explain why a shallow critique of a discipline’s methods is shallow, you have to explain how that discipline functions and why it does so. But before we get to the complicated stuff we should deal with:

The Bad Faith Complaints

We can start with the conceit of the section titled “Historical analogies are theories” (mark that word ‘theories’ again because we’re coming back to it), which is the section focused on my work. I mostly want to get this out of the way before I get to the real meat of Smith’s complaint. There are a few problems with his reading of the essays in question which seem to speak either to bad faith or a failure in comprehension. Let’s take this paragraph of his:

Which is building off of this paragraph of mine:

Now perhaps this is unfair of me, coming as a specialist trained in the reading of texts, but it sure does seem to me like Smith has stripped out quite a lot of qualifiers here to fundamentally misrepresent a paragraph that is in fact a giant caveat, instead presenting it as a claim of general statistical predictability. Phrases like “stretch the scientific metaphor” before “laboratories of democracy” fairly clearly indicate that, no, I am well aware these are not actual laboratories; the double-quote marks around “data set” do the same, acknowledging that this is analogous to a data set but not an actual data set.2 Moreover I lead the paragraph with the idea that there is fundamental uncertainty, indeed “always risks” in “drawing comparisons across vast chasms of time and culture,” before introducing the idea that there may well be other examples which might offer different lessons. Rather than “asserting that ancient Greece is an appropriate analogy for our modern politics,” as Smith has it, I have explicitly opened the door to the idea that it might not be. As we’ll see, this sort of stress on context and contingency over rules of general applicability is a key difference between the methods historians use and the social sciences.

Smith then has a brief foray into wondering if I am constructing an argument-by-definition that tyrants are defined by their effort to repeatedly seek power (a pointless digression; I define ‘tyrant’ in the piece) which we may mostly skip over except to note that I think Smith’s failure to familiarize himself at even a basic level with the material in question is rather exposed by his use of Richard Nixon as the example of a tyrant who would be ‘No True Scotsman’d’ out of this potential definition I am not actually proposing. Richard Nixon was a crook and a bad president, but I know of no definition where a politician who won election to his office, was then forced to leave it by constitutional processes, and didn’t use violence to attempt to retain power would qualify as a tyrant; Watergate does not have a body count. The ancient Greek definition (the one I was working with here) functionally requires the position to be extra-constitutional and violence to be used in the seizure and maintenance of power as standard definitional features.3 Even the remarkably reductive characterization I offered in the essay, that tyranny was a neutral Greek term for one-man rule (if you think you detect an editor shaving down a more complete definition for the sake of pacing, that’s because you do), still disqualifies Nixon who – quite manifestly in the instance, as he was about to be removed from power by Congress – did not rule alone.

The broader problem here is Smith’s choice of targets, which are not a bunch of peer-reviewed journal articles but in fact a number of mostly short-form (c. 1200-1500 word) essays in traditional media publications. At first, when Smith read the phrase “Would-be tyrants keep trying until they succeed” to mean “all would-be tyrants keep trying until they succeed,” I assumed he was just unfamiliar with this genre of writing, where editors tend to strip out caveats like ‘sometimes’ or ‘frequently’ as redundant and where the general mode is to present a single detailed example on the assumption that, as historians, we have in fact considered the broader evidence as it exists but do not have the word-space to exhaustively list all examples.4

Indeed in most cases, Smith has removed what nuance was contained in the original essays. A conclusion of “a need to choose: Either trim down American objectives in what are, effectively, occupied countries to those which can be achieved merely by organizing the existing military and political structures or settle down to the task of building a new military organization [there] from the ground up” becomes in Smith’s summation, “to predict that the U.S. military would be more effective if it reorganized itself in certain ways.” The latter is frankly a misleading summary of an argument which instead offers the Romans as one tool (of potentially many, including others like the British army in India mentioned in that very essay) with which to think about the trade-offs inherent in trying to raise military force in occupied countries. Perhaps Smith has never written in this genre before and so is unfamiliar with it and its constraints.

Except that Noah Smith spent some time as a columnist for Bloomberg and wrote in exactly this genre. In fact, he wrote exactly these sorts of essays, in the deep, dark past of last year. Here, for instance, is Noah Smith contending that it took Franklin Delano Roosevelt two crises to be transformative whereas at that point Joe Biden had only had one crisis and perhaps that wouldn’t be enough:

Am I to assume that Smith has done a statistical study of every transformative president and confirmed that the mean number of crises they required to be transformative was two? Why hasn’t he included these studies in his short article? Has he done empirical verification of the two-crisis theory? Of course not, he is reasoning from a single historical exemplar and drawing a conclusion from it, a fairly standard use of case-based inductive reasoning. It turns out he does know how this works.

Collectively I think these problems speak to the shallow degree to which Smith is engaging with the question; critiquing the non-comprehensiveness of short, traditional media articles is a failure of seriousness. No one is going to give a complete accounting of ancient Greek tyranny in the c. 1,000 known Greek poleis in 1,500 words; demanding they do so is a literal category error, mistaking the think-piece for the footnote-laden journal article.

With that out of the way, we can at last get to the meat of the disagreement.

Epistemologies

In his tweets (above) and in the essay itself, Smith is consistent in his call for empirical methods to be used and for historians to acknowledge that they are making “predictive theories.” I think he reaches his main point most clearly here:

And my sense is that the problem here is that Smith has not familiarized himself with history as a discipline or with historical argumentation – both how it works and why it works that way. Consequently he’s attempting to jam them into his social science (economics, particularly) framework, apparently unaware that there are different methodologies and indeed often different epistemologies than his own. That is a fairly big problem and a disappointing one, so let me explain what I mean.

First, an epistemology is a theory of knowledge – that is, a theory of how we can come to know things. You will note that I keep using it in the plural because there is not one epistemology in use but in fact several; actual functioning humans rely on different epistemic principles at different times and in different subjects. Empiricism is one of these epistemologies, which argues that direct, personal sense-perception is the chief or even only valid form of knowledge; in its pure form, empiricism rejects authority, testimony, and rationalism (that is, the application of raw logic) as sources of knowledge. To test something empirically is thus to test it experientially, typically in the form of an experiment whose results can be observed empirically, that is, with the senses. That of course includes using all sorts of tools, including statistical tools; a statistical test of data is an empirical test of that data as it outputs a result which can be observed.

Empiricism, as you may gather, is the epistemology which underlies the scientific method and so is the chief epistemology in the natural sciences and the social sciences. And I am terribly fond of it; for things which can be tested empirically, it works great and is generally to be preferred over other epistemological approaches. Those who know my actual research will be well aware that I think forms of empirical testing (such as experimental archaeology) can be very valuable in helping to understand the past.

The problem is that not all things can be tested empirically, either because it is impossible to do so or because it is impractical to do so. On the impossible end, we have phenomena which are not subject to independent sense-perception, like the thoughts of a person other than yourself; we are not (yet?) able to export someone’s thoughts, and brain imaging provides at best a very incomplete picture of someone’s mental state. The best one can do is ask the person what they are thinking, but for the researcher this introduces a non-empirical break in which they are forced to rely on the subject telling the truth. Likewise, a fairly large category of things which cannot be tested empirically is everything that does not exist right now, since humans experience time in a linear, continuously forward-moving fashion, leaving all things that once existed but no longer do out of the realm of human sense-perception. That, of course, is terribly relevant to historians, because it means very little of what we study is subject to empirical tests. Most humans, indeed even famous humans, leave no empirical evidence of their existence; one could not, for instance, empirically prove the existence of Socrates. And yet we can be very confident Socrates existed!5

On the other end, some things are impractical to test empirically; empirical tests rely on repeated experiments under changing conditions (the scientific method) to determine how something behaves. This is well enough when you are studying something relatively simple, but a difficult task when you are studying, say, a society. Social scientists look for ‘natural experiments‘ to aid their understanding, but this difficulty is compounded by the rarity of some really important phenomena. An economist studying market interactions can statistically analyze millions of daily trades on the NYSE, but a political scientist has to wait four years to add one data-point to a data-series on presidential elections. Consequently empirical methods struggle to establish solid predictability for such rare events despite the best efforts of very talented analysts. Needless to say, asking people to run a controlled, to-scale experiment in, say, warfare or pandemic control would face severe legal, ethical and practical hurdles, but at the same time these events are sufficiently rare and complex that relying only on natural experiments imposes severe limits. Again, empiricism is great, when you can use it.

Now a philosopher might then insist on pure empiricism (or, as in David Hume’s formulation, pure empiricism in matters of fact and pure logic for the relations of ideas) and just declare everything outside of it unknowable, but in practice this is absurdly limiting. In our actual lives and also in the course of nearly every kind of scholarship (humanities, social sciences or STEM) we rely on a range of epistemologies. Some things are considered proved by the raw application of deductive reasoning and logic, a form of rationalism rather than empiricism (one cannot, after all, sense-perceive the square root of negative one). In some things testimony must be relied on; perhaps the most important element of history as a discipline is the set of systems we apply to assess the reliability of testimony for events which, having taken place in the past, cannot be viewed empirically in the present (the term for all of these methods collectively is ‘the historical method’; historians are not creative people when it comes to naming things).

To insist that all knowledge must be empirical knowledge and that all theories must be empirically testable theories is to either misunderstand what the words mean or to engage in scientism – the delusion that all knowledge is empirical, scientific knowledge.

The irony in this is that when Smith suggests a specific test in response to my article it is not, in fact, an empirical test. In particular he suggests that, “Before we conclude that ‘would-be tyrants keep trying until they succeed’, we should rigorously and systematically check the historical record to see if we could identify, ex ante, a set of characteristics that allowed us to predict who would keep trying to seize power and who would give up.” But without then running the experiment in the present (that is, waiting to see what kind of dictators show up and if they match the ‘set of characteristics’) there’s no empirical test there. It is difficult to even imagine how such a test would be managed (how do you isolate the variables?), but its results would be useless to anyone today in any event because they wouldn’t be known for decades.6 This is of course the great difficulty that political scientists wrestle with (not without significant success, mind you): the effort to apply scientific methods to the subject (large-scale human interactions which often include violence) where controlled experiments are impossible, unethical or both.

Instead what he’s actually suggesting isn’t an empirical test at all: it is a quantitative, statistical test of data, none of which can be empirically verified because it all happened in the past; only the statistical analysis is subject to empirical verification. It’s a lot easier to see how this would be done: past leaders would be grouped into tyrants and non-tyrants, each assigned a series of mathematically defined qualities, and then one would run a regression analysis to determine which variables are most predictive of being a tyrant. One would then ‘test’ the analysis against other historical tyrants not included in the original sample to see if the predictability held. That’s still not an empirical test (neither the main data set nor the test data set is empirically derived – only their comparison is empirically observed, which won’t correct for any problems in the non-empirical elements: the evidence and the process of reducing it to data), but this is precisely the sort of work that political scientists do, often to useful result. That said, I hope the difficulties implied by assigning historical figures mathematically defined qualities to enable statistical comparison, especially in the context of incomplete historical information, are not lost on the reader. There is perhaps a reason no political scientist7 has yet tried to go and run this analysis.
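To make the shape of that exercise concrete, here is a toy sketch in Python. Everything in it is invented for illustration – the leaders’ codings, the binary feature values and the feature names are hypothetical, and a real political-science study would run a proper regression over far richer data – but it shows where the non-empirical step lives: in the hand-coding of historical figures into variables before any statistics run.

```python
# Toy sketch of a quantitative "tyrant test." All codings below are invented
# for illustration; a real study would code far more leaders and variables.

# (name, used_violence, extra_constitutional, repeated_attempts, is_tyrant)
leaders_train = [
    ("Peisistratos", 1, 1, 1, 1),
    ("Cylon",        1, 1, 1, 1),
    ("Pericles",     0, 0, 0, 0),
    ("Solon",        0, 0, 0, 0),
]
leaders_test = [  # held-out cases to "test" the fitted rule against
    ("Dionysius I",  1, 1, 1, 1),
    ("Cleisthenes",  0, 0, 0, 0),
]

features = ["used_violence", "extra_constitutional", "repeated_attempts"]

def best_feature(rows):
    """Find the coded feature that most often matches the tyrant label.

    A stand-in for a real regression: it 'fits' by picking the single
    most predictive variable in the training data.
    """
    scores = [
        (sum(1 for row in rows if row[1 + i] == row[-1]), name, i)
        for i, name in enumerate(features)
    ]
    return max(scores)  # (hit count, feature name, feature index)

train_hits, name, idx = best_feature(leaders_train)

# "Test" the fitted rule against the held-out leaders.
test_hits = sum(1 for row in leaders_test if row[1 + idx] == row[-1])
```

Note that every 1 and 0 in those tables is a judgment call made before any analysis runs; only the final comparison is ‘observed,’ which is exactly the point at issue.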

How History Works

And now at last we get into both how and why the historical method differs from social science approaches to the past. The difference here shouldn’t be taken as too binary; there is significant overlap in both methods and concerns. But generally the social sciences aim to establish general rules for how societies function, rules with strong predictive power: laws of the workings of society akin to the laws of physics, whereby we can predict with quite a lot of precision where a thrown ball will go. For the sake of clarity I’m going to call these ‘laws of general applicability.’ By contrast, the focus of historians is on the past itself; while historians of past decades often toyed with the idea of ‘grand narratives’ akin to the social sciences’ laws of general applicability, these have long since been abandoned by all but a few because the exceptions kept overwhelming the general rules. Of course historians hope that the work we do to create knowledge of the past will be useful in the present, but the discipline prioritizes the former over the latter. As a result, historians generally reject the creation of laws of general applicability, insisting that while the past is a useful teacher, efforts at strict predictability will always be overwhelmed by contingency, context and unexpected variables.

That in turn leads to differences in methods. Social scientists – and here we mostly mean economists and political scientists, the two social sciences that collide with history most often – use many methods but perhaps the most important is quantitative analysis using historical data.8 Historians by and large do not reject that method but tend to be leery of it because a key step in the process is the reduction of a whole bunch of very complex evidence to a handful of mathematical variables which can then be analyzed statistically. I use the word reduction here quite intentionally; those mathematical values can only be either simplifications or distortions of the evidence used to create them. Going back to our example above with tyrants, imagine the difficulties in reducing figures like Peisistratos or Cylon (or Napoleon, Caesar or Hitler) to mathematical expressions; there is ample room for the introduction of new bias but even if done faithfully the result will flatten out much of the nuance of these figures (or be forced to infer in places where we lack evidence). And so even when such data is generated with great care that means taking complex, difficult phenomena and flattening them.

Let’s take an example, and we’ll pick a statistical argument that I think actually has some considerable merit to it: the democratic peace theory. The theory (in its modern form), arguing that democratic countries do not (generally) go to war with each other, emerged out of statistical studies comparing historical democracies on a period-by-period basis with a list of historical conflicts and observing the lower rate of democracy-on-democracy conflict.9 But a brief look at the original study reveals how much the complexity of the actual history was flattened to provide for statistical analysis; the United States is a democracy in 1776, but Great Britain isn’t until 1832 – a look at the actual criteria (fn. a on p. 212) reveals the tortured efforts to get a binary classification which produced data that made even minimal sense. Non-European-style potentially democratic states – I have in mind the Six Nations of the Iroquois Confederacy – are entirely excluded. The binary created means that the United States in 1812 is 100% a democracy despite holding a not-trivial proportion of its populace in slavery, while its enemy Great Britain is 100% not a democracy despite its main political power being vested in an elected parliament.10 The actual complexity has to be flattened out to produce data; this isn’t a critique of the theory – the flattening was unavoidable. Some more recent efforts at the problem have tried assigning democracy or liberalism ‘scores’ to countries, but of course that itself introduces all sorts of complications. The conversion or compression of fuzzy, non-numerical evidence into data is thus not free; it is a ‘lossy‘ process.
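That ‘lossy’ compression can be sketched in a few lines of Python. The ‘democracy scores’ below are invented for illustration (they are not drawn from any real index), but they show how a forced binary classification erases exactly the nuance described above:

```python
# Hypothetical 0-to-1 "democracy scores" for 1812, invented purely for
# illustration of the coding problem, not taken from any real dataset.
polities_1812 = {
    "United States": 0.55,  # elected government, but a large enslaved population
    "Great Britain": 0.45,  # elected parliament, but a very narrow franchise
}

CUTOFF = 0.5  # an arbitrary threshold of the kind binary coding forces

# Reduce each nuanced score to a 1 ("democracy") or 0 ("not a democracy").
binary = {name: int(score >= CUTOFF) for name, score in polities_1812.items()}

# Two polities separated by only 0.10 on the underlying score now sit in
# opposite categories, and the binary keeps no record of how close the call was.
```

Here `binary` comes out as `{"United States": 1, "Great Britain": 0}`: the statistical analysis downstream sees only the categories, never the closeness of the call.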

And that leads to a fairly frequent kind of interaction between the disciplines: historians scour the evidence and produce our best assessment of it, often at fairly low confidence. Social scientists then take these assessments and turn them into data, stripping or flattening out the caveats in the process, and then produce impressive-looking statistics like those implied by the chart below and attempt to draw conclusions from them, while historians cry foul over how – in Smith’s phrasing – ‘disciplining with data’ cannot correct for the problems with the data.

[Chart: Max Roser’s visualization of global deaths in conflict over time]

In the case of this chart, what Max Roser has succeeded in doing is not charting global deaths in conflict, but rather charting the rate at which evidence for battles is preserved over time and the reliability of the estimates of their casualties. One can see the same problem with this effort at creating a map of all known battles, which appears to contend that the Mongolian Steppe was a relatively peaceful place. Of course it wasn’t, but it was a place that produced almost no written records and so contributes almost no recorded battles to the database, though in the brief moments when we can see clearly into the history of this region, it was very violent indeed.

The conclusion drawn in the tweet is, of course, spurious; the prominence of Europe here is an artifact of which battles are well documented in the languages the database was constructed from. And before one argues this is just a tweet, John Keegan tried the same thing in A History of Warfare (1993). The map and its conclusions are only as good as the evidence that informs the data they are based on. The gaps here are massive; keep in mind, for instance, that on the first chart, prior to 1492 (so nearly the whole first sixth of the Max Roser chart), functionally none of the conflicts in North or South America can be represented because they leave no trace in the evidence. Was it peaceful? No, it does not seem to have been, but that is not reflected in the data.

All of which is to say that while the social science approach to understanding the past has its benefits, it also has some fairly severe limitations. It is not simply a superior method. My own view is that both historians and social scientists have a fair bit to learn from each other’s methods and conclusions (and caveats and criticisms) but of course this requires mutual respect and a mutual methodological understanding.

Historians, by contrast, are constrained by two key factors: as a discipline we’re taught to avoid simplifying our subjects for the sake of analysis (some historians are more careful in this than others) and, rather than focusing on converting the historical research of another field into data, historians deal directly with primary sources, which in turn demand quite a lot of time and energy be invested in collecting and working through the evidence. Smith’s insinuation that historians aren’t doing the “hard, often unrewarding work” is incredible coming from someone who works in a discipline that has all of its historical data ‘pre-chewed’ for it by historians. Every dot on that chart of battle deaths above likely represents years of work by historians collecting, sorting and understanding difficult and often contradictory primary source material. Without that work, producing the chart would be impossible.

Consequently that means that rather than engaging in very expansive (mile wide, inch deep) studies aimed at teasing out general laws of society, historians focus very narrowly in both chronological and topical scope. It is not rare to see entire careers dedicated to the study of a single social institution in a single country for a relatively short time because that is frequently the level of granularity demanded when you are working with the actual source evidence ‘in the raw.’

Nevertheless as a discipline historians have always11 held that understanding the past is useful for understanding the present. Or as arguably the first historian, Thucydides, puts it, “if it be judged useful by those inquirers who desire an exact knowledge of the past as an aid to the understanding of the future, which in the course of human affairs must resemble, if it does not reflect it, I shall be content.”12 Smith declares that this sort of use of history means that these “are social-science theories” (emphasis original), which is an absurd bit of turf-claiming from a discipline (the social sciences) which is, as a practice distinct from philosophy or history, something like 2,225 years younger than history.13 So how do historians build arguments with present-tense implications, and is this approach valid?

The present-tense implications of historical research generally come in two kinds: either the history of a thing (usually an institution) that still exists is used to explain how that thing came to exist as it does or the history of something in the past is presented as analogous to something similar in the present, such that the former is a useful tool when thinking about the latter. Smith is clearly focused on the latter kind of historical argument, so we can set the history-of-a-thing-that-exists argument aside for today.

The epistemic foundation of these kinds of arguments is actually fairly simple: it rests on the notion that, because humans remain relatively constant, situations in the past that are similar to situations today may produce similar outcomes. This is no new thing; the attentive will notice our good friend Thucydides laying out this very logic some c. 2,420 years ago. At the same time it comes with a caveat: historians avoid claiming strict predictability because our small-scale, granular studies direct so much of our attention to how contingent historical events are. Humans remain constant, but conditions, technology, culture, and a thousand other things do not. I think it would be fair to say that historians – and this is a serious contrast with many social scientists – generally consider strong predictions of that sort impossible when applied to human affairs. Which is why, to the frustration of some, we tend to refuse to engage in counterfactuals or grand narrative predictions.

We tend to refuse to engage in counterfactual analysis because we look at the evidence and conclude that it cannot support the level of confidence we’d need to have. This is not a mindless, Luddite resistance but a considered position on the epistemic limits of knowing the past or predicting the future.

Instead historians are taught, when making present-tense arguments, to adopt a very limited kind of argument: Phenomenon A1 occurred before and resulted in Result B; therefore, as Phenomenon A2 occurs now, Result B may happen. Tyrants in the past have made multiple attempts to seize power, therefore tyrants in the present may as well, and therefore some concern over this possibility is warranted. The result is not a prediction but rather an acknowledgement of possibility; the historian does not offer a precise estimate of probability (in the Bayesian way) because they don’t think accurately calculating even that is possible – the ‘unknown unknowns’ (that is to say, contingent factors) overwhelm any system of assessing probability statistically. Once again what Smith mistakes for lethargy is in fact a considered position by historians that further certainty is not possible; the critique historians make is that the methods Smith advises take that essential, unresolvable uncertainty, dress it up with numbers and pretend it has vanished (or been quantified) when it hasn’t and cannot be.14

Nevertheless this historian’s approach holds significant advantages. By treating individual examples in something closer to their full complexity (in as much as the format will allow) rather than flattening them into data, it can offer context both to the past event and the current one. What elements of the past event – including elements that are difficult or even impossible to quantify – are like the current one? Which are unlike? How did it make people feel then, and so how might it make me feel now? These are valid and useful questions which the historian’s approach can speak to, if not answer, and they serve as good examples of how the quantitative or ’empirical’ approaches that Smith insists on are not, in fact, the sum of knowledge, nor required to make a useful and intellectually rigorous contribution to public debate.

Though I’ve already offered several examples where Smith critiques historical methodologies without actually bothering to understand them, I want to add one more: Smith’s apparent lack of awareness of the different uses of the word ‘theory,’ which came up in the debates that led to his essay. Smith uses ‘theory’ to mean ‘hypothesis’ or ‘predictive theory,’ but within the discipline of history ‘theory’ refers to the broad intellectual framework within which evidence is interpreted; that ought to make sense given that history is by and large a discipline of source criticism rather than one engaged in hypothesis testing (though we do a bit of that too). Historical theory is thus often concerned with questions of what sort of history is important, what questions can and ought to be asked of the sources, and how their importance should be understood (e.g. mentalités within the Annales school; a study which focuses on mentalités can be described as being situated within an Annales theoretical framework) – a very different beast from a hypothesis.

The Wisdom of Noah Smith

In conclusion, Noah Smith’s analysis here cannot be recommended. While accusing historians of shrugging off the “hard, often unrewarding work,” he has failed to engage meaningfully with the historical method and its epistemic systems – the easy, swiftly rewarding work of establishing a basic understanding of other disciplines. The result is an analysis which repeatedly misunderstands the claims that historians are making and deliberately ignores the way they signal uncertainty. Instead, Smith indulges in rank scientism, insisting that all claims be confirmed empirically – a calling he himself failed to answer just the previous day in arguing for the elite overproduction hypothesis, which he admits “make[s] some questionable assumptions about how labor markets work” and which isn’t empirically tested. For my own part I would note that Peter Turchin’s effort to find support for this hypothesis in the ancient world is fatally flawed by the lack of evidence; Turchin has built a vast castle on sand – it can be no stronger than its meager evidentiary foundation.15

Of course some historians absolutely do make arguments that extend out beyond what their evidence can firmly support. Sometimes they are responsible in signalling uncertainty and sometimes they are not. The problem is especially acute in compressed formats and genres. Of course social scientists do much the same; it cannot, for instance, be the case that we can be certain that forgiving student loan debt both absolutely will and assuredly will not induce more inflation. On this latter point, I suspect Smith would agree, and yet I don’t see him suggesting that economics, as a discipline, should pack it up and go home.

And it would be easy enough to dismiss one ill-advised essay on an internet full of them, were it not for the fact that this kind of scientism adds to the burdens of a discipline already under sustained attack, not because of the predictions it supposedly makes but because of the true things we insist on teaching about the past, even as history departments continue to shrink. These trends worry me as a historian, of course, but they ought to worry social scientists too, who, as noted, rely on historians to process the evidence that produces the data forming the foundations of their conclusions, and who also benefit from historians offering a critical second look at their models and conclusions. Without historians to do that work, the ability of social scientists to reach for data earlier than the rise of the modern administrative state functionally vanishes.

Yet for all of this it is the wisdom of most historians to be able to see the value in methodological and epistemological approaches other than their own. Alas, it is a wisdom Noah Smith apparently lacks.

  1. And before someone assumes that this is just because the humanities use a non-humanities term wrong, it seems worth noting that theory (θεωρία) was our word first and had our meaning first. We’ll get to the difference in that meaning in a moment.
  2. By the way, Noah – misquote! Punctuation has to be preserved in quotation; “data set” embedded in a quotation is “‘data set.’”
  3. This is, in fact, what separated kings from tyrants in the Greek mind: a τύραννος ruled extra-constitutionally (that is, outside of the traditional system of government for the polis) through violence, while a βασιλεὺς (king) held a customary or constitutional position through tradition and legitimacy. Thus Sparta and Macedon had kings, while Syracuse had a series of tyrants (the Romans do not make this distinction and so often called the Syracusan tyrants kings (reges), so you will see that usage in English as well, but in the Greek sense they were tyrants).
  4. In this case that’s not the evidence for all tyrants everywhere, but specifically as noted with the caveats above, the evidence for ancient Greek tyranny, a much smaller set; Smith assumes this term is perfectly synonymous with ‘dictator,’ which it is not but again that goes to Smith’s failure to grapple with terminology in disciplines other than his own.
  5. Because his existence is independently attested by multiple contemporary authors, most notably Plato, Xenophon and Aristophanes. But this is testimonial evidence of the sort that empiricism, as a total epistemology, rejects; the absurdity of that rejection is why attempting to treat empiricism as a total epistemology for practical purposes is derisively termed scientism, instead of science.
  6. One would need, presumably, to identify the relevant characteristics, measure them quantitatively somehow, and then look at their rate of incidence in the emerging tyrant population against the rate of incidence in the general population. I shudder to imagine how long you’d have to wait until the tyrant population reached a sufficient size to achieve statistical significance in all but the most banal of observations.
  7. That I know of, I cannot claim to have comprehensive knowledge of every poli-sci statistical study.
  8. I am of course simplifying here to a degree, confining myself to the methods that Smith is most focused on (the quantitative ones). Both economics and political science also have branches that rely heavily on game theory, sometimes as thought experiments and sometimes as empirically tested, though with theoretical approaches that rely more on rationalism than empiricism.
  9. The first full version of this is M.W. Doyle, “Kant, Liberal Legacies and Foreign Affairs,” Philosophy & Public Affairs 12.3/12.4 (1983). The models and methods for assessing them have been refined substantially since then, but the principle is generally considered well-established, though substantial argument about its causes remains; see for instance the substantial bibliography on D. Reiter, “Is Democracy a Cause of Peace?” Oxford Research Encyclopedia of Politics (2017) for a sense of the debate to date. Because we’re going to engage with actual published scholarship here rather than tilting at 1500 word magazine articles.
  10. As is fairly typical, by the by, the statistical approach here extends only to the 18th century. Subsequent studies have pushed the method further back but not out of the early modern period. I know of no effort to try to test the democratic peace theory against ancient democracies. Attempting to define democracy in that context would doubtless introduce new problems.
  11. A rare word for me to use, but here I actually mean it.
  12. Trans. follows R. Crawley (1874) with minor changes.
  13. Thucydides wrote what I view as the first true effort at history as we understand it c. 400 BC, while the term ‘social science’ wasn’t coined until 1824 and the first clear field of it wasn’t established until the 1830s. It’s fair to claim earlier figures (Machiavelli, Hobbes, even Aristotle) as predecessors of the social sciences, but they all understood themselves to be doing philosophy and also used quite different methods, whereas Thucydides understood himself to be doing history and even engages in historical source criticism (e.g. Thuc. 1.20-21).
  14. Of course more responsible social scientists are well aware of limits of certainty and even quantifiable uncertainty when dealing with these sorts of complex systems.
  15. Which doesn’t mean he is wrong, merely that the evidence prior to very recently cannot be used to say what he says it does because of how staggeringly incomplete it is.

240 thoughts on “New Acquisitions: On the Wisdom of Noah Smith”

    1. For me the biggest problem with historians is they do a bunch of analysis of textual (and sometimes archeological) sources and then write about technical subjects having sweet fuck-all in terms of technical knowledge about those subjects.

      For example my pet interest is the history of brewing and I’ve come across historians who have written entire books about the history of brewing…who don’t seem to know much of anything about the nuts and bolts of how beer is produced, which leads to all kinds of errors seeping into their writing that could’ve been caught by having a 15 minute conversation with a brewer.

      Now this obviously isn’t a problem here, as we can see with the How They Made It series, but the idea of writing a whole book on the history of a subject and not even learning the basics of the nuts and bolts of how that subject works, while reading mountains of texts to learn about its socio-economic aspects, just makes my brain hurt.

      1. To be fair, you see this a lot with economics, political science, and the social sciences as well. All the time. It is so frequent that it boggles the mind.

        I remember recently reading a purported article about racism in the developing world by someone who, quite clearly, had no experience being a visible minority in one of those countries and who had never taken the time to speak to those of us who have been. The article, which was published in a respectable local journal, was full of really basic inaccuracies, and it would’ve taken a few seconds actually talking to a visible minority to sort them out.

  1. This makes me regret ever giving Noah money. Utterly unacceptable behavior from someone who purports to be on the side of truth. Time to put my money toward a better scholar, writer, and thinker.

    Ugh I just feel gross now.

    1. In a blog post way back, years and years ago, Smith said he had gone full-time pundit. Did this change at some point? (Full-time pundit would explain some of this; lots of pundits = substack + twitter + something in the air has produced a lot of the weirdness you see in this post. Or maybe brought out weirdness that was already there.)

      1. Funnily enough, in the comments of his post he accuses a fringe of historians of trying to turn their credentials into a position in the pundit classes.

        Self awareness might not be his strong suit.

      1. Indeed, if a pundit isn’t massively wrong about things fairly regularly, they’ve retreated to a niche where they will soon face irrelevance. And in this, Noah is massively wrong :-).

        I accord an expert (like our esteemed host) a large amount of authority in his field, and only some outside of it. My expectations of a pundit, on the other hand, are to be thought-provoking, not reliably accurate. After all, no human can be expected to deliver new insight after new insight.

  2. Huh. Weird to see a couple different people who I sometimes read the blogs of fighting. This also gets into weird stuff like the philosophy of knowledge. I think the best point you make is about the lack of representative data in the past – you can’t exactly do a research survey of dead people.

    This could be wrong (and I’ll admit I am biased here; I’m a chemistry student), but it seems to me that this is a consequence of emergent properties of larger systems and of how complexity makes things difficult to analyze. After all, chemistry is based on quantum mechanics, but simulating, say, one mole of gas that way is computationally impossible, so you have to use approximations in many circumstances for things to be useful, and the same goes for biology. You can’t simulate the human brain with current technology, so we have to make approximations (this part of the brain controls this), and from that arises psychology; since we can’t simulate that either, in large groups you need economics, sociology, etc. We can’t purely reduce history to economics and sociology for the same reason – if we had all the data, infinite computing power, and 100% accurate theories independent of culture, it might be possible. But that’s never going to happen, especially as so much data is forever lost, so we have to use modern historical methods to get the best picture we can.

    1. There are different kinds or levels of emergence.

      You can’t model the path of every gas molecule, but gas behavior tends to often be well behaved and approximatable by the ideal gas law and thermodynamics, say. Simple enough that we can either derive those laws from statistical mechanics, or get them from measuring gas behavior directly.

      You can’t model the path of every molecule in a turbulent liquid… and approximating that usefully is, AIUI, nightmarishly hard if not impossible. Lots of ‘chaos’, lots of ‘sticky’ interactions between molecules unlike the elastic collisions of gas molecules…

      In economics, thermo might be analogous to Keynesian IS-LM: raise interest rates, and the economy contracts; pump in money, and get economic expansion and/or inflation, depending on idle capacity. Details? We don’t need those. And skip ‘microfoundations’, they haven’t worked and aren’t needed… turbulence would be “will a recession happen this year? (without the central bank causing it, or some supply shock)” or “when will this asset bubble burst?” No one knows, or can know.

      1. It helps, of course, that gas laws apply to A LOT of gas molecules. Far more than there have ever been humans.

        1. It does, but there are just as many molecules in a liquid, and that’s much harder to predict — the nature of the interactions between the units leads to much more complexity and unpredictability.

          And human interactions are likewise complex and ‘sticky’, not elastic…

      2. Yeah, it wasn’t the best example to compare with history’s relation to social sciences. My point remains, though.

    2. It is somewhat interesting to think that, as this is simply a computational strain issue, it may eventually solve itself, at which point absurdly accurate predictions of future events will become commonplace. (The computational issue is that in order to simulate everything from the quantum level up, you basically have to simulate the whole universe: at each step up in scale you average not just because it’d be a lot of maths, but also because increasing the scale of the model has made more previously insignificant variables relevant – gravity in water simulations is an obvious one, where it is technically always a variable but matters significantly less when you’re modeling the contents of a bath than the Atlantic.)

    3. I think there’s more to it than just problems with emergence and complexity.

      One thing I’ve taken from philosophy of science is that we like to talk about models when it comes to disciplines, but a better metaphor for thinking about what we’re really up to is maps. Both are just ways of thinking about using a less accurate description of something to achieve a practical purpose. And what should be put in or left out depends on that specific purpose, not just on how much paper and ink you have lying around. It’s easier to forget that when talking about models than when talking about maps.

      Sometimes maps are better when they misrepresent things like spatial relationships between places, which you’d think would be a central feature. Harry Beck’s London tube map isn’t in any way a simplified version of a large aerial photo of the city, after all, but is a massive improvement on the earlier maps that did look like that. And taking the (practical) value of scientific explanations to be limited to predicting (very specific kinds of) things is selling them awfully short.

      After all, even if we did manage to come up with a formula for translating chemistry into physics, it might be neat, but it would be completely useless for chemists to write out a complete model of what something would look like in physics and work with that. You could say that the formula would provide support for the theories in chemistry, but chemistry does that too, and we have it right now.

      (And obviously it’s worth keeping in mind that we don’t even have a way of translating into physics, let alone some other discipline. And that the best physics we have right now seems directly opposed to the idea that we could have determinate predictions of events.)

  3. The result is not a prediction but rather an acknowledgement of possibility; the historian does not offer a precise estimate of probability (in the Bayesian way) because they don’t think accurately calculating even that is possible – the ‘unknown unknowns’ (that is to say, contingent factors) overwhelm any system of assessing probability statistically.

    An acknowledgement of possibility is a prediction, though. The space of outcomes one could write down regarding the real world is huge, so the probability that would be assigned to any one of them by a truly uninformative prediction is tiny. To say that a particular thing has a high-enough probability that it’s worth noting as possible does in fact carry substantially more information than not doing so — sometimes a lot; frequently most of the work goes into locating what possibilities are worth considering.

    For instance, if I were to claim that “It’s possible that Donald Trump will declare himself the King of Antarctica”, people would reasonably say that’s a dumb statement. It’s “possible” in the sense that it can’t be strictly ruled out, but it’s not “possible” in the sense that it’s likely enough to be worth considering, which is what’s meant by the acknowledgements of possibility that you talk about. Such a statement is indeed enough of a prediction that we can reasonably say that it is wrong; no, I haven’t asserted that Trump will declare himself King of Antarctica, so it can’t be disproved in that way, but the implicit probability I’m putting on it by calling it “possible” is far too high, and if you take action on this prediction — by taking some cheap gamble that will only pay off if this action does occur, on the theory that hey it probably won’t happen but it costs you so little and you’ll win big if it does (what that would consist of in this case is unclear) — you are making a mistake even if it only ends up costing you a hundredth of a penny.

    And yes obviously I am taking a Bayesian perspective here, but, like, one can consider Bayesian epistemology to be an essentially correct description of an ideal epistemology without like actually attempting to put precise probabilities on everything. Saying “it’s possible that X might occur” is a statement about the probability of X even if it turning it into numbers might be a difficult exercise. (And that’s still true if you want to use probability intervals rather than point probabilities, etc.)

    I guess the conflict here is between the weak notion of prediction and the strong level of evidence demanded? Like, to my mind such an “acknowledgement of possibility” is a prediction, but I’m obviously using a weak notion of “prediction” when I say that; I’m not sure you can simultaneously use such a weak notion of prediction while demanding a much stronger level of evidence for it, which is what Smith seems to do?

    1. Strong comment. I think the weakest part of Bret’s otherwise pretty great response here is that he seems to weakly motte and bailey “historians don’t like making predictions/talking about causality” (the motte) with “historians like to passive aggressively insinuate that they know X has happened before and X will happen again” (the bailey). Which (by my reading) seems to be what Matt Yglesias was talking about in the quoted tweet, which I think Bret didn’t really respond to (recognize?) properly.

      (Also, I can’t condone the oh-so-polite sniping back at NS, but I *can* say it made me laugh.)

          1. Summarising someone’s argument while removing their caveats, then suggesting that your summary shows something the original argument did not suggest, seems pretty bad faith to me.

      1. Reminder that the “motte and bailey” is not a fancy new term for “equivocation”. It’s not a fallacy, but more of a stratagem (but not really, the provided reference calls it a doctrine), where the more formal terminology and defensible reasoning of one party is used by another (typically) party to cover their more conjectural (or even sloppy) reasoning/advances.

        https://blog.practicalethics.ox.ac.uk/2014/09/motte-and-bailey-doctrines/

        “Different things said by different people are not fairly described as constituting a fallacy.”

        Our host has spent a great deal of effort in attempting to clarify and dismantle the ability to use historical knowledge in this way, along the lines of “no, you can’t actually simplify things that way, as tempting of an argument as that is”.

        1. Nope, it’s a fallacy. Of equivocation. People who say “Feminism is the radical notion that women are people” indulge in it because we all know that in reality they would, in fact, deny that something is feminist on grounds much broader than denying that women are people.

          I have, in fact, read a number of essays telling me I am a feminist when I am not, using that fallacy.

          1. I was thinking about one of the reasons people enter into motte-and-bailey arguments, or arguments that look a lot like equivocation from an outside view, especially in politics.

            Sometimes, a person starts with axioms we may find humble and sensible, but then takes them in a direction we’re very uncomfortable with.

            For instance, someone may start with “it’s important that adults be able to decide whether or not to enter contracts, and it’s very important that stuff you own shouldn’t be stolen from you,” and I smile and nod. Sure, fine.

            And then they go on for a few paragraphs and conclude with “…and that’s why society should be okay with debtor’s courts selling people into slavery if they can’t pay their hospital bills.” At which point I’ve stopped smiling and nodding, and my eyes have widened and I’m backing up a little.

            There’s a lot of steps in the process between A and Z here. And as you can imagine, I may have a lot of disagreements about those steps. But if challenged, the person I’m disagreeing with will rapidly fall back to Step A, assuming I’m some kind of hostile weirdo who doesn’t grant their premises at all, even a little bit, and they will fall back on “I just believe that freedom of contract and property rights are important!”

            From their perspective, they are defending their obviously righteous viewpoint from, I dunno, a dirty commie. From my perspective, they are indulging in equivocation, in the motte-and-bailey argument.

            It takes a rare degree of mental discipline and decency from both parties to actually stop and rigorously analyze the chain of reasoning at every point, finding the true points of disagreement and addressing them effectively. There’s a reason philosophers spend a lot of time learning how to philo their sophy properly.

    2. (Like to be slightly more explicit at the end there, since I left that part a bit short — if one wants to use a Bayesian notion of “prediction”, as I am, then one also has to use a Bayesian notion of “evidence”, which is much weaker than the sort of evidence that Smith seems to be asking for, and it seems to me like historians’ evidence is pretty adequate for the sorts of predictions they’re making?)

      1. Noah did not insist on a higher standard of evidence in all cases, just that *if* they can’t meet this higher quantitative standard, then their media predictions should be treated more like “punditry rather than as academic knowledge”. He also complained that they shouldn’t deny they are making predictions when they obviously are, as you correctly explained above.

        1. The problem is a bit more complicated than that. Granted, historians’ standard of certainty in predicting the behavior of tyrants is lower than, say, astronomers’ standard of certainty in telling you where to stand if you want to see a solar eclipse. But there are plenty of people who try to use historical examples and evidence to make predictions that are wildly imprecise and unreliable.

          Compared to someone who says nonsense like “America is strong because we have the Spartan warrior ethos, and having more of it would make us stronger” or “the Romans fell because they let foreigners in instead of concentrating power among ethnically pure Romans…”

          Well, a professional historian with a clear understanding of the facts is making a much higher grade of accurate prediction.

          So for a professional historian to claim special standing to make more accurate predictions based off of history is probably justified, even if “more accurate” is stated in relative terms and not absolute ones.

    3. One might consider Bayesian epistemology to be an essentially correct description of an ideal epistemology, but one shouldn’t!

      If you can’t actually put numbers (or intervals, as you like it) on things, calling your estimates “Bayesian” is really just a bit of posturing. And you can’t. Bret takes a good deal of time in the article above describing the difficulty of putting numbers on things, and the “flattening” effect that attempt can have. If your numbers haven’t been derived from some solid principle, or they flatten the data too much, then your mathematically rigorous Bayesian estimate is not epistemically rigorous at all.

      1. Failure to use numbers correctly may mean it’s not epistemically rigorous, but that doesn’t mean it’s not still a prediction.

    4. And, following up, the idea that the experience of Cylon and Pisistratus offers useful or informative guidance for defining the space of reasonably probable outcomes in the current U.S. was always silly. (For starters, controlling the Acropolis allows you to control Athens, controlling the Capitol building does not allow you to control the U.S.) That whole piece was one of Bret’s weaker efforts. I think the real lesson is that when historians (or economists, though it seems to be less common) allow their hatred of Trump to lead them into extravagant fantasies,[1] they furnish opportunities for ridicule.

      [1] In fairness, Bret did not raise the danger that Trump might hire a tall woman to drive him into Washington in a chariot, so it could have been worse.

      1. “allow their hatred of Trump to lead them into extravagant fantasies”

        This…is a very strange thing to complain about, as there’s no evidence it happened.

        And in actual life, quite a lot of politicians, including Trump, are arguing that the election was stolen (or that other elections were stolen) when those elections very clearly were not, and are taking active steps in this direction; plus the Capitol attack did in fact happen.

      2. Controlling the Acropolis with only a small body of men does not allow a would-be tyrant to control Athens indefinitely. Your loyalists have to sleep sometime, and if they are besieged within the Acropolis by the rest of the city, it is unlikely that they will last that long.

        Controlling the Capitol building itself does not allow you to control the US, but that was never the plan of the January 6th coup attempt. The plan, using a very minimal amount of very, very basic reading between the lines based on information publicized by the congressional committee and attested to by Republicans, involved more than that.

        Temporary control of the Capitol could be parlayed into an attempt to (irregularly) affirm that Donald Trump would still be president, that Biden had in fact lost the election no matter what those lying totally fabricated vote totals said.

        This is what is known as a “self-coup,” and it is in some ways very different from an attempt by a total outsider to seize the machinery of the state. Trump already was the lawfully acknowledged authority with the power to tell soldiers what to do, and so on. Now, I’m sure we can agree that it’s unlikely that the machinery of the government as a whole would just keep obeying him after an obvious attempt to overthrow competing centers of power. But such a move has a far higher chance of success than would a simple attempt by an adventurer at the head of a mob to seize a single building.

        So to avoid disingenuousness, we must observe that any comparison is not about the details, but also that Dr. Devereaux seemed, in his article, to know this perfectly well. The underlying point is that someone who is willing to use force to overturn normal legal processes and secure their authority beyond the bounds the law allows once is very likely to come back for another try if not somehow prevented from doing so.

        Imagine a man that has already worked his way around to the point where he refuses to admit he can lose an election, and decides to direct his supporters to start a riot to disrupt the process that would formally declare the election’s verdict. Such a man will probably not just go “oh well, nice try, guess I won’t ever do anything like that again!” if it doesn’t work. He won’t reach that conclusion of his own free will, at any rate.

        1. That is certainly an interpretation of what happened on January 6.

          Given the general clownishness of everything Trump did around the election, it fails the smell test, because there is no evidence that Trump or his coterie are devious enough to come up with such a scheme.

          1. My argument is not that he’s too stupid to commit sedition. My argument is that he’s incapable of the kind of subtle maneuvers that Simon Jester ascribes to him.

            If Trump had planned a coup, you wouldn’t have to “read between the lines”–it would be a slam-dunk open-and-shut case, because January 6th would have been a full-blown bloody tragedy instead of a tragic farce.

    5. I don’t agree that saying “it is possible that Event A might happen” is a prediction in any useful sense. This is simply because that statement can be followed by similar statements within the same context for Events B, C, and D, all of which might preclude Event A. “It is possible that Event A might happen, because of the similarities between contexts then and now” is a useful statement, because it suggests thinking about the similarities and the differences between the past situation and the current one. But, you could quite easily also state: “But it seems more likely that Event B will happen for these reasons”. So, even if you consider “it is possible Event A might happen” to be a prediction, it’s so weak that it’s not useful; it’s a possibility to be considered as an aid to understanding. History is not predictive.

      A more “historical method” set of statements would be: “It is possible that Event A might happen, because x, y, and z elements of the current situation are analogous to a situation 1,500 years ago. However, for reasons i, ii, and iii, it seems more likely that Event B will happen. So, when we are considering our current situation, we should take into account a wide range of elements, and bear in mind that the unexpected could happen (Covid anyone?). What happens is contingent, so getting a deeper understanding of our current situation would really help.”

      Lack of a predictive element in the historical approach is not a weakness, nor does it make history less useful as an approach to current situations. It can be used as an aid to understanding.

      1. In all fairness, if someone writes a few thousand words analyzing a situation, discussing historical parallels, and emphasizing that certain conclusions can be drawn…

        Even if they provide caveats, the commonsense understanding of the written piece is “the author predicts that X is a likely outcome of this situation.” This is, of course, necessarily going to involve the word “likely” doing a lot of work here, because “likely to happen” is not the same as “will happen” or “will happen in exactly the way I anticipate.”

        But we do not have to be fools about these things. A man can make a prediction in the full knowledge that he can be wrong, without the prediction ceasing to be a prediction, something that a good faith reading would interpret as a prediction.

    6. Maybe I am wrong (our pedantic host is welcome to correct me), but the way I parsed the line you quoted was not “I am (implicitly, vaguely) making a claim about the probability distribution of tyrant tries again|tyrant defeated once” but rather “Hey, I am offering you a piece of information. All the records we have about what happened in a certain place in a certain period of time tell us that tyrants (there, then) behaved in a certain way.” Now, given the paucity of information about tyrants, I think any good bayesian would definitely update their posterior with that. Maybe you think differently. But the crucial point is, that kind of discussion is beyond the scope of the article, which is simply to state what our sources tell us about Hellenic tyrants. And sure, the author pretty openly thinks that the piece of evidence he is presenting will weigh significantly in our final posterior (why write an article about it otherwise?), but ultimately leaves that decision to us.

      1. This is how I interpreted it as well. It’s just a statement of what can possibly happen.

        A good analogy would be saying, “With the application of modern medicine, some people survive brain cancer. Here are some examples of people who survived it.” It’s just a statement of what has happened in the past but offers no prediction about a particular case of brain cancer (because there are a lot more variables at play than what is being taken into consideration in this statement), and it’s informative in the sense that if someone thought that brain cancer is an automatic death sentence, now they are provided with evidence of what might happen, but it’s not informative enough to skew one’s “Bayesian priors” on what might happen to a particular cancer case if they were already aware of survival being a possibility. But it doesn’t make any predictions about all cases of brain cancer or about what might happen to a particular brain cancer case.
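        The asymmetry described in this analogy can be put in Bayesian terms with a single update. A minimal sketch, with purely illustrative numbers (none of these figures come from the post or the comment): let H be “this cancer is survivable” and E be “a documented survivor exists,” and allow a small chance of a spurious report even if H is false.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) by Bayes' rule for a binary hypothesis."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Illustrative likelihoods: a survivor report is certain if survival is
# possible, and rare (a misreport) if it is not.
P_REPORT_IF_TRUE = 1.0
P_REPORT_IF_FALSE = 0.05

# Someone who thought survival nearly impossible revises dramatically...
skeptic_posterior = bayes_update(0.01, P_REPORT_IF_TRUE, P_REPORT_IF_FALSE)   # ~0.17

# ...while someone already aware survival is possible barely moves.
informed_posterior = bayes_update(0.90, P_REPORT_IF_TRUE, P_REPORT_IF_FALSE)  # ~0.99

print(skeptic_posterior, informed_posterior)
```

        The same piece of evidence shifts the skeptic’s belief by far more than the informed reader’s, which is the comment’s point: the example is highly informative to someone who ruled survival out, and nearly uninformative to someone who already knew it was possible.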

  4. I think Smith caught an IMO uncharacteristic bit of twitter poisoning. There are highly visible twitter characters, cloaked in the would-be authority of history, who make exactly the errors you describe. They are not representative of the careful epistemology of history as a discipline, but they are much more visible. That doesn’t make his arguments any better, but it makes them perfectly comprehensible at the expense of their relationship to history.

    Or perhaps it isn’t twitter’s fault. But the class of people making unreasonably confident claims on the alleged basis of historical study is very much not empty. You could call them pundits instead of historians, but doing so risks the no-true-Scotsman fallacy.

    Likewise, what I took from Yglesias’ argument was a claim that whoever he’s calling historians are not as limited in their attribution of causation as they claim, and perhaps even think, they are.

    1. But it isn’t a no-true-Scotsman argument if the person clearly isn’t a Scotsman and is only wearing a plaid cap. A pundit citing history is not a historian any more than I’m a doctor when I pass on a good remedy for a cold.

      Overall I’m confused about what’s controversial here. Of course tyrants barring obstacles will seize power again generally speaking. Hasn’t anyone worked on a committee of some kind or run into cliques? So long as the power base exists, so too the ambition.

      1. “Of course tyrants barring obstacles will seize power again generally speaking.”

        If this is something that’s just intuitively obvious, then why would you need to bring up historical examples in order to support it in the first place?

        1. Because, while intuitively obvious to many, there are some whose intuition fails to grasp it. Be that because their intuition isn’t well developed enough through education or experience, or that there’s an extra leap because they have to jump over some strong biases to get there. In these situations, being able to say ‘it happened before’ is useful.

    2. I’d be inclined to blame substack more than twitter here. The audience capture effect on that platform is strong.

  5. First of all, I think you are focusing too much on “empirical”. Reading Noah’s article, it’s clear that he isn’t using the same definition of empirical that you are. Yours might be “right” and his might be “wrong”, but faulting him for an argument he isn’t actually making is a bit weak.

    Beyond that, I think Noah’s argument that historians can cherry-pick examples to support any position they wish has some validity. Like, I’m sure that if you tried, you could find an example (or several) of Greek would-be tyrants who tried once and then willingly retired. And while an article that said “some Greek would-be tyrants did just try once, so it’s possible that Trump would retire now” would be technically correct, it would also presumably be somewhat misleading in practice. You can add in as many qualifiers as you like, but readers will tend to read that as “Trump won’t try again” even if that isn’t what you intended to say.

    Of course, this sort of criticism is hardly unique to history. “Lies, damned lies, and statistics” is a phrase for a reason, after all. A statistical approach to the question of “how often do tyrants try again” would be interesting, but the actual results would likely vary wildly depending on how you define “tyrant” and “try”. So yeah, the historical approach has its share of flaws, but I’m not trying to argue that the statistical approach is necessarily better overall.

    1. If a social scientist is using the word ’empirical’ incorrectly, something has gone very wrong indeed. That’s an important word to understand in any discipline that understands itself as a science!

      1. You seem to have a much narrower idea of the word “empirical” than usage I’ve encountered in other contexts. I would consider observational evidence to be “empirical”, and I would also consider historical records to be “observational evidence” in this sense (albeit often unreliable, 2nd hand, etc.).

        1. History is hardly unique as a field that tries to investigate the world while being unable to perform experiments. Astronomy is almost entirely observational, for instance. Many questions investigated by social sciences are also not amenable to direct experiment.

        2. Social scientists have thought a lot about how one might empirically investigate causal hypotheses when only observational (i.e., non-experimental) evidence is available. Natural experiments are an obvious one. Propensity scoring is another approach. Also, there’s the whole causal framework of Pearl / Spirtes, Glymour, Scheines, which attempts to explain how observational data constrains possible causal relationships. Instrumental variables, regression discontinuity, etc. Anyhow, obviously experiments would be better, but you often can’t have them, so it makes sense to spend a lot of time thinking about how you can empirically investigate causality anyway.

        3. Smith’s perspective on macroeconomics is that it used to be much more based on “theory” (in the sense of building a toy mathematical model, doing some mathematical proofs about it, and just sort of asserting that it was relevant to the real world) and has since taken an “empirical turn” (in the sense of more experiments, but also more of these observational studies that use the above methods). I think if you look at educational materials and papers in the field, you’ll see “empirical” used to describe both experimental and causal-identifying-observational methods.

        1. I agree about this. “Empirical” doesn’t mean “I see it happening, right now”, let alone in an experiment. A fossil can be empirical evidence; so can an archaeological finding. A textual source is at the very _least_ empirical evidence that someone actually wrote it.

      2. With the standard caveats about the (in)validity of linguistic prescriptivism and what it even means for a definition to be wrong, I’m pretty sure the definition Noah was using is the correct one. It is a much closer fit to how I have personally heard the term used IRL, and also seems like a better fit for the dictionary definition https://www.wordnik.com/words/empirical which seems like a reasonable proxy for common usage; and since Noah is pretty explicitly writing for laymen, common usage seems like the most important one.

  6. What strikes me most forcefully in this is the sheer *arrogance* of social scientists. Perhaps we would be much better off if they had never co-opted the moniker “science” for their disciplines.

    1. LOL. LMAO.

      “Social scientists are arrogant [implicitly, especially so]” is hilarious to anyone who’s read much from other scientists. The physical sciences have much more of an arrogance problem than the social sciences.

      https://xkcd.com/793/

  7. I’m grateful that you took the time to write this, I followed a bit on Twitter, but generally wasn’t impressed by how either ‘side’ acquitted themselves, as is usually the case on Twitter. The discussion of different epistemologies was particularly enlightening for me, as well as the pitfalls inherent in reducing things to numerical values.

    The only thing I struggled with throughout the conversation was what exactly counts as a prediction. When you say “Tyrants in the past have made multiple attempts to seize power, therefore tyrants in the present may as well, therefore some concern over this possibility is warranted,” I guess I don’t feel like this is *just* an acknowledgement of possibility. A neutral acknowledgement of possibility implies to me that… anything else is equally likely?

    Just saying that “Would-be tyrants may make multiple attempts to seize power,” is an acknowledgement of possibility, but it’s also not particularly convincing. It makes sense that history can offer support for this possibility existing, but how does history support it if not by suggesting that it is more likely to occur? And when we offer evidence to suggest that a certain possibility is more or less likely, are we not making predictions?

    1. I mean, something can be relatively unlikely — 10% “chance”, say — and yet still be worth worrying about, because of its high negative consequences if it happens.

      “A would-be tyrant might just give up on trying for power” — yes, that’s a possibility, and one we’re not worried about, because he gives up.

      “A would-be tyrant keeps trying” and “Further would-be tyrants follow the trail of norm violations that he blazed” are things to actually worry about.

      1. I agree, but I also believe that those are predictions about the world based on evidence gathered from historical examples. It’s not about the specific probabilities as much as that you think *something* is more likely than you would otherwise based on the evidence, i.e. you think that changing your future behavior based on lessons from history would be beneficial.

    2. This is the kind of thing I took Yglesias to be referring to. Either you are making some kind of actual predictive claim about how likely things are to happen, and need to be willing to be pinned down on that, or you’re not saying much at all.

      If historical parallels don’t help provide causal explanations or predictively useful information, what do they bring to discussion of any current event? It feels a bit like purely ornamental erudition. In practice, of course, people usually bring them up in a way that does imply some prediction.

      1. Speaking only for myself, I find the current trend towards ‘you should be able to quantify the degree of chance that this will happen’ tends to lend itself ironically to far more false certainty than the available evidence actually supports.

        To take the tyrant example, at the time it was written there was quite a lot of talk going around about how January 6th wasn’t really a coup attempt because it was silly, or Trump backed off rather than push it further. Responding to that is entirely worthwhile.

        Claiming that you know that there’s a sixty percent chance he’ll try again and a thirty-five percent chance it will inspire serious follow-up attempts by other parties isn’t being more helpful. Even though it allegedly indicates uncertainty (this could go either way), the level of precision in the guess ironically gives a false impression of actual knowledge which we don’t and can’t have.

        And I think that’s actually the critical point of dispute. To Noah/Matt, those sorts of estimates are a sign of intellectual humility and good reasoning. They indicate what you think is actually likely in a very clear way. To Bret/most historians I saw commenting, those estimates are a sign of wild intellectual arrogance, because they seem to assume you have sufficient facts to make such an estimate.

        1. Then side with the philosophers in the article above – accept that you don’t know, and say that. It’s this business of saying that history suggests a possibility, but not saying whether you think it is more likely than any other possibility, that seems off. Almost anything is possible; if you’re not willing to give some indication (not necessarily numerical) of what you think is likely, what are you saying?

          1. What you’re generally saying is ‘this is a thing which has happened before and can happen again, but we lack sufficient information to have a good feel for likelihood.’ This is sufficient to disprove any number of claims of either psychological impossibility (as seen in some of the responses to the Sparta article here, where reference to existing real world mistreatment of children can be used to demonstrate that no, it is actually possible that Sparta treated its children that way) or practical impossibility (no one who really wanted to be a tyrant would go about it in this farcical manner and if they did, they’d be laughed out of society and never try again).

            By the way, with all possible respect, your response here is exactly why I tend to find those attempts at Bayesian estimates when you lack underlying information so concerning. It sometimes turns what is supposed to be a driver of intellectual humility into one of intellectual certainty.

          2. The problem is that one may have enough information to delineate likely outcomes, without having enough information to assign realistic probability estimates.

            Outside some highly specialized communities that make something of a fetish of Bayesian estimates, when educated and semi-educated people hear “I predict a 70% chance of this happening,” they assume one of two things:

            1) A lot of calculation went into justifying that claim, and that the probability can be used with honest mathematical certainty, or

            2) The person making the claims is an arrogant idiot who thinks they can Mister Spock their way into solutions to all the world’s problems.

            (Incidentally, when people steeped in the Bayesian-fetish-community approach try to talk this way outside their communities, they tend to come across as (2)…)

            The problem is that many realistic systems are so chaotic, with such small data sets under such varying circumstances, that nobody with the sense God gave a goose would try to do detailed calculation as per (1). Any probability estimate would itself be the compounded result of several entirely made-up probability estimates, and the aggregate result would be about as much of a joke as a serious attempt to solve the Drake Equation.

            In fact, the Drake Equation is a good illustration of the problem. You can use it to ‘prove’ that sapient life is probably very common in our galaxy, or that humans are virtually alone in the galaxy, because the result of the equation tells you more about your own starting assumptions than it does about reality. It is best understood as a tool for assisting the mind in clarifying and analyzing the problem, not as a tool for generating usable mathematical predictions in the face of complex and uncertain evidence.
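The Drake Equation point is easy to demonstrate with a toy calculation. Both parameter sets below are invented purely for illustration (they are not sourced estimates); the only point is how far two defensible-sounding sets of inputs can diverge:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# Both input sets are made up for illustration, not sourced estimates.
def drake(r_star, fp, ne, fl, fi, fc, lifetime):
    """Expected number of communicating civilizations in the galaxy."""
    return r_star * fp * ne * fl * fi * fc * lifetime

# Generous guesses: a crowded galaxy (on the order of 1.5e5 civilizations).
optimistic = drake(3, 1.0, 0.2, 1.0, 0.5, 0.5, 1_000_000)

# Stingy guesses: we are effectively alone (on the order of 2e-5).
pessimistic = drake(1, 0.2, 0.1, 0.001, 0.01, 0.1, 1_000)

print(optimistic, pessimistic)
```

Same equation, inputs each individually arguable, and the output swings by ten orders of magnitude: the result measures the assumptions, not the galaxy.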

            History is no different. This does not make predictions drawn from history worthless, but it does mean that attempting to present historical predictions in terms of a mathematical probability of various outcomes happening is a worthless and stupid way to communicate the useful predictions.

        2. For me, this discussion is sort of funny, because the discussion is about epistemology when the actual issue is politics. After the Mar-A-Lago raid, it has become apparent that there is a real possibility of putting Trump in prison, and thus preventing him from attempting another coup. Now, Devereaux’s old post about would-be tyrants having a tendency to try again becomes an important intellectual argument for prosecuting Trump.

          Consequently, we start discussing here the definitions of empiricism, while the actual question should be: “Does the historical evidence provide sufficient proof for the proposition that reoffending in sedition is so probable that it is worthwhile to prosecute Trump in order to protect the US Constitution?” This question can, in fact, be discussed without nitpicking on epistemology.

          1. Well, one can argue that the actual question is one of justice.

            It would clearly be unfair to charge Trump with a crime he has not committed, on the grounds that he is a risk factor for sedition or armed overthrow of the state.

            But should a man be charged with a crime when it turns out that he took dozens of folders of clearly labeled TS/SCI documents home with him after leaving his government job, and that some of the folders containing these documents appear to be empty? And when his response to the FBI seizing the documents mixed in with his personal effects was to direct his lawyer to sue and get them back on the grounds that he, a former president, has an executive privilege right to have them that overrides the current president and executive branch’s right to take them back?

            I am fairly sure that the Trump of 2015 would argue that these are excellent grounds to charge a person with a crime, as long as he didn’t have foreknowledge that he would be the one doing such things.

        3. “Claiming that you know that there’s a sixty percent chance he’ll try again and a thirty-five percent chance it will inspire serious follow-up attempts by other parties isn’t being more helpful”

          Yes, it is. If you give predictions in terms of probabilities then it is possible, after the fact, to tell how good your predictions were. Which allows you to learn from your mistakes and make better predictions in future:

          https://astralcodexten.substack.com/p/grading-my-2021-predictions

          This was how Philip Tetlock’s Good Judgement Project worked. I am 95% sure it would be good practice if more of us did this.

          Then again, I see no reason to believe that pontificating economists are more likely to follow this practice than pontificating historians, so this argument is not a criticism of historians in particular.

          (I might also point out that someone who can say your house has a 60% chance of being burgled in the next year is more informative than someone who says it *might* happen. Assuming you have reason to trust their judgement.)

          1. Except, again, this assumes you’ll have a sufficiently robust information base/testable prediction base to get a good feel. Those estimates I gave:

            1) are going to be basically useless to any actual future discussion of my predictive ability as they can almost certainly be gamed either way based on the phrasing I use; and
            2) even if I was 100% clear on what exactly constituted those attempts, attempted coups are such a rare course of action that the single data point is simply not useful.

            Now, Scott and others who engage in this are engaged in something interesting and worthwhile due to the nature of their jobs (which generally are pure punditry). I am unconvinced that the same holds true to people who write occasional pieces on specific topics, most of which do not lend themselves to individual predictions which are in any way useful.

          2. “‘Claiming that you know that there’s a sixty percent chance he’ll try again and a thirty-five percent chance it will inspire serious follow-up attempts by other parties isn’t being more helpful’

            Yes, it is.”

            Good luck. If I saw a historian assigning numerical probabilities in this way, my immediate inference would be that said historian is full of it. As Bret said above, the consensus within the discipline is that these sorts of statements are misleading and irresponsible BS, purporting to more precision than is actually possible. The contingencies and exceptions and historically unique circumstances of each case aren’t noise that needs to be filtered out to get at the signal, they -are- the signal.

          3. This is an interesting point that I had forgotten about. For those unfamiliar, the idea is that you’re looking at many predictions, usually by the same person. They can be utterly unrelated in subject matter — it doesn’t matter if coups are rare, or if you make only one coup-related prediction in your life. You make many predictions, with ranges for probability and end-date, and later grade them. If you make 10 60% predictions for 2021, then ideally 6 of them should have come true by 2022. Eventually you get an idea of how good a forecaster the person is, and also how good they are at judging their own knowledge and confidence. (If all of the 10 “60% likely” predictions come true, the forecaster is probably underconfident. The reverse situation is much more common…)

            What if someone does not habitually make predictions, and is prompted to make one only due to extraordinary circumstances, like living through a coup attempt? Then the idea is less useful, though in principle still usable: if everyone put numbers on their forecasts, you could later measure how reliable such ad hoc predictions by outraged experts are. Of course, without the practice and experience of making numerical predictions, their probabilities would be less likely to be well-calibrated.

            I’m not saying historians like Bret should produce such numbers, but the idea maybe merits more consideration than it seems to at first. But it really does only make sense as a practice, not as a one-off thing.

            A related idea is the 90% confidence interval, like “I’m 90% sure Roman urban literacy was within these numbers”. Though without a way to later measure the real number, or at least to measure the historian’s accuracy in making other 90% intervals, we would be back to false precision and scientism again.

          4. Aithiopika, so if you saw Nate Silver assign a 65% chance to Trump being elected President in 2024, you would ignore it, but if someone said he *might* be elected, you would think you had learned something? (If so, what would you have learned?)

            Or are you prepared to listen to people give probabilities to elections, but not to coups?

            In most years, there are more coups in the world than American presidential elections.

          5. “Aithiopika, so if you saw Nate Silver assign a 65% chance to Trump being elected President in 2024, you would ignore it, but if someone said he *might* be elected, you would think you had learned something? (If so, what would you have learned?)

            Or are you prepared to listen to people give probabilities to elections, but not to coups?

            In most years, there are more coups in the world than American presidential elections.”

            The Nate Silver thing seems like a non sequitur. I’m actually not really sure what you’re getting at, besides, it seems, that you’re implying that what Nate Silver does for elections is an apples-to-apples comparison to what you think a historian should do for coups (or if it isn’t apples to apples, it’s less clear why you think I would be compelled to reject Silver due to what I said about historians and coups).

            I went and searched for info on Silver’s methodology to see whether it throws any light on what you have in mind here. I read the following:

            https://fivethirtyeight.com/features/how-fivethirtyeights-2020-presidential-forecast-works-and-whats-different-because-of-covid-19/

            And though he’s got a lot more detail than you have (obviously, there being different standards for in-service election modeling than for off-the-cuff internet comments), there certainly seem to be enormous methodological differences between what Silver is doing and the impression that you’ve given of what you have in mind. Forget apples to apples, this doesn’t even seem to be apples to fruits. For instance, Silver’s primary inputs aren’t historical election data, and he doesn’t care about the historical sample size of American elections in the way that you seem to; the main model inputs are polling data about the same future election that he’s trying to predict. Historical data are used in quite limited ways to calibrate and nudge the polling data, and even there, primarily just from the last two elections, deliberately excluding the great majority of past elections (because, again, his method isn’t to use past elections to predict the next one).

            But maybe you can clarify?

            For my part, pending such clarification I’m not at all inclined to accept your implication that coup probabilities should be as or more quantifiable than presidential election probabilities because of greater sample size. The increase in the predictability of presidential elections in recent decades (to the extent that it exists) isn’t because we’ve added a handful to the sample size and are now in the forties rather than the thirties; it’s because a whole poli-sci establishment has grown up around using generally nonhistorical methodologies (overwhelmingly based around current polling) to predict them.

            Specifically, polling-based methodologies pretty obviously don’t overlap all that much (perhaps slightly) with what would be needed to predict coups.

            More generally, why would we expect data relevant to anticipating coups to be as quantifiable as data relevant to anticipating elections? Votes are literally designed to be quantified.

          6. If I say “This coin has a 50% chance of coming up heads of if I flip it right now” and then I flip it and it comes up heads, how good was my prediction, and does that change based on the percentage I give?

            Sure, if I’m in a position to do it over and over and over and amass a good-sized data set, then that’s something that could work, but single episodes are very resistant to statistical analysis for very obvious reasons…
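For what it’s worth, forecasting work usually scores a single probabilistic call with the Brier score, (p − outcome)², which makes the point above concrete; one flip cannot distinguish good judgment from luck:

```python
def brier(p, outcome):
    """Brier score for one prediction: (p - outcome)**2, lower is better.
    outcome is 1 if the event happened, 0 if it didn't."""
    return (p - outcome) ** 2

# A 50% call on a coin flip scores the same whichever way the coin
# lands, so a single flip says nothing about the forecaster.
print(brier(0.5, 1))  # 0.25
print(brier(0.5, 0))  # 0.25

# A confident 90% call that came true scores better (~0.01), but from
# one observation that could just as easily be luck.
print(brier(0.9, 1))
```

Averaged over many predictions the score becomes informative, which is exactly why one-off events resist this kind of evaluation.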

        4. The big issue for me is that refusing to put numbers on things doesn’t mean you’re not using numbers. Take what you said below: “this is a thing which has happened before and can happen again, but we lack sufficient information to have a good feel for likelihood.” Saying it “can” happen again means you think there’s a greater than 0% chance, and the fact that someone feels the need to point it out *at all* means that they think *something* should be accorded a higher chance of happening than whatever it was they thought before.

          Not using probabilities at all means that we think tyrants will behave exactly like other politicians. If you think tyrants are *less* likely to repeatedly grab power, you’d say that. If you think your statement adds literally no information, you wouldn’t say anything about tyrants. You make a statement about tyrants grabbing power because you believe that they are *more* likely to do certain things based on history. The fact that you’re not willing to say exactly how much more likely doesn’t mean you’re not still assigning a range of probabilities.

          1. The problem is that saying “the probability that, given A, B will happen is somewhere between 5% and 80%” often seems useless or likely to make the speaker a laughingstock… While in truth, it often is not.

            In many cases, the fact that there is a 5% chance of an outcome occurring is itself important. For instance, it may massively increase the prior probability of something the reader had deemed impossible, which then promotes the likelihood that B will happen given an additional piece of evidence C to a much higher level. Or it may be that the event itself is so dire that even a 5% chance of its occurrence would be a damned good argument for caution.

            I don’t think it would really improve things if we insisted that everyone try to give a confidence interval for their predictions about massively complicated systems. Especially since realistically such predictions would often give a range of probabilities so wide as to be uninformative, without making the experts’ opinions genuinely useless if they are themselves being interpreted rationally.

        5. Agreed completely. I’d go further and say that modern punditry fetishizes attaching numbers to things that are fundamentally unquantifiable. I think it started out as an admiration of disciplines that could provide such numerical predictions, but it has evolved into a crutch designed to make the speaker seem like they’re coming from a more rigorous foundation than they truly are.

  8. “Phenomenon A1 occurred before and it resulted in Result B, therefore as Phenomenon A2 occurs now, result B *may* happen. Tyrants in the past have made multiple attempts to seize power, therefore tyrants in the present may as well, therefore some concern over this possibility is warranted. The result is not a prediction but rather an acknowledgement of possibility;”

    This is *exactly* what I was thinking when I was reading Smith going on about ‘predictions’. “Hey, this has happened in situations like this, so be on the lookout.” And the only empirical update to that could come after much more history, likely centuries from now, or some influx of data (a new way of recovering records, a time machine…)

    1. Or, of course, it was a coincidence. That way superstition lies. (Literally.)

      The real fun of history is that you can’t control your variables.

      1. This is the advantage of adding numbers to predictions. If the soothsayer says it might rain tomorrow, you learn nothing about his reliability whether it rains or not. But if he gives probabilities then you can see, after the fact, whether he was right on 80% of the occasions when he was 80% confident, 90% of the occasions when he was 90% confident, and so on.

        Now you, and he, can tell how good the forecasts were. How can you improve, when you can’t tell how good you are in the first place?

        https://astralcodexten.substack.com/p/grading-my-2021-predictions

          1. Wikipedia lists eight attempted coups last year. They are a lot more common than American Presidential elections, so if you can rate Nate Silver on his performance the State Department could rate a coup predictor on his.

        1. Actually it doesn’t (as anyone who plays D&D knows): just because there’s a 5% chance of rolling a natural 20 doesn’t mean you’ll ever actually roll one. Probability is just that, probability.

          1. EDIT: Point being that someone can say “There’s a 20% chance of rain” his entire life, be right about the probability, and still never get rain.

          2. Arilou, if someone says “there is a 20% chance of rain tomorrow” for a year, let alone a lifetime, and it never rains, I think you can safely discount his weather predictions.

            He might be unlucky, but it is much more likely that he is wrong.
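A quick back-of-the-envelope check of why "much more likely wrong" is the right call here. Treating each day as independent (a simplifying assumption for illustration), a forecaster whose 20% daily rain probability were actually correct would almost never see a rainless year, let alone a rainless lifetime:

```python
# Probability of zero rainy days in a year if the true daily
# chance of rain really is 20% (days treated as independent).
p_no_rain_all_year = 0.8 ** 365
print(p_no_rain_all_year)  # on the order of 1e-36
```

So observing even one rainless year is overwhelming evidence against the stated 20%, not bad luck.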

        2. No, there is not an advantage to adding numbers here. That makes the dangerous assumption that your record is largely complete and correct. For all we know, history is littered with one-time attempts at tyrannical takeover that were never written down because they failed quickly and the would-be tyrant was promptly killed or exiled. Heck, many successful tyrannical takeovers might have been rewritten by court propagandists as a legitimate move by the (now) monarch.

          If you start adding numbers to what written history we have, you’re ignoring the fact that we don’t have all the history, and what happened in that history may have had a meaningful effect on what was and was not recorded.

          1. Plus, if you add numbers people can start bickering over the correct number, where what actually matters is that the possibility is larger than ‘negligible’.

          2. If you have literally no idea whether someone will launch a coup within the next year, the odds are 50-50. The probability is 0.5.

            Given that you can use numbers to quantify probability when you have literally no idea how likely something is, there is no way in which you can be “too ignorant” to use numbers.

            That does not mean there are an infinite number of alternate universes out there, in exactly half of which a coup comes to happen.

            All it means is that the person making the prediction doesn’t know whether the coup will come to happen. The predictor’s ignorance is precisely the thing the probability is measuring.

            If the probabilities he gives for the coup happening or not happening are very similar, that means he doesn’t know whether a coup will happen. If they are very different, that means he is very sure, one way or the other.

          3. The odds are unknown.

            One might as well say that if someone didn’t know about six-sided dice, the odds of rolling 5 or under on one are 50-50.

    2. Agreed. I think this is another situation of humans in general being terrible at thinking about risk, which may be a way of phrasing it that gets people in the right frame of mind to engage with a given statement properly.

      So you’d say something like ‘Phenomenon A1 occurred before and it resulted in Result B; therefore, as Phenomenon A2 occurs, there is a risk of Result B happening’.

      You’ll still get all of the usual ‘but how likely is it that result B happens’, even quite vociferously by people who have little tolerance for uncertainty, but at least it might get some people into the frame of mind for thinking about other contingencies and potential remedial actions/backup plans.

      Calling it a ‘prediction’ conjures up unwanted connotations of certainty, whereas ‘risk’ holds connotations of something that may or may not happen, but is worth thinking about.

      Would it be right to say that a good historian does not make predictions, they flag risks?

    3. It seems to me that the real value of the historical research is that the focus on details gives you a better sense of why that happened in that case, and what kinds of events make it reasonable to worry more or less about a possibility as well. But this is the result of adding more detail, rather than subtracting detail out, and so doesn’t lend itself to boiling down to some easy numbers.

  9. I noticed in the discussions around this that the Smith partisans made a lot of hay of “Marxist Historians”, treating them as a clear counterexample to your claim that historians emphasize contingency and avoid endorsing grand narratives.

    I may be wrong here, being an amateur, but that seems like a rather stale characterization? As I understand it, whatever the initial ambitions of the Marxist historians, they have in fact mostly ended up doing a lot of work to demonstrate contingency in practice, regularly contradicting Marx along the way, and that mostly they have held onto/retreated to an emphasis on economic production and class structure and their relationship. I don’t doubt there are hardliners still fighting for full dialectical determinism, but they must be a small minority, in history in particular.

    As an aside, the recurring phenomenon of outsiders constructing grand historical theories that are overly broad in claimed scope despite being based on a clearly narrow and biased pool of evidence, and which at best boil down to a simple and not particularly novel intuition, must be rather annoying to historians. To take a particular example from this post, it seems to me that when you translate “Elite Overproduction” from Turchinese, you are left with the claim that political crises arise when there are more people who feel entitled to power than there is power to go around, and perhaps the corollary that this entitlement is created by the very social and political system it threatens. A fairly banal observation that is nonetheless not even going to be universally applicable.

    1. The closest analogy to “elite overproduction” I know of is in studies of Central Eastern Europe in the 19th and early 20th century. Even there, the reasoning was EXTREMELY contingent (eg on the existence of much richer elites abroad, the knowledge of their standards of living and the gap between them, and ability to match those foreign elites’ standards of living by rent-seeking behavior). It was always phrased as “this is what happened in these specific places at this specific time”.

      For public policy purposes this is indeed a possibility you should keep in mind; you cannot and should not limit your knowledge to purely quantitative and scientific data, because there are phenomena on which those data cannot speak.

      1. Surely the modern educational system provides a pretty close analogy? We teach children basically from nursery school onwards to “go out and change the world”, and send large percentages of them to university. If elite overproduction theory is right, we should expect to see increasing levels of political instability over recent years.

        1. The point Dr Devereaux is making is that this “elite overproduction” theory as applied to the modern day is not based in any rigorous academic research, of whatever epistemology. I am pointing out the only analogous phenomenon that is backed up by good data and sources.

        2. Not necessarily.

          19th century Central Eastern European elites or what have you could get more power and wealth for themselves by squeezing the peasants harder and harder.

          The typical modern college graduate leaves university with a big pile of student debt, a landlord who can crank up his rent by 5-10% a year and have him evicted if he can’t pay, and no special legal title to or control over pretty much anything. He also grows up in an Internet that in many cases tells him engagement with politics is useless because all politicians are impossibly corrupt anyway.

          If he reacts to that situation by joining a radical organization that might, say, overthrow capitalism by force, the FBI jumps on it with both feet as soon as it looks even vaguely threatening, or the local police drown it in tear gas and targeted arrests as soon as it shows up on the street waving signs. Barring exceptional luck, he has no path to exceptional power, and his parents and family and peer group haven’t got it either and never have.

          In short, your typical college graduate is not an “elite.” He is simply a member of the top half or so of the “commoner” class, trained for his role by a society whose real elites have decided college is a requirement for half the jobs in society.

          1. > The typical modern college graduate leaves university with a big pile of student debt, a landlord who can crank up his rent by 5-10% a year and have him evicted if he can’t pay, and no special legal title to or control over pretty much anything.

            Uhm, yeah? Elite overproduction is when more people are groomed for elite positions than there are positions available. Pointing out that most graduates end up with decidedly un-elite lifestyles isn’t a refutation of the concept, or its applicability to the present.

          2. @Mary
            The meaningful definition of an “elite” requires that members of the elite have personal clout well beyond what is plausibly available to the median member of the population, unless that median person gets lucky.

            A crown prince lives in a palace with servants, not an apartment with rent that consistently outruns inflation. If he owes the bank a lot of money, that may be a problem for the bank as much as it is for him. He has the ear of the most powerful political figures in his nation.

            The crown prince may not be a ruler, but he sure has clout. He is an elite.

            “Being a college graduate” does not confer clout in America, and “the set of college graduates” cannot reasonably be considered to be America’s ‘elite’ in any normal sense of the word ‘elite.’ Which, to be fair, ties into GJ’s point, which deserves to be addressed.

            @GJ

            > Uhm, yeah? Elite overproduction is when more people are groomed for elite positions than there are positions available. Pointing out that most graduates end up with decidedly un-elite lifestyles isn’t a refutation…

            Thank you, that’s a good point. That does, however, clarify the question: Is college meaningfully a process of grooming people for elite positions? It served that function 100 years ago, perhaps. But times change. If we go back another century before that, just getting an education beyond the third or sixth grade level or so marked you as a member of the elite.

            https://nces.ed.gov/pubs93/93442.pdf
            (page 36)

            From 1900 to 1940, secondary school enrollment among teenagers 14 to 17 years old rose from about 10% to about 70%.

            https://www.thinkimpact.com/wp-content/uploads/high-school-statistics.jpg

            From 1890 to 1970, the proportion of the population that consisted of high school graduates rose from about 5% to about 80%.

            If rising levels of education are causing people in modern society to think that they are entitled to be privileged elites, and then causing them to grow resentful when their elite status does not materialize, the problem wouldn’t have started with modern youth, because colleges aren’t unique in the status of once having been elite academies for the privileged. High schools used to be much the same.

            If elite overproduction, exemplified by entitled kids with too much education hitting real life and being angry they don’t get to control the world and spending the rest of their lives lashing out accordingly and polarizing the political scene, is a major problem in the United States…

            …Well, logically, I’d expect that problem to have first emerged among people born some time around 1910 or 1920, maybe 1930 at the latest. I’d expect it to only get worse for the baby boomers.

            Honestly, I’d expect it to start actively tapering off among the late Gen-Xers and the millennials. Because if you were born after about 1970-75, then no matter what anyone told you, you never really knew a world where a college diploma made you a real member of the elite. You’d have had your whole life to get used to the idea that your college degree wasn’t going to make you part of the 1% or even the 10%, no matter what blather anyone said at your graduation speech. Every kid knows not to take a graduation speech too seriously these days.

    2. I’m personally of the opinion that his preoccupation with Marxist historians gives some lie to Smith’s claim that his beef with history as a discipline is apolitical.

  10. Non-historians who believe they know a great deal of history get both defensive and confrontational when a historian points out flaws in what they think they know, or points out that what they know has been superseded by later scholarship, including archaeology. I’ve been seeing an increase in this lately online, where non-historians enamored of all things ‘what if?’, alternate history, and counterfactuals are denouncing historians and history in general. We are in an age where everyone knows they are The Expert, but actual, practicing, credentialed and experienced expertise is belittled.

    1. My take is that it’s an expansion of Dunning-Kruger. The more knowledge there is freely available, the more people can educate themselves just enough to think they’re an expert but not enough to know how little they know.

      Freely available knowledge is an unparalleled good on the whole, but I figure this is one of the drawbacks.

      1. Speaking of the Dunning-Kruger effect, the actual finding and its implications are slightly more complicated than popular descriptions often imply. With statistical tools one can recover an overconfidence perception bias, but it is a bit tricky, because the Dunning-Kruger result (especially the rather unfortunate illustration they chose) looks similar to a simple lack of correlation arising from imperfect measurements.

        http://haines-lab.com/post/2021-01-10-modeling-classic-effects-dunning-kruger/
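        The linked post’s core point can be reproduced in a few lines: even if self-assessment were pure noise, completely uncorrelated with actual skill, binning people by measured performance yields the familiar “bottom quartile overestimates, top quartile underestimates” picture. A rough simulation (the distributions are invented purely for illustration):

```python
import random
import statistics

random.seed(1)
n = 10_000

# Percentile-scale "true skill", and a self-estimate that is PURE
# noise: completely uncorrelated with skill by construction.
skill = [random.uniform(0, 100) for _ in range(n)]
self_estimate = [random.uniform(0, 100) for _ in range(n)]

people = sorted(zip(skill, self_estimate))  # sort by measured skill
quartiles = [people[i * n // 4:(i + 1) * n // 4] for i in range(4)]

for i, q in enumerate(quartiles, start=1):
    actual = statistics.mean(s for s, _ in q)
    perceived = statistics.mean(e for _, e in q)
    print(f"quartile {i}: actual ~{actual:4.1f}, self-estimate ~{perceived:4.1f}")
```

        Every quartile’s self-estimate hovers near 50, so the bottom quartile “overestimates” and the top “underestimates” despite zero real perception bias built into the data, which is why separating a genuine overconfidence effect from this artifact takes the extra statistical care described above.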

  11. I really appreciate your commitment to and direct endorsement of epistemic humility here, that’s one of the things I appreciate most about what you’re doing with this blog.

    There’s a steelmanning of Smith’s argument as you’ve presented it (I’ve not read it, at your un-recommendation) which I’ve experienced regularly when reading historical arguments directed at popular audiences (i.e. thinkpieces).

    As a non-expert I have a tool I can use to evaluate empirical model-based arguments: I can watch what happens, and make a rough Bayesian update about the offered model. With non-empirical arguments I cannot do this, and I don’t feel like I have a good tool. I can agree it’s not reasonable, as Smith demands, that the discipline of history remold itself to fit the tool I have. I do think it is reasonable to ask that history seek to provide non-experts with a reasonably general purpose tool for evaluating arguments made from the results of the historical method.

    Referring to sources is great, but the volume of historical arguments I come across is such that I can’t as a layperson buy and read an introductory review of the evidence for each – if I did, I would be a historian and not a layperson. What do you recommend I do as a non-expert to judge the strength and applicability of historical arguments, if not try to generate and test predictions from them?
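    For what it’s worth, the “rough Bayesian update” you describe can be written down explicitly. A minimal sketch, with two entirely hypothetical commentators A and B who assign different probabilities to a series of yes/no events, updating one’s trust in A as the events resolve:

```python
def update(prior_a: float, p_a: float, p_b: float, happened: bool) -> float:
    """Posterior probability that commentator A is the better guide,
    after observing one event, starting from prior_a."""
    like_a = p_a if happened else 1 - p_a
    like_b = p_b if happened else 1 - p_b
    return (prior_a * like_a) / (prior_a * like_a + (1 - prior_a) * like_b)

# Invented data: (A's forecast, B's forecast, what actually happened).
events = [(0.9, 0.5, True), (0.8, 0.5, True), (0.2, 0.5, False)]

trust_a = 0.5  # start agnostic between the two
for p_a, p_b, happened in events:
    trust_a = update(trust_a, p_a, p_b, happened)
    print(f"P(A is the better guide) = {trust_a:.2f}")
```

    The catch is that this machinery needs forecasts stated as probabilities of resolvable events; an argument of the form “this exemplar is worth thinking with” simply doesn’t plug into it, which is the mismatch being described here.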

    1. Maybe you could do what you would suggest that non-experts in your own fields of expertise (lay people, whomever) do: evaluate whether the content is trustworthy and, at the very least, presented in the good faith of trying to do right, while remaining willing to change course when shown to be down the wrong track?

    2. > Referring to sources is great, but the volume of historical arguments I come across is such that I can’t as a layperson buy and read an introductory review of the evidence for each

      And of course, even if you did, you’d still have to trust that the introductory reviews themselves presented the evidence accurately.

    3. Here’s where that other source of knowledge, authority, comes in. We laypeople have to evaluate historical arguments primarily by how much we trust the historian making them. Authority is built by 1) Having good things to say about topics we already understand or partially understand; 2) Cultivating a relationship over time; 3) Demonstrating virtues like courtesy, wit, zeal for the truth, and thoroughness; and to some shrinking extent when the other forms are unavailable 4) Having institutional credentials.

      You can tell that Bret understands this.

    4. History (and a good many other practices) is closer to the judgements we frequently make in real life than to the controlled experiment/prediction seen as ideal science. If you saw one of your friends acting in a way that suggested to you that they are depressed (or suffering a financial hardship, or in love, or…) you don’t say to yourself that you will wait until you have a statistically valid sample of similarly-behaving friends before you act. Nor do you reach for the latest research summary on friend behaviour. Yet you are often confident enough in your judgements to do something. Case studies are useful tools (even economics uses them), and your experience of people – direct and derived – is a series of case studies.

      Bret’s piece on tyrants might be summarised as “The Greek experience suggests that politicians who are willing to break laws and well-established norms in the pursuit of power will likely persist in the absence of strong action against them and their enablers.” That directs attention to the rather conspicuous lack of action against Trump and his enablers, and the continuing infatuation of significant parts of the US political nation with Trumpism. Do you really need 50 tries at tyranny before you recognise the dangers?

      1. “History (and a good many other practices) are closer to the judgements we frequently make in real life than to the controlled experiment/prediction seen as ideal science.”

        I am less convinced of this argument as a defense. A good many other practices were previously like that too, including medicine. “I saw a patient with symptoms I confidently recognized as foobar, and I felt confident making a diagnosis and prescribing a treatment” also describes perfectly well the behavior of doctors in the era before modern medicine, when they were often quite useless or even harmful to the wellbeing of their patients (“I feel confident ignoring Semmelweis’ recommendations”). The inclusion of quantitative methods and quantitative arguments was an improvement there.

        Of course qualitative characterizations should not be dismissed as they are useful, but “close to how people make decisions in real life” isn’t a stellar endorsement.

        1. Dr. Devereaux himself has repeatedly brought up cases where quantitative or measurable methods have made it possible to decisively settle historical questions. This usually involves archaeological evidence. And he seems to come down firmly on the side of quantifiable facts wherever those facts can be found.

          The trouble is that human societies are extremely complicated. Our datasets are very sparse compared to the size of the N-dimensional “space” of possible scenarios and societies that could conceivably exist. We have no way of constructing controlled experiments to narrow down the possibility space. And to make matters worse, much of our historical data is intentionally or accidentally “pruned” of most of its value, because the information is simply missing, or was destroyed deliberately, or was never recorded because the record-keepers had no reason to write the relevant things down.

          As such, for most questions, it is simply unrealistic for us to try to form mental models with enough predictive power to ask “what is the likelihood of a coup happening under these specific historical circumstances” and get a useful answer. We might be able to predict the probability of coups in a fully general sense, but untangling the cause and effect behind the “risk factors” that make coups more likely is an extremely difficult task. And mathematical modeling is just not that helpful for it, because of multiple interlocking, recursive layers of problems with quantifying the relevant information.

          We don’t have Hari Seldon’s psychohistory, and we shouldn’t try to work towards it at the expense of a realistic understanding of what historical studies can or cannot do. And it’s actively counterproductive to pretend that we ought to behave as if we did or should.

          1. >We don’t have Hari Seldon’s psychohistory

            Good, as I think nobody is seriously arguing for such a thing?

            Psychohistory does not become more possible after discarding quantitative data either. And it is possible to put on a Hari Seldon hat and give wise and sagely advice while offering “maybes” and “it might not be”s as a preamble.

            >And it’s actively counterproductive to pretend that we ought to behave as if we did or should.

            Is it more or less counterproductive than engaging in punditry and pretending that adding a “maybe” is sufficient, as if all uncertainty were equally uncertain? (If everything uncertain were equally uncertain, people truly would have no better option than to take any and all actions at random.)

            On the note of what historical studies or data can or can’t do: as I wrote in one of my earlier comments, I especially disliked the blog author bringing up two clearly flawed statistical illustrations as evidence of unavoidable limitations of “data” and the methods relying on it.

            Now, I for one don’t agree with all of Smith’s recommendations (in most cases I can think of, difference-in-differences becomes possible only with modern-style data collection; it is more applicable to the two other examples he cited, related to industrialization and education). And good historical methodology is good at many things when correctly applied. But even with a lack of data, it is not impossible to make other kinds of quantitative arguments:

            One can draw up a game-theoretic model of rational behavior in different situations (or of a rational individual with imperfect information, or under different assumptions of rationality). One can think about the generative process that went into creating a particular piece of data (and then make, again, quantitative guesses about how much data is missing, and how different amounts of missingness and different kinds of missing-data processes would affect any conclusions one draws). If one cites numbers or makes an illustration, one can also put the uncertainty visibly on the illustration.

            And if it then finally proves impossible to measure the important quantitative aspects, one can still try to reason about them while granting that they are unknown: Is phenomenon A similar to past situation B? In what respects? In what respects not? How many different respects are there even to consider? Why is past situation B more relevant here than some other situation C? What are the relevant hypothesized causal relationships? What would one observe if a hypothesized causal relationship were true or untrue? How strong would it need to be to be meaningful? (Such thinking is necessary before planning an experiment or an observational data collection; I think this kind of thinking is already what many historians do, but getting formal about it could be helpful.)

            One can also do punditry without doing any of that, or anything resembling it. It is what many people do: historians, journalists, economists, and members of many other professions alike. (Punditry doesn’t require any of the above, and there are larger audiences for it than for anything described above; it makes sense that many more people do it, Smith included.) But I think Smith is on to something when arguing that one should recognize such punditry has very limited value. Punditry that draws directly from a piece of rigorous research (quantitative methods or not) may be more valuable.
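            To illustrate the first suggestion above, a game-theoretic model can be as small as one expected-utility comparison. A sketch with entirely invented payoffs: a would-be tyrant attempts a coup only if the gamble beats the status quo, which is one way to make explicit why the strength of the response to a failed attempt matters.

```python
def attempts_coup(p_success: float, payoff_success: float,
                  payoff_failure: float, payoff_status_quo: float) -> bool:
    """Attempt the coup iff its expected payoff beats staying put.
    All payoff numbers here are invented illustrations, not estimates."""
    expected = p_success * payoff_success + (1 - p_success) * payoff_failure
    return expected > payoff_status_quo

# Weak deterrence: failure is survivable, so even long odds tempt him.
print(attempts_coup(0.3, payoff_success=100, payoff_failure=-10,
                    payoff_status_quo=10))   # True

# Strong deterrence: failure means exile or death; the gamble no longer pays.
print(attempts_coup(0.3, payoff_success=100, payoff_failure=-200,
                    payoff_status_quo=10))   # False
```

            Nothing here predicts any particular coup; the model’s use is to make the assumed causal relationship (penalties for failure deter attempts) explicit enough to argue about.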

  12. Smith always reads his current heroes and villains with grossly unbalanced levels of charity and diligence. So you get demands for statistical analyses of city-states we know about mainly from pottery shards, while Turchin and other favored hacks get uncritical essays waxing poetic on their latest Theory of Everything.

  13. My historical education ended at undergraduate level, so I’m not really qualified to dispute with our host, but this account of the goals and methods of history seems incomplete. My understanding is that many schools of history, including but not limited to the Marxists who have already been mentioned in these comments, hoped that studying history would illuminate general principles of the functioning of human societies that could also inform our understanding of the present and even the future.

    My impression is that the discipline, at least in the academy, has been moving away from this approach in the last few decades. This probably happened for good reason – people kept poking holes in the sweeping general theories – but I do think that it has made it more difficult to answer the question ‘Why study history?’, especially when trying to justify it as useful rather than merely interesting.

    1. I think the question here is, indeed, epistemic. I believe that quite a few historians think that history gives you insight into how societies work. For example, our host, Dr. Devereaux, just wrote a very lengthy discussion of how the structural factors of logistics constrained the movement of armies. On the other hand, the same historian now explicitly denies the possibility of any “theory” in the social-science sense.

      This is not a contradiction. History will give you experience and wisdom about how people and systems have behaved in other situations, although you can’t draw straight mathematical deductions about the future. So it may make you a wiser person, but in a manner that cannot really be empirically quantified.

    2. The complete and utter failure of Marxist grand theories is in fact a pretty big part of why historians typically avoid creating such grand theories today.

      Fun fact: Soviet historians and theoreticians agonized for decades over one simple question: why Russia? According to Marx’s theories, Russia did not have the level of industrial development needed for communism to take hold there. It should’ve started in Germany or France and spread to Russia eventually. But that didn’t happen, and it bothered the hell out of them.

    3. > This probably happened for good reason – people kept poking holes in the sweeping general theories

      That’s no doubt part of it, but there are also institutional and structural factors that favour “splitters” over “lumpers”. In particular, the publish-or-perish approach to academic hiring, plus the short duration of most (especially early-career) academic positions, means that scholars are incentivised to produce a large number of limited, itty-bitty studies rather than a smaller number of more wide-ranging works. A guy who produces two journal articles a year, each analysing Hadrianic-era pottery fragments from a particular Upper Egyptian village, is going to look more impressive to a hiring committee than a guy who locks himself in the library and emerges fifteen years later with a great magnum opus on the factors behind the rise and fall of empires, if the latter guy even manages to keep his job that long.

      1. Yes.

        On the other hand, people might have retained more respect for the “lock yourself in the library for fifteen years” guys if it weren’t for the fact that the brilliant sweeping theories on the rise and fall of empires kept turning out to be wrong. At which point your guy has just wasted a third of his adult life barking up the wrong tree, and you’ve wasted a million dollars or so paying his salary for fifteen years.

        The power of human beings to construct accurate models of reality by sitting in an armchair with minimal discourse and engagement with others is… limited. Sometimes very limited.

  14. As for the coup/tyrant example, I would say the important takeaway is not “what is the chance he’ll try for power again?” but “what can we do to make the power not grabbable?”

    Relatedly, in human affairs, often the point of a prediction is to make it false: to foresee a bad future, and *avoid it*. This makes predictive accuracy a rather problematic criterion in such cases, to say the least.

    1. Personally, I bristle a lot when people attempt to use statistics as a swiss army knife for any kind of data – it is akin to using a master’s chisels to pry open paint cans.

      The problem is that the underlying foundation is a set of events which are in concept repeatable – the flipping of a coin, the rolling of a die, the emission of a particle. What goes unsaid is that we assume many different coins, dice, or particles will all behave the same way under the same experiment – allowing us to apply various kinds of analysis to the whole.

      It is the duty of the experimenter to argue why it is reasonable to accept this assumption – and as disciplines become less and less concrete, more and more care is necessary to avoid introducing errors.

      1. Yes, regarding the use of stats to prove whatever one thinks should be proven, or wants to be proven, see for instance political polling: a current version of reading entrails, as the last decades’ electoral seasons have shown. (Though perhaps there are times when the Pythia at Delphi and the augurs — and then the saints — were better at it than the pollsters and political ‘scientists’, who are increasingly desperate to keep their lucrative career prospects alive.)

        1. The recent issues with polling are:
          - getting the right sample: lots of people don’t answer polls for various reasons, and figuring out who they are and what they think has been tricky. This is a known problem and probably fixable.
          - issue polling is generally a mess.

          For poll aggregation to predict elections, the first issue is problematic, but aggregation still works pretty well for what it does; polling misses happen, but polls overall do a good job of describing elections. For issue polling, polls can act as additional information, but other tools are available.

          The issue is treating polls as magic (“They missed once!!! They are horrible!!!”) and expecting them to do things other than act as a useful tool with a specific set of uses and limits.

          1. Yeah, but there are people who reject the very idea of polling. “How can 3000 people tell us anything about the country?” Most often, of course, when the poll says something they don’t like. It’s not “the poll was done badly” but “polling makes no sense”, a rejection of random sampling.

            Given his ‘entrails’, the person you were responding to may be one of them.
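            For what it’s worth, the “how can 3000 people tell us anything” objection runs into straightforward sampling arithmetic: for a simple random sample, the margin of error depends on the sample size, not the population size, which is exactly what rejecting random sampling refuses to engage with. A sketch of the standard textbook formula:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n. Note that the size of
    the underlying population never appears in the formula."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 3000):
    print(f"n = {n:5d}: about ±{margin_of_error(n):.1%}")
```

            A well-drawn sample of 3,000 gives roughly a ±1.8-point margin whether the country has ten million people or a billion; the genuinely hard part, as noted above, is getting a sample that is actually random.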

    2. This reminds me of the reductiveness of the “Y2K alarmism was completely false because it passed and nothing of note happened.” (straw-man?) argument in completely failing to appreciate the amount of effort programmers worldwide had exerted in order to minimize Y2K’s effects. It wasn’t a nothingburger masquerading as a crisis; it was a crisis-in-the-making that was averted in time.

  15. Noah Smith’s general quality of work is such that one is tempted to advise the author against feeding trolls…

  16. My question is why are you trifling with this guy? I really appreciate your writing and I hope it doesn’t get in your way.

  17. So. This is all interesting, but unfortunately my contribution is probably going to respond much less directly to either side and instead point something out:
    Namely, the scarcity of data on pandemics, or ‘plagues’. And how it’s so bad we literally turned to an MMO plague for data: I’m talking about the Corrupted Blood Incident.
    This, to me, speaks of a certain desperation/boredom in the sciences in general, in that anything that happens to cross a field will inevitably draw much attention and attempts to do something USEFUL for that field with the new event, even when it’s perhaps only tangentially related.
    This is ah, a thought reinforced by this:
    https://xkcd.com/2655

    And one of these days I might attempt to throw a few such questions at Bret, if he seems sufficiently bored, which he arguably might be.

    1. We know quite a bit historically about pandemics and even the Black Death, particularly now, from historical scholarship and the contributions of archaeological work. Two excellent sources as to what we know and how we know it are Kyle Harper’s The Fate of Rome: Climate, Disease, and the End of an Empire (2017) and his Plagues upon the Earth: Disease and the Course of Human History (2021), both Princeton University Press.

      Funny thing about epidemics and pandemics: the primary tools for confronting and handling them have always been known (isolation, masking, good nursing) and have always been refused by the powers that pee on us all, for the same reasons as our cultures’ refusals now: the economy matters more than dead bodies (unless the dead bodies are mine and those of people I care about).

        1. Where it gets more complicated is that the economy isn’t just a matter of imaginary money moving around, it’s also what people eat and drink. There are and can be arguments about when and how you make tradeoffs, but a recession kills just as surely as a pandemic. (and of course, they are related, starvation makes people more vulnerable to disease, and disease as you pointed out interrupts the economy)

        2. There are other factors to consider.

          For example: The average city has something like 5 days of food in it. Without imports of food, the city starves. That’s one way sieges worked, after all. If you shut down the economy, in practical terms it means shutting down those imports of food. Even if you allow truckers to run, shutting down gas stations, oil refineries, mechanics, and the like would drastically reduce the amount of food we can bring into cities. (If you allow truck stops, diners, oil refineries, mechanics, and the like to work, it makes your shutdown FAR less effective.)

          Modern cities are even more fragile, though. We rely heavily on infrastructure that requires constant maintenance and upkeep. If a powerplant–or even a critical substation–goes offline for a significant amount of time it puts hospitals in a very dangerous situation, particularly when we needed cryogenic temperatures to store and transport vaccines. Water treatment facilities require constant attention. Hazardous materials transportation, storage, and use requires constant attention–and modern societies require tremendous amounts of hazardous materials to function. If all this isn’t continuously manned, people die. In some cases, a large number. (I know of a plant that uses vinyl chloride that was in a small town because if it ever blew up it would wipe out the town, but at least it didn’t blow up anything important. And yes, that’s what the manager said to me.)

          We probably could mitigate all these issues, but it would take a lot more time, effort, and planning–as well as willingness to endure discomfort–than we had at the time.

          It’s not a matter of a bunch of greedy corporate executives callously weighing the value of human lives against a 2% increase in margin this quarter. It’s that our society is based on infrastructure far more fragile than people realize. You simply CANNOT slam the brakes on an economy like the USA has without dire consequences.

          To be clear: I’m not saying “We should have just been business as usual! It’s just the flu!” (I know how many people the flu kills, for one thing.) Rather, my point is, the pandemic presented us with a situation where there were no good options. Bad things were necessarily going to happen; the goal was to make it as least-bad as possible. It’s the same grim calculus that an army commander has to make, or someone facing a potential natural disaster. People are going to die, and there’s nothing you can do to stop it. How do you minimize the damage? Add to it the fact that if you DO take drastic action, and it succeeds, it looks like you didn’t do anything at all, making you look like the villain.

          1. While what you say is strictly true, it should be recognized that in the context of the United States, the discourse on how tightly to lock the economy down was very much dominated by debate between “we should, to some negotiable extent, lock things down” and “it’s just the flu, suck it up, nobody important’s gonna die, lockdowns are just proof that liberals want to tyrannize everyone to death.”

            There’s a valid calculation to be made about which institutions and organizations should open, and in what capacity, during a time of worldwide pandemic when public health measures are still of limited effect.

            The idea of making this calculation was pretty much categorically rejected by a lot of people, at least in the United States, in favor of wadding it up, throwing it out, and holding that motorcycle rally.

            (See Sturgis, 2020 and 2021)

          2. While what you say is strictly true, it should be recognized that in the context of the United States, the discourse on how tightly to lock the economy down was very much dominated by debate between “we should, to some negotiable extent, lock things down” and “it’s just the flu, suck it up, nobody important’s gonna die, lockdowns are just proof that liberals want to tyrannize everyone to death.”

            Other countries with less/no appreciable anti-lockdown wing didn’t generally end up adopting more moderate or successful lockdown policies, so blaming this on the mean old lockdown sceptics won’t work.

        1. During the period that people were actually doing strict lockdowns, yes. They weren’t the only way to control covid, but they were certainly effective in e.g. Australia and New Zealand.

        2. I think you can get a pretty good example if you compare the Covid-charts on Our World In Data between Finland, Norway and Sweden. Three neighbouring countries with quite similar societies and different lockdown policies.

          Of course actually stopping Covid would have required really severe and extended lockdown, but it was possible to delay it until your country had reached practical vaccination levels to reduce damages.

          1. Sweden is not a success story. By June 2020 it had a cumulative death rate not far below what Norway or Iceland had reached by Sep 2022.

            Sweden’s about in the middle in terms of deaths by Covid, whereas if lockdowns were so effective, we should expect it to be at or near the top.

            “Of course actually stopping Covid would have required really severe and extended lockdown, but it was possible to delay it until your country had reached practical vaccination levels to reduce damages.”

            Anti-Covid policies caused far more damage in most western countries than Covid itself, e.g.:

            One study found that children born during the pandemic had a 22 point drop in their average cognitive score (similar to IQ). From an average of 100 for children born before COVID-19, the scores dropped to an average of 78 for those babies born during the pandemic. The findings stem from a longer-term study in which researchers at Brown University compared the verbal, motor and overall cognitive skills of infants born in 2020 and 2021 with those born from 2011 to 2019. Males and children with mothers with lower educational attainment, used as a proxy for socioeconomic status (SES), suffered greater losses. The researchers postulated that the environmental changes, especially less parental availability, contributed to the decline…

            In another trial from the Babylab at Oxford Brookes University in England, 600 children, ages 6 to 36 months, were followed online to monitor their vocabulary and cognitive development during COVID-19, from spring to the winter of 2020. They found that children who continued to attend high-quality early childhood education centers had enhanced development, compared to those children quarantined at home. The authors said larger benefits were noted for children from lower socioeconomic backgrounds. https://centerforhealthjournalism.org/2021/12/09/covid-causing-developmental-delays-kids

            Or:

            The backlog in hospital treatment continues to grow with nearly one in eight people now waiting for operations or other types of care in England.

            The newly-released NHS England data shows there were 6.84 million people on the waiting list at the end of July.

            It is a record number – before the pandemic there were 4.2 million waiting for treatment.

            There were slight improvements in emergency care with ambulance and A&E waits decreasing.

            But both services are still a long way from meeting their targets though.

            Close to three in 10 people waited longer than four hours in A&E in August, while ambulance crews continued to struggle to respond to 999 calls within their target times. https://www.bbc.co.uk/news/health-62832997

          2. “Sweden’s about in the middle in terms of deaths by Covid, whereas if lockdowns were so effective, we should expect it to be at or near the top.”

            The few countries that did effective lockdowns (Australia, New Zealand), or other consistent covid mitigations (Japan, Korea, Taiwan), are near the bottom.

            And through the summer of 2020, Sweden was in fact near the top of deaths, surpassed only by countries like Italy that had their nursing homes hit by covid hard and early. It becomes more middling as other European countries (not to mention the USA) get more feckless about their mitigations. And just calling it “about in the middle” obscures the huge numerical differences; even by late 2021 (before the omicron surprise) it had *seven times* the death rate of neighboring Finland and Norway. Twice the death rate of Canada and Israel, for some countries outside the old SARS-1 region. Ten times the death rate of Japan.

            *160 times* the death rate of New Zealand.

          3. “It becomes more middling as other European countries (not to mention the USA) get more feckless about their mitigations.”

            Lockdowns can’t go on indefinitely. If the death rate catches up after the lockdowns finish, then locking down isn’t a viable strategy to prevent deaths.

            I note, too, that you haven’t even attempted to refute the point about negative effects caused by lockdowns. At least the people dying of cancer because their screenings were postponed due to lockdown can take comfort in the fact that they aren’t dying of Covid, I suppose.

          4. “Lockdowns can’t go on indefinitely. If the death rate catches up after the lockdowns finish, then locking down isn’t a viable strategy to prevent deaths.”

            Lockdowns don’t go on indefinitely, they go until community transmission stops. If you do them right, like Australia, New Zealand, and China. And their death rates have *not* caught up; per the link I’ve already shared, they’re still among the lowest rich countries in total deaths per capita.

            The “lockdowns” in the US and Europe were incompetent, letting up as soon as cases went down a bit. It’s like stopping your antibiotics as soon as you start feeling better, rather than finishing the course.

            Straight-up lockdowns weren’t the only way to control covid; Japan, Taiwan, Korea etc. used other means.

            One way or another, something like 80% of US covid deaths could have been prevented, if not for bad policy and outright lies.
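The disagreement about ending mitigations early can be illustrated with a toy SIR model (a minimal sketch; every number here is invented for illustration, not an epidemiological estimate). Lifting a transmission-reducing restriction while infections still circulate lets the epidemic rebound, while holding it until transmission has effectively stopped does not:

```python
# Toy discrete-time SIR model. `restriction` scales the transmission rate
# (beta) until `lift_day`; after that, transmission returns to normal.
# All parameters are made up purely to illustrate the rebound dynamic.
def run_sir(lift_day, days=300, n=1_000_000, beta=0.3, gamma=0.1, restriction=0.25):
    s, i, r = n - 100.0, 100.0, 0.0
    for day in range(days):
        b = beta * restriction if day < lift_day else beta
        new_inf = b * s * i / n   # new infections this day
        new_rec = gamma * i       # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return i + r  # everyone ever infected by the end of the run

early = run_sir(lift_day=60)   # lift while community transmission persists
late = run_sir(lift_day=250)   # hold until infections have essentially died out
print(f"lift early: {early:,.0f} ever infected; lift late: {late:,.0f}")
```

With these made-up numbers the early lift produces a nearly full-sized epidemic after the restriction ends, while the late lift leaves only a tiny outbreak; the point is only the qualitative shape, not the specific values.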

          5. “Lockdowns don’t go on indefinitely, they go until community transmission stops. If you do them right, like Australia, New Zealand, and China. And their death rates have *not* caught up; per the link I’ve already shared, they’re still among the lowest rich countries in total deaths per capita.”

            Australia and New Zealand are islands in the middle of the ocean, so it’s easier for them to stop diseases at the border. China has been lying about the whole Covid thing from the beginning to make the CCP look good, so I wouldn’t trust their statistics.

            And I note that you *still* haven’t said anything about second-order effects of lockdowns. Without considering those, there’s really no way of assessing lockdowns’ effectiveness.

  18. I have to admit, I feel similar sentiments to George Thomas Talbot. This is the first article I’ve read on the blog from which I don’t feel I particularly learned anything. I’ll freely admit that most of the notions I had prior to reading this were inchoate, and if I gave it time and thought and sat down to try to articulate this I wouldn’t be nearly as eloquent or precise, but this feels like watching a GM smash a patzer who has no idea how outclassed he is: mildly entertaining, but ultimately non-instructional.

    I don’t intend this to be personal criticism. I’ll read anything you put to ink, but I’m quite sure you could have found something more productive to make than poking holes in the argument of a man who can’t even use one of the foundational terms of his own discipline correctly. Like, playing with Ollie and Percy.

  19. As a long-time ACOUP reader and an occasional reader of Noah Smith, I am kinda disappointed with how both sides of this have been argued, but can see strong steelmans for each side. I don’t have time to do a good write-up, but my summary would be: Noah is (mostly) attacking popularization of history based on interpretations that are not what the authors intended or ever particularly good-faith readings, *but* are reflective of how those articles are interpreted by (a large segment of) their nominal target audience.

  20. Noah Smith was great before he got his doctorate. Now he’s a troll. He had a physics background before he went into Econ. He’s got that disease physicists have where they think they are the masters of all fields of study.

  21. The original twitter argument is… a twitter argument. I’ve seen these kinds of things a lot over the past few years (I actually blocked twitter on my main computer because of how much time it was eating up per useful result, partly because of these sorts of arguments). It’s pretty clearly for attention or to stir up drama or such; it worked to the point of producing a blog post and my comment, and I’ll stop there.

    Instead, time to rewrite the “how scientific/mathy is it?” comment from way back in ye olden days on this blog on a post about how history is different from political science/is it a science. If you want to divide fields in this way, seems you get:

    -Pure logic: math, some philosophy. Uses proofs mainly. Computer science is an applied version.

    -Controlled experiment: Physics, chemistry, lab biology. Uses controlled experiments, can get very accurate mathematical models, make very exact predictions. Applied is most engineering fields.

    -Observational: Geology, meteorology, astronomy, ecology and other large scale biological fields. You can’t directly do controlled experiments because what you are studying is just too darn big, but you can use results from the controlled experiment fields to get good accuracy. Not really an applied since we aren’t building these things on purpose, mining geology is probably the closest.

    -Statistical: some economics, psychology, epidemiology and some other public health fields, some political science, controlled experiments in a lot of medicine. Data isn’t as repeatable or exact, so concerns about how comparable or reliable it is have to be addressed, and mathematical models are more guides than anything exact. As a result, lots of techniques to deal with confounding variables, get statistics right, etc. Applied would be some finance or marketing.

    -Non-math: humanities fields in general, some political science, area studies, some philosophy. Most fields started this way, where “person X thinks this” is used as an argument; a lot of history goes here. Applied might be something like doing diplomacy, at least the stereotype of figuring out how someone thinks and using that information.

    As in the actual blog post, exact numbers are great when they can be used, but in a lot of cases can’t be used, so people who want information go to other techniques, or less exact numbers.

    As for alternate history, the reluctance can very much be annoying, though I have heard the reasons and see where they come from. What’s useful/fun about them is less “here’s an exact prediction of what would happen”, which obviously can’t be done, and more using alternate history to tell a fun story plus describe some actual history that a person may not know about before. As an example applied to this blog, the story of Carthage winning a Punic war would give a chance to describe what actually happened in the wars in order to pick a plausible way they go differently, plus describe how diplomacy, armies, etc. were done, since these details are needed to explain how they could have gone differently.

    1. Humans like counterfactuals. Philosophers like Judea Pearl or Dan Dennett might say the ability to do counterfactuals is part of what makes us human.

      As a software engineer, I’d say it kind of partakes of math, science, engineering, and art.

      Math in the logical thinking, occasionally involving actual proofs. (If we did more of them, our programs might be more solid.) Math in the CS background if you took computer theory.

      Engineering in building stuff that works, that’s robust under stress… art in how you build it.

      Science in figuring out what some code — often your own! — actually does, vs. what you intended it to do or it’s documented (if it is) as doing…

    1. Any sort of human based thing (economics, psychology, history, politics) seems to attract lots of weird pop-science, clever against the grain types, and economics in particular attracts a lot of pundits (who often didn’t study the subject, though some people in the field do similar) who have this attitude.

      (And I, who have studied the stereotypical “I know better than everyone” fields of engineering and economics, have no such problems. No ego, no need to comment, no siree, none at all…:) )

      1. Engineering is full of these types, but at least engineers are largely paid handsomely to do things within their area of competence. This reduces their tendency to chase down punditry fame and fortune, although it doesn’t prevent us from occasionally saying dumb things on Twitter.

  22. I’ve read Smith’s post, now. I would argue that he does make some good points, but writes them in inflammatory language; I suspect that, if he were to define (as you do) his use of words such as “empirically” and “prediction” you would find that you agree. I’m not going to use those words in this, because I don’t know how he defines them, and in my opinion he comes across as rather snide, but I agree that (paraphrased) “tyrants try for power multiple times” is understood in context as saying it is likely that Trump will make other attempts at gaining/retaining power. Is that a prediction? Maybe. It depends on how we define predictions; do we define a prediction as a hypothesis, as in “this is what *will* happen,” or do we define predictions as “this is *what is likely* to happen”? My suspicion (I cannot be sure) is that you are operating under the former definition while Smith is operating under the latter.

    I don’t use Twitter, as a rule. The constraint of so few characters makes it easy to be nasty to others when a longer, politer response would be better if you could make it, and that atmosphere further poisons what could have been an academic question of “what is a prediction, how are they backed, and should we be making them in these contexts?” On here I can use many more characters, and thus phrasing is more fluid; on Twitter, often “might” and “maybe” become “will”, because (in my opinion) it is more difficult to phrase uncertainty in so little text. I also agree that Smith is in large part making a bad faith reading of your work, but I think the context of a Twitter argument makes that easier for him to do, and he seems to argue the reason for the aforementioned statement on the nature of tyranny is due to attempting to use what was as influence on what is, and that Smith believes that requires more evidence (of what kind, he does not say that I parsed).

    Anyway that’s my opinion, not to say that I was asked or have extraordinary information. Hope y’all have a good day.

  23. I am not in a position to write an essay-length reply, but here is a comment on one point I know something about.

    “In the case of this chart, what Max Roser has succeeded in doing is not charting global deaths in conflict, but rather in charting the rate at which evidence for battles is preserved over time and the reliability of the estimates of their casualties.”

    This is a very good observation. Recognizing presence of selection bias and a great deal of other biases and issues and then accounting or mitigating them (or hopefully avoiding them already in the design) is the bread-and-butter of statistical work of making causal or correlational statements. In my eyes, the presented graph is a lesson that any analysis and presentation of historical data should be made with a good understanding of the methods historians and archeologists use. The graph would be better if the uncertainty bounds were drawn accordingly.
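The preservation-bias mechanism is easy to demonstrate with a small simulation (entirely invented numbers, sketched only to show the effect): give every era the same true distribution of battle deaths, but let older battles be less likely to leave surviving evidence, and the recorded data alone shows a spurious upward trend.

```python
import random

random.seed(42)  # deterministic, so the illustration is reproducible

# 2,000 simulated battles spread across 2,000 years, each drawing its death
# toll from the SAME distribution: by construction there is no real trend
# in conflict deaths over time.
battles = [(random.randint(0, 1999), random.randint(1_000, 50_000))
           for _ in range(2000)]

def preserved(year):
    # Survival of evidence rises linearly from 5% (oldest) to 95% (newest).
    return random.random() < 0.05 + 0.9 * (year / 2000)

recorded = [(y, d) for y, d in battles if preserved(y)]

first_half = sum(d for y, d in recorded if y < 1000)    # "ancient" recorded deaths
second_half = sum(d for y, d in recorded if y >= 1000)  # "modern" recorded deaths
print(first_half, second_half)  # recorded totals suggest a rise that isn't real
```

The naive reading of the recorded totals (“deaths in conflict rose over time”) is purely an artifact of the preservation function, which is exactly the objection being quoted.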

    Likewise, a map of observations that purports to show a pattern in observation but doesn’t clarify that the sampling of observations is biased is simply a bad statistical illustration. There are good representations that could show it.

    I believe the presented cases are more evidence of social science done less than ideally than evidence of anything inherently wrong in the research program of social science. (My own caveat: I am not a social scientist, but know something of statistics.)

    “Flattening” may be unavoidable, but uncontrolled loss of information is not the intended result of applying a statistical method. Neither is producing data for the sake of having data. (Maybe some people do it that way, but I believe they have the wrong idea.) To me, the objective always is the investigation of the phenomena under study and drawing correct inferences given the evidence at hand and its limitations. The key is the limitations. (Statistical analysis is seldom needed in situations of certainty, after all.)

    The intent of statistical methods (since the very beginning and the humble classic t-test) is to come up with a principled way to draw inferences from given evidence while considering its limitations (nothing can be perfectly measured and explained, and there is inevitable randomness and noise; the key is to characterize it).

    1. P.S. The traditional statistical methods as traditionally presented are best applied in situations where one has enough data that it makes sense to run a test (which is then usually accompanied by statements about statistical significance — though observe that in recent years, many statisticians have argued that setting a fixed cut-off for a result to be significant or not can be a counterproductive fool’s errand). However, there are other methods perhaps better suited to handling uncertainty. The starting point of Bayesian model fitting is to draw up a prior distribution, which can be anything but precise — one certainly *can* present total uncertainty. Even a choice between two different “uncertain” distributions can be quite informative!
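As a concrete illustration of that last point, the conjugate Beta-Binomial model is the usual first example (a sketch with made-up counts, not anything drawn from the discussion above): a uniform Beta(1,1) prior encodes near-total uncertainty, while a concentrated prior pulls the estimate toward prior belief.

```python
# Beta-Binomial conjugate update: with a Beta(a, b) prior on a success
# probability and s successes / f failures observed, the posterior is
# simply Beta(a + s, b + f). The counts below are invented.
def posterior(prior_a, prior_b, successes, failures):
    a, b = prior_a + successes, prior_b + failures
    return a, b, a / (a + b)  # posterior parameters and posterior mean

# Observing 7 successes in 10 trials under two different priors:
flat = posterior(1, 1, 7, 3)      # Beta(1,1): "total uncertainty" (uniform)
strong = posterior(50, 50, 7, 3)  # strong prior belief centered on 0.5
print(flat[2], strong[2])  # the flat prior follows the data far more closely
```

Under the flat prior the posterior mean is 8/12 ≈ 0.67, essentially the observed rate; under the strong prior it is 57/110 ≈ 0.52, barely moved — which is the sense in which even an “uncertain” prior is a substantive, inspectable modeling choice.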

  24. I’m starting to think “history” is just what people call something that hasn’t yet advanced to being a science. For instance, before biology became an empirical, data-driven science, it was called “natural history” and was this same morass of narratives, interpretations and theories. Then R.A. Fisher came along and realized, you know, all of these “events” and “sources” that the natural historians were building their narratives out of were just data points, and he could do real statistics with them instead. Boom, biology!

    All the sources you’re using are essentially just data points as well, they aren’t something else that gets compressed to data, they are themselves data. That’s why you can enter “Thucy said ‘if it be judged useful by those inquirers'” onto a computer because that’s text, that’s data which is being converted into bytes. It’s only a matter of developing algorithms to analyze that data and create predictive models. Translation, sentiment analysis, debiasing and finding deep connections in datasets are all essentially algorithms. There’s no special human empathic magic required to figure out what sentiments, biases and evidence points exist in a piece of text data. Those algorithms will only get more streamlined and advanced with developments of machine learning. Even now, I’m pretty sure Cambridge Analytica could do deeper and more objective sentiment analysis of the source dataset than most historians. It’s only a matter of time before that dataset is effectively synthesized by data analytics, enabling predictive models from the whole of recorded and archaeological history, not just from within “subfields”.

    I think we’re at the paradigm shift of “social history” now where it’s going to get digitized and effectively modeled by statisticians, scientists and AI to create something that CAN make real and meaningful predictions about future events. I’d give the field ten years tops before it gets taken over by machine learning.

    1. just out of curiosity, where can I find you in 2032 to collect on this bet, and what is the substance of your forfeit?

    2. This has been tried before, in fact this is what Marx was trying to do with his historical materialism: turn history into a “hard” science that can be used to predict things. It … didn’t go well. Adding in AI gobbledygook doesn’t change squat here.

      Frankly, I think you’re overconfident about both history and AI. The “AIs” we have aren’t magic, and they can’t do anything you’re proposing. Hell, what you’re suggesting is beyond the limits of an artificial general intelligence, and pretty deep into “pure magic” territory. Might as well propose that historians will make a Time Machine, tbh.

        1. As an addendum: sentiment analysis is dependent on the corpus of text the algorithm was trained on. You cannot use these techniques for smaller sets of data. Cambridge Analytica couldn’t tell you shit about Latin sentiment *because it can’t read Latin*.

        You could train it to read sentiment in Latin texts, but you’d have to have some clearly defined data to train it on, which would involve asking historians to tell you what a corpus of text meant. So, not really that useful.

        Might as well just ask the experts instead of expecting tech wizardry to make hard problems easy.

        1. Not to mention how Cambridge Analytica couldn’t tell you squat about data that was lost because the events it describes happened centuries ago…

        2. And of course, there’s probably not a large enough corpus of latin texts to actually train an AI to do what we want, AND these texts are themselves a highly biased sample (IE: Largely written by well off men)

      2. One of my favorite “what ifs” is if Marx had picked up the Supply & Demand model of value (which existed but was pretty new) instead of the (already obsolete even though he didn’t know it) Labor Theory of Value that caused so much nonsense in his reasoning.

  25. More on “predictions” – lots of historians will write think-pieces not to establish that something will happen, but to debunk a very confident prediction (social science or pure punditry) with counterexamples.

    The burden of evidence is much, much lower for lowering the certainty of a preexisting prediction like “no way Trump will try again” than for making a high-confidence prediction of your own.

  26. In the entire discussion about empiricism and the theoretical foundations of the different disciplines, what stuck out to me was this: “Maybe now all the historians will have time to go read a book”.

    Anyone who has ever studied history probably knows why that’s funny. History students (and historians) read. A lot. Like. HUUUUGE amounts. I’m not really aware of any discipline that has you read as much raw text in various cases.

  27. Surprising to see a post by you at this time of week! I cannot make much comment on this though; I have always been weak on theory when it comes to history.

  28. I think it’s more economists that this can be attributed to, or at least social scientists who rely on quantitative data. The ones who use qualitative data never get into pooh poohing non-quantitative fields.

  29. A small nitpick from the mathematics discipline:
    > one cannot, after all, sense-perceive the square root of negative one

    ‘i’ (the square root of negative one) is a clever mathematical shorthand for the concept of rotating an object by 90 degrees. I just perceived myself multiplying my coffee cup by ‘i’ by rotating it a quarter turn on my desk. The term ‘imaginary number’ is a misnomer; ‘i’ appears in many physics equations that describe the physical universe!

    1. This is incorrect, and I suppose comes from a misunderstanding of Euler’s representation (and a misunderstanding of the representation for the thing itself).

      To wit, multiplying by i is only equivalent if your plane is complex, which is what you seem to explicitly argue _against_. You aren’t rotating the cup in C^3, you are rotating it in R^3.

      As for the second part, imaginary numbers are used to describe physical reality exactly because some descriptions (models) of reality necessitate complex numbers, because the reality itself can not be described (and thus most likely is not taking place in) R^3, but only C^3 (or ^4 or whatever).
      Just because some rotational matrix operations and some harmonic oscillations that are R^3 are easy if you describe them in C^3 (I guess you got confused by all those i*omega*t resonators) does not make C^3 a shorthand.
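Whatever one makes of the R^3-versus-C^3 framing, the narrow claim that multiplying by i is a quarter-turn is easy to verify inside the complex plane itself (a quick check, not a ruling on either side of the exchange):

```python
import cmath

# Multiplying a complex number by i rotates it 90 degrees counterclockwise
# about the origin of the complex plane, preserving its magnitude.
z = 3 + 4j
rotated = z * 1j
print(rotated)                 # (-4+3j): same length, turned a quarter turn
assert abs(rotated) == abs(z)  # magnitude 5.0 is unchanged

# Euler's formula gives the same rotation: e^(i*pi/2) equals i.
quarter_turn = cmath.exp(1j * cmath.pi / 2)
assert abs(quarter_turn - 1j) < 1e-12
```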

  30. I feel like a more defensible version of Smith’s argument could be constructed:

    1. When historians make arguments about current events, they are making predictions about how those events will unfold.

    2. Given the nature of real world events, these predictions are inherently untestable.

    3. Therefore, if these predictions are made confidently or authoritatively, the historians making them are acting as pundits, not as expert historians (who would argue that broad predictive theories are not the realm of history because events are overdetermined).

    I think there is a conflict here in how arguments are compressed for public consumption (and to make a point), where the level of uncertainty is lost. All of your various caveats aside, your more political posts (such as those on Ukraine or Trump) are making a clear predictive argument that these situations are analogous to historical events, and therefore we should consider acting as though history will repeat itself in this case. While you are not using “theory” in the grand-overarching sense, and you do not claim to be all-knowing, you (and any other historian writing about current events) are absolutely making predictions.

    The challenge, then, is how much weight should be given to this expert advice when (as you freely admit) the predictions cannot be tested? How can the non-expert (or even the expert historian — as you say, your expertise is of necessity narrow) decide whether the situation with Trump is more like one example from ancient Greece or another from Mexico in the 1800s? I will trust an expert historian to be correct in her sources (at least, within her area of expertise), which is a great improvement over every random hack making factually inaccurate historical analogies, but in the end the connection to the modern day is limited only by her skill as a writer.

    I don’t think Noah Smith has a useful answer to this (if we could answer this empirically someone would have already done so), but it is a challenge in how we present this kind of essay to the public.

    All that said — you yourself have the best line on this I have ever read. “Public engagement is how you build support for the field; activism is how you spend support for the field.” Your educational posts are so much stronger than your persuasive essays for this reason. There’s a process in all this:

    1. How things were.
    2. ???
    3. How things will or should be.

    The more of that ??? a historian fills in for the reader, the more they’re stepping outside of their expertise and the more they are trading on their good name.

    1. I just commented below, but I think this is an excellent more-involved statement of my point.

  31. Someone who changed approaches to history, pioneering the concept of data base scholarship for compilation of stats and analysis, is Gwendolyn Midlo Hall.

    https://www.amazon.com/Gwendolyn-Midlo-Hall/e/B001HP7O74%3Fref=dbs_a_mng_rwt_scns_share

    See particularly her fascinating memoir, Haunted By Slavery (2021).

    Building on what she did, others have done same, as we see notably in the now available records of the history of the transatlantic slave trade, as with Eltis and Richardson’s Atlas of the Transatlantic Slave Trade.

    https://yalebooks.yale.edu/book/9780300212549/atlas-of-the-transatlantic-slave-trade/

    Historians don’t ‘do’ history without number crunching any longer, and haven’t for a long time. But somebody has to dig through all the historical records and documents to find the numbers to compile and then analyze. Reading accounting ledgers is a very important skill for historians.

    This doesn’t mean everyone agrees as to what the numbers mean in every situation. And some numbers remain deeply uncertain — such as how many, specifically, died in the 6th C Bubonic Plague, as well as in all the other epidemics sweeping about, while, due to the deaths and other conditions, crops and harvests declined or outright disappeared — for all the obvious reasons. But sometimes in certain places, the numbers were recorded daily — until finally there were too many, and likely those keeping the records also died from the plague. However, those records and documents provide a great deal to work with, particularly now that we have the technologies of forensic archaeology and other tools available to assist with the analysis.

    1. Eh, plenty of historians do history without number crunching: Just not particularly that kind of history.

      There are a bunch of problems even with having numbers, though, e.g. “What do these numbers actually *mean*?” The classic trap that a lot of people (even historians) fall into is believing the numbers mean what the guy who compiled them said they mean, rather than what his compilations actually captured. E.g. recorded deaths are *precisely that*, not total deaths. Baptisms are not births, etc.

  32. I spent some time working through this wall of text, but I think there’s a core point sort of missed here.

    Smith is attacking the practice of historian-as-pundit, while you are defending the practice of history-as-discipline.
    But history-as-punditry is something you do. When you say “would-be tyrants keep trying until they succeed”, that is history-as-punditry, not history-as-discipline, and no amount of qualifiers protects you from being called out on it, in my view.

    Smith says at one point in his Twitter discussion:
    “The key here is that many academic historians (and much of the public) act as if their discipline is nomothetic until they are pressed on the matter, and then retreat to a pro forma claim that it is idiographic.”
    and I think he’s right. It’s like Jon Stewart retreating back to “I’m just a comedian” whenever he’s called on something.

    I am persuaded by your defense of history-as-discipline, but when you (and other historians) are writing for a public audience, you’re practicing history-as-punditry more than history-as-discipline.

    1. ” … when you (and other historians) are writing for a public audience, you’re practicing history-as-punditry more than history-as-discipline.”

      Or that is YOUR take, which ignores that historians writing for the public are also practicing history as discipline, which gives them the authority and standing to write for the public on the subject at hand.

      That attitude amounts to a belief that historians should write and discuss only in academic, limited, expensive, subscription-access journals; that history has no place in education and public discourse; and that historians should never inform the general public about anything in the vast range of what historians study and record, and have ever since history began.

      History as we are speaking of it here begins as soon as there are organized states of any kind, whether Bronze Age China, the Assyrians, Egypt, you name it — there are efforts to record the past and explain it, as well as to record the present. And before that there was mythology. And yes, mythology and history exist simultaneously, even now — see: the Lost Cause mythology alongside the history of the War of the Rebellion.

      Also, give us a break from that silly assumption that has emerged since covid came to kill millions (which it is still doing, at around 450 per day in the US alone) when it comes to pandemics and famines: how many of those deaths are really from something else? That’s just … ya, that’s the word, silly!

  33. To summarize:

    Noah believes that if you can’t make some kind of nomothetic argument, you shouldn’t engage in punditry.

    Bret believes that it’s epistemologically impossible to make a nomothetic argument.

    They’re both right. Tragically, neither of them is willing to follow the logic to its conclusion.

    1. To be fair, that is not what Smith wrote. He wrote that punditry based on “untested theories” should receive the same level of skepticism as some macroeconomic models. (Presumably this also applies to punditry by himself.)

    2. If Noah is right, the vast majority of the human experience is fundamentally unfathomable, no lessons can be learned from experience, and everything is a constant unpredictable ball of chaos in which anyone can be right about anything, because who knows what’s really true?

      That level of radical epistemic doubt is certainly possible, but it’s absurdly counterproductive when applied to day-to-day life. And I don’t think we should go out of our way to demand radical epistemic doubt and silence in fields of human study where that would have the effect of shutting down everything there is to learn or even know about ourselves and our past.

      This, here, is not the way to live:

      https://existentialcomics.com/comic/93

      1. >If Noah is right … no lessons can be learned from experience,

        I don’t think that is a sensible extrapolation, like, at all? Smith argued in favor of “empirical” methods (in the conversational sense of the word). And “empirical” ultimately means nothing but being rigorous, first, when one makes observations from experience and, second, about which assumptions (and their limitations) are relevant when generalizing those observations into predictions.

        Economists have many disagreements about difference-in-differences studies of things like the minimum wage, but the good parts of such disagreements can be productive: to question a quantitative argument, one must pinpoint which part of the assumptions is questionable.

        And anyway, such practices have been developed exactly because learning from experience naively, “au naturel”, is difficult. People often learn some intuitive physics, like that heavy things have a tendency to fall, by observing the behavior of things falling down every day; yet such a simple thing as predicting the weather (which is also something everyone observes every day) is considerably more difficult. Folk wisdom has many anecdotes and statements about weather, but they are seldom correct, and often wildly overconfident even when they do correlate with real physical phenomena.

  34. According to Robert Skidelsky (Keynes: the Return of the Master), this is one of the main differences between Keynesian economics and modern mainstream neoclassical economics. Neoclassical economists think they can model people as if they have (accurate) probability estimates for future events, which they can use to calculate payoffs for their actions. Keynesians model people as if they can’t make general probability estimates and are therefore wary of risk.

  35. Fascinating, as always. One of the really interesting disconnects between the so-called “hard” sciences (e.g., physics, biology) and the so-called “soft” sciences (e.g., sociology) lies in the different meanings of “theory”. In the soft sciences, this generally means an intellectual framework for understanding an event or phenomenon. Hard sciences often use that same meaning, but the authors often forget that they’re doing so and privilege theory as “something tested in every conceivable way using numerical and empirical approaches and proven to still be right after every test”. It’s far, far more nuanced than that, of course, but the difference in focus leads to many misunderstandings when hard scientists try to communicate outside their own discipline.

    1. To make a pedantic but critical distinction: nothing can be “proven right” in science. It can only be proven *wrong*. I would proffer instead the following definition of “theory” in the “hard science” sense: “a model or framework for explaining a particular set of phenomena, which makes testable predictions, where those predictions have not been proven to be at odds with reality over as many possible different tests as we can feasibly make (often over some moderate length of time).” It’s a fuzzy definition, and often only applicable in hindsight, but that’s the gist of it.

      Certainly, though, differences in usage of “theory” between different people can lead to misunderstandings, so it’s important to clarify.

      1. The claim that nothing can ever be proven right, and that science is just exercises in falsifying things or failing to do so, is itself a philosophical statement, based in a specific school of philosophy that has very specific ideas about what words like “right” and “wrong” mean.

        It is possible, by playing enough language games, to use this philosophical perspective to explain why we practice science and how, in a way that makes good sense. But this is not the only possible perspective that has these effects, and should not itself be (ironically) mistaken for objective truth about science itself.

  36. The entire confusion here, as far as I can tell, stems from different readings of the phrase “may happen”. You appear to be asserting the absolutely literal meaning of the phrase – that it is a possibility among infinitely many. Noah appears to be reading that phrase as “likely to happen”.

    If you read the phrase literally, as one among infinitely many possibilities without any claim on relative likelihood, then certainly your arguments are valid. But if that is actually what you mean, then your claims are also vacuous. To say that a thing may happen without any associated claim on the likelihood of that thing is a non-statement.

    To make a claim on relative likelihood is a non-vacuous statement, but it is an empirical statement grounded deeply in and inseparable from counterfactual analysis.

  37. I find Bret’s defense of history as an academic discipline to be very convincing.

    However, I think Noah has the better of the exchange when it comes to the predictive value of history. Bret argues very persuasively that this isn’t really the sort of thing that history-as-a-discipline is interested in, and he gives many intelligent reasons why this might be the case, but it remains that historians in the public eye are frequently very busy using their history to make predictions. Bret claims that his Trump-dictator piece is being selectively quoted to make it seem predictive, but I don’t think that’s fair. Here are some more quotes from that piece:

    “Another key lesson from this history should be even more sobering: Would-be tyrants keep trying until they succeed.”
    “Unless would-be tyrants are made to face the consequences of their attempts to seize power, they will keep trying until they succeed so thoroughly that justice is beyond recovery.”
    “… but the ubiquity of the [tall poppy] tale and its lesson ought to worry the prominent supporters of the January insurrection as much as its opponents.”
    “The ancient Greek experience with tyranny thus presents two reminders: First, the necessity to prepare for another, likely better planned and organized, effort to overthrow democracy; and second, the dire consequences for failure.”

    That is at least four explicit claims about what the reader should think or do in light of the history of Greek tyranny. This is not, pace 2022-Bret, a claim that this is one possible outcome out of many, not privileging any of them. It is the presentation of a single possibility as one that *should* be paid most attention to. I don’t think that some boilerplate at the beginning of the essay, noting that other historical situations might also be relevant, saves you from making a claim–you don’t PRESENT any other historical situations, so what are we to make of that if not that you think this is the best (or at least, one of the very best) historical analogies? Otherwise you would be writing about the Emperor of San Francisco, the delusional old man who made farcical claims on political power which were humored by those surrounding him, and never amounted to any harm. You COULD have written about him in relation to January 6th. Why didn’t you? Is it, perhaps, because you wanted us to take a specific lesson about the future that came from the Greek analogy but not from the San Franciscan one? That sounds like a claim about the future.

    And I don’t really think that you address his claim that the let’s-find-a-single-analogy method is a bad way of forecasting the future. Yes, his preferred approach is very hard, because you have to pick which events you care about and create clean categories where none exist (although there are better sorts of statistical methods that alleviate this problem, at the expense of being more work). But your approach is just as bad in that respect. I am comfortable saying that neither his approach nor yours is useful, and nobody should use either to predict the future. But I think there are two key superiorities of his approach:

    1) Statistical models all have an explicit variable to capture contingent circumstances (an error term; this will also capture other things like measurement uncertainty, though), and what many of them are doing is allowing you to compare the strength of generalizable factors in producing outcomes against contingency. If you think that contingency is ultra-important in guiding the results of events (and I agree), I don’t think that this is an argument against modelling.

    2) Statistical models constrain the researcher so that he or she cannot make any claims desired. Do they do as good a job of this as you would hope? No, for many reasons, including many you outline quite well above. But they constrain the researcher MORE than the pick-an-anecdote-and-forecast-from-it method, where you can pick literally any anecdote and then choose whatever aspects and results you want. Whereas if you pick lots of data points, it is harder (or at least more obvious) if you exclude ones you don’t like, and once you have done the choosing you can’t make any decisions about how to do the math from there–16.8 squared is always 282.24, no matter how inconvenient for your theory.
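    [Editorially, point (1) can be made concrete with a minimal sketch, in Python, using made-up numbers rather than data from any real study: fitting a line splits each outcome into a generalizable part (the fitted slope and intercept) and a contingent residual (the error term), and once the data are chosen the arithmetic is fixed.]

```python
import random
import statistics

# Hypothetical data: a generalizable factor x, plus contingency (noise).
# The "true" relationship y = 1.5 + 0.8*x is an assumption of this sketch.
random.seed(0)
x = [random.uniform(0, 10) for _ in range(200)]
y = [1.5 + 0.8 * xi + random.gauss(0, 2.0) for xi in x]

# Ordinary least squares for y = a + b*x, done by hand.
mx = statistics.fmean(x)
my = statistics.fmean(y)
b_hat = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
a_hat = my - b_hat * mx

# The residuals are the model's explicit "contingency" term.
residuals = [yi - (a_hat + b_hat * xi) for xi, yi in zip(x, y)]

# Share of the variation attributable to the modeled factor vs. contingency.
r_squared = 1 - statistics.pvariance(residuals) / statistics.pvariance(y)
```

    Re-running the fit on the same points always yields the same slope and the same residual share; the researcher’s discretion ends where the arithmetic begins, which is exactly the constraint being described.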

    1. There’s a lot of ground between ‘pick an anecdote and forecast from it’ and ‘make a statistical model’. As Bret notes, in history (and politics and war and much of current affairs) the data to populate a model are not available, not reliable, or cannot be collected in time to do anything useful. For instance, morale is an essential element in war, but measuring the opinions of enemy troops is, well, difficult. Even the opinions of one’s own are hard to assign a number to – and how morale is expressed and acted upon is heavily influenced by culture. A related point is that the personalities of individual actors matter. It is central to Trump’s failure as coup-maker that he is not just ambitious, but incompetent.

      What we can construct are narratives, which we check against as much of the available information as possible, and test for coherence and plausibility. The historical method is a formalised way of doing this, but it’s very much the method used in our everyday lives, and in fields other than history where complexity, uncertainty and a wide range of scale apply.

    2. “Another key lesson from this history should be even more sobering: Would-be tyrants keep trying until they succeed.”
      “Unless would-be tyrants are made to face the consequences of their attempts to seize power, they will keep trying until they succeed so thoroughly that justice is beyond recovery.”
      “… but the ubiquity of the [tall poppy] tale and its lesson ought to worry the prominent supporters of the January insurrection as much as its opponents.”
      “The ancient Greek experience with tyranny thus presents two reminders: First, the necessity to prepare for another, likely better planned and organized, effort to overthrow democracy; and second, the dire consequences for failure.”

      Speaking of predictions, has Trump actually launched any coup attempts recently?

      1. He’s still actively promulgating his first one, his ‘I should be president, not Biden, never mind your phony vote tabulations.’ He and his minions are busily trying to take over election office positions so that in 2024 even if the (real) numbers come in wrong, Trump will still claim – and attempt to activate – victory. To say that this is a threat to our democracy would be a gross understatement.

        1. “He’s still actively promulgating his first one, his ‘I should be president, not Biden, never mind your phony vote tabulations.’”

          How long did it take Hillary to admit that she’d lost the election? Did she ever actually do so?

          “He and his minions are busily trying to take over election office positions so that in 2024 even if the (real) numbers come in wrong, Trump will still claim – and attempt to activate – victory. To say that this is a threat to our democracy would be a gross understatement.”

          Meanwhile, back in the 2020 election:

          “Their work touched every aspect of the election. They got states to change voting systems and laws and helped secure hundreds of millions in public and private funding. They fended off voter-suppression lawsuits, recruited armies of poll workers and got millions of people to vote by mail for the first time. They successfully pressured social media companies to take a harder line against disinformation and used data-driven strategies to fight viral smears. They executed national public-awareness campaigns that helped Americans understand how the vote count would unfold over days or weeks, preventing Trump’s conspiracy theories and false claims of victory from getting more traction. After Election Day, they monitored every pressure point to ensure that Trump could not overturn the result. “The untold story of the election is the thousands of people of both parties who accomplished the triumph of American democracy at its very foundation,” says Norm Eisen, a prominent lawyer and former Obama Administration official who recruited Republicans and Democrats to the board of the Voter Protection Program.” https://time.com/5936036/secret-2020-election-campaign/

          So Trump represents a gross threat to democracy, whereas a “conspiracy” (the article’s term) to change voting laws, pressure social media to delete certain news stories, and recruit like-minded people as poll workers, is perfectly fine? Sorry, not buying it.

          1. @GJ

            >How long did it take Hillary
            >to admit that she’d lost the
            >election? Did she ever actually do so?

            https://www.npr.org/2016/11/09/501425243/watch-live-hillary-clinton-concedes-presidential-race-to-donald-trump

            Per Wikipedia, Wisconsin’s count gave Trump 270+ electoral votes at 2:30 a.m. on the morning of Wednesday, November 9, 2016. I just linked you to a news article containing a recording of Clinton’s concession speech, timestamped 11:25 that same morning. I don’t know exactly when she made her concession speech, but presumably it was some time before 11:25 a.m.

            So how long did it take her to admit she’d lost? Did she admit it?

            It took her about eight hours or so. And yes. Yes she did.

            Who on Earth gave you the idea that she didn’t, or took an unreasonable amount of time to do so? I hope you don’t trust them about other points of basic historical fact or current events, because they’re clearly lying to you.

            How these zombie lies persist, I don’t know…

          2. “Who on Earth gave you the idea that she didn’t, or took an unreasonable amount of time to do so?”

            I think it was Hillary’s own actions in the aftermath of the election:

            https://www.forbes.com/sites/paulroderickgregory/2017/06/19/is-russiagate-really-hillarygate/

            Maybe my standards are just too high, but I don’t think it really counts as “admitting you’ve lost the election” if you immediately start trying to frame the winner for treason.

            “I hope you don’t trust them about other points of basic historical fact or current events, because they’re clearly lying to you.”

            Don’t worry, I already know not to trust the Clintons. 😉

          3. “Maybe my standards are just too high, but I don’t think it really counts as “admitting you’ve lost the election” if you immediately start trying to frame the winner for treason.”

            Given that Trump stole classified documents, perhaps you should revise your belief that accusing him of treason was a ‘frame’.

            Fact is that Trump has behaved a lot like a Russian asset from before he was elected.

            Fact is also that you and Mary were both wrong about Hillary not conceding the election.

          4. The president does not, however, retain this power after he leaves office. Nor is he allowed to exercise this power purely in the privacy of his own mind, without informing the government and the nation that he has done so.

            “The documents were in my possession, but I totally declassified them before I left office” is an affirmative defense. One is admitting to the offense in question, but arguing that it is not a real offense because of a particular circumstance. It falls on Trump to prove that he actually did declassify the documents.

            Now, normally, that’s not hard. There is supposed to be a paper trail here. Among other things, this is because when documents are declassified, they are supposed to become available for FOIA requests and other purposes.

            Also, because imagine what would happen if some random citizen had taken a copy of a document Trump totally declassified on or before 1/20/21, and posted the whole thing online on, say, 1/21/21. They would likely be arrested for releasing classified information. That would be unfair. After all, it wouldn’t be classified information because Trump had, supposedly, totally declassified it by that time.

            So naturally, it’s only fair to expect Trump to tell people that he’s declassifying documents. Otherwise, some poor person might have gotten thrown in jail over a nonexistent crime! Also, additional expenses get run up maintaining security over information that is no longer classified, people have to attend inconvenient training courses and fill out paperwork…

            It’s just bad governance to have a system where the president’s power to declassify things can be carried out in secret without telling the government itself that its own documents have been declassified.

            Fortunately, the US system around classification and information security is not foolish enough to allow for this. There’s a paper trail associated with declassification orders.

            The FBI clearly didn’t know these documents were declassified, because they would have been idiots to fill out a warrant application saying “we’re looking for classified papers at Mar-A-Lago” when they knew damn well the papers in question weren’t classified. That would never stand up in court and would backfire horribly.

            You don’t mess around when filing a warrant; you don’t falsify information like “we’re looking for this thing, which is totally classified, but we know it’s not actually classified because we know the president declassified it over 18 months ago.”

            Again, the FBI clearly didn’t know these documents were declassified as of, say, August 1, 2022.

            That’s because Trump never actually filled out any of that paper trail for these documents, did he?

            As far as I know, Trump did not leave so much as a Post-It note on the Resolute Desk informing the government “oh, yeah, I totally declassified documents #781923 through #782079” or what have you.

            The only evidence that Trump has in fact declassified the documents is that it would be really, really inconvenient for him and his reputation if they weren’t declassified at the time he walked out the door.

            I am dealing here in cold, hard facts.

            The fact is that Donald Trump walked out of the presidency with large amounts of highly classified documents in his possession. They were classified at the time he took them.

            Trump held onto them for a year and a half, mingled in with his personal effects, such as his passports. He held onto them long past the point at which it could plausibly have been a mistake or a paperwork hiccup, long past the point where any reasonable person with the money to gain access to competent legal counsel and advisors would have realized he was holding documents he should not hold.

            These are the facts. They permit, broadly speaking, of two conclusions.

            Either he’s a hoarder so crazy that he’s incapable of recognizing when possession of a document is literally illegal, and incapable of just doing the sensible thing and not arguing and giving the document back to the government when they ask nicely two or three times in a row…

            …Or he’s so high on his supply of “the king can do no wrong” that he somehow intended to benefit or profit by having in his possession large amounts of classified government documents. And thought that this was okay, and thought his supporters would insulate him from anything bad happening because of this, because “the king can do no wrong.”

            America does not run on the principle of “the king can do no wrong.” And if it did, that would not be good for Trump either, because he is no longer the king, insofar as he ever was. The thought of what the FBI would do to a man who stole that many classified documents if there weren’t laws restraining what “the king and his men” can do to American citizens… Well, I shudder to think on it.

            The only way that this is ‘fine’ for Trump is if there’s some kind of selective rule by which (R) presidents are allowed to do as they please, both before and after holding office, because “the king can do no wrong,” but (D) presidents are expected to follow the rule of law, or by which we’re supposed to think all their actions are invalid by definition because no (D) can ever be the rightful king.

            In which case we might as well be honest and stop making up justifications for the rightful king’s actions on the fly, and admit that the real underlying principle here is that Donald I can do no wrong because he is and remains America’s once and future king, anointed by the (R).

          5. “Fortunately, the US system around classification and information security is not foolish enough to allow for this. There’s a paper trail associated with declassification orders.”

            But apparently it is foolish enough to allow for Hillary Clinton to keep classified emails on a personal server.

            “These are the facts. They permit, broadly speaking, of two conclusions.”

            Actually, there’s a third conclusion, namely that Trump and his supporters have noticed that American democratic norms are applied in an obviously partisan manner, and consequently no longer regard them as having normative force. As a student of history, you should have been expecting something like that.

          6. “But apparently it is foolish enough to allow for Hillary Clinton to keep classified emails on a personal server.”

            Except that didn’t happen.

            Just as the widespread voter fraud that Republicans allege, never happened.

          7. @mindstalko: Donald Trump has a higher body count of Russians than either Barack “The 1980s called, they want their foreign policy back” Obama or George “I looked [Putin] in the eye. I found him very straightforward and trustworthy” Bush.

            Definitely the behavior of a Russian asset.

      2. The material conditions were very favorable for Trump to attempt a coup in early January 2021, what with him being the president of the United States and having whipped his followers into a hyperactive roiling boil with his Twitter claims that the election results had been faked somehow. He had minions, and he had enough legal power to at least muddy the waters and interfere with law enforcement attempts to stop his minions. His chances may not have been good, but he did, at least in theory, have the basic prerequisites to get the job done.

        But now? Not so much. Odds not so great.

        Even if I were Donald Trump’s greatest fan, and wanted nothing more than for him to become king of America at the first opportunity… I would not advise him to attempt a second coup attempt now.

        So if you argue that Trump has attempted no coups in specifically the last nineteen months, you are not proving that he will not attempt any more in the future, or that predictions that he’d try again given a chance are wrong.

        You are merely placing an upper bound on either:

        1) Just how stupid an idea Donald Trump is willing to try, or
        2) Just how effective the people around him are at stopping him from doing that particular stupid thing.

  38. As someone who was following this debate on Twitter, I’d say that you’ve convinced me of your main argument: that the kind of empiricism that Smith demands historians follow isn’t possible, and that we can still draw lessons from history (albeit very tentative ones) even without empirical data. Smith has also utterly failed to convince me that “historians have become a sort of priestly order that we rely on to tell us about where our politics are headed and how we should think about our sense of nationhood”, or that they command anywhere near the same level of respect that economists typically do.

    That said, I think your piece on ancient insurrections wasn’t a good example to illustrate your point. In particular if, as you argue here, the lessons that we can learn from history are specific and contingent, rather than general and universal, then the piece would have to contain some discussion on Donald Trump and how he is similar to the historical figures mentioned in order for those historical lessons to be useful. (Is Trump similar to Peisistratos? I sure as hell don’t know.) Otherwise, the piece is either an observation about how would-be tyrants behave generally (i.e. “most would-be tyrants keep trying until they succeed”, which is a claim for which I think it’s fair to demand empirical justification), or it’s a bunch of historical anecdotes whose relevance to the current situation we can’t assess (i.e. “some would-be tyrants keep trying until they succeed”).

    Also, this is definitely a nitpick (but hey, this is a blog about unmitigated pedantry), but if Nixon would be excluded from the simplified definition of tyrant that you provided in the piece because he didn’t rule alone, then wouldn’t the same apply for Trump?

    1. “if Nixon would be excluded from the simplified definition of tyrant that you provided in the piece because he didn’t rule alone, then wouldn’t the same apply for Trump?”

      If it hadn’t been for the January 6 Capitol riot, seemingly egged on by Trump and his supporters, as well as the constant attempts to undermine the results of the presidential election both before and after January 6th – then yes, the same would have applied to Trump as well.

      1. “Attempts to undermine the results of the presidential election” have been a feature of US politics since Bush v. Gore. “*Constant* attempts to undermine the results of the presidential election” have been a feature of US politics since Trump got elected. If that counts as tyrannical behaviour, then the United States is a nation full of tyrants.

        1. Ok, first of all, please excuse my ignorance. I barely know enough about past US presidential elections to be able to weigh in on whether past presidential candidates acted tyrannically or not.

          Also, I assume that an act of tyranny has to be contrary to the past political traditions and societally accepted sources of authority and legitimacy. In a country whose body of government is based upon acting according to a stringent set of laws, illegal or extra-legal actions surpassing the lawful authority of the governor are tyrannical. In a feudal monarchy, actions of rulers that trample the ancestral privileges of the aristocratic class should be considered tyrannical. In a democracy, actions that disenfranchise the people (especially without good reason – emergency powers are always iffy, but can still be justified in cases of actual emergency) are tyrannical.

          With that in mind, by “attempts to undermine the results”, within the US context I assume that those attempts have to be illegal or otherwise blatantly fraudulent. AFAIK, in 2000 both political parties ultimately acquiesced to the decision of the Supreme Court, which itself had a long legal precedent of having the constitutional power to resolve such issues. If I’m wrong, please do let me know.

          If someone denies that the Capitol riot is linked to the actions or will of Donald Trump; or that Trump’s attempts to force a vote recount were still bounded within the typical political process – fine, fair enough, I’ll rescind my accusation of Trump’s actions being tyrannical. Still, personally, it feels close. – I didn’t have a problem with Trump’s presidential legitimacy until January 2021. Afterwards, between the examples mentioned above, as well as the unrestrained number of presidential pardons that he signed, it seems to me that he deliberately disrespected the traditional conception of the limitation on the President’s authority. It’s not tyrannical yet (if only due to the fact that the coup attempt actually failed), but it does feel like his actions put him on the road towards striving to establish tyranny.

          1. With that in mind, by “attempts to undermine the results”, within the US context I assume that those attempts have to be illegal or otherwise blatantly fraudulent. AFAIK, in 2000 both political parties ultimately acquiesced to the decision of the Supreme Court, which itself had a long legal precedent of having the constitutional power to resolve such issues. If I’m wrong, please do let me know.

            Both parties *ultimately* acquiesced in the election of Joe Biden — there haven’t been any attempts to unseat him since Jan 6 last year, after all, which I guess is evidence either that “Would-be tyrants keep trying until they succeed” is false, or else that Trump isn’t a tyrant. As for blatantly fraudulent, I think Russiagate and the attempt to impeach Trump over that would pretty clearly count.

            Jan 6 was an escalation of the trend of denying that the other side won legitimately, insofar as people physically broke into the Congress, but it was nevertheless an escalation of a pre-existing trend, not some bolt-from-the-blue attack on an otherwise serene and functioning republic. In this regard, Trump is more like one of the demagogues from the late Roman Republic, like Milo or Clodius Pulcher, than Pisistratus or Cylon.

            Afterwards, between the examples mentioned above, as well as the unrestrained number of presidential pardons that he signed, it seems to me that he deliberately disrespected the traditional conception of the limitation on the President’s authority. It’s not tyrannical yet (if only due to the fact that the coup attempt actually failed), but it does feel like his actions put him on the road towards striving to establish tyranny.

            “Disrespecting the traditional conception of the limitation on the President’s authority” has been the norm since at least FDR. To take a more recent example, Barack Obama’s decision not to deport certain categories of illegal immigrant, essentially rewriting US immigration law on his own, seems far more “tyrannical” to me than making free with presidential pardons. (And the fact that Trump then got compared to Hitler for applying the law as written should give an indication as to how much bad faith is involved in these sorts of discussions.)

          2. “Both parties *ultimately* acquiesced in the election of Joe Biden”

            Unless you count all the Republican politicians who are still saying that the election was stolen, votes need to be recounted, etc.

          3. Unless you count all the Republican politicians who are still saying that the election was stolen, votes need to be recounted, etc.

            And certain Dems kept claiming that Bush stole the election from Gore throughout the former’s presidency.

            As I said above, “If that counts as tyrannical behaviour, then the United States is a nation full of tyrants.”

          4. @Mary
            @GJ

            > Has Hillary Clinton admitted yet that she lost?

            Yes. She admitted it roughly, as of this writing, five years, nine months, twenty-four days, and eighteen hours ago.

            https://www.npr.org/2016/11/09/501425243/watch-live-hillary-clinton-concedes-presidential-race-to-donald-trump

            Per Wikipedia, Wisconsin’s count gave Trump 270+ electoral votes at 2:30 a.m. on the morning of Wednesday, November 9, 2016. I just linked you to a news article containing a recording of Clinton’s concession speech, timestamped 11:25 that same morning. I don’t know exactly when she made her concession speech, but presumably it was some time before 11:25 a.m.

            So how long did it take her to admit she’d lost? About eight hours.

            Who on Earth gave you the idea that she didn’t, or took an unreasonable amount of time to do so? I hope you don’t trust them about other points of basic historical fact or current events, because they’re clearly lying to you.

            How these zombie lies persist, I don’t know…

            @GJ
            > And certain Dems kept claiming that Bush stole the election from Gore throughout the former’s presidency. As I said above, “If that counts as tyrannical behaviour, then the United States is a nation full of tyrants.”

            If Gore had attempted to organize a mob to storm the Supreme Court or the Capitol during the events by which the decision unfavorable to him was being enshrined into law and Bush’s appointment as president was formalized, then you would have a case.

            Trump’s actions represent an enormous escalation beyond anything we have seen in recent American history, wherein the sitting president made numerous challenges to the legitimacy of inconvenient election results throughout the nation, repeatedly insisted on fraud, ignored the advice of legal counsel and government officials telling him that the fraud claims were false, and then finally, when all else had failed, sicced a mob on the parts of the government that were in a position to proclaim that he wouldn’t get to be president anymore.

            All these actions have been attested to in open Congressional hearings. Many of them attested to by Republicans, many of them members of the Trump administration itself.

            The idea that this is anything like the disputed Florida election of 2000 is an extremely small and perforated fig-leaf to try and put up over the whole debacle. I’m sorry, but it is. The facts are what they are. The man did what he did, and what he did, others had not done before.

          5. Trump’s actions represent an enormous escalation beyond anything we have seen in recent American history, wherein the sitting president made numerous challenges to the legitimacy of inconvenient election results throughout the nation, repeatedly insisted on fraud, ignored the advice of legal counsel and government officials telling him that the fraud claims were false, and then finally, when all else had failed, sicced a mob on the parts of the government that were in a position to proclaim that he wouldn’t get to be president anymore.

            Challenging the legitimacy of inconvenient election results based on obvious lies: https://www.thenation.com/article/archive/the-real-costs-of-russiagate/

            Angry mobs trying to break into government buildings to stop disfavoured candidates entering office: https://www.nbcnews.com/politics/supreme-court/protests-build-capitol-hill-ahead-brett-kavanaugh-vote-n917351

            What happened on Jan 6 is not unprecedented; you just don’t care about the precedents because they were committed by your side.

          6. Trump’s actions represent an enormous escalation beyond anything we have seen in recent American history, wherein the sitting president made numerous challenges to the legitimacy of inconvenient election results throughout the nation, repeatedly insisted on fraud, ignored the advice of legal counsel and government officials telling him that the fraud claims were false, and then finally, when all else had failed, sicced a mob on the parts of the government that were in a position to proclaim that he wouldn’t get to be president anymore.

            Neither of the things you mention (challenging the legitimacy of inconvenient election results, siccing angry mobs on branches of government) are unprecedented. The Democrats spent most of Trump’s term of office claiming he was a foreign asset who won the election thanks to Russian interference, and an angry mob tried to break into the Supreme Court building to stop Brett Kavanaugh being sworn in.

          7. and an angry mob tried to break into the Supreme Court building to stop Brett Kavanaugh being sworn in.

            Come to think of it, that’s not the only time angry mobs have tried to pressure the Supreme Court:

            “Militant pro-choice activists doxxed the six Supreme Court justices that are expected to dismiss Roe v. Wade — publishing their partial addresses online as part of a planned protest.

            Heated protests outside the courthouse in Washington, DC continued Thursday, with 8-feet non-scalable fences erected late Wednesday ahead of the crowds, similar to the ones set up after the Jan. 6 Capitol riot…

            The group, “Ruth Sent Us,” has planned the protest for next Wednesday at what it called “the homes of the six extremist justices.”

            It even included a map pinpointing homes — three in Virginia and three in Maryland — “where the six Christian fundamentalist Justices issue their shadow docket rulings from.”…

            The group also has action planned for Mother’s Day on Sunday — telling followers to descend on Catholic churches in protest “that six extremist Catholics set out to overturn Roe.”” https://nypost.com/2022/05/05/supreme-court-surrounded-by-fence-after-roe-v-wade-protests/

            Not a precedent for Jan 6, of course, because it happened afterwards, but enough to disprove the notion that Trump, or Trumpists, or Republicans, are some kind of extremist outlier in US politics.

            Incidentally, when asked about all this, the White House simply said that “The president believes in peaceful protests.”

            I wonder what history tells us about countries where angry mobs intimidate judges and attack churches with tacit support from the executive…

          8. > Neither of the things… are unprecedented. The Democrats spent most of Trump’s term of office claiming he was a foreign asset

            Claiming that someone had foreign help in winning an election, and presenting ambiguous evidence of foreign involvement that may be insufficient to prove guilt, is one thing; we could, hypothetically, debate the merits of the alleged evidence of Russian involvement in the 2016 election.

            It is a very different thing to claim that the vote tallies in the election itself are invalid and should be ignored. There’s a difference between claiming what is, in essence, a campaign finance violation or something like it, and claiming that the vote itself is so tampered with that we should throw out the results and just appoint the Right Guy.

            Especially when the claims of tampering are on the grounds that lots of vote fraud was committed… But then this is followed by trying to prove the vote fraud in court and failing over and over, dozens or hundreds of times, in parallel, without a single victory and with numerous suits getting thrown out as frivolous to the point where lawyers involved in the suing get disbarred for wasting the court’s time.

            Or where this is followed by not even trying to provide evidence of this vote fraud, just repeating it as a vague murmur of conspiracy-theory “you know, millions of illegal immigrants were bused into polling sites across the country to vote for Hillabiden, so you just can’t trust the polling results when they say Those People won.”

            There is a difference between alleging that a candidate has committed a crime or has unsavory connections, and alleging that the election itself is so rigged that the people conveniently in power at the time are justified in ignoring the results and continuing as if they had won the election.

            > …and an angry mob tried to break into the Supreme Court building to stop Brett Kavanaugh being sworn in.

            I can’t find evidence that any of the many protests against Kavanaugh’s appointment (you must admit, he was not a widely popular choice) attempted to break into the Supreme Court. But I may have missed something, so I will stipulate for the sake of argument that at least one group did.

            The United States has a population of over 300 million people. It is not hard to find a few hundred of them willing to shout a lot. It isn’t even that hard to find a few hundred of them willing to charge a fence. What is remarkable is when the president of the United States calls in such a mob in an attempt to hold himself in power.

            If Trump had no connection to the 1/6 mob and it had been a purely self-organized phenomenon, I would never have brought it up, and I’m not sure we’d even be having this conversation.

            Such events can happen despite the best wishes of people who respect the rule of law, because firebrands and angry factions are not rare in history.

            But they can also happen because of the deliberate efforts of powerful men to avoid having to be accountable to the rule of law, hoping to rely on their followers outside the government to break centers of opposition within the government.

            The former type of event is not a structural threat to the republic, because it is easily crushed by security forces. Even if individual prominent figures in a republic are killed in such violence (e.g. William McKinley, Shinzo Abe), the republic itself goes on and democratic processes are not subverted.

            The latter type of event is a structural threat to the republic, because it can be used by single powerful figures within the government to their own direct benefit. And used to bypass the normal legal constraints that prevent any one person from securing immunity from the laws and the public.

            > Incidentally, when asked about all this, the White House simply said that “The president believes in peaceful protests.”

            > …angry mobs intimidate judges and attack churches with tacit support from the executive…

            If the protestors actually do intimidate the judges other than just by existing and being visibly upset, or if they actually do attack the churches, they are, ipso facto, not peaceful and not granted tacit support by Biden’s statement.

            If the protestors simply stand there being visibly present and upset, then barring certain commonsense restrictions on time, place, and manner, they are acting within the normal rights of citizens of the republic to assemble for redress of grievances.

          9. Euphemizing what they said only underscores the double standard of justice you are applying.

          10. It is a very different thing to claim that the vote tallies in the election itself are invalid and should be ignored. There’s a difference between claiming what is, in essence, a campaign finance violation or something like it, and claiming that the vote itself is so tampered with that we should throw out the results and just appoint the Right Guy.

            Claiming that the President of the United States is a foreign asset isn’t at all like claiming a campaign finance violation, as well you know.

            I can’t find evidence that any of the many protests against Kavanaugh’s appointment (you must admit, he was not a widely popular choice) attempted to break into the Supreme Court. But I may have missed something, so I will stipulate for the sake of argument that at least one group did.

            On the day of his confirmation, a number of people broke through a police cordon and started banging on the doors to the Supreme Court building. It certainly seemed to me like they were trying to get in.

            The United States has a population of over 300 million people. It is not hard to find a few hundred of them willing to shout a lot. It isn’t even that hard to find a few hundred of them willing to charge a fence. What is remarkable is when the president of the United States calls in such a mob in an attempt to hold himself in power.

            So we’re just going to forget the Democrats’ claims that Kavanaugh was a woman-hating rapist who would transform the country into a Handmaid’s Tale -esque dystopia? Or is riling up angry mobs OK when it’s an entire party doing it rather than a single individual?

            If the protestors simply stand there being visibly present and upset, then barring certain commonsense restrictions on time, place, and manner, they are acting within the normal rights of citizens of the republic to assemble for redress of grievances.

            “Being visibly present” outside someone’s home is an obvious intimidation tactic. Trying to claim otherwise is simply disingenuous.

      2. To be clear, I was referring to the simplified definition that Devereaux provided in his piece on ancient insurrections, “a neutral descriptive term for one-man rule”, not the more comprehensive definition he provides here, which is an “extra-constitutional” position for which “violence [is] used in the seizure and maintenance of power”. We can’t know for sure what kind of government Trump would’ve set up if his supporters had actually succeeded in overturning the election. I suppose it’s possible that he would’ve created a simple system of “one-man rule”, which would make him a “would-be tyrant” by the definition provided in the piece. However, it seems much more likely to me that he would’ve kept the structure of the US government mostly intact, just with himself remaining president (and possibly his political opponents purged from high-ranking positions and replaced by his cronies) which would not be “one-man rule” (and would be closer to how the piece defines “oligarchy”).

        As I said, it is a pretty pedantic point, since Trump can be a serious threat to democracy even if he doesn’t meet the piece’s definition of “would-be tyrant” (but then again, arguably so was Nixon).

        1. We can’t know for sure what kind of government Trump would’ve set up if his supporters had actually succeeded in overturning the election.

          Trump’s supporters (and possibly Trump himself; it’s difficult to know how cynical he was being throughout the whole affair) thought that the election had been stolen and that, by getting Congress to certify Trump as the true winner, they’d be upholding democracy and the US Constitution. So I don’t think one-man rule, or tearing up the constitution, would be on the cards. In the long term, of course, having an angry mob get Congress to certify their favoured candidate as President would set a very dangerous precedent, but in the short term, Trump would probably spend his second term much as he’d spent his first, then step down when his eight years were up.

          1. He had lawsuits prepared to go before the election count had even begun, and was yelling fraud on election night when everyone knew beforehand some ballots would take time to count. That’s clearly an attempt to ignore the actual election. That’s behavior consistent with someone clinging to power as long as possible, not someone planning an honest 8 years.

          2. He had lawsuits prepared to go before the election count had even begun,

            US election security is a complete joke compared to any other first-world country. Lawyering up before an election is a perfectly sensible thing to do, and indeed, Biden did the exact same thing: https://www.reuters.com/article/us-usa-election-biden-idUSKBN24305H

            and was yelling fraud on election night when everyone knew beforehand some ballots would take time to count.

            “Everybody knew beforehand that the election security would be a complete joke” is not a good argument.

            That’s clearly an attempt to ignore the actual election. That’s behavior consistent with someone clinging to power as long as possible, not someone planning an honest 8 years.

            Legal battles over how to count votes are a commonplace of US politics. Trump’s behaviour was simply a continuation of the historical norm.

            Since the original post is about how we can use history to help guide our actions, I’ll suggest that the people in charge of running US elections might benefit from studying the “Caesar’s wife must be above suspicion” anecdote. If a process seems open to corruption, people aren’t going to trust it, even if no corruption actually occurs.

          3. Upon learning that he had lost the popular vote in 2016, Trump claimed that vast numbers of illegal immigrants and other unlawful votes were brought in by Democrats to tip the counts against him. These claims are a matter of public record. Despite having control of the executive branch for four years and every possible incentive to prevent such a thing from happening to him again, the Trump administration failed to turn up actionable evidence of the conspiracy.

            Conspiracy theories about electoral rigging have been a hallmark of the Trump era of American politics, but they are overwhelmingly generated by Republicans and overwhelmingly in the context of elections a Republican has lost, or might lose. You don’t catch (R)s complaining that someone’s stuffing ballot boxes in their favor in, say, Ohio, even though it’s presumably no harder to stuff a ballot box in favor of (R)s in Ohio than to stuff it in favor of (D)s in Georgia or North Carolina.

            It is easy to generate suspicion about election security if one is willing to fabricate unsubstantiated rumors about thousands or millions of fraudulent votes, then sidle away mumbling when called on the lack of evidence for anything remotely resembling the alleged levels of fraud.

            The resulting “suspicion” of possible election fraud cannot be used as justification for pre-emptively denouncing election results as fraudulent as soon as one appears to be losing. Cynically, one might almost think of this as a pre-planned fallback position.

          4. “Conspiracy theories about electoral rigging” would gain less traction if US elections weren’t so easy to rig. If America adopted some basic election security procedures of the sort taken for granted in most European countries — e.g., making people show some form of ID when they vote, or getting the votes collected and counted in good time — people would be less likely to entertain the notion of widespread electoral fraud, because such fraud would be much harder to pull off, and hence prima facie much more unlikely.

          5. @GJ

            > “Conspiracy theories about electoral rigging” would gain less traction if US elections weren’t so easy to rig.

            You know, the proposed forms of election fraud being discussed here involve things that are not only theoretically easy to do, but easy to prove after the fact. For instance, you seem concerned about people exploiting the lack of ID to vote under a false name.

            If this is a regular enough occurrence to cast election results in doubt, it should not be hard to find proof of large numbers of people using this process. If Malcolm Malefactor showed up at a polling place claiming to be Santiago Standupguy, then it would be a matter of public record that Santiago showed up to vote, when in fact he never did. If Santiago showed up later, it would cause great confusion and this confusion, too, would become a matter of public record.

            Where are these public records?

            Likewise, you appear concerned that the failure to report election results in a timely manner lends itself to election fraud.

            Well, the obvious way to do that would be to: (1) start counting mail-in ballots well before Election Day and encourage people to use such ballots, so that they can all be processed and tallied on time and the results announced on the day, and (2) make sure that there are no polling places with gigantic lines, and that all parts of all regions of the territory are copiously supplied with polling places that have short or minimal waiting lines, so that precincts can close and report their Election Day in-person totals in a timely manner.

            Now, if I were cynical, I might suggest that (1) is not actually desired, and (2) is very much not desired within the frame being used here. Because if you look at places like, say, Georgia or Texas, you find short lines at polling places in (R) country and long lines in (D) country, especially in urban areas.

            Greatly increasing the ease of voting in person would make it easier to tally all votes in a timely manner. But it would also make it easier for (D) people to vote if (R) state legislatures weren’t able to say “oh yeah, sure you can vote, but to vote you’ll have to show up in person and stand in line for four hours and it’s illegal to pass out bottled water or anything because that might be considered bribing the voters.”

            Can’t have that, can we?

            So honestly, I think this does very little to undermine my conclusion that concerns about alleged ease of falsifying US election results consistently serve the interests of the faction most likely to promote them. This provides a neat and Occam-compliant explanation for why they are promoted in the first place. You were told our elections are unreliable because it is advantageous to the politicians who hope to retain power through your support if you believe that they are unreliable, regardless of whether or not that is true.

            When given a choice between making election results easier to tally and report on the one hand, and making it harder for (D)s to vote at all on the other hand, (R) leadership consistently chooses the latter.

            When given a choice between spending government funds to make access to voter ID easy so that we can issue it to everyone without smacking into the well-known problems of a poll tax and inequitable access among legally enfranchised citizens, and making it harder for hypothetical, potential (D)s to vote, (R) leadership consistently chooses the latter.

            Election fraud is only a concern retroactively, when it can be used as an excuse to believe that (R) politicians are more popular than they really are, and that (D)s are enemies of the republic who cheat on Election Day.

          6. Well, the obvious way to do that would be to: (1) start counting mail-in ballots well before Election Day and encourage people to use such ballots, so that they can all be processed and tallied on time and the results announced on the day,

            Mail-in ballots are more open to fraud than in-person voting; in fact, some countries (e.g., France) don’t allow mail-in voting at all, for precisely this reason. So if you’re interested in making elections more secure, encouraging people to vote by mail is the complete opposite of what you should be doing.

            And your attempts to blame this on the evil Rs would be more plausible if D-controlled states managed to run their elections to the same level of competence as other first-world countries, but they don’t.

  39. It’s funny he picked this blog as a bad example, as the main reason I like reading it is that it presents history not as fact but with many caveats, looking at different sides in arguments between historians.

    That said, a lot of history as it’s taught in school (which, to add a caveat here, probably depends on where you live and what kind of teacher you have) tends to present history as fact. And as pointed out here, a lot of history research eventually ends up flattened into a data set to get (ab)used by others, or it’s just simplified in a cycle like the one shown here. 🙂

    https://i.pinimg.com/736x/83/95/1b/83951b757f8aac6c00c14d3acac298ff–science-comics-science-humor.jpg

  40. I’m not one to nitpick grammar, but this one sentence: “The epistemic foundation of these kinds of arguments is actually fairly simple: it rests on the notion that because humans remain relatively constant[,] situations in the past that are similar to situations today may thus produce similar outcomes.”

    The comma I inserted makes the sentence substantially easier to parse.

  41. Social scientists don’t seem to realize that psychohistory is science fiction—wait, there’s a largely unrelated academic field coincidentally also named “psychohistory,” so I’m no longer sure this joke works.

  42. By the way, Matt Yglesias was asked about this back and forth in today’s regular Friday “mailbag” feature where he answers questions. His response:

    “I hope Noah will write a reply. For myself, this has mostly served as a reminder of how thin-skinned academics are! All disciplines and professions have their foibles, and I both genuinely think historians are wrong in their attitude toward explicit counterfactuals and also don’t mean that as a nuclear-strength diss of the field.

    I said above that I don’t like the journalistic habit of doing oral quotes from experts, I think the conventional wisdom among economists about inflation targeting is wrong, and I think practitioners of the U.S. politics subfield of political science unduly neglect comparative issues. But I don’t think journalism, economics, or political science are worthless as disciplines, and I don’t think that about history either. Let’s all move on.”
