Fireside this week, but next week we are diving into our long-awaited series on pre-modern textile production, focused particularly on the most important clothing fibers of the Mediterranean world, wool and linen (rather than, say, silk or cotton).
For this week’s musing, I want to expand on an issue that came up in the discussion in the universal warrior series, which was the question of changing lethality in warfare. Just how likely were you to die on the battlefield in different eras, and how did changing offensive, defensive and medical technologies alter that balance? One assumption I see frequently is that since modern medicine has made serious infections survivable, it must follow that basically all wounds were fatal in the pre-modern period. Balanced against this, of course, is the relatively higher lethality of modern weapons, which strike with much greater energies and in consequence do much more damage to the body. So where does the answer lie?
(Just a quick reminder that these musings are just that: me talking through an idea and to some degree thinking out loud with the information I have available. Consequently, you should take any of the conclusions here as preliminary; whatever level of confidence you place in my normal ‘Collections’ ramblings, a ‘musing’ is probably at least half a notch lower on the ‘epistemological certainty’ scale.)
Unsurprisingly, it is complex. The first question we need to ask is what kind of mortality we are interested in. If we include non-combat related disease deaths, those will swamp combat related deaths, but there are problems with this. Estimates for disease-losses in pre-industrial armies (some of these figures compiled in Rosenstein, Rome at War (2004), 130-2) vary tremendously; a late nineteenth-century rule of thumb held that disease casualties would be four times battle deaths; but in the American Civil War and the Boer War they were only twice battle deaths, and in the Imperial Japanese Army during the Russo-Japanese War they were actually only a quarter of battle deaths. Basic sanitation, it turns out, matters a lot here. The next problem is that here our question needs to be about excess deaths, because of course some level of disease is prevalent in the larger society; that is even harder to calculate, but if we fail to adjust for it, all we are doing is measuring the size of armies, since the background disease death rate in the pre-modern world was so elevated. In this case, I think it is best to remove non-combat disease deaths from the equation, since the concern here is the experience of battle, so we should limit ourselves to violent death in war.
Except that brings us to the next question, which is the word ‘battle.’ Are we limiting ourselves to battles or not? If we want to consider warfare as a totality, then the evidence is fairly clear that violent death per capita (so adjusting for population growth) decreases as a function of time (on this, note A. Gat, War in Human Civilization (2006) rather than S. Pinker, Better Angels of Our Nature (2011). Yes, I am aware that the former has spoken highly of the latter’s book, but frankly the more careful and cautious argument of War is to be preferred). This decline isn’t necessarily a smooth curve, but seems to move in steps with lots of variation within those steps. Lifetime violent mortality among early hunter-gatherers and other non-state peoples seems to have been on the order of 15%, with some examples above 25%. These deaths were concentrated among males; a 15% overall rate of violent mortality might mean a 25% rate among males and only a 5% rate among females. For pre-industrial agrarian state-societies, that figure falls to single-digit percentages, often very low single digits, but perhaps sometimes as high as 5-10% (some estimates for the military mortality of the Romans during the worst years of the Second Punic War run up to 20% of the adult male population, so c. 7% of the total population; I think this is a smidge too high and perhaps 5% is more correct). In post-nuclear, post-industrial states, where conventional war is often sharply constrained by deterrence and the industrial revolution means that returns to trade and investment are higher than returns to war (the reverse being true previously), that figure often falls to fractions of a percent, but again with potentially wide variance in the event of rare but extremely destructive wars.
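(For readers who want to see how that 15%-overall figure squares with the 25%-male/5%-female split: it is just a population-weighted average, assuming a roughly even sex ratio. A trivial sketch, with the rates taken from the text:)

```python
# Sanity-check: a 25% male and 5% female violent-mortality rate,
# weighted by a roughly 50/50 sex ratio, yields the 15% overall figure.
male_rate, female_rate = 0.25, 0.05
overall = 0.5 * male_rate + 0.5 * female_rate
print(f"{overall:.0%}")  # → 15%
```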
But – and you knew there was a but coming – that absolutely massive hunter-gatherer lethality level isn’t generally incurred in battles. The bulk of those deaths occur during very one-sided ambushes and raids, directed at individuals who are effectively incapable of fighting back. So we might ask the question a different way: what was the chance of dying in a particular battle at a given point in time? Once again, that’s complicated, in part because reliable casualty statistics for losing armies are often frustratingly rare even well into the modern period. But there are a few observations we can make in terms of how technology might have impacted these changes. First, in both West Africa and North America, we can observe that the introduction of gunpowder so radically increased the lethality of open battles that tactics shifted entirely away from them. Now, this might not be surprising in North America, where the battles in question were of the first-system type – lower lethality missile exchanges when the raid or the ambush had failed. But in West Africa, the battles in question were often state-on-state and concluded with shock infantry engagements (with a notable use of blunt weapons, since part of the goal in battle was for the victor to take many captives; on this see Lee, Waging War, ch. 8), yet even in that context the increased lethality of gunpowder meant that such battles had to be discontinued in a context where the demographics of the underlying society simply couldn’t support massive losses. So the strong suggestion here is that gunpowder weapons represented a significant increase not merely in the lethality of the weapons but of the battles themselves.
We might also compare individual battles or campaigns. Rosenstein (op. cit.) estimates that for the Romans from 200 to 168 BCE (the period of our most sustained data, due to Livy), the Roman battle-death rate was very roughly 8-9% of soldiers engaged. That accords fairly well with the general 5-15% rule of thumb often posited for hoplite combat in Classical Greece (but note the difficulties with those figures). To compare with, say, the Battle of Austerlitz (1805): there the French had c. 75,000 men and the coalition c. 90,000, and the former took c. 9,000 casualties to the latter’s c. 36,000; so out of 165,000 men engaged, 45,000 had become casualties (27%). It’s hard to say exactly how many died, because the loser’s casualties in Napoleonic battles often aren’t broken out clearly between dead, captured and injured (the loser not being in a position to count the field). At least on the French side, the wounded outnumbered the dead roughly 5-to-1. At Waterloo, where it is the coalition who won and thus have the more precise figures, British wounded outnumbered dead about 3-to-1, while Prussian wounded outnumbered dead by 3.5-to-1. At Gettysburg (1863), with 104,256 Union and 75,000 Confederate forces engaged, there were 23,055 Union casualties (3,155 killed, a WIA-to-KIA ratio of 4.6-to-1) and perhaps 23,231 Confederate losses (4,708 killed, a WIA-to-KIA ratio of 2.7-to-1); about 25% of men engaged had become casualties of some sort, and about 4-5% of those engaged had been killed. I’d suggest that this flurry of figures shows casualties moving broadly within the same basic range as a percentage of men deployed. Our ability to get a sense of the impact of wounds is complicated by the fact that pre-gunpowder sources tend not to give solid numbers for the wounded at all.
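(For those who want to check the percentages above, the arithmetic is simple enough to sketch out; this is just a sanity-check on the figures quoted in the text, not fresh research, and the function name is my own:)

```python
def casualty_rate(engaged: int, casualties: int) -> float:
    """Total losses of all kinds (KIA + WIA + MIA) as a fraction of men engaged."""
    return casualties / engaged

# Austerlitz (1805): c. 75,000 French + c. 90,000 coalition engaged;
# c. 9,000 French + c. 36,000 coalition casualties.
austerlitz = casualty_rate(75_000 + 90_000, 9_000 + 36_000)

# Gettysburg (1863): 104,256 Union + 75,000 Confederate engaged;
# 23,055 Union + c. 23,231 Confederate losses of all kinds.
gettysburg = casualty_rate(104_256 + 75_000, 23_055 + 23_231)

print(f"Austerlitz: {austerlitz:.0%}")  # → 27%
print(f"Gettysburg: {gettysburg:.0%}")  # → 26%
```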
Now the question is: how do the industrial revolution and antibiotics change this? Well, for some World War I examples (post-industrialization, pre-widespread antibiotics), at the First Battle of the Marne (1914), roughly 1 million British and French troops squared off against 900,000 Germans; both sides took around a quarter of a million losses (so c. 25%), of which c. 149,400 were killed (7.8%). At the First Battle of the Masurian Lakes (also 1914), 215,000 Germans met 146,000 Russians, with 10,000 German losses and 70,000 Russian WIA or KIA and another 30,000 prisoners, so roughly 30% casualties, but it is unclear what slice of those were deaths. At Kolubara (1914), 400,000 Serbs faced off against 450,000 Austrian troops, with 405,000 total casualties, of which 52,000 were deaths (a WIA-to-KIA ratio of 5-to-1). I’ve tried to pull major battles from 1914 because the long battles of the trench stalemate, with units rotating in and out, make it difficult to generalize figures. Taking a longer view, some roughly eight million Frenchmen served in the First World War, of which 1,150,000 died in combat (not counting disease), suggesting a combat mortality rate over the whole war of a staggering (by modern standards) 14% or so (though note above that many non-state societies lived with WWI-level casualty rates in every generation).
(Update: note that the casualty figures here include KIA, WIA and MIA, so I’ve obtained the WIA-to-KIA ratio by directly comparing those two figures, without the MIAs. That seems to have confused some folks in the comments.)
For post-industrialization, post-antibiotics, options are narrower. American, British and Commonwealth forces had good access to antibiotics in WWII (other armies less so). The Battle of the Bulge involved, at its peak, around 700,000 men, of which 19,200 were killed and about 48,000 wounded (a mere 2.5-to-1 WIA-to-KIA ratio, due in part to the cold weather), a 10% casualty rate (and c. 3% fatality rate); but then that was an Allied victory, and the German casualty rate (out of a peak deployment of c. 450,000) was perhaps as high as 15% (2.3% KIA). That said, modern conflicts can also be very lethal; US, Iraqi and British forces inflicted some 1,200-1,500 KIA on insurgent forces at the Second Battle of Fallujah (2004), an estimated 37.5% of all of the insurgents engaged.
Consequently, conclusions are hard to draw, and hard-and-fast rules (‘lower casualties due to modern medicine’ or ‘higher casualties due to industrial firepower’) don’t always hold for individual battles. The chance of a person dying in war seems fairly clearly to have fallen over time, but less as a result of medicine and more as a result of the declining incidence of war combined with increased economic specialization, meaning that a smaller portion of the populace goes to war (since there is often a more complex economy to run at home). That said, the lower incidence of war can conceal that modern wars are often individually more destructive; military mortality among French people 1820-2020 is almost certainly much lower than the c. 3% (please note that c.; this is a ballpark combining several estimates) the Romans had from 218-168 BCE, but of course for the generation that had the singular bad luck of coming of age in the 1910s, it was comparable and in some cases higher (around 4.5% of the population of France died in WWI).
Notably, WIA-to-KIA ratios mostly fluctuate within a range from 2-to-1 to 6-to-1. The Second Battle of Fallujah’s ratio was 6-to-1 for the Coalition forces. There’s a fairly clear drift towards higher ratios in the gunpowder age (as medicine makes wounds more survivable), but it is important not to overstate this. More to the point, the absence of really reliable wounded statistics (almost never preserved for ancient battles) makes it difficult to project that same ratio earlier. That said, the evidence we have seems to suggest to me that wounds delivered with muscle-powered weapons (which is to say, basically all pre-gunpowder weapons) were often survivable. Scarred veterans form a common literary topos in many pre-gunpowder societies, which points to wounds being survivable, and individuals with physical disabilities from wounds were also by no means unknown. Simon James (“The Point of the Sword” in Waffen in Aktion, eds. A.W. Busch and H.J. Schalles (2010)) suggests that perhaps the ‘serious’ wounded might have been roughly equal to KIA in Roman warfare, which might in turn suggest a WIA-to-KIA ratio of something like 2 or 2.5-to-1 once light wounds are accounted for; I’d argue that the comparative evidence from muscle-powered military systems responding to the increased lethality of muskets suggests that the WIA rates should be a bit higher, but it must be noted that all of this – my suppositions and James’ – is essentially guesswork, as the figures simply don’t exist to give a confident answer to the question (though a very large bone study might offer some interesting hints if done in the right place). Though I will say the notion that functionally any wound was likely to be fatal in a pre-modern-medicine world is often wildly overblown: even most wounds in the 19th century were survivable, as revealed by the WIA-to-KIA ratios above, well before modern antibiotics and infection treatment.
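(As a rough illustration of why WIA-to-KIA ratios bear on survivability: a ratio of r-to-1 implies that, of all the men struck down, r/(r+1) at least survived the battle itself. A minimal sketch, using ratios quoted in the text; the function name is my own:)

```python
def survivable_fraction(wia_to_kia: float) -> float:
    """Given a WIA-to-KIA ratio r, the share of all those hit
    (WIA + KIA) who survived at least the battle itself: r / (r + 1)."""
    return wia_to_kia / (wia_to_kia + 1)

# Ratios quoted above:
for label, r in [("Waterloo (British)", 3.0),
                 ("Gettysburg (Union)", 4.6),
                 ("James's Roman guess", 2.0)]:
    print(f"{label}: {survivable_fraction(r):.0%} of those hit survived the battle")
# → 75%, 82% and 67% respectively
```

Even at the low end of the range, in other words, most men struck in battle were not killed outright, which is the core of the point about pre-modern wounds being survivable more often than the popular imagination allows.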
Again, this is something where my confidence on most of the conclusions is relatively low, save for the observations that overall military mortality decreases in steps at key stages of social and technological development and that it doesn’t seem like ancient casualty figures are wildly higher or lower than modern ones when dealing with conventional armies fighting pitched battles. More granular conclusions seem speculative and limited by the dearth of good evidence for the pre-modern period.
On to recommendations!
First off, this excellent video of a practiced horse archer showing off is pretty amazing. Notice especially how as he rides he is stabilizing himself to allow for a remarkably stable (and thus accurate) firing platform. The same channel also has some older videos discussing the construction of composite hornbows as well.
Next, for those interested in the discipline of ‘Classics’ (that is, the study of Greek and Roman antiquity; the name is now rather a sticking point in the field), the past couple of weeks have brought an intensification of what classicists are calling ‘The Discourse,’ which is to say continuing arguments about what – if anything – the future of Classics should be. One of the limitations of that debate – especially in explaining it to folks who are interested in Greek and Roman antiquity but do not have an intimate knowledge of the structure of the field – is that the whole thing can be very hard to follow, and it can sometimes be hard to pin down what a given ‘side’ in the debate actually wants. A lot of this is because many interlocutors here spend their time responding to the most heated rhetoric of the other side (increasingly in the form of what are effectively op-eds as subtweets, which make little effort to fairly represent the other side) rather than the concrete proposals the other side makes, which makes it tricky for newcomers to get a sense of what the stakes are here.
To that end, I think this blog post by Dr. Rebecca Futo Kennedy and the pseudonymous Maximus Planudes does an excellent job laying out a very strong version of the ‘reform’ camp’s position (sometimes also called the ‘burn it down’ camp), their arguments and their aims. Crucially, Kennedy and Planudes lay out in very practical terms what they would like to see change, which is I think a far more helpful stance than some of the High Rhetoric which decries the field but doesn’t explain, in practical detail, what needs to happen going forward. On the other side (what we might call the ‘traditionalist’ position), I don’t think there is quite any similarly excellent ‘one-stop shop,’ but James Kierstead manages to pull together most of the dominant strains of the ‘traditionalist’ arguments being made so far without getting too overheated. For those looking for a more complete rundown of the entire debate, the Rogue Classicist has actually compiled a fairly complete list of the essays (by classicists and non-classicists) involved. Of particular note on that list, though it is paywalled, is Mary Beard’s “What is ‘Classics’?”, given that Mary Beard is probably the classicist with the highest public profile right now.
I have my own views on this topic, as you might imagine, but they aren’t quite yet well formed enough in my mind to be worth writing out and arguing for just yet, so they will have to wait for a future date. What I will say is that while many of the arguments frame this as some all-or-nothing dispute, Dr. Kennedy and Maximus Planudes are correct about a very important thing, which is that the decisions are going to be made department by department, not for the whole field. Consequently, there are likely to be many different answers for how the field of Classics ought to proceed. Given the continued pressure on the humanities, departments whose changes succeed will survive, those whose adaptations fail will get shut down, and so every department has to consider for itself what direction is best for survival; this will not be one-size-fits-all.
Update [Friday Morning]: A late-arriving reading suggestion, from the Peopling the Past blog (which you should follow if you have any interest in the ancient world): an excellent primer on one of the less discussed major Middle-to-Late Bronze Age kingdoms, the Mitanni, by Mara Horowitz. The Mitanni occupied northern Mesopotamia and much of Syria from the 17th century BCE to around 1350 BCE and were a significant player in the Late Bronze Age ‘concert of powers’ alongside the Hittites, the New Kingdom in Egypt and Babylon (under the Kassites).
For the book recommendation this week, I am going to recommend P. Crone, Pre-Industrial Societies: Anatomy of the Pre-Modern World (1989). I have been asked a few times to recommend a book that offers a broad overview of the mechanics of pre-modern societies and Crone’s slim volume (214 pages in paperback) is probably still the best ‘beginner’s guide’ to many of the features that dominate the structures of agrarian societies before the industrial revolution. While Crone does not delve deep into, for instance, hard-numbers oriented demographic or economic modeling or even the specifics of individual cultures, what she does do is lay out in general the way that agrarian economies and societies tend to be oriented, the ways they tend to view the world, organize their economies and politics and so on. The overall mental model presented of these societies is excellent, albeit a bit more oriented towards Europe and the Mediterranean.
Of course the real value for a work like this is to act as a first stop, not as a last stop. Crone ought to be read as a beginning to looking at pre-modern societies, particularly as a way to break through many of our modern assumptions about how the world has ‘always’ worked. That said, as Crone forthrightly admits, every society is different, and so any actual historical society is going to deviate from her model. Moreover, as you may note from the publication date, the book is now thirty years old, and while most of the observations here are still very much valid, in some very important cases our knowledge of past societies has moved forward in important ways. Consequently, I think Crone is best used as a foundation text, to be followed by the reading of more recent and focused works which can offer more insight into particular societies. But, as that foundation text, I know of no newer or better replacement for Crone’s book.
And that’s it for this week. Next week, textiles!