Fireside Friday, January 10, 2025

Hey folks, Fireside this week! I’m currently working on a post, “On the Gracchi,” taking a somewhat darker look at everyone’s favorite Roman reformers (though hardly the same black takedowns Alexander and Cleopatra got), which will hopefully be ready for next week.

Trustworthy Librarian Ollie helping me work on my book-sorting solution, also known as “vaguely sorted piles on the floor.” I am not a well-organized person, if you haven’t noticed.

Before we dive into this week’s musing, I do want to take a moment to clarify the rules for the comments section, as we had a bit of an episode in the comments of the last post.1 First, note that the way the system is set up, new commenters (or suspicious comments) go into a moderation bin for me to approve before they show up, which is why some comments may be delayed in appearing. That also means I don’t approve (or necessarily read) all of the comments all the time; I couldn’t even if I wanted to, it would take too much time.

Nevertheless, I do have some basic expectations for the comment section that I want to make clear. First, this is not a politics blog; it sometimes comments on contemporary politics simply because this is the outlet I have for those thoughts, but I don’t do so frequently.2 I do not want the comments beneath every post, whatever the topic, to become a battlefield for the latest political fight or culture war; instead, comments should try to stay broadly on topic to the subject of the post.

Second, I expect everyone to show a degree of civility here and to interact the same way you would if this was a classroom discussion environment. That means both assuming good faith and, in fact, operating in good faith. It also means respecting that we have folks here from a wide range of background who have a wide range of perspectives (some of which, yes, code politically). I don’t ask you to agree, but to be civil.

With that out of the way (for now; this comes up every so often), on to this week’s musing, where I wanted to expand just a little bit on something more on-topic that came up in last week’s post on pre-modern currencies and fantasy gold: just how preposterously massive the notional value of standard ‘adventuring equipment’ is in a lot of fantasy settings. As ‘Dan’ put it, “Even a simple analysis shows that a high level party is walking around with the equivalent labour hours to the Great Pyramid of Giza on their backs, if you attempt to work out how many labor hours high level equipment represents in wages.” Which is true and silly, and it works that way for game balance reasons, rather than as an effort to model historical societies (and I do understand the need for games to consider gameplay experience).

But it got me thinking it might be interesting to discuss: how much value might a heavily armored fighter or warrior be carrying around on their backs in the real world? Because I think the answer here is informative.

Here we do have some significant price data, but of course it’s tricky to correlate a given value for arms and armor with something concrete like wages in every period, because of course prices are not stable. But here are some of the data points I’ve encountered:

We don’t have good Roman price data from the Republic or early/high Empire, unfortunately (and indeed, the reason I have been collecting late antique and medieval comparanda is to use it to understand the structure of earlier Roman costs). Hugh Elton3 notes that a law of Valens (r. 364-378) assessed the cost of clothing, equipment and such for a new infantry recruit to be 6 solidi and for a cavalryman, 13 solidi (the extra 7 being for the horse). The solidus was a 4.5g gold coin at the time (roughly equal to the earlier aureus) so that is a substantial expense to kit out an individual soldier. For comparison, the annual rations for soldiers in the same period seem to have been 4-5 solidi, so we might suggest a Roman soldier is wearing something like a year’s worth of living expenses.4

We don’t see a huge change in the Early Middle Ages either. The seventh-century Lex Ripuaria5 quotes the following prices for military equipment: 12 solidi for a coat of mail, 6 solidi for a metal helmet, 7 for a sword with its scabbard, 6 for mail leggings, 2 solidi for a lance and shield for a rider (wood is cheap!); a warhorse was 12 solidi, whereas a whole damn cow was just 3 solidi. On the one hand, the armor for this rider has gotten somewhat more extensive – mail leggings (chausses) were a new thing (the Romans didn’t have them) – but clearly the price of metal equipment here is higher: equipping a mailed infantryman would have come to something like 25ish solidi, compared to 12 for the warhorse (so 2x the cost of the horse), in contrast to the near 1-to-1 armor-to-horse price from Valens. I should note, however, that warhorses, even compared to other goods, show high volatility in the medieval price data.
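For those who like to check the tally, the Lex Ripuaria arithmetic can be run as a quick sketch. All figures are the solidi prices just quoted; the 25ish-solidus infantry kit assumes mail, helmet and sword only (no leggings, no lance and shield), which is one plausible reading of “mailed infantryman”:

```python
# Lex Ripuaria prices (seventh century), all in solidi, as quoted above.
prices = {
    "coat of mail": 12,
    "metal helmet": 6,
    "sword with scabbard": 7,
    "mail leggings": 6,
    "lance and shield": 2,
}
warhorse = 12
cow = 3

# A mailed infantryman: mail + helmet + sword.
infantry_kit = (prices["coat of mail"]
                + prices["metal helmet"]
                + prices["sword with scabbard"])
print(infantry_kit)                 # 25 solidi
print(infantry_kit / warhorse)      # just over 2x the warhorse

# The full rider's kit, horse not included:
full_kit = sum(prices.values())
print(full_kit)                     # 33 solidi
print(full_kit / cow)               # 11 cows' worth of gear
```

Which puts the full rider’s equipment, before you even buy the horse, at eleven cows.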

As we get further on, we get more and more price data. Verbruggen (op. cit. 170-1) also notes prices for the equipment of the heavy infantry militia of Bruges in 1304; the average price of the heavy infantry equipment was a staggering £21, with the priciest item by far being the required body armor (still a coat of mail) coming in between £10 and £15. Now you will recall the continental livre by this point is hardly the Carolingian unit (or the English one), but the £21 here would have represented something around two-thirds of a year’s wages for a skilled artisan.

Almost contemporaneously, in England, we have some data from Yorkshire.6 Villages had to supply a certain number of infantrymen for military service and around 1300, the cost to equip them was 5 shillings per man, as unarmored light infantry. When Edward II (r. 1307-1327) demanded quite minimally armored men (a metal helmet and a textile padded jack or gambeson), the cost jumped four-fold to £1, which ended up causing the experiment in recruiting heavier infantry this way to fail. And I should note, a gambeson and a helmet is hardly very heavy infantry!

For comparison, in the same period an English longbowman out on campaign was paid just 2d per day, so that £1 of kit would have represented 120 days wages. By contrast, the average cost of a good quality longbow in the same period was just 1s, 6d, which the longbowman could earn back in just over a week.7 Once again: wood is cheap, metal is expensive.
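The wage arithmetic here relies on pre-decimal English money (1 pound = 20 shillings, 1 shilling = 12 pence); a quick sketch, using only the figures above:

```python
# Pre-decimal English money: £1 = 20s, 1s = 12d.
PENCE_PER_SHILLING = 12
SHILLINGS_PER_POUND = 20

def to_pence(pounds=0, shillings=0, pence=0):
    """Convert a £/s/d amount into pence."""
    return (pounds * SHILLINGS_PER_POUND + shillings) * PENCE_PER_SHILLING + pence

daily_wage = 2  # a longbowman's campaign pay: 2d per day

kit_cost = to_pence(pounds=1)               # Edward II's minimal armored kit
longbow_cost = to_pence(shillings=1, pence=6)  # a good quality longbow

print(kit_cost // daily_wage)    # 120 days of wages for the kit
print(longbow_cost / daily_wage) # 9.0 days of wages for the bow
```

So the bow really does come back in just over a week, while the (quite minimal!) armored kit eats a third of a year of pay.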

Finally, we have the prices from our ever-handy Medieval Price List and its sources. We see quite a range in this price data, both in that we see truly elite pieces of armor (gilt armor for a prince at £340, a full set of Milanese 15th-century plate at more than £8, etc.), and it’s tricky to use these figures, too, without taking careful note of the year and checking the source citation to figure out which region’s currency we’re using. One other thing comes out clearly here: plate cuirasses are often quite a bit cheaper than the mail armor (or mail voiders) they’re worn over, though hardly cheap. Still, full sets of armor ranging from single to low-double-digit livres and pounds seem standard, and we already know from last week’s exercise that a single livre or pound is likely reflecting a pretty big chunk of money, potentially close to a year’s wage for a regular worker.

So while your heavily armored knight or man-at-arms or Roman legionary was, of course, not walking around with the Great Pyramid’s worth of labor-value on his back, even the ‘standard’ equipment for a heavy infantryman or heavy cavalryman – not counting the horse! – might represent a year or even years of a regular worker’s wages. On the flipside, for societies that could afford it, heavy infantry was worth it: putting heavy, armored infantry in contact with light infantry in pre-gunpowder warfare generally produces horrific one-sided slaughters. But relatively few societies could afford it: the Romans are very unusual among either ancient or medieval European societies in that they deploy large numbers of armored heavy infantry (predominantly in mail in any period, although in the empire we also see scale and the famed lorica segmentata), a topic that forms a pretty substantial part of my upcoming book, Of Arms and Men, which I will miss no opportunity to plug over the next however long it takes to come out.8 Obviously armored heavy cavalry is even harder to get and generally restricted to simply putting a society’s aristocracy on the battlefield, since the Big Men can afford both the horses and the armor.

But the other thing I want to note here is the social gap this sort of difference in value creates. As noted above with the bowman’s wages, it would take a year or even years of wages for a regular light soldier (or civilian laborers of his class) to put together enough money to purchase the sort of equipment required to serve as a soldier of higher status (who also gets higher pay). Of course it isn’t as simple as, “work as a bowman for a year and then buy some armor,” because nearly all of that pay the longbowman is getting is being absorbed by food and living expenses. The result is that the high cost of equipment means that for many of these men, the social gap between them and either an unmounted man-at-arms or the mounted knight is economically unbridgeable.9

To bring it back to the classic adventuring party, your D&D party’s fighter is probably walking around in equipment that represents a level of wealth your party’s rogue can never hope to have (outside of dragon-slaying related windfalls).

Now part of the reason for that is that the adventuring party is, itself, patterned off of a knight’s retinue, the lance fournie, “an equipped lance” (lance here being a unit name, like ‘squad’). A knight might travel to war with himself as a fully-equipped and mounted warrior, plus a coutilier (or coustillier, literally a ‘dagger-man,’ a lesser non-noble combatant), archers, other infantrymen and non-combatant pages. The unit didn’t usually have a fixed size, but we’re talking three to perhaps at most a dozen men, which is to say roughly the range of adventuring parties. And you can see the fighter (the knight), the rogue (his coutilier) and the ranger (those archers) fitting neatly into our embryonic adventuring party, along with non-combatant retainers who might include chaplains (who might represent our spellcasters, both divine and arcane).

Except of course a lance fournie wasn’t an egalitarian formation: the knight was both the boss and commander of the unit and the guy who was paying every other member of the unit. Definitionally, he did so because he could dispose of wealth they could not. But of course setting up your adventuring party where one player is simply the boss of every other player would produce terrible table dynamics in most groups, so no one does it that way. Instead, the assumption is a basically egalitarian relationship between the party members (and a need for that to be mirrored in their economics), which means it’s tricky to have the Fighter, simply by virtue of being a Fighter, rolling around in enough armor to buy and sell the Rogue and his entire family’s household. But that was precisely the economics of the lance fournie.

(As an aside, a more interesting experiment might be to have the knight of the lance fournie be a GMPC, with the players only consisting of the retainers (and thus notionally equals), though I think making this fun would require playing the GMPC-Knight as something of a hopeless buffoon, since he needs to not steal the spotlight from the players, but that too could be fun and funny. I, uh, I have something of a reputation in my RPG group for Well Meaning But Hopeless Buffoon knightly types as both PCs (when I’m not DMing) and NPCs, so I find the archetype fun.)

All of that, I think, can help explain in part why, despite how much war these societies engaged in, they retained such a relatively inflexible and unmoving social hierarchy, with very little social mobility: you might make decent money (for your class) by serving in the army, but the gap between you and the next class (who got paid more) was so vast that even your relatively decent regular-soldier paycheck would never get you there. And of course the jump from infantryman to cavalryman was just as big, as you often had to supply the horse, which meant not only buying the horse, but also stabling him when you weren’t at war (which is to say, you need a farm estate large enough to raise horses on, or to be a close retainer of someone else who has one).

Bonus picture of Percy, majestically aspiring to great things and by great things I presumably mean that counter he shouldn’t be on.

On to recommendations!

First, I want to note that our heroic narrator is back providing narrated audio versions of ACOUP posts. The recent additions are audio versions of the three-post series on “Decline and Fall,” along with “The Roman Dictatorship” and “Why No Roman Industrial Revolution?”

Over on YouTube, Roel Konijnendijk is back with Insider for another video rating the historical realism of battle scenes in films and TV, including Gladiator II, some House of the Dragon, Rings of Power and an appropriately angry review of 300: Rise of an Empire. He notes, among other things – in a line that will surely excite his many fans – that the sea is the “mother of all ditches.” But as always, there are a lot of good observations here about the importance of formations, the role of signalling, the use of archers, what artillery is for in a siege and so on.

The other notable ancient history story making the rounds right now is a PNAS study arguing that atmospheric lead levels during the early Roman Empire may have been sufficient to reduce aggregate average IQs among the Romans by 2.5-3 points. That’s gotten a lot of attention, but I would first point you to a blog response by Neville Morley (via the latest Pasts Imperfect) who notes some substantial flaws with the reasoning connecting this all to the fall of the Roman Empire (which is certainly how the media has opted to read it). Morley points out some of the key problems here, the main thing being that IQ is defined as a testing result over a population with 100 as a mean, so saying a population had its IQ reduced by 2.5-3 is hard to parse, since this would just cause the measure itself to be re-meaned to return the average to 100; the key question here is ‘decline relative to what?’

And that was my issue seeing the study as well. 2.5 to 3 points of average movement sounds really significant, I think, because the general perception is that intelligence, both individually and on average over populations, is static; but it isn’t, and a 3-point movement over two centuries is actually quite slight. IQ measures in the United States and Europe have actually been far more volatile over the last century, resulting in IQ tests needing to be regularly re-meaned as performance improves (the ‘Flynn Effect’) at a rate of nearly 3 points per decade. One of the proposed reasons for at least some of that movement is lead abatement in wealthy countries, so without any basis for comparison, what we may be seeing is a situation where “because of Roman silver smelting releasing lead, the Romans went from being 2010s smart to being merely 1990s smart over the course of 200 years,” which hardly sounds very explanatory. It should be noted the cumulative Flynn Effect impact in developed economies since WWII is often placed at around fifteen points, massively larger than the lead effect the PNAS study proposes for the Romans.

Finally, I should note that, given the small size of the effect found, there’s a lot of reason to suppose disease and parasite pressures are going to be more impactful on ancient populations without modern medicines when it comes to growth and development. More broadly, I think a lot of the popular discussion (I won’t speak for the researchers) here is asking the wrong question: it’s focused on “why did Rome fall” as if empires normally last forever. The Roman Empire united a geographic region (the Mediterranean littoral, construed broadly), which no other empire at any point in history earlier or later has ever united, and then held it that way for four centuries, which is also longer than most empires last in any form. The collapse isn’t the unusual thing about Rome: the rise and long sustained height are the things that demand explanation.

On to this week’s book review!

This week I’m going to recommend E.A. Hemelrijk, Women and Society in the Roman World: A Sourcebook of Inscriptions from the Roman West (2021). Now this is, as the title notes, a sourcebook, which is to say that it primarily consists of a collection of primary source materials, in this case inscriptions from the western half of the Roman Empire, mostly (but not entirely) from the imperial period. Normally, this kind of sourcebook is primarily going to be used in classroom settings (so it’s great that it is in an affordable paperback that students could actually buy!), but I think this volume also has a lot of value in the library of the enthusiast who is interested in Roman society or the lives and values of historical women more broadly. For those without a firm grounding in Roman gender roles and values, each chapter and also the subsections within chapters do have introductions that provide some of that framework and also lay out some key Latin terms (the inscriptions are in translation, but key Latin terms are noted).

The bulk of the text consists of translations of Latin inscriptions – funerary epitaphs (think ‘tombstones’), dedicatory inscriptions, honorary inscriptions, legal documents and so on. All of the inscriptions are presented as translations in clear, approachable English, along with a date and location, as well as a brief but often quite valuable description of the context of the inscription, like if there is accompanying artwork or if the quality of the carving is notably good or notably bad.10 Some of the inscriptions – about 70 of them – also come with black-and-white images of the actual object (mostly in cases where they have artwork) which is a great touch that gives the reader some connection to the physicality of these texts.

Of course the real stars of the show are the inscriptions themselves and this is where I think the great value of a volume like this is found: as opposed to the often quite impersonal nature of the literary sources for Roman history, written often centuries after the events and concerned almost entirely with the doings of senators, generals and emperors, these inscriptions are often intensely personal. Many of them are funerary epitaphs, written by husbands or children about dead wives or mothers (or those wives and mothers, burying husbands, sons and daughters), extolling their virtues or describing their lives. Many are quite touching, for instance, inscription 1.12, a funerary epitaph written by a husband for his deceased wife, “…she lived with me for five years, six months and eighteen days without any foul reproach. He [the husband] had this [the monument] made during his lifetime for himself and for his wife and dedicated it under the axe. You, who read this, go bathe in the baths of Apollo, as I did with my wife. I wish I still could.”

It’s a vision into the fundamental humanity of these people, especially of these women, figures whose humanity we so rarely get to see in our male-dominated aristocratic literary sources.

That said, there’s also a lot of very solid historical content here, about the structure of Roman families and family life, what was valued in Roman women, the occupations, status and positions they might occupy (often with more prominence than you might expect!), their place in public life (again, more than you might think!), their role in religion and their social relations; the inscriptions are thus divided by topic. I think, as the next step up from the sort of introductory sourcebooks (typically the Shelton and Ripat, As the Romans Did, also a really valuable volume for the Roman history enthusiast, scholar or student), this book, focused particularly on women, has tremendous value. Precisely because our literary sources so often leave women out, a book of inscriptions, presented here so accessibly and carefully translated, is quite fantastic and well worth a read.

  1. Now removed, so you needn’t go looking for it; it is not there.
  2. If I had known that The Vital Center would pick up my “1933 and the Definition of Fascism” post, I’d have put it there and not here, to be honest.
  3. Warfare in Roman Europe AD 350-425 (1996), 122
  4. If you are wondering why I’m not comparing to wages, the answer is that by this point, Roman military wages are super irregular, consisting mostly of donatives – special disbursements at the accession of a new emperor or a successful military campaign – rather than regular pay, making it really hard to do a direct comparison.
  5. Here my citation is not from the text directly, but Verbruggen, The Art of War in Western Europe during the Middle Ages (1997), 23.
  6. from Prestwich, Armies and Warfare in the Middle Ages (1996).
  7. These prices via Hardy, Longbow: A Social and Military History (1992).
  8. Expect it no earlier than late this year; as I write this, the core text is actually done (but needs revising), but that’s hardly the end of the publication process.
  9. Man-at-arms is one of those annoyingly plastic terms which is used to mean “a man of non-knightly status, equipped in the sort of kit a knight would have,” which sometimes implies heavy armored non-noble infantry and sometimes implies non-knightly heavy cavalry retainers of knights and other nobles.
  10. The pedant in me might have wished the Latin presented as well, but this is no great flaw: each inscription is given with its number in one or more of the major inscription indices (usually the CIL here, but some in the IG, ICUR, IGUR, CIJ etc. – there is a complete list of abbreviations at the outset of the book, which is handy since these collections can, in some cases, get quite obscure) and I think most readers who want to chase an inscription back to the original Latin are probably also going to know how to look them up in the CIL.

263 thoughts on “Fireside Friday, January 10, 2025”

  1. Was there a market in second-hand armour, or was stuff passed down the generations, or was it all bought new?

    1. Absolutely markets in second-hand weapons and armor, plus inheritance within families. For knightly or otherwise military families in the Middle Ages, weapons are a huge part of wills, often with armor and weapons specified carefully. You can see some of this discussed in K.B. Neuschel, Living by the Sword (2020) in chapters 3 and 4.

  2. I think a tack you could take is the set up at the beginning of A Knight’s Tale: the knight has been struck down, his unhappy followers left in the lurch… unless they continue on pretending everything is fine and nothing has changed and that way they have access to resources and fame they couldn’t aspire to on their own. And that way it can include all the frissons of impersonating a knight or even nobility as sidequests. So the bard is Chaucer, the hypeman for Will. Wat is probably a barbarian, Kate an artificer. Not sure what Roland in that movie would qualify as for a class, but of course the party could be different.

    1. The Sunless Skies RPG (it’s not great or particularly deep, but it has one fun idea, to wit:) has you all as officers on a ship, with the captain played by the GM: the captain can be comatose or insane or tied up in the brig or a collective hallucination or whatever, so long as they cannot give orders.

  3. As a point of trivia, the “Hardy” who authored “Longbow: A Social and Military History” in one of the footnotes was Robert Hardy, the Siegfried Farnon (All Creatures Great and Small)/Cornelius Fudge (Harry Potter) actor. Playing Henry V developed into an interest, and then a genuine expertise, in the English longbow.
    https://en.wikipedia.org/wiki/Robert_Hardy

    1. If you’d like to look at another RPG, Pendragon simulates Arthurian knights much more accurately than D&D. Social class is even a stat. There’s a lot of RPGs that do really interesting things.

  4. Spinning off into a weird tangent today, I wonder if the situation our good host describes, of a well meaning, buffoonish knight whose player retinue are the ones really getting things done is truly a GMPC. Which in turn prompts the question “What characteristic, or characteristics in combination, turns an NPC to a GMPC?”

    The two simplest notions, that it is A) Solely a notion of authority, or B) solely a notion of the NPC being in regular adventuring contact with the PCs, I think can be disposed of pretty quickly and simply. Plenty of games put the PCs in an organization of some sort where they have a boss or a leader; and an adventuring structure of head out, fight the baddies, come back for a report and maybe a reward before being sent on the next mission is a relatively common if more structured than usual DnD setup, and I don’t think characters like Anaximander (bonus points for anyone who gets that reference) count as GMPCs.

    For point B, pretend it’s the other way around; do the hirelings that PCs have along to tend their animals, carry excess baggage, or be fodder for traps count as GMPCs? I can’t ever remember them being considered such, even if those guys adventure with the party on a regular basis.

    So to have a real consideration of our buffoon knight to be a GMPC, you’d need to either suggest that it’s the confluence of him both adventuring with them AND being their boss that makes him a GMPC even though neither alone is enough to make him one, or shift the grounds entirely.

    Looking back on some (bad) campaigns I’ve been in, I think it is that ground-shift that leads to a conclusion, the one I find convincing at any rate. The really bad, obviously intrusive GMPCs I’ve dealt with have all been the spotlight stealers, often an NPC inserted, sometimes permanently, sometimes just for one adventure, who are more powerful or more knowledgeable or more central, and wind up shouldering the bulk of the burden and thus stealing most of the spotlight. Consider for a moment two NPC characters. One of them is basically Sherlock Holmes; brilliant, erratic, can fight a bit but is not optimized for it, but an absolute genius at putting clues together and figuring out what the sneaky bad guys are up to on the tiniest slivers of evidence. The other one is Torin from Master of Magic, a blank, magically created pseudo-person who is not quite completely invincible, but pretty damn close to it, and able to crush entire armies with his bare hands without serious issue. Are either of them GMPCs if introduced to the campaign?

    What if I told you the campaign in question was a primarily investigative one, needing to ferret out which of the scheming nobles in the king’s court are the ones who are responsible for the rash of murders, or are working with the enemies outside the realm? Sherlock Holmes would be really helpful in that situation; so helpful in fact, that he’ll probably squeeze out what the PCs are trying to do and solve the mystery unless something stops him from doing so. Torin, on the other hand, is only really useful in a fight; he might help the PCs get out of trouble if they get into it nosing around, but his likely dramatic role in such a situation is to go punish the scheming nobles once they’re found out and evidence is presented of their guilt. Clearly, Holmes is the GMPC.

    But what if we did a different campaign; an old fashioned kick the door in of the dungeon, fight the monsters, get the treasure, come back to town to level up and do it all over again with very little else going on? There, Torin would probably invalidate all of the characters unless they’re very high level indeed, being unstoppable would do that in an almost exclusively combat campaign. Holmes? Probably not so much; he might help you find a trick to take on a particularly nasty monster in a dungeon, or might even be the narrative glue of figuring out what’s going on overall to send you from dungeon to dungeon as the PCs rampage along in their murderhobo way.

    I think the core of what makes a GMPC is that he or she steals the spotlight. Or at the very least, they’re subverting the ‘game’ part of role-playing game, by doing whatever it is that would normally determine success or failure in that game and thus stealing it from the choices (and luck) of the player characters. If they don’t do that, they’re just an NPC, not a GMPC.

    *Gets off the soapbox*

    1. Holmes is actually physically very strong. Which would be useful in a dungeon crawl. Granted, he isn’t trained in the sort of fighting you usually do in dungeon crawls but that’s a fixable problem.

      1. True, with things like him straightening out the bent poker in the Speckled Band as evidence of it. He also has a mention from another character in the Sign of Four that he’s apparently a very skilled amateur boxer. But PCs are also often more than ordinary folk, at least in the most commonly played systems. And at least the impression I formed from the books is that while Holmes is a skilled fighter and physically fit, he’s in the tough but normal side of things. That stands in marked contrast to his abilities as a detective, where he’s one of a kind. I think even if we took him into a fantasy setting and taught him how to fight for dungeon crawls, he’d likely be decent at it, not campaign invalidating. He certainly can’t fight armies alone the way Torin can if you apply Master of Magic to D&D.

        1. He doesn’t occupy much of a combat role in Fate Grand Order, though he pitches in from time to time, but is decidedly less impactful than a homunculus possessed by Galahad.

    2. My own take on GMPCs is rather looser – a GMPC is any NPC that’s both with the party regularly and at the same level of mechanical and RP detail as a PC – and, crucially, that’s not *necessarily* a bad thing for a game.

      So a hireling probably has the generic commoner statblock and no more personality and backstory than a one-word trait and a funny voice, so doesn’t count. The king who sends the party on an adventure might be a fully statted level 8 fighter with a detailed personality and backstory which the GM has worked out, but doesn’t adventure and is therefore also not a GMPC – but I’d argue that both of your examples later count as GMPCs in both cases, and so long as they aren’t overshadowing the actual PCs, that’s fine.

      An example from my own experience – I ran a Pathfinder game for my parents, and since it was just the two of them, I had a halfling bard who tagged along for support. He was a few levels below the actual PCs and just operated as support – but I’d say he counts as a GMPC, one done in a way that they should be done, to help support the group without taking over the spotlight. That said, it’s easy to overstep, so I’d say it’s definitely something to be careful of, both as a player and a GM.

    3. I believe that the term GMPC, as distinct from NPC, came from an early era where the GM might have a character of their own in the party. This would be a normal character, not a balancing agent for a small party or narrative element, but one who is accruing treasure and experience just like the rest of the group.

      It’s not hard to see the conflict of interest that could arise when one of the players is also running the game. Horror stories abound of GMs who abused this to make their own character the star of the show.

      As role playing games matured, this seems to be less of a thing (people understand the GM as a separate role that doesn’t need a personal avatar) but the term GMPC has persisted somehow, applied in new situations different from its original application. Some people even use it interchangeably with NPC for added confusion. Typically though a GMPC is an NPC for which some of the following are true:

      They are a regular member of the party (often filling a missing party role, such as a healer).

      They have stats like a character, rather than a monster/NPC, with character levels.

      They suck, pulling focus away from the actual players. The GM may favor them over the actual characters.

      I’ve seen some minor arguments turn vicious when both sides have different interpretations of the term. Adding an NPC to a small party might be necessary for game balance, but they don’t need to get special treatment or be leveled up by the GM.

  5. “You are all a retinue of a buffoonish nobleman who keeps coming up with the oddest missions but needs to be worked around instead of with” is a campaign premise that I have actually already encountered. (Not D&D, Fading Suns. The setting’s much better suited to it.) It’s a good premise, too, and a lot more colorful than “you all meet in an inn”.

    Also, looks like I missed the drama. On one hand, it’s good it’s over. On the other… I missed the drama.

    1. I’ve done this kind of set-up, but with the buffoonish boss being a Venetian noble and the PCs being his hired thugs. Worked well. Having the buffoonish boss be a knight seems like a good idea as well. I like the idea of a buffoonish boss because PCs LOVE manipulating people, his buffoonery gives the PCs freedom to maneuver, and his being the boss provides a fallback direction for the party if the players are being indecisive.

  6. I wonder how that ratio of income-to-equipment cost works out for a modern soldier.

    Looking at numbers here:
    https://www.cbo.gov/system/files?file=2021-05/57088-AppendixA.pdf

    So there are 15,910 “Active Military Personnel” for an Infantry Brigade Combat Team, at an annual cost of 2.92 billion USD. That works out to about $184,000 / year. That’s two to three times what I might consider “Skilled Artisan” income for a year? And of course that’s an annual expense, not a one-off to kit them up the first time, but I don’t know how many years the kit lasted in the old days. Like, a war-horse gets old and has to be replaced, but I don’t know what kind of working lifespan war-horses had. Or how long a Roman trooper expected his helmet to last, for that matter. Conversely, I think that $184k/troop/year includes amortizing the humvees and rifles, as well as the cost of gas and bullets.
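    As a quick sanity check of the division above, a sketch using only the CBO figures as quoted in this comment:

```python
# Sanity check of the per-person figure: $2.92 billion per year
# spread across 15,910 total military personnel (CBO numbers as quoted).
total_annual_cost = 2.92e9
total_personnel = 15_910

cost_per_person = total_annual_cost / total_personnel
print(f"${cost_per_person:,.0f} per person per year")  # $183,532, i.e. the ~$184k figure
```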

    Well, US troops are probably the most expensively outfitted soldiers there have ever been, but it’s kind of interesting that the modern system is still ‘only’ about one order of magnitude higher, comparing “skilled worker income” to “heavy infantry equipment costs” at purchasing power parity, despite the enormous differences between the two eras.

    1. Only a fraction of the $184k/yr is going to be kit for individual soldiers. Much (most) of it is going to be training, maintenance, and other ongoing operating costs.

    2. Napkin math, about 10 years out of date, though I’ve tried to keep up with things:
      Rifle (laser and day optic included): At most $5000
      Armor (helmet, plate carrier, plates): At most $4000
      Basic load ammunition: probably <$200
      Explosives, pyrotechnics (share of squad equipment): could be $100 of MG ammo, to thousands for an antitank rocket
      Comms: Not really solid on this, my property book had lots of weapons but not many radios, but at most we are in "thousands"
      1 suit worn clothing (flame resistant cammies, gloves, boots, etc): $600 or so

      All the life support stuff in a ruck, to include the ruck itself (most of the cost being the ruck, rain suit and sleeping bags): $2000

      Total for a basic, day-fighting rifleman: Call it in the ball park of $15,000, since there's only gonna be a few rockets or really capable radios in a dozen man squad. Having already totaled this using the numbers above, I remembered that the field gear issue (ruck and life support equipment, armor minus plates, personal equipment outside of weapons, ammunition, and radios) came to about $6,000 when a set had to be reported lost due to theft or desertion.
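      Summing rough midpoints of the line items above (my own illustrative guesses for the squad-share items, which the comment itself says vary a lot):

```python
# Midpoint guesses for the napkin-math line items above.
# The squad-share figures (explosives, comms) are assumed midpoints.
kit_costs = {
    "rifle with laser and day optic": 5_000,
    "armor (helmet, carrier, plates)": 4_000,
    "basic load of ammunition": 200,
    "squad explosives share (assumed midpoint)": 500,
    "comms share (assumed: 'thousands')": 2_000,
    "worn clothing": 600,
    "ruck and life support": 2_000,
}
total = sum(kit_costs.values())
print(f"${total:,}")  # $14,300 -- consistent with the ~$15,000 ballpark
```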

      The REALLY BIG TICKET items that push a modern US infantryman's fighting kit into the realm of an average US worker salary are night vision and thermal optics. Depending on the specific model issued, a set of NVGs with thermal capability could easily be $10,000-40,000. The monoculars I used were more like $2000.

      1. NVGs went down a lot in price in the last few years. They are still in the thousands, but not in the tens of thousands anymore.

        But I think what you miss is that basically nobody goes to the fight on foot anymore. I think you should add a share of the cost of even the most basic motor transport to the price.

    3. This was the thing I wondered as well. I suppose it depends on how broadly you define ‘equipment’. Are you just including the personal equipment they carry? What about comms equipment? Artillery? Tanks? Do we treat them as cavalrymen due to having humvee ‘steeds’? What about stuff like GPS?

      I suppose if $184k is the upper limit, and accounting for each of the above brings it down, it’s still going to be a maximum of 3x the historic figure (or less if we compare them to cavalrymen).

    4. Larry, I am sorry to say this, but you have made a MASSIVE error here – which unfortunately did not get picked up by any other commenter so far.

      That figure of 15,910 people per Infantry Brigade Combat Team refers to “Total military personnel for these forces, rather than personnel per unit.” What this means is that it counts ALL of the support personnel needed by the brigade together with the soldiers. The actual number of SOLDIERS that brigade consists of on the battlefield is 4,413, according to Wikipedia, meaning that roughly 11,500 of those military personnel are in support. The support personnel outnumbering the fighting forces nearly 3:1 is actually completely in line with how the U.S. military has always operated.

      https://en.wikipedia.org/wiki/Tooth-to-tail_ratio

      Needless to say, your numbers (and Paul K’s “napkin math”) are going to need to be recalculated from scratch in the light of this.

      1. I see your point, but I’m not sure I agree: calculating the kit cost of a Roman soldier is comparatively easy because soldiers are close to being the only type of asset the Roman army accounts for: there is little to no permanent collective equipment, the same goes for training, and the logistics “tail” is mostly an independent, emergent property.

        On the other hand, an infantryman in a modern system army makes little sense as a unit of accounting precisely because they are the very pointy tooth of a gigantic tail, but won’t even know where the battlefield is without said tail.

        So if the Roman soldier’s kit is worth roughly a worker’s yearly income, their American counterpart’s kit is both a quarter of a worker’s yearly income (a quick googling confirms Paul K’s napkin math) if you only account for what they are carrying directly to the battlefield… AND ten times that if you account for everything that carries them there.

        And then we may want to speak about how the industrial age shifts the “violence output bottleneck” from the individual combatant to the whole army and its hardware (and, increasingly, its software)…

        1. Weren’t Roman legions in particular also fairly unique, even historically, for how much manual labour was expected to be done by the legionaries themselves? Arguably culminating with the famous “every legionary carries a stake which is used to make a fence once they make camp for the night – after they finish digging a ditch, that is” principle.

          In contrast, last week’s post on currency has a note citing Thucydides that every Athenian citizen soldier had an enslaved servant. Bret also already had fun years ago pointing out here and on social media that the “300 Spartans” were like 1/20th of a force which was mostly composed of the other Greek states. There were also 900 helots here – and while some probably fought*, many were definitely the tail. It might be possible for me to keep finding examples, but in general, the subject of how “tooth-to-tail” evolved over time likely deserves a post of its own (similar to the scope of the “Warrior’s Mindset” post.)

          * Some historians apparently go as far as to claim a Spartan phalanx ONLY had Spartans in the first line and the other seven lines were ALL helots. This is a reading which stems from one account of Herodotus, but I don’t know how widely accepted it is.

          https://www.jstor.org/stable/4436459

      2. Colorado’s first woman senator liked to point out that Army regulations required one laundress for every three soldiers in the 19th century American west. Presumably, there were other support personnel as well.

    5. I suspect much of that goes to vehicles and artillery, both depreciation and maintenance parts. At that level, they’re integrated. An “infantry” brigade probably has enough vehicles to mount everybody in varying degrees of discomfort.

  7. Actually my favorite game, Dungeon Fantasy RPG, doesn’t fare so badly here. A month’s upkeep is 30 silver pieces, 360 silver pieces for an entire year. A heavy mail suit costs 180 silver pieces, a broadsword costs 25 silver pieces and the rest of the kit has trivial cost in comparison. For reasons of balance, mail is cheaper than plate but 2/3 of actual cost is pretty good. A light warhorse costs 600 silver pieces, or about two years’ wages. Not bad.
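    Expressing the Dungeon Fantasy RPG prices quoted above in months of upkeep (30 sp/month) makes the ratios easy to eyeball:

```python
# DFRPG prices as quoted above, converted to months of upkeep at 30 sp/month.
UPKEEP_SP_PER_MONTH = 30
prices_sp = {"heavy mail suit": 180, "broadsword": 25, "light warhorse": 600}

for item, sp in prices_sp.items():
    months = sp / UPKEEP_SP_PER_MONTH
    print(f"{item}: {months:.1f} months of upkeep")
# heavy mail: 6 months; broadsword: under one month;
# warhorse: 20 months, i.e. a bit under the quoted "two years"
```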

    1. You could square the cost of plate being higher than mail by assuming the cost of plate includes the mail underlayer/voiders as well as the plate. You can’t equip both plate and mail at once, so it makes sense that ‘plate’ actually means ‘the whole plate armour package, including mail and gambeson etc.’.

  8. I think one important thing to note is that iron is durable and something that you don’t need to buy by yourself. The Roman army was basically a finishing school for Roman citizens, and it would be worth me saving up for a hauberk for my son. Or giving him my old one (it’s iron, if I’m not stupid it’ll be useable). Or the village can buy one for the kid–remember, this guy may move up the ranks and become important, a good friend to have!

    Similarly, in the Middle Ages, it would make sense for a group (village, guild, whatever) to chip in for armor. If you know X people are going to have to serve as a man-at-arms for the local lord, it makes sense to get them some armor. It would be expensive, sure, but spread a person-year across twenty households (women could contribute significantly, as gambesons were fabric) it amounts to a few weeks’ worth of labor for each household. And once you buy it, provided you get it back, the upkeep on a hauberk isn’t prohibitive.

    One thing that surprises me is that maille leggings were so cheap. I’ve not made a set yet (it’s on the to-do list), but I didn’t think they needed so much less in the way of links. The labor certainly won’t be merely half of what a hauberk needs! The fitting is fiddlier, too, or at least can be. Not to get too far into it, there are places you REALLY don’t want pinched! (Modern History TV has a video on this if you’re curious.)

    1. > Similarly, in the Middle Ages, it would make sense for a group (village, guild, whatever) to chip in for armor

      The Anglo-Saxons had a system which provided for exactly this – households were aggregated into units which collectively had enough land to equip one armoured soldier, and were expected to do so as a group. It’s a pretty simple way to raise higher-quality troops from the land than you can get if you expect each family to provide one soldier by its means alone.

      1. Interesting that this functioned for the Anglo-Saxons but not for Edward II in the 1300s. I’d suggest that says something about the transfer of wealth away from villages able to equip heavy infantry into large elite cavalryman estates under the Normans.

        If so, that would be an interesting (though not perfect) parallel to the transfer of wealth to the upper classes in the Roman world, and away from the military smallholding farmers.

        Unless, of course, I’m just noticing the issues of the present day as they existed in the past.

        1. Or it was an issue with Edward II’s personal government, as the king couldn’t raise taxes or armies without parliament’s permission, and Edward II was an infamously weak king.
          By the time of Edward III, English knights mostly fought as heavy infantry, using their horses for strategic mobility.

    2. Maybe the maille leggings are so cheap because they’re not a trouser design, just two tubes starting mid-thigh, hanging off straps attached to your belt? If the hauberk is elbow- and thigh-length, the relative pricing looks about right to me.

      1. Doing a bit more looking, it seems a lot of them were even simpler – sheets (tailored to the individual guy’s legs) that tied in the back. You don’t need to protect the back of your legs all that much in Medieval combat, as the threats are generally going to come from the front. They do have foot covers, which complicates things, but nowhere near as much as sleeves (which are a nightmare, which I assume is one reason why the Romans didn’t bother for a long time).

        Given that, yeah, I suppose the price does make sense. Less material, fewer fiddly bits, possibly something your apprentice can work on while you do the fiddly bits on the rest of the armor.

        1. Heh, I suppose that means in a grounded fantasy world (one low-magic, but still fantasy enough to feature various non-human races) there might be a distinction between regular “human combat” mail and considerably more expensive “goblinslayer mail”, where one of the key distinctions would be the all-directional leg protection, since it would be far easier for smaller, more numerous opponents (stereotypically goblins) to slip behind the combatant and target the back of the leg.

          I also wonder if any notable fantasy has considered that smaller humanoids like dwarves, goblins and kobolds would, in general, be FAR more dangerous opponents in a head-to-head fight than normally portrayed. The square-cube law would make them all significantly stronger for their size than normally portrayed, and if they are smart enough to understand armour and organized enough to produce it, they would be able to put on significantly thicker armour than a human (due to possessing a smaller body area to cover) while retaining mobility.

          https://tvtropes.org/pmwiki/pmwiki.php/Main/SquareCubeLaw

          (The same law would also cripple the effectiveness of trolls, ogres, giants, etc. The enormous ground pressure would likely make it impossible for many of them to go anywhere they could encounter mud, and force them to go on all fours or use walking sticks as often as possible – and that’s BEFORE we consider giving them any armour.)

          1. “I also wonder if any notable fantasy has considered that smaller humanoids like dwarves, goblins and kobolds would, in general, be FAR more dangerous opponents in a head-to-head fight than normally portrayed. ”

            Discworld does. Dwarves are noted as being particularly good fighters. Partially this is their endurance, partially it’s the fact that they generally do manually-intensive labor, partially it’s because humans have a tendency to view them as children and are reluctant to attack, and partially it’s because where there’s one, there’s fifty.

            Tolkien also touches on this. He specifically notes that Gimli openly wore a hauberk, because dwarves make light of burdens. That’s like 50 lbs of extra armor he was wearing to march for months on end; not a trivial thing!

            I’d have to think more about fighting dwarves. They’re not that dissimilar in stature to humans (compared to, say, pixies or dragons), so exotic weapons and armor aren’t necessary. But most tactics in the past were, for obvious reasons, built around attacking human-sized opponents. A spear that would hit me in the face would pose no threat to a dwarf, and it would take some training to figure out how to attack dwarves.

            There’s also the metalworking aspect. It’s entirely possible that they’d not merely have more armor but functionally better armor. They’re master metalsmiths in most stories, so it’s entirely reasonable for them to be in some equivalent of mass-produced plate armor instead of maille. That likewise presents real problems, because defeating that sort of armor requires different tactics than defeating maille.

            You’d also want FAR heavier armor on your legs. Your knees would be more vulnerable, and someone who’s gotten both knees broken isn’t going to be doing much combat. Maybe plate armor from the waist down, maille for the torso?

          2. It’s easy to overblow the effect of square-cube between the size of a human and the size of a dwarf. It really wouldn’t be that much of a difference.

            For instance, chimpanzees are about 2/3rds the weight of a human, but are just as strong. However, this isn’t anything square-cube-related, it’s because they have significantly more fast-twitch muscle fibres than we do.

            You’re right that they would need comparatively less armour to provide adequate coverage, so could make it thicker if they wanted.

            I’m with you though that they’d still be way more dangerous than typically portrayed. There’s a great artist on IG going under the name didrikma that does bronze age fantasy, and he’s got a dwarf in a sort of ‘pinecone’ armour, reasoning that dwarves are more likely to be struck from above down on their heads or shoulders, so would armour that area more.

          3. As you may recall, Merry stabs the Lord of the Nazgul in the knee from behind, crippling him so Eowyn can finish him off. He should have armored the backs of his legs!

          4. A sequel novel to Willow has several of his enemies surprised to find Willow is every bit as strong as normal sized human warriors. And that they ought to have known, if they had noticed that Nelwyns have shorter limbs but not smaller diameter.

    3. This was the Carolingian system – households were grouped, with the richest providing a well-equipped soldier, and the poorest five providing the equipment for one basic light infantryman. The Anglo-Saxons probably borrowed from this practice.

  9. > But of course setting up your adventuring party where one player is simply the boss of every other player would produce terrible table dynamics in most groups, so no one does it that way.

    That’s not strictly true. In OD&D it was common practice for there to be a “party caller,” meaning that the players would discuss among themselves what to do from turn to turn, but one of them, designated in advance, then announces the whole group’s intended actions to the DM in order to kick off the process of the results of those actions being mechanically resolved.

    1. The “party caller” is speaking for the group but not their social superior. The other players can always say “Hey we don’t agree with that” and the game pauses while the players have another discussion and maybe choose a new caller.

      In the medieval setting the knight might decide to consult the combatants at the rank below, the men-at-arms or archers, but at best this is going to be “this is what I want to do, do you see any problems?”. More often it would be like the stereotypical officer-to-NCO exchange in modern armies: “I want this to happen, you figure out how.” It is very unlikely that the knight will consult the chaplain or musician on combat tactics, or much else.

      The players would have to obey a GMPC knight or Else. Which I’m sure creates interesting roleplaying opportunities as the characters try to desperately avoid dying for silly reasons, but it’s a very different type of game.

      1. “characters try to desperately avoid dying for silly reasons”

        Sounds like a great premise for one of the Warhammer RPGs…

  10. > And that was my issue seeing the study as well. 2.5 to 3 points of average movement sounds really significant, I think to folks because the general perception is that intelligence both individually and on average over populations is static, but it isn’t and a 3-point movement over two centuries is actually quite slight. IQ measures in the United States and Europe have actually been far more volatile over the last century, resulting in IQ tests needing to be regularly re-meaned as performance improves (the ‘Flynn Effect’) at a rate of nearly 3 points per decade. One of the proposed reasons for at least some of that movement is lead abatement in wealthy countries, so without any basis for comparison, what we may be seeing is a situation where “because of Roman silver smelting releasing lead, the Romans went from being 2010s smart to being merely 1990s smart over the course of 200 years,” which hardly sounds very explanatory. It should be noted the cumulative Flynn Effect impact in developed economies since WWII is often placed at around fifteen points, massively larger than the lead effect the PNAS study proposes for the Romans.

    This is a VERY interesting and worthwhile point. At the same time, I think it should also be noted that multiple studies have presented evidence for a REVERSAL of the Flynn effect over recent decades. That evidence is hardly consistent, and as the Wikipedia link notes, the meta-analyses on the subject do not find such a reversal. HOWEVER, the most recent of those appear to be from 2014-2015 – i.e. about a decade old, and necessarily operating with even older data.

    Looking at their citations, it SEEMS like the most recent studies continue to offer evidence for the reversal of the Flynn effect, even if there hasn’t yet been enough for a newer meta-analysis. E.g., here’s a Danish study from 2021.

    https://pubmed.ncbi.nlm.nih.gov/34882746/

    > The present study investigated the Danish secular trend of intelligence test scores among young men born between 1940 and 2000, as well as the possible associations of birth cohort changes in family size, nutrition, education, and intelligence test score variability with the increasing secular trend. The study population included all men born from 1940 to 2000 who appeared before a draft board before 2020 (N = 1,556,770). At the mandatory draft board examination, the approximately 19-year-old men underwent a medical examination and an intelligence test. In the statistical analyses, the IQ mean and standard deviation (SD) were estimated separately for each of the included annual birth cohorts based on information from birth cohorts with available total intelligence test scores for all tested individuals (i.e. 1940-1958 and 1987-2000; the mean and SD were interpolated for the intermediate birth cohorts). Moreover, the possible associations with birth cohort changes in family size, height as a proxy for nutritional status, education, and IQ variability were investigated among those birth cohorts for whom a secular increase in intelligence test scores was found. The results showed that the estimated mean IQ score increased from a baseline set to 100 (SD: 15) among individuals born in 1940 to 108.9 (SD: 12.2) among individuals born in 1980, since when it has decreased. Focusing on the birth cohorts of 1940-1980, for whom a secular increase in intelligence test scores was found, birth cohort changes in family size, height, and education explained large proportions of the birth cohort variance in mean intelligence test scores, suggesting that these factors may be important contributors to the observed Flynn effect in Denmark.

    If (or when) there is sufficient corroborating evidence from studies conducted in other countries, we would have to conclude that “2010s smart” would actually be BELOW “1990s smart” and more like “1960s smart”.

    1. Except that the real moral of the Flynn effect is that we’re still not sure what the heck IQ tests are measuring. They’re definitely measuring something that correlates pretty neatly to a lot of other things, like parental wealth, parental education, years of education, and a stable childhood home environment – but if somebody who tests at 85 today is minimally functional, and 100 in the 1960s equals 85 today, then it all makes no sense.

      1. Well, sure, but this point went unaddressed in the blog post above (and, as best as I can tell, in both the paper cited and the other blog post offered as a rebuttal), and I’m by necessity reacting to what is already on the page. If you or anyone else can find a cohesive way to think of that Roman atmospheric lead study which also incorporates this perspective, be my guest!

      2. IQ scores also correlate with outputs, not just inputs, such as educational attainment, earnings, incarceration rate (a negative correlation), etc. Of course, those things correlate with each other and with the inputs C Baker mentions, but IQ retains independent explanatory force with respect to various outputs. The fact that no one can specify clearly what essential or latent quality IQ is measuring is kind of irrelevant. You might as well ask what electric charge is measuring.

        1. Except we know what electric charge is measuring? That IQ is measuring something else is a nontrivial problem. Particularly given that the most complete answer seems to be that it’s measuring test-taking ability in a semi-generic way, and that this *isn’t* intelligence or any inherent characteristic but rather the conflux of many factors that cannot be mathematically separated, meaning it *doesn’t* maintain independent explanatory force; you can just bias the results in various known or unknown ways to produce an apparent explanation.

          1. IQ tests for the ability to solve IQ test problems. There is a neat example in Stephen Jay Gould’s book “The Mismeasure of Man”, in which a particular study (which I think was on US army recruits, though I haven’t checked) found a close correspondence between IQ results and results for tests of reading comprehension. That kind of effect might explain a fall in recent averages. Say, for a possibility, that modern youth are getting better at modern skills, like internet use and understanding gaming rules, but losing a bit of reading comprehension. If that is happening, but the tests haven’t kept up with the cultural change, then there will be a fall in the results which has nothing to do with a change in ‘intelligence’.

          2. I mostly agree and suspect there is also a more nebulous and purely negative motivational aspect too.

            I disagree insofar as reading comprehension is actually not being lost, in that people are still learning how to read and understand information. In particular, they are learning all sorts of (often misapplied) lessons about how to engage in low-trust media environments because of the internet, but they *do* still learn how to read and process information.

            Where I see a problem is that there’s a certain abstractness to test problems that a kid needs to be trained to engage with, and I think it can be taken as a given that disengagement (distraction, disinterest) is going to reduce results. Our media might very well be training kids not to bother engaging with these abstract problems. While there’s a lot of bias in how people analyze modern media and its influence on kids (The TV is rotting their brains!) there isn’t *nothing* there.

            Longform narrative engagement – reading a book, watching an intellectually stimulating performance – in particular is an experience that trains you to accept sometimes esoteric abstractions for a future payoff, and our society has absolutely shifted to emphasize short, simplistic media over longform content that requires gratification delay. Videos are literally becoming shorter and simpler, articles have been shortening for decades, and even the fact that you can simply put down a book if you grow frustrated and instantly find something else to read online influences development.

            All to say that I suspect we are doing our kids a collective disservice with how we present media content to them as they grow up and that could be reducing their test results because it represents the erosion of foundational skills relating to gratification delay and abstract thinking. So the TV is rotting their brains, but for specific reasons!

            And notably, it’s a hypothesis that can be tested: if kids who engage with shortform video and media content are perfectly capable of gratification delay, and test scores are unrelated to the media that subjects engage with, I’m wrong.

          3. What is electric charge measuring? Or more precisely, when we measure electric charge, what are we measuring?

          4. The net balance of electrons (or protons, but we use electrons as convention) in a given object, area, or system.

            We measure it by looking at the interaction of a current on an object. There are any number of methods.

            If your point is that we don’t directly measure charge, sure, but we don’t directly measure any fundamental property; we measure its interaction with matter over time.

            (I think that’s always true, but maybe some measurements are timeless, that’s for the experimental physicists)

            However, we still know precisely what we’re measuring when we see the interactions that correlate with a test for charge, whereas we don’t know what we are measuring when we see the interactions – the test answers, or test-taking capacity – that we record or can directly infer when administering an IQ test.

            The assertion that we can tease out a given property by using mathematical functions is an unsupported and unsupportable one, and every attempt to do so ultimately reveals a bias in the method of analysis. Of course that doesn’t mean it isn’t useful, but it means every single prediction that you make from an IQ test needs to be independently verified with a separate test to even begin to pass scientific muster, because the relationship between the test and your variable is just that tenuous. If it can give you an idea for further experiments, that’s fine, but it’s a hypothesis builder, nothing more.

          5. @Dan Where does the charged boson pair (W+, W-) fit into the net balance of electrons, protons, etc.? Note that those don’t fit into the conventional understanding of matter, unlike, say, muons – they’re particles that facilitate interaction, so they’re very “short-lived”. We can measure balance, but we also assign individual charge to them.

            Where does quark charge fit in? That’s fractional charge by thirds, even though a quark is an elementary particle just like an electron. If we were to use some “number of electrons equivalent” unit, can we even speak of the charge of 1/3 of an electron? We’re unable to measure individual quark charge (the strong force is named that way for a reason), only hadrons and mesons, and yet the standard model assigns charge to them as individual particles.

            You’re using the classical definition of charge as a weighting factor in EM interaction, but subatomic particles actually cause quite a lot of trouble there. It turns out that the deepest definition of charge we have now is an eigenvalue of an abstract wavefunction, which is purely a mathematical construct that gives correct predictions of experiment results. So, from an ontological point of view, *what* *is* it?

          6. So in the simplest sense it’s still electron flux, because while those fundamental fields do cause A. electrons to stay bound to atoms instead of flying off to the tune of Free Bird and turning reality into plasma, and B. all sorts of fun quasiparticles and exotic forms of matter with fractional charge, they almost never interact with traditional matter, and whether they can even exist independently is an open question. In that sense what we *measure* is still electrons, or protons, or more rarely muons or other exotics.

            On a second level, the electrostatic force, weak force, and strong force are all in fact related in a fundamental way that was established before they split in the inflationary era of the universe, and their manifestation as separate forces is really a manifestation of our coordinates in space-time. It’s no accident that the strong force is such that pulling apart hadrons into quarks makes more quarks that form clusters of integer charge. It might be *impossible* for a fractional charge to exist in isolation, so it’s even less relevant.

            In fact, under that understanding, all forces are the same force, which has this weird and poorly understood field manifestation on four scales: a super strong one on the scale of atoms and atomic interiors that decays rapidly, a weaker force that acts on the scale of fractions of atoms and decays even faster, an intermediate-strength one that acts on the scale of protons to solar systems, and a super weak one that acts on the scale of planets to universes. We think this is all one field in multiple parts, although our understanding of gravity is super sketchy.

            And on the last level, quarks are an explanation of A. why you can’t pull apart hadrons without making new hadrons, B. the internal structure of hadrons observed by scattering particles off their interiors, often during hadron formation, and C. patterns in hadron composition.

            However, even in the aforementioned inelastic scattering experiments we don’t *measure* electric charge as a fraction; we infer that a fractional charge topography must exist from how an integer charge interacts with it. We know it exists but don’t measure it.

            And charge is not defined as an eigenvalue; eigenvalues are a useful mathematical tool to model the geometry of the fields that correspond to the various fundamental particles. The actual understanding we have is that all forces are manifestations of field states with complicated “geometries”, which we can model through eigenvalue equations. The equation itself isn’t the field; it’s a way of determining the geometry of the field under certain conditions. The confusion comes from quantum esotericism filtering into pop culture; the people saying that eigenstates *are* the field are basically saying that the map *is* the territory, which is just a mistake.

            That leaves us with the unsatisfactory answer of: technically we only measure electrons (or integer-charged hadrons), but it appears that on a fundamental level we are measuring the foundational field state’s manifestation that interacts on the intermediate scale, with intermediate force, between its manifestations on the largest scales (gravity) and smallest (nuclear). It’s an expression of a field.

            As for what *that* is or means… well, once we are asking why reality seems to be formed of nested fields (or anything else, for that matter), either find religion or accept ontological mysteries.

        2. Given the sheer degree of contrivance that goes into producing actual IQ scores (from the fact that the questions themselves change across time and place as designers keep ‘updating’ them to make them ‘culture independent’, whatever that means, to the artificial force-fitting of the score distribution into a Gaussian, retaining only the z-scores and discarding the actual figures), it is surprising to treat IQ as a primitive term on the same footing as electric charge, as though it were one of the fundamental measuring blocks of the universe. Even ultra-basic notions like temperature have an actual physical grounding (and a formal statistical definition: beta, at least). This seems like a kind of cop-out to evade the need for construct validity, and by extension the definition of the ‘intelligence’ part of ‘intelligence quotient’, a long-standing complaint of IQ critics.

          (Note that it’s still possible to be such a critic and believe that IQ tests are useful and measuring something meaningful)
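          For what it’s worth, the force-fitting step described above is mechanically simple. Here is a toy sketch (my own illustration, not any test publisher’s actual procedure) of rank-based norming onto a Gaussian with mean 100 and SD 15:

```python
from statistics import NormalDist

def normed_iq(raw_scores):
    """Map raw test scores onto IQ points by rank: each raw score is
    replaced by the normal quantile of its mid-rank percentile, scaled
    to mean 100 / SD 15. The raw figures themselves are discarded."""
    n = len(raw_scores)
    order = sorted(range(n), key=lambda i: raw_scores[i])
    iq = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n           # mid-rank percentile in (0, 1)
        iq[i] = 100 + 15 * NormalDist().inv_cdf(p)
    return iq
```

          Note that any raw score distribution whatsoever comes out looking Gaussian, which is the critic’s point: the bell curve is imposed by construction, not discovered.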

          1. IIRC, Flynn himself thought that increasing scores reflected wider engagement with the kind of abstract reasoning the tests measure – a kind of reasoning fundamental to science and some other endeavours, but not reflective of ‘intelligence’ per se.

        3. and our society has absolutely shifted to emphasize short, simplistic media more than longform content that requires gratification delay. Videos are literally becoming shorter and simpler,
          — Dan

          Actually, I’ve been noticing a quite unexpected reversal of that recently! In the gaming regions of YouTube there had been a decline from longer, forum-rooted Let’s Play formats toward shorter, more heavily edited footage – but at least in the sectors I have contact with, that trend has been contested by a surge in popularity of one-, two-, even three-hour-long videos with an intermediate amount of editing and a somewhat delayed payoff.

          Even videos originally published as a series have a tendency to be edited together into one substantially longer piece, under titles such as “X, the movie”.

          Nor can these generally be understood as simplistic; often condensing anywhere from dozens to hundreds of hours of footage with much attendant explanation, they tend to be quite information-dense.

        4. “You might as well ask what electric charge is measuring.” Counts of electrons, rates of electron movement.

      3. > Even ultra-basic notions like temperature have an actual physical grounding (and a formal statistical definition – beta, at least).

        Jensen himself compared IQ tests to thermometers to illustrate both the possibilities and the difficulties involved in measuring g. Before these instruments became reliable, it was said (according to him) that such a complex phenomenon as temperature could never be measured satisfactorily: “Temperature was then confounded with all the subtleties of subjective judgment, which easily seem incompatible with a single numerical scale of measurement. How could the height of a column of mercury in a glass tube possibly reflect the rich varieties of temperature–damp cold, dank cold, frosty cold, crisp cold, humid heat, searing heat, scalding heat, dry heat, feverish heat, prickly heat, and so on?”[1]. But his reasoning was not based on spurious analogies alone: he also pointed out that intelligence cannot be rank-ordered as easily as temperature, as it is a rather more complex phenomenon in which several dimensions must be involved in measuring it.

        [1] https://steinea.ca/2017/05/15/arthur-jensen-measuring-temperature/

        1. Interestingly enough, the people who complained that temperature couldn’t be measured with one number were also kind of correct. There are actually four major variables in temperature-as-felt, which we still measure independently: dry bulb [a traditional thermometer], wet bulb [accounting for humidity], black bulb [apparent temperature in sunlight], and wind chill [accounting for wind speed].

          Hence felt temperature is really a grid of environmental states, because all those factors are significant in determining both what it feels like and how to plan for it. You need different clothing and tactics to insulate or conduct heat in different state combinations.
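          For concreteness, the wind-chill entry in that list has a standard formula. A minimal sketch using the current (2001) NWS wind chill index, which takes dry-bulb temperature in °F and wind speed in mph:

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS (2001) wind chill index. Defined for temperatures at or
    below 50 deg F and wind speeds of at least 3 mph; outside that
    range this sketch just returns the dry-bulb temperature."""
    if temp_f > 50 or wind_mph < 3:
        return temp_f
    v16 = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v16 + 0.4275 * temp_f * v16
```

          So 0 °F with a 15 mph wind feels like roughly −19 °F: same dry-bulb reading, very different state of the environmental grid.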

          The main difference, however, is that the original measurement really was a direct component of the complete picture; dry bulbs are measuring a real phenomenon. There’s nothing to guarantee that IQ is even a piece of the final puzzle; it could simply be a false start. What came before need not be what comes after.

          1. Physics trivia: From my course on thermodynamics, I know that a negative Kelvin temperature (i.e. below absolute zero) technically describes a state of matter that is hotter than matter at an infinite temperature.

      4. I think that IQ has a lot of value when looking at one society at a single point in time. For the reasons you mention, though, I think it’s a lot more problematic to extend it backwards into history. And for similar reasons, I don’t really trust the results of cross-cultural comparisons. Jensen and his allies argue with a straight face that many countries have IQs in the 60s or below, and that’s implausible on its face: we know what people with an IQ of 65 are like (in the US context), and there’s no way that a society made up of people with, on average, an IQ like that could function at the most basic level (let alone have roads, universities, factories, hospitals, etc.). I think the more plausible alternative is that there’s a lot about intelligence that’s still murky, and we don’t really have as good, or as culturally un-loaded, a way of quantifying intelligence as we used to think.

        1. > we know what people with an IQ of 65 are like (in the US context)

          I suspect you are referring to Lynn’s notorious World IQ map. But low scores (like 80) are often misunderstood: such people are perfectly capable of managing their lives. They are no troglodytes, and (it must be emphasized) very much unlike the disabled individuals we associate with low IQ scores. Consider autists, who are usually of normal intelligence. Few are employed or have a girlfriend, but an intelligence test tells you little about such difficulties. Being dumb is not so different from being short: unless the cause is mutational load, you will probably be fine.

          1. I am thinking of that notorious Lynn “world IQ map”, yes. And I’d say that an IQ of 65 is very different from one of 80: 65 means you are really developmentally disabled, and it’s not credible to me that a country could have an average score *that* low.

            I’m sure IQ is measuring something real, and meaningful, but I don’t think it’s as good a proxy for intelligence, or as clear in terms of what it actually is measuring- or as meaningful across time periods and across cultures- as some believe.

          2. “Low scores are often misunderstood” is a very benign gloss over a very ugly history of IQ, which is that it *was* used as a diagnostic tool (“idiot” meant an IQ below 25, “imbecile” an IQ between 25 and 50, and “moron” an IQ between 50 and 70) and that these diagnoses were used to justify eugenics practices, including forced sterilization. See for example Buck v. Bell (1927). And the tradition continues with books like “The Bell Curve”.
            In that context, a map showing that an entire country has a low IQ score is not just irresponsible but downright malicious, because it feeds into a rhetoric of “these people are not worthy of empathy”.
            The IQ score has so many issues, so many caveats, and such an ugly history that, frankly, it should not be used or defended.

          3. “The IQ score has so many issues, so many caveats, and such an ugly history that, frankly, it should not be used or defended.” What should happen, then, to the labor theory of value, or to the claim that class struggle is the engine of human history? There seem to be a lot of academics teaching and utilizing those theories.

          4. It really is an illustration that we do not know what IQ measures, that the simple answer to “we know what an IQ of 65 is like” is “yes, they score a 65.” And that’s all we can really say. Probably their score is correlated with similar performance on other abstract reasoning metrics, but what implications any of that has for living a practical life is so context-dependent as to be meaningless.

            BTW–IQ is just a score. It has been used in racist, exploitative ways, but that doesn’t mean it isn’t real, and hasn’t/can’t be used in non-racist ways as well.

          5. I suspect you are referring to Lynn’s notorious World IQ map.

            Speaking of Richard Lynn: I had once encountered the claim that his data were fraudulent. For example, for most of Sub-Saharan Africa, he assigned national average IQs based on data coming from studies of rural, illiterate children.
            However, as he was a racist, such things should not be a great surprise.

            https://www.reddit.com/r/badscience/comments/14hxliw/richard_lynns_work_on_race_and_iq_is_more/

            Nonetheless, I suspect that if one used an IQ test designed to be as free as possible from linguistic and cultural bias, for example by being based on abstract visual logic, there would still be a sizable gap between developed and developing countries. This is because the ability to do such ‘IQ test logic’ is greatly dependent upon education, and developing countries, as a general rule, have terrible education systems.
            For example, I recall having read in ‘Poor Economics’, by A. Banerjee and E. Duflo, that: ‘Moreover, the evidence from India suggests that even when teachers are in school and are supposed to be in class, they are often found drinking tea, reading the newspaper or talking to a colleague. Overall, 50 percent of teachers in Indian public schools are not in front of a class at a time they should be. How are the children supposed to learn?’ And that was not the only problem with India’s public mass education system. Comparable things were going on in the schools of other developing countries.

          6. Groups that perform poorly on IQ tests generally do worse if the test was designed to be culturally neutral compared to one biased toward another culture. Go figure.

        2. As a lot of people have said, IQ measures your abstract reasoning capability, and while individuals do have differing maximum capacities, someone who’s received no formal education is, on average, not going to be as capable of abstract reasoning as someone who has received at least some education, while still being capable of managing practical tasks.

          To put it another way, I suspect that if you managed to go back to, say, the 13th century, and administer IQ tests around the world, you’d probably find that the average was A. fairly uniform worldwide, and B. on the low end, due to most of the population having little to no training in the sort of abstract reasoning measured by IQ tests. It is not a coincidence that the world IQ map correlates extremely strongly to the average level of formal educational attainment.

      5. I think we also should look at this question the other way around. By now, it is indisputable that exposure to lead causes neurological damage on an individual level. On a social level, too, the so-called lead-crime hypothesis is quite well-known and as of 2022, it has been essentially confirmed by a meta-analysis.

        https://www.sciencedirect.com/science/article/pii/S0166046222000667

        > Does lead pollution increase crime? We perform the first meta-analysis of the effect of lead on crime, pooling 542 estimates from 24 studies. The effect of lead is overstated in the literature due to publication bias. Our main estimates of the mean effect sizes are a partial correlation of 0.16, and an elasticity of 0.09. Our estimates suggest the abatement of lead pollution may be responsible for 7–28% of the fall in homicide in the US. Given the historically higher urban lead levels, reduced lead pollution accounted for 6–20% of the convergence in US urban and rural crime rates. Lead increases crime, but does not explain the majority of the fall in crime observed in some countries in the 20th century. Additional explanations are needed.

        Consequently, we can reiterate the following points:

        * Lead causes neurological damage

        * Neurological damage from lead is associated with more violent crime – and with lower IQ

        * Thus, lower IQ is clearly not, with all else held equal (a CRITICAL caveat) a good thing – else a reduction in it would not be associated with the same kind of neurological damage which leads to greater violent crime

        Now, one CAN argue that a reduction in IQ which is directly associated with neurological damage from lead is more meaningful than a Flynn effect improvement (or a Flynn reversal, as the case may be) seen in the recent decades, because those MIGHT be down to differences in how abstract reasoning is developed and/or valued by a given culture/generation – as capably argued below. HOWEVER, that line of argument would ALSO directly undermine the argument advanced by Bret in the blog itself – that Roman atmospheric lead did not have a meaningful effect since it was comparable to a mere decade of Flynn effect.

        1. Now someone do a study of local atmospheric lead levels from silver mining plotted against the rise of the bronze age warrior elite…

    2. I don’t have access to the full paper and thus cannot review their methodology, but given the reliance of these sorts of studies on student or conscript samples, I kind of wonder how much the results are influenced by the fact that the sort of person who got conscripted into armies was quite different at the height of the Cold War compared to now.

    3. There’s also the theory going around that the removal of lead as a gasoline additive led to falling crime rates starting in the 1990s, about 20 years after tetraethyl lead was phased out. The idea was that lead caused some kind of neural damage and made young people more ornery or less moral or whatever it took to get them to turn to crime. I’m not sure how this fits with theories about lead poisoning leading to problems in ancient Rome.

      1. Probably not much. The effect in the ’70s is almost certainly related to the intellectual skills being sought by employers at that time (and since). Consequently, anyone lacking those skills would be at an employment disadvantage, and turning to crime is a very common response to that. The question is to what extent Roman employers were looking for the same abstract reasoning skills, and how much of an employment disadvantage someone without those skills would have faced at the time.

  11. It almost seems like there’s a historical rhythm to the degree to which society can push the cost of arming soldiers. Unfortunately I can’t find a more recent version, but the AP had a 2007 story that put the cost of equipping an American soldier at about $17,500, at a time when median personal income was $26,630 – given that the average private is going to be on the lower end of the income distribution, it costs a pretty similar amount of their income to be equipped as our Bruggian militia. By the present, once you start adding more complicated radios, night vision, and optics on every rifle, it wouldn’t surprise me if we were back to a year’s wages being necessary to outfit a quality soldier.
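    Running the two figures quoted above against each other (both numbers are from the comment, not independently checked):

```python
# Figures quoted above: the 2007 AP estimate for equipping a US
# soldier, and 2007 US median personal income.
equipment_cost = 17_500
median_income = 26_630

share = equipment_cost / median_income
print(f"equipment ~ {share:.0%} of a year's median income")
```

    That works out to roughly two-thirds of a year’s median income, which is the basis of the comparison to the militiaman’s kit.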

    1. “Unfortunately I can’t find a more recent version, but the AP had a 2007 story that put the cost of equipping an American soldier at about $17,500, at a time when median personal income was $26,630”

      Does the story mention the cost of the training? Even the poor bloody infantry receives considerable training nowadays before being assigned to a combat unit.

      Interesting, just looked up a pay chart from 1980, and my base pay during my training pipeline was about 80% of what it costs to outfit an infantryman today. And I hate to think of the man hours and the cost of the gear I trained on…

      “given that the average private is going to be on the lower end of the income distribution”

      At least based on my experience in the Navy back in the ’80s, I wouldn’t take that as a given. While I never encountered anyone at or above (I’m guessing) the top 10–15%, we had enlisted from dang near every other point on the income distribution curve. And most of them were solidly middle class.

      It’s a persistent myth that the majority of our military is drawn from the lower income classes.
      https://www.cfr.org/backgrounder/demographics-us-military

      1. They’re on the lower end of the income distribution /when they’re a private/ is what I mean – I’m not thinking in comparison to their SES, just about the individual equipment to individual pay.

        1. Does that count food, clothing and shelter? All of which are provided for a private.
          You can argue about the conditions, but that’s a large component that generally isn’t figured into wage compensation.

    2. The biggest difference with a modern military is that most of the expense isn’t in the soldier’s personal kit. I’m skeptical that it costs a full year’s income to equip a modern infantryman; night vision is probably the most expensive single piece, but almost every piece of kit is available on the civilian market so you can work up an estimate from that.

      However, a *mechanized* infantryman might also require one of the 6-7 seats in an M2 Bradley infantry fighting vehicle, and each vehicle costs 3-4 million dollars. Strykers cost about the same (but can carry nine passengers). So you’re in the high hundreds of thousands of dollars per soldier just for that. And if you want to compare to the ancient or medieval cavalryman, fighter jets cost tens of millions of dollars.

      1. I’d argue that modern infantrymen (at least in the US army) should be classed against cavalrymen. They are overwhelmingly mechanised infantry, after all. Perhaps their closest historical equivalent would be dragoons, which I’d guess would cost the same to outfit as a cavalryman.

        1. Dragoons are typically substantially cheaper than cavalry, that really being their raison d’etre. The key difference is that while they both have horses, a dragoon only needs a horse capable of carrying them around, a cavalryman needs (at least) a horse capable of being fought from – which are generally much more expensive.

          1. True, though they’re still a significant chunk more expensive than footsloggers (on account of having any horse at all).

          2. Not to mention that cavalry needed the training to be able to fight from the horse, while dragoons basically needed to not fall off.

      2. It should be noted that, according to Wikipedia (and the manuals it cites), soldiers in standard infantry brigade combat teams of the U.S. Army travel in mere HMMWVs, which ought to be much cheaper than either the Bradley or the Stryker. Brigades mounted in Strykers are their own force (presumably less common than the standard infantry brigade), and Bradleys are reserved for armored brigades, where they operate alongside tanks.

        https://en.wikipedia.org/wiki/Brigade_combat_team

        By contrast, the same wiki notes that by the 1970s, Soviet doctrine DID require that every infantry formation be carried in either BTRs (which have evolved from simple armored trucks into a vehicle that is now on paper superior to a Stryker, even if it rarely appears that way in practice) or BMPs (then a comparatively heavily armed and armoured transport with no peer; now a deathtrap which seats its occupants with their backs against its fuel tank).

        One reason, of course, was the Soviet expectation that the Cold War turning hot would involve nuclear warfare, and so troops would have to travel in NBC-proof vehicles (a standard earlier BTRs did not meet, leading to their disappearance from stocks while similarly old tanks persist till now). This was not considered such an issue in recent times, so after 2000 the HMMWV-like Tigr has also been in production (though dwarfed by the Soviet stocks of vehicles heavier than itself).

        Nevertheless, this fact highlights that the mere POSSIBILITY of nuclear weapons being deployed already entails a range of hidden costs on the world’s major armies – for reasons which ancient or medieval leaders would surely struggle to wrap their heads around. There are likely other things we could think of which would complicate these cross-century cost comparisons.

        1. As of just over a year ago (source: https://sgp.fas.org/crs/natsec/IF10571.pdf) the Army has 14 Infantry BCTs, 11 Armored BCTs, and 7 Stryker BCTs. The National Guard has 20, 5, and 2, respectively. So at least the regular army has armored vehicles organic to most of their infantry. Also, some of the IBCTs are in airborne or air assault divisions, so you would have to count the cost of the aircraft.

  12. “But of course setting up your adventuring party where one player is simply the boss of every other player would produce terrible table dynamics in most groups, so no one does it that way.”

    Unless it’s a Traveller campaign, and all the players on the ship’s crew are (USN) sailors, who are perfectly comfortable with such things. Seriously, it was interesting to watch the dynamic change between games… Even the guy who you absolutely could neither trust nor depend on in AD&D, slotted neatly into his role when the game milieu required it.

    1. I’m part of a Star Trek Adventures group where there is also a clear hierarchy, with captain, first officer, and then the leading officers of the different departments – the typical main characters for a Star Trek TV show, basically. While this play style isn’t for everyone and the wrong choice of Captain can break a group, it can work out very well.

      Parts that are important to make it work in my estimate:
      -the leader also has obligations towards their subordinates
      -the leader has their own leader (in Star Trek, an NPC admiral the PC captain debriefs with after missions); that NPC leader has the power to reprimand the PC leader if necessary

      TBH, in a pre-modern fantasy setting a hierarchical group in which the members have known duties towards each other based on their positions sounds actually more interesting than an egalitarian group. I am now imagining how, after the group killed a dragon together, the knight now has to decide how to distribute the treasure they just conquered in a way that takes into account social position, preferences of the other PCs, performance in the fight against the dragon, the need to give gifts to his own liege, etc. That feels much more immersive than asking “so, who wants what?”.

      1. As long as you make this dynamic clear in session zero.

        Lack of session zero, or bait-and-switch, has probably brought more games down than anything else.

  13. > More broadly, I think a lot of the popular discussion (I won’t speak for the researchers) here is asking the wrong question: it’s focused on “why did Rome fall” as if empires normally last forever. The Roman Empire united a geographic region (the Mediterranean littoral, construed broadly), which no other empire at any point in history earlier or later has ever united, and then held it that way for four centuries, which is also longer than most empires last in any form. The collapse isn’t the unusual thing about Rome: the rise and long sustained height are the things that demand explanation.

    Well, it is naturally tempting to think/expect/hope that after a certain stage of development, a society would become advanced enough to achieve escape velocity from the cycles which defined previous societies. After all, societies using speech to gain advantage over animals/agriculture/industrialization/digitalization (to give a highly simplified series of examples) can all easily be thought of as a series of such “escape velocity” stages – a “reverse Kardashev scale” for the past, if you will. Thus, it’s hard to begrudge a typical person the thought that the Romans were in a position to reach escape velocity from the typical imperial pattern, if it weren’t for “just one thing” which foiled that and precipitated their collapse.

    One also needs to remember that the alternative to such thinking goes more like: “Empires have a fixed lifespan after which they collapse -> The United States is an Empire -> The United States will inevitably collapse in the foreseeable future.” A more radical form of this argument points out that the AVERAGE empire apparently lasted around 250 years (which seems to concur with the “four centuries, which is also longer than most empires last in any form” line from the blog) and that the United States is exactly at that mark, meaning that its collapse is imminent. The Australian fact-checker link below seems like a good summary of the argument (and of the obvious flaws in how it’s often presented.)

    https://www.aap.com.au/factcheck/empires-strike-back-against-false-250-year-claim/

    To the above, I should also add the obvious point that even IF we accept the timeline above AND the premise that the United States is currently an empire BUT presume it only became one after, say, 1900 or even 1945, then it would still have about 125-170 years left to go. However, this is still A LOT less interesting than the question of whether this kind of historical-timeline-determinism is truly inescapable and impervious to purposeful attempts to change the conditions on the ground, and if we should place so much stock into it in the first place.

    1. Most empires lasting 250 years, in a way that allows one to predict how much time an empire has left, would feel very weird to me. If there is a pattern, I would at least expect something like a normal distribution or a Pareto distribution.

      1. Look up “Bayesian predictions”. Some of those are just bonkers, generally depending a lot on the definition of the thing being predicted.

      2. Right- even if it is true that the “average” lifespan is 250 years (which I really don’t know enough to say), we know for sure that there are some empires (or maybe better to say, some complex and advanced polities, to avoid the question of what’s an empire) that last much less than 250 years, and likewise others that last much longer.

      3. If there is a pattern, I would at least expect something like a normal distribution or a Pareto distribution.

        I know that there is one paper which claimed that the lifespans of empires ‘conform to a memoryless exponential distribution in which the rate of collapse of an empire is independent of its age.’ Curiously, animal species also had extinction rates independent of their ages.

        https://www.tandfonline.com/doi/abs/10.1080/01615440.2011.577733

        However, it used a sample of only 41 empires from 3,000 BCE to 600 CE, which seems rather small to me; another worry is that I don’t know how unarbitrary the ‘adulthood’ and ‘end’ dates of the empires were. Though, as the list had been compiled by somebody else, there is a lower-than-usual risk of the data being cherry-picked.
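        Incidentally, the ‘memoryless’ property that paper claims is easy to illustrate by simulation. A quick sketch (the 250-year mean is the figure from this thread, not from the paper):

```python
import random

random.seed(42)
MEAN = 250.0  # assumed average empire lifespan in years, per the thread

lifespans = [random.expovariate(1 / MEAN) for _ in range(100_000)]

# Memorylessness: an "empire" that has already survived 250 years has
# the same chance of lasting 100 more as a brand-new one has of
# lasting 100 at all -- age tells you nothing about time remaining.
old = [t for t in lifespans if t > 250]
p_given_old = sum(t > 350 for t in old) / len(old)
p_fresh = sum(t > 100 for t in lifespans) / len(lifespans)
# Both hover around exp(-100/250), i.e. roughly 0.67.
```

        Under such a distribution the ‘250-year mark’ argument collapses entirely: reaching the average age implies nothing about imminent collapse.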

          another worry is that I don’t know how unarbitrary the ‘adulthood’ and ‘end’ dates of the empires were.

          Having glanced at it quickly, it’s pretty arbitrary. They list the Byzantine empire as lasting “350 years”, how do they arrive at that?

          The choice of ’empires’ is also pretty odd: how do the Visigoths count as one and, say, the Incas don’t?

          1. The choice of ’empires’ is also pretty odd: how do the Visigoths count as one and, say, the Incas don’t?

            Well, the two articles used as sources for the table on ‘Empires’ Lifetimes and Adulthood Dates’ are about the ‘Size and Duration of Empires: Growth-Decline Curves’ from 3000 BC to 600 AD.
            Pachacuti Inca only began his conquest spree in the fifteenth century, so I don’t think that the absence of the Inca Empire from that list should be a mystery.

            They list the Byzantine empire as lasting “350 years”, how do they arrive at that?

            Ah, I had not noticed that the first time I had read that paper, thank you for pointing that out.
            After spending a few minutes looking for an open access version of Size and Duration of Empires: Growth-Decline Curves, 600 B.C. to 600 A.D*, and skimming it, I think I found it.

            Samuel Arbesman seems to have got his data from Rein Taagepera’s table 5, which gives the same date and duration for the Byzantine Empire. However, there, duration was defined as the time between ‘adulthood’ and ‘failure’. And according to the definitions given on page 5:

            The size-time integral (I) of an empire is the area under its size-versus-time curve. Measured in square megameter-centuries (abbreviated as “Mm2c”), I is a direct measure of an empire’s impact on history, as far as the impact depends on sheer combination of size and duration.
            An empire’s stable maximum size (M) is the size level such that I would be reduced by 5% if parts of the S(t) curve above the M level were lowered to that level.
            The emergence date (E) is the date at which an empire first reaches 20% of its eventual stable maximum size. Adulthood date (A) comes when the empire first reaches 80% of M. Failure date (F) is the date at which an empire, after having reached its maximum stable size, first falls below 50% of it.

            So, presumably, according to Taagepera, the Byzantine Empire had fallen below half of its ‘stable maximum size’ of two square megameters around the year 745, thanks to such things as the Slavic invasions and the Islamic conquests.

            Hmm, now that you have mentioned it, Arbesman indeed should have made that clearer in his paper. At first I had also been under the impression that he was analysing the rate of state extinction of ancient empires, rather than the rate at which their extents fell below half of their ‘stable maximum size’.
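            Taagepera’s threshold definitions are mechanical enough to sketch in a few lines. Here is a minimal illustration — the function name and the toy Byzantine numbers are mine, not Taagepera’s data — assuming the size-versus-time curve comes as (year, size) pairs sorted by year and the stable maximum size M has already been computed:

```python
def taagepera_dates(curve, stable_max):
    """Return (emergence, adulthood, failure) dates per Taagepera's definitions:
    emergence = first year at >= 20% of M, adulthood = first year at >= 80% of M,
    failure = first year below 50% of M *after* the maximum stable size was reached."""
    emergence = adulthood = failure = None
    reached_max = False
    for year, size in curve:
        if emergence is None and size >= 0.20 * stable_max:
            emergence = year
        if adulthood is None and size >= 0.80 * stable_max:
            adulthood = year
        if size >= stable_max:
            reached_max = True
        if reached_max and failure is None and size < 0.50 * stable_max:
            failure = year
    return emergence, adulthood, failure

# Toy curve loosely echoing the Byzantine case (sizes in Mm^2, years CE);
# the numbers are illustrative, chosen so that failure lands in 745.
byzantium = [(395, 1.7), (450, 2.1), (555, 2.6), (650, 1.6), (745, 0.9)]
print(taagepera_dates(byzantium, stable_max=2.0))  # → (395, 395, 745)
```

            On this toy curve the empire ‘fails’ in 745, 350 years after ‘adulthood’ — matching the duration quoted above, though only because the illustrative numbers were picked to do so. The point is just that ‘failure’ here measures shrinkage below half of M, not state extinction.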

            * In the unlikely case that somebody else is interested in it, this page here seems to be the only non-closed-access link I could find: https://annas-archive.org/md5/55e2a76901ee01cfce0ff50358c82a6e

    2. It is naturally tempting to imagine that there is a “political escape velocity” beyond which a polity just grows so rich and strong and so forth that it becomes somehow invincible and lasts forever… But there is no real evidence that this is actually a thing.

      And now I’m right,
      here you said I’m right,
      there’s nothing that can harm me.
      Cause the sun never sets
      on my dungeons or my army.

      And the empire fell
      on its own splintered axis.
      And the emperor wanes
      as the silver moon waxes.

      And the farmers will find our coins
      In their strawberry fields
      while somebody somewhere
      twists his ring as someone kneels…

      Empire,
      by Dar Williams

    3. The notion that there’s some sort of “natural law” about the longevity of an empire is just absurd, and applying it to the modern US is doubly absurd.
      Not only is there clearly no 250-year timer; even a quick sweep over historical empires shows that both their longevity and their fading away rest not on something universal, but often on specific local factors. I’m sure that you *can* do some kind of classification and get some typologies out, but in the end it’s not going to be some universal rule. (Even the comparison of the two most similar empires in terms of time and size, the Roman and the Chinese, is not yet well-trodden ground.)
      But whatever typology we pull out, it’s not going to be well-applicable to today, simply because of the paradigm shift that is industrialization and digitalization. The constraints on how an ancient empire could project power over its subjects are extremely different from what an empire can do today. This means both the pressures driving it apart and the solutions that can keep it together will look different.

    4. The US is quite literally an Empire now, but it’s complex.

      You can read “How to Hide an Empire” and, from the outside, it looks like a book about “Empire” in the non-literal, imperialist sense of “The US controls the world and coerces weak countries through the use of the Dollar’s reserve status.” There are lots of people nowadays who, when they call the US an Empire, actually mean that it’s a big powerful country that can coerce, influence, and threaten other countries.

      “How to Hide an Empire” isn’t about that. It is about the US territories where the United States government controls the territory and the people, but the locals have no voice in the running of the United States. This is Puerto Rico, the North Marianas, American Samoa, Guam, etc.

      In terms of the economy and population of the US, these territories aren’t important… But they are quite literally imperial holdings. The president tomorrow can send a governor to American Samoa and tell them what to do without the Samoans being able to do a thing about it.

      Now, this literal part of American empire doesn’t have any bearing on the collapse of the United States, but it is odd when you realize that technically the US is still an empire and that there are 2.5 million Imperial subjects.

      1. @Matthew,

        This is a really good distinction you’re making. I oppose most post-1900 US foreign policy as much as the next man, but as you say, I don’t think that the US relationship to Latin America, as bad as it is, is really the same or closely analogous to, say, the British or French relationship to their colonies in the bad old days. Formal independence isn’t everything, but it is something, and decolonization really was a step forward, that we shouldn’t be quick to dismiss.

        I do agree though that you could plausibly say the US is an empire in the sense of its relationship to places like Samoa, some of the other Pacific Islands, maybe Puerto Rico, etc.. Possibly also the Native American nations, since while, yes, they can vote, they are so massively outnumbered by the settler populations that their votes don’t really matter (maybe outside of Alaska, New Mexico and a couple of other states). I think you could say China and Russia are empires in the same sense (or at least, post-imperial states).

        1. I would certainly class China and Russia as empires, as well as the US. There’s some weird obsession since the invention of Tall Ships that an empire or a colony has to be overseas, which just masks the enormous number of populations living under imperial systems around the globe that just so happen to be connected via land borders.

          The term ‘post-Imperial state’ is an interesting one. Does that imply that it’s a state that used to have imperial holdings yet has entirely (or nearly entirely) replaced any non-majority populations with the majority imperial population? Or does it imply a multi-national state that used to be Imperial but now has devolved assemblies (I’m thinking of the context of England, Scotland, Wales and Northern Ireland here).

          1. One of the things which makes defining empires difficult is the amount of assimilation. China is the obvious example because some people, who started out as Han, have conquered other ‘nations’, but those people have, in time, become Chinese. The question is, is that still an empire, when most people have come to think of it as normal? The same applies to the US and the preceding natives, whose descendants think of themselves as (something) Americans.

          2. @Christopher Blanchard

            Yep, I agree. It all gets very murky, and into questions like ‘is it “good” assimilation or “bad” assimilation?’

            Good assimilation here being defined as something a population wants to do, and bad assimilation being the sort of things the Chinese are doing to the Uyghurs at the moment, or what Canada/America/Norway/Sweden/Australia/etc. did to their indigenous populations (but have now stopped for the most part).

            You can lump England into that as well with the Highland clearances, the banning of Welsh and Scots Gaelic, and all of the various messes that have gone on in Ireland. Hell, you can probably do the same for many, many countries. France and their repression of all of the various French dialects that weren’t the official one etc.

            Then you get even more into a mess about whether it’s acceptable to assimilate people even if they want to, but don’t understand the full consequences.

          3. Does that imply that it’s a state that used to have imperial holdings yet has entirely (or nearly entirely) replaced any non-majority populations with the majority imperial population? Or does it imply a multi-national state that used to be Imperial but now has devolved assemblies (I’m thinking of the context of England, Scotland, Wales and Northern Ireland here)

            I had in mind the second meaning (and also was thinking of things like linguistic autonomy, etc.).

          4. “Good assimilation here being defined as something a population wants to do . . . .” This characterization assumes that a population (or even an individual) has a coherent goal. In my experience, with many acquaintances from both Europe and Asia, most non-Anglo-American people want to (i) preserve their ancestral culture, (ii) become and remain fluent in English, and (iii) (being human) do as little work as possible. Those three goals aren’t really compatible.

          5. most non-Anglo-American people want to (i) preserve their ancestral culture, (ii) become and remain fluent in English, and (iii) (being human) do as little work as possible.

            Most thoughtful and morally upstanding people come to understand pretty quickly, and at a fairly young age, that “doing as little work as possible” isn’t a path to happiness or thriving, so I don’t think this conundrum is as difficult as you make it out to be. Specifically, I don’t think we have to worry too much about goal (iii).

          6. “Most thoughtful and morally upstanding people come to understand pretty quickly, and at a fairly young age, that “doing as little work as possible” isn’t a path to happiness or thriving.” Really? Have you compared the attendance at your local gym in January with the attendance in February? Most people I encounter have all sorts of good intentions, but the effort of acquiring and maintaining fluency in Italian or Korean and forcing their children to do the same daunts them.
            https://www.wsj.com/lifestyle/fitness/new-years-resolution-gym-etiquette-elon-musk-doge-db952b1b?mod=lifestyle_lead_pos3

      2. This was a great point, and it led to a lot of worthwhile discussion! I almost feel bad to point out that the “250-year” argument clearly used “empire” in a sense much more coterminous with “superpower”, rather than the more formally correct definition centered around the presence/absence of imperial holdings.

    5. There might be some argument to be made that sociopolitical structures calcify over time — that the compounding build-up of precedent and tradition gradually reduces plasticity until the empire in question fails to adapt quickly enough to the world’s ever-changing circumstances and falls behind. I don’t particularly believe it, but it’s interesting at least.

      What I do absolutely believe is that shifts in environment, particularly sharp and dramatic ones, can snap structures apart in the process of adaptation. We live in such a time, although it is relatively stable by this point: the product of the industrial revolution and adjacent processes was and is a paradigm distinct from that in which early modern and earlier states arose, and the latter were found so wanting that they have been discarded en masse in the search for better alternatives — with the ideal alternative’s form still being a matter of heated debate to this day.

    6. Well, it is naturally tempting to think/expect/hope that after a certain stage of development, a society would become advanced enough…

      This view of history — that it is, broadly speaking, one of forward progress (or Progress) with mostly only small and temporary setbacks, wars and plagues — is sometimes called Whig History. Unfortunately, nowadays it’s unfashionable, despite its obvious core of truth. And it is in this context that the collapse of the (Western) Roman empire is so interesting — it sits not in the reference group of “defeated/conquered empires”, of which an ordinary high school education teaches there are at least a dozen (sic!), but in a group of two, the other member of which is the Bronze Age collapse.

      Look at the experience of games, from Civilization to RTSes. Once something is researched, there is no game mechanic for it to be forgotten. “Advanced Melee Weapons” isn’t on a timer, that it could unresearch itself if the player doesn’t build any melee units for too long. And even in games without any research component, broadly speaking the arc of any one match/scenario is that the map starts “empty”, the player starts with little capacity to do anything, and then gradually the player expands, “filling in” the map, gaining capability. While many of these games focus on guns and end when military victory happens, in the games that include “butter” the player can likewise make more (and better) of that, too. Here, the player’s empire just unraveling on its own, for no identifiable reason, is almost unthinkable. This is what makes “Britain unresearched Pottery” cry out for explanation.

      Incidentally, this is the sort of intuition that makes people ask questions such as:
      – I thought the USSR was formidable, how come Russia is so bad at (…)?
      – how on earth can an industrial civilization have a housing crisis?
      – more of a remark than a question, ceterum censeo Jones Act delenda est.
      – what went wrong with architecture? Civic and other high-prestige buildings used to invest in looking nice (because it builds legitimacy and inspires pride). Nowadays they are either featureless boxes or even worse things.

      1. Empires do not “unravel on their own, for no identifiable reason”. It is possible for a competent and well-run empire to be “stable”, in the sense that a large boulder being held up by wooden supports on the side of a hill is stable, so long as the supports are well-maintained and regularly serviced. When the supports crumble due to neglect or fail to adapt correctly to changing circumstances, the boulder slides down to the bottom of the hill, and the Empire falls.

        Thinking about technology can be misleading when it comes to what a society can actually do. Monumental infrastructure is not necessarily difficult to build because the mechanical knowledge of how to do it at all has been lost (although, in certain situations involving drastic loss of literacy and written records, it can be), but because its construction exists in a social context in which there is both a motive (king wants to flex his power) and a means (king has the state bureaucracy to organise the peasants to work on the big stone thing) to do it. Without the necessary reason and ability to use it, technology is a solution looking for a suitable problem. A lot of cases of apparent “decline” in “technology level” are more an issue of deurbanisation and loss of centralised state institutions than an issue of “we literally forgot”.

        Side note on “intuition”: intuitively obvious things are not always true, and “counterintuitive” does not falsify a theory. Theory is assessed based on explanatory power, which is where “Whig History” falls short precisely because its view of teleological progress cannot explain things like democratic backsliding and imperial decline without slapping on enough caveats and exceptions to crush the theory. Unfortunately, because it feels so intuitively like it has a core of truth, it continues to be very popular in conceptions of the world despite its lack of explanatory power, which then leads to all sorts of popular questions that it cannot meaningfully answer.

        1. precisely because its view of teleological progress cannot explain things like democratic backsliding and imperial decline

          I disagree with Whig history, but I don’t think it’s *quite* as dumb as that. As far as I can tell, Basil was referring strictly to technological progress, not to ideological constructs like democracies and empires, so ‘democratic backsliding’ wouldn’t be counterevidence. I disagree with him there too – the Roman Empire isn’t the only example of a society where technical knowledge was genuinely lost; his own example of the post-communist Soviet Union is another good one, and then there are examples of societies that reverted from agriculture to hunter-gatherer lifestyles, or that “forgot” how to read and write. But it’s not quite as easily disprovable as the (*really* dumb) idea that the human species is naturally progressing towards American political, economic and cultural values.

          1. I would say that the case of the former USSR is different from the other examples mentioned, since most (if not all?*) of the technical knowledge wasn’t “lost” so much as it was transferred to wealthier and more stable countries, as part of the well-known “brain drain”. I don’t think anything comparable had occurred when advanced polities fell before the era of international travel.

            *There MAY be a few important things done by late Soviet science which are still not fully replicated anywhere else nowadays. In the global context, research on using bacteriophages as the alternative to antibiotics that is not subject to antimicrobial resistance is probably the most significant example.

            And we probably shouldn’t dismiss the idea that over time, human societies in contact with each other GRADUALLY acquire knowledge regarding which approaches to governing, or to handling specific issues or crises, work better than others, and this in turn encourages convergence towards certain approaches. E.g., by now it has become sufficiently well-accepted that the response to inflation is to increase interest rates that Erdogan’s decision to do the opposite sparked shock and concern everywhere – and the economic outcome in Turkiye is likely to discourage anyone else from trying the same again, further increasing the global convergence on this approach. There are likely many more such examples if we care to look. (E.g. the world as a whole is likely to adopt fairly similar policies during the next pandemic, with much less divergence on questions like border controls or which public spaces should and should not be closed than 5 years ago – even if some fraction of leaders might be keen to do the very opposite.)

            This does NOT necessarily mean convergence on values – in fact, last year, a paper found that if anything, societies which were already culturally similar (e.g. Western Europe or the Gulf states) have converged closer to each other, but these cultural blocs have moved further away from each other, resulting in a value divergence on a global scale.

            https://www.nature.com/articles/s41467-024-46581-5

            > Social scientists have long debated the nature of cultural change in a modernizing and globalizing world. Some scholars predicted that national cultures would converge by adopting social values typical of Western democracies. Others predicted that cultural differences in values would persist or even increase over time. We test these competing predictions by analyzing survey data from 1981 to 2022 (n = 406,185) from 76 national cultures. We find evidence of global value divergence. Values emphasizing tolerance and self-expression have diverged most sharply, especially between high-income Western countries and the rest of the world. We also find that countries with similar per-capita GDP levels have held similar values over the last 40 years. Over time, however, geographic proximity has emerged as an increasingly strong correlate of value similarity, indicating that values have diverged globally but converged regionally.

            > ….Theoretically, our findings suggest that globalization and intergroup contact alone are not sufficient to produce converging social values. Our findings also suggest that the post-materialist hypothesis—that wealth breeds emancipative values and tolerance—may have stronger predictive power in some regions than others. Our findings support key parts of other theories but do not completely align with any single theory of culture and values. We observe worldwide divergence of values accompanied by re-alignment of values along regional and religious fault-lines, which is consistent with Huntington’s civilizations thesis and more recent work on rises in geopolitical regionalism from macroeconomics. We also find that this effect of wealth varies across Western and non-Western countries, consistent with Eisenstadt’s multiple modernities theory and with other studies that highlight Asia’s unique modernization trajectory. It may be that our findings are specific to a particular period of time following decolonization and the end of the Cold War (1981–2022) and that we would have found different results at different periods of time. Only time will tell if our findings represent a general cultural trend or a historically isolated phenomenon.

            > Value divergence could also explain theoretical puzzles in the social sciences. For example, there is a popular theory that rising wealth and technology facilitate religious decline because they decrease existential insecurity and relieve the economic pressure to have children. But this model does not explain why rising wealth has not brought religious declines in Middle Eastern countries and has even correlated with rising religiosity in some of these countries. One possibility is that these declines will happen with time, especially with generational change. In other words, countries like Saudi Arabia and Qatar may not have been high-income long enough to experience secularization. However, another possibility is that Islam has emerged as a key part of Arabic post-colonial identity. Rising wealth and influence could have led Arab countries to emphasize the religious part of their identity to distinguish themselves from the West.

            Now, the above appears consistent with the data they found. Their final takeaway also seems commonsense.

            > Our research also underscores the limitations of studying Westerners to make claims about human psychology writ large. Cross-cultural scholars have pointed out that people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries have psychological traits that differ from the rest of the world. This peculiarity presents an external validity problem for studies that recruit mostly WEIRD subjects, and it also presents an intellectual problem for cultural evolutionists who hope to explain regional variation. We show that this problem has become more acute in the last forty years. WEIRD subjects have become even more peculiar, at least in their social and moral values. This shift makes it more crucial than ever that behavioral scientists develop their theories using data from globally representative samples.

            I am a lot less sure if their data justifies literally citing “Huntington’s civilizations thesis” (that is, the “Clash of Civilizations”), even as they seem to make the case for it here.

            >Practically, value divergence has implications for political polarization and conflict across world countries. Russia has framed the recent war in Ukraine as a war against Western values. Chinese politicians have spoken against countries that “forcibly promote the concept and system of Western democracy and human rights”. Western non-governmental organizations have faced recent accusations of seeding immorality and propagating Western imperialism, and public opinion polling has found increasingly hostile attitudes towards Western countries in the Middle East, Asia, and Africa. Our findings do not shed light on the extent of the anti-Western sentiment or the exact nature of its antecedents and consequences. We do not know to what extent governments strategically propagate certain values to reinforce national identities, as Tomlinson would suggest. One goal of future research could be to understand the extent that political elites have encouraged value divergence among ordinary citizens.

            …I suspect any reader here can think of reasons for “increasingly hostile attitudes towards Western countries in the Middle East, Asia, and Africa” that are rooted in foreign policy far more deeply than in values – to the point where repulsion with foreign policy could be conceivably described as one of the drivers of value divergence. I am even more skeptical of the authors’ interpretation of their results here, which falls into familiar tropes – some of which may well have been contradicted by this blog already.

            > Our findings answer some important questions about contemporary value change, but they also pose new ones. Why has rising wealth led Western, but not non-Western, nations to espouse more emancipative values? We believe that this relationship might be embedded in ecological and cultural events throughout Western history, including the advent of participatory democracy in Athens and early Rome, enlightenment and post-enlightenment philosophy, the Catholic Church’s marriage and family polices, the French revolution, and the reformation. These phenomena may have gradually solidified a Western identity focused on autonomy, primacy of individual rights over obligations to the in-group, and tolerance for breaking norms. As the world has globalized and Western nations have competed for resources on the world stage, this identity may have crystalized even further. But this does not mean that wealth or globalization should have encouraged similar values in African, East Asian, or South Asian cultures. To the contrary, growing power and resources may have prompted non-Western countries to affirm their own traditional values.

          2. @YARD,

            Fascinating and thoughtful comment! I especially agree with your comment that value convergence isn’t happening and is unlikely to happen.

            I did want to comment specifically on this:

            For example, there is a popular theory that rising wealth and technology facilitate religious decline because they decrease existential insecurity and relieve the economic pressure to have children. But this model does not explain why rising wealth has not brought religious declines in Middle Eastern countries and has even correlated with rising religiosity in some of these countries

            I don’t know if you saw the monumental study that the Pew Research Center did in India in 2021- it was a survey of all kinds of aspects of peoples’ worldviews, about religion, culture, and society. They surveyed enough people that they were able to do crosstabs by age, by region, by religion, by education level, by broad caste category (Scheduled vs. Other Backward vs. General), etc.. One of the many really interesting things that stood out from that survey is that, in India, education is not correlated with religious decline except for Christians. Christians in India become somewhat less religious as they get more educated, whereas Hindus, Muslims, Sikhs, Buddhists and Jains (yes they polled enough people that they even could estimate those smaller religions) do not.

            I don’t know why Christianity seems particularly vulnerable in this way, but I have some speculations.

            I think you’re more or less right about the causes of the scientific collapse in the former Soviet Union- it wasn’t like these people died, they just emigrated, to the US or England or, if they were Jewish, to Israel. I would love to learn more about the bacteriophage thing, if you have a link?

          3. The Moriori of the Chatham Islands reverted from farming to hunter-gatherer lifestyle, because the climate was too cool for their agricultural package. I’m not sure if they ever *forgot* how to farm though (they only seem to have lasted as a distinct society for under 350 years, so that may not be long enough for that kind of knowledge to simply vanish).

          4. > I would love to learn more about the bacteriophage thing, if you have a link?

            Multiple! Since spam filters here restrict me to one at a time if I don’t want a delay, I’ll probably go with this one as an example of a recent overview of the therapy as it is used now, and the reasons it hasn’t been adopted over antibiotics in most countries.

            https://www.sciencedirect.com/science/article/pii/S1198743X23000435

            Other recommendations are “‘They’re Not a Panacea:’ Phage Therapy in the Soviet Union and Georgia”, which goes in depth about the origins of the therapy and its nexus in Georgia, although it only goes to its publication in 2011 and is behind payment/institutional wall, and “Book Review: The Curious History of Life-Saving Viruses” from Undark, which is a much more positive and personalized look at their past, present and future – though as a book, it can also inherently be less rigorous in the service of a narrative when compared to a journal article.

            > I don’t know if you saw the monumental study that the Pew Research Center did in India in 2021

            I presume you mean “Religion in India: Tolerance and Segregation”? No, that was the first time I’ve heard of it, but I read through it now, and it is definitely a remarkable piece of research.

            >One of the many really interesting things that stood out from that survey is that, in India, education is not correlated with religious decline except for Christians. Christians in India become somewhat less religious as they get more educated, whereas Hindus, Muslims, Sikhs, Buddhists and Jains (yes they polled enough people that they even could estimate those smaller religions) do not.

            I do wonder how much of that can be explained by the fact that according to the same survey, most Indian Christians live in the South, and the South is also the least religious region across the board – at least relative to the rest of India. (An outsider like me would be tempted to correlate this to the influence of Kerala, a self-proclaimed democratic communist state and a place I admire in many ways, but it is also only one of the six states in the region.) Unless I missed it, the survey does not seem to compare Southern vs. non-Southern Indian Christians – or Southern Indian Christians vs. the Southern Indians of other religious affiliations. Still, even if ALL of the supposed Christian-specific correlation between education and (a lack of) religiosity was actually a regional correlation specific to the South, that would only reinforce the broader point that in the rest of India, education does not affect religiosity.

            Having said that, another thing which jumped out at me from the survey was the obvious evidence of people for whom religion has become completely about community and not about faith. Most notably, SIX PERCENT of self-identified Muslims and Sikhs also say they do not believe in God (Allah/Ik Onkar, respectively), with this fraction going up to a whole third for Buddhists (although this may mean something different for that religion.) I even had to cross-reference this with Pew’s own Religious Landscape Study – Christians, to see how prevalent the “cultural Christians” of the US were by comparison. Yet, that says that only 1% of US Christians outright reject the existence of God. (Because of how large the US is, even that 1% still amounted to 258 people whom Pew then polled on a range of questions under “Adults who do not believe in God who are Christian” sidebar.) This is smaller than even the 2% of India’s Hindus and Christians in that position – although it would go up to 3% if you also include those American Christians who believe without knowing for themselves if God exists.

            A similar, perhaps even more remarkable factoid – only 49% of India’s Hindus think that one actually HAS to believe in God(s) in order to count as a Hindu – yet 72% would reject one for eating beef. Similarly, there is apparently some 14% of India’s Hindus who would accept a person as a fellow Hindu if they do not actually believe, pray or attend temples – so long as they at least do not celebrate Eid (or eat beef, presumably.) Likewise, seemingly 17% of Indian Muslims would accept one who does not believe in Allah as a Muslim as long as they do not eat pork.

            Last but by no means least, it is shocking for an outsider to realize that caste matters a lot in India EVEN to those NON-Hindus whose religions have no such concept anywhere else. Most notably, about 10% MORE Indian Muslims oppose men and women in their community marrying outside “their” caste than Hindus do (~70% for Muslims, ~60% for Hindus, with Sikhs and Jains in the high 50%s). Even Christians have about 36% who oppose cross-caste marriages (which happens to be exactly the same proportion as for the South of India altogether).

            In all, I am still not entirely sure if I can make any predictions on the basis of the above, but it certainly seems like evidence of how much LOCAL culture, history and traditions matter – and will continue to matter for longer than many might think.

          5. Other recommendations are “‘They’re Not a Panacea:’ Phage Therapy in the Soviet Union and Georgia”, which goes into depth on the origins of the therapy and its nexus in Georgia, although it only goes up to its publication in 2011 and is behind a paywall/institutional wall, and “Book Review: The Curious History of Life-Saving Viruses” from Undark, which is a much more positive and personalized look at their past, present and future – though as a book, it can also inherently be less rigorous in the service of a narrative when compared to a journal article.

            Thank you for that! I’m overworked at the moment (haven’t had a day off since late December, and I couldn’t go home for Christmas), so it may be a while before I get to it, but I will definitely give those books a look when I get some free time. I’m fascinated by communism in general and Eastern Bloc science in particular, and would love to learn more.

            I remember in graduate school I was working on a new crop species (a wild plant in the process of domestication), and learned that there had been Soviet efforts to domesticate it, but unfortunately there were no references to the original Soviet articles or data that I could find on Google Scholar (not that it would have helped, since none of us spoke Russian anyway). It made me wonder how much knowledge had been lost when the Soviet economy, and presumably a lot of its university and scientific infrastructure, imploded. In an era before digitization, when the knowledge exists in paper journals or physical notebooks or even in people’s memories, it’s not hard to imagine how knowledge could be lost.

          6. I presume you mean “Religion in India: Tolerance and Segregation”? No, that was the first time I’ve heard of it, but I read through it now, and it is definitely a remarkable piece of research.

            Yes, “Tolerance and Segregation” is the one I’m thinking of. It was, as you say, pretty remarkable. And the title really sums up the basic findings of the study: I was struck both by how strongly people in India today believe in tolerance, as an abstract value, and simultaneously by how strongly they believe in social segregation.

            I’m a little hesitant to say the south is “less religious” than other regions, unless that’s how people in the south would describe themselves. As far as I know, actual atheism and agnosticism are very uncommon in India. Rather, I’d say that there is a strong 20th century tradition of anticlerical politics in the south. In Kerala that was represented, as you note, by the Communists (who I also have quite a bit of sympathy and fondness for) and in Tamil Nadu next door it was represented by the Dravidian Nationalists who were/are also broadly left wing and strongly anticlerical. I think though that both Communists and Dravidianists realized pretty quickly that they weren’t going to make much headway trying to promote actual *atheism*, distinct from anticlericalism, in a highly religious country. I don’t know what’s going on in the other three states.

            I think what’s going on with Christians becoming less religious as they get more educated is this: while Indian Christians certainly aren’t “western”, they do, if they’re Catholic or Protestant, belong to organizations that have *ties* to western countries, and are therefore exposed to the currents of thought that have done such a number on Christian belief in countries like England and America. Muslims and Hindus, and I guess even Orthodox Christians, can write off higher criticism as some bizarre Western European aberration that has nothing to do with us: an Indian Methodist or Anglican or Catholic is still going to be exposed to this stuff, even if they disagree with it, and can’t ignore it in quite the same way.

          7. Last but by no means least, it is shocking for an outsider to realize that caste matters a lot in India EVEN to those NON-Hindus whose religions have no such concept anywhere else.

            Yes – the thing about how important “not eating beef / not eating pork” is to Hindus and Muslims respectively (even more important than whether you believe in zero, one, two or many gods, apparently), and how strongly people tend to oppose caste intermarriage, struck me too.

            The caste intermarriage thing is interesting because of how little it varies across most demographic categories. Young people and older people; educated people and uneducated; Backward Castes and non-Backward Castes; Hindus, Muslims and Christians all seem to be strongly hostile to inter-caste marriage. The two things that do seem to affect it are that 1) Buddhists and 2) people in the South are more accepting of intermarriage (as you note, only 36% of people in the South are strongly opposed to it). I think there are pretty clear historical factors going on there: Buddhism was reinvented in 20th century India as a modernizing, social justice oriented religion that explicitly rejected caste, and in the South you have the legacy of modernizing left leaning political movements (communists in Kerala, Dravidian nationalists in TN, and *maybe* the legacy of a historically relatively progressive ruling family in the Kingdom of Mysore prior to independence, although I’m less sure about that).

        2. “Technology” construed broadly is “ways of doing things.”

          This can be how to make a boiler that doesn’t blow up, but it can also be how to run a government department without having so much graft or theft it collapses in a crisis.

          We seem to be losing a fair bit of that last named piece of technology.

          People see the well-run department and think “Oh, we can tolerate a little more in the way of problems”. Rather like “Oh, we can make that boiler plate a little thinner.”

      2. There are definitely much more than two examples of civilizational collapse:
        – the end of the Indus civilization (Mohenjo Daro, Harappa…) with the loss of sanitation, city planning and possibly writing
        – the Balkans after ~600 AD with local loss of writing and urban living (except for Constantinople, Thessalonike, Thebes and Athens)
        – the end of the Classical Maya with local loss of writing and towns
        – the end of the Mississippi civilization (Cahokia)
        just for starters. For comparison, the end of the (Western) Roman empire also involved only local and partial loss of writing and urban living – lowland Britain is kind of an exception.

        1. I was thinking the same thing.

          The only way of saying that the only civilisational collapse comparable to the Roman Empire is the Bronze Age Collapse is to Tipp-Ex over all of the other comparable ones and call it done.

          Presumably that’s one of the reasons people are less keen on Whig history nowadays…

  14. This all seems like it also provides interesting and useful context for the cultural role of looting on the battlefield – both why it is so often portrayed so negatively and why it’s nevertheless a fixture of a certain kind of fantasy. If a suit of armour could easily represent a year’s wages for a soldier, even if you can’t realize much of that price yourself it must be pretty tempting to try to get a slice! And, conversely, whoever’s leading these armies is going to want to keep control of that kind of wealth for both mercenary reasons and out of class interests. Seems like a topic that might be ripe for elaboration sometime.

    1. The class interests bit is very interesting, because for a while I was wondering ‘why didn’t all of these Gauls that knocked seven hells out of the Romans every now and again simply end up with an entire army clad in chainmail too?’. Or, really, any army that beat a comparable enemy and could distribute more of their equipment to more of their troops. It seems like such a security no-brainer, and yet it’s mainly just Hannibal who seems to have done it.

      Class interests would do it though. We don’t want all of these lower class plods running around in chainmail, that’s for us Big Men. If they’ve already got all this wealth, we’re not going to be able to gift it to them to keep their loyalty. Better scoop it up and put it in my chainmail shirt repository to hand out for good behaviour.

      Unless I’m totally misinformed about this, and you did get a load of scarily well-armoured Gauls running around after beating the Romans.

      1. Would the Gauls have had enough time and control of the field to loot the armor and organize distribution and transport though?
        It’s also interesting that from what I can see the Romans got the idea of mail *from* the Gauls.

        1. Yeah my understanding is that the Gauls did invent mail, but primarily as an elite good for their (comparatively) fabulously wealthy elites. Meanwhile, the Romans churned out so much of the stuff they could armour their rank and file.

          For me, it’s an interesting example of innovation happening in an elite/heavily unequal context, but being really capitalised on by a more egalitarian approach. Sort of like motor cars being originally the playthings of the stupendously rich, but being utterly transformative when nearly everyone could use one (if not own one themselves, i.e. public transport)*.

          Gauls and their predecessors appear to have done quite a lot of successful innovation in sword design too, both in bronze and iron.

          *I know less about the early history of rail, so I’m not sure if that was driven primarily by fabulously wealthy elites or not.

      2. A big part of the answer is just going to be that there isn’t that much mail to loot. Sure, all the Romans are in armour – but how many of them _died_, and how many of those got left behind instead of being recovered by their retreating compatriots?

        If a field army is four legions and you get 10% of them in loot, that’s 2000 mail shirts tops. Which yes, is a lot – but if your army was 20,000 facing it, it’s only 10% of your troops going into mail. If your army was 40,000, only 5%. So it’s not like one victory means everyone has mail now.
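        The arithmetic above can be put in a few lines (the ~5,000-man legion, the 10% loss rate and the army sizes are the illustrative assumptions from this comment, not historical figures):

```python
# Rough sketch of the loot arithmetic above. All figures are assumptions:
# a legion of ~5,000 men, all in mail, with 10% of them left on the field.
def mail_looted(legions=4, legion_size=5000, loss_fraction=0.10):
    """Mail shirts a victor could strip, if loss_fraction of the losers stay behind."""
    return int(legions * legion_size * loss_fraction)

def fraction_armoured(shirts, victor_army_size):
    """What share of the victorious army those looted shirts would equip."""
    return shirts / victor_army_size

shirts = mail_looted()                    # 4 * 5,000 * 0.10 = 2,000 shirts
print(shirts)                             # 2000
print(fraction_armoured(shirts, 20_000))  # 0.1  -> 10% of a 20,000-man army
print(fraction_armoured(shirts, 40_000))  # 0.05 -> 5% of a 40,000-man army
```

        So even a crushing victory over a fully armoured force only armours a thin slice of the winners.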

  15. > One other thing to note here that comes out clearly: plate cuirasses are often quite a bit cheaper than the mail armor (or mail voiders) they’re worn over, though hardly cheap.

    Beyond any technical factors, this seems very likely to have been one of the key reasons plate takes over right down the ‘class’ scale in European warfare. While it’s better protection, a cheap cuirass has a number of issues compared to a mail shirt – but being half the price is a pretty good argument to use it anyway.

    I think I first saw it pointed out by Dan Howard writing for MyArmoury a decade ago.

  16. Last week’s post could not have come out at a better time for me. I was just in the process of overhauling the Equipment chapter of the 5e Player’s Handbook for a new campaign. I found a lot there to be complete nonsense (e.g., 1,000 ball bearings available for the same cost as a spear in an ostensibly medieval setting), but I was unsure what sorts of prices would be realistic. My rudimentary research kept running up against the wild price fluctuations that last week’s post described. However, after reading that post, I think I finally settled on an approach that feels reasonable: divide the default D&D equipment prices and treasure values by ten — going from gold pieces to silver pieces as the default currency — and then deciding that a silver piece actually weighs closer to 2.5 g, rather than the almost 10 g assumed by the base rules. This seems to get most of the equipment costs into the right order of magnitude. There are definitely still kinks to work out (why is the longbow worth as much as a chain shirt?), but I think this will be a usable benchmark for character wealth progression in my campaign. Thank you for the discussion last week and the helpful follow-up today.
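    The conversion amounts to a couple of lines (the function names and the 50 gp example price are just my own illustration of the house rule, not anything from the rulebook):

```python
# Sketch of the house rule above: list prices are divided by ten while the
# unit shifts from gold to silver, and a silver piece is assumed to weigh
# ~2.5 g instead of the ~10 g implied by the base rules.
SILVER_PIECE_GRAMS = 2.5  # assumed coin weight

def reprice(price_gp):
    """Convert a default list price in gp to the house-rule price in sp.

    Since 1 gp = 10 sp, dividing the price by ten while dropping a unit
    leaves the number unchanged; only the unit (and thus value) shifts.
    """
    return price_gp  # e.g. an item listed at 50 gp now costs 50 sp

def coin_weight_grams(price_sp):
    """Mass of silver needed to pay a price, under the 2.5 g assumption."""
    return price_sp * SILVER_PIECE_GRAMS

print(reprice(50))            # 50 (sp rather than gp)
print(coin_weight_grams(50))  # 125.0 grams of silver
```

    A nice side effect is that the listed numbers on character sheets don’t change at all; only their meaning does.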

    1. European yew seems to grow naturally as a lowland plant only in Britain and around the Baltic. So it could be the case that the longbow in your manual is not the single-piece British longbow people often think of, but one of the laminate longbows that show up in far more of the world, but require a lot more skill and time to make.

  17. To add to my previous comment: I should note that the “empires normally/inevitably collapse” argument has a more radical, yet still rather popular version. It goes approximately like “Societies/civilizations inevitably collapse -> the modern world is a single globalized and interconnected civilization -> due to those interconnections, all industrialized countries will collapse at approximately the same time (be it due to nuclear war, climate change or something else) -> you should get a small farm somewhere, or at least stock up on canned food and prepare for Mad Max.”

    A writer who has done a lot to shape this perception is Joseph Tainter, whose self-explanatorily titled The Collapse of Complex Societies from 1988 argued that any complex civilization (the book focused not just on Rome, but also the Maya civilization and the Chaco culture – although it cited 14 other societies in its list of rapid collapses) ends up devoting more and more of its output to maintaining social complexity which adds less and less to its functioning – and after a certain point, those costs become so great and make so little sense that the society is hollowed out and collapses when met with an external shock it would have been able to weather in its stronger form. In the book, he explicitly argued that the modern world was already exhibiting these dynamics.

    More recently (in 2005, so still 20 years ago) Jared Diamond published a book with a similar message titled “Collapse: How Societies Choose to Fail or Succeed” – although “choose to succeed” did leave open the door to avoiding such collapse, the essential argument that a modern-day collapse would now involve the entire world appears the same. Even more recently (5 years ago) three French authors (which is no accident, as shown below) condensed the message even further with “How Everything Can Collapse: A Manual for our Times”.

    I am by no means suggesting that the argument is confined to those books, but (a few journal papers aside) they likely offer the most erudite version of it. From searching this blog, it does not appear that any of the three books (or their authors) have been discussed here before, even though it seems that their work, rightly or wrongly, has been of great relevance to modern society. In particular, I would like to note this criminally overlooked piece of research – which comes from France, now at times described as “the birthplace of collapsology”.

    https://www.jean-jaures.org/publication/la-france-patrie-de-la-collapsologie/

    According to the polling published by this French think tank in early 2020 (but conducted in late 2019, i.e. before the pandemic), the MAJORITIES of those surveyed in 4 out of 5 Western countries all AGREED that civilization is going to collapse in the near future – the percentage went as high as 71% in Italy and was at 65% in France, but it was also at over 50% in the US and the UK – even in Germany, the “most optimistic country”, 39% agreed. There is A WHOLE LOT MORE data in there than what I can reasonably summarize, but just the topline finding should give one pause. The fact that this poll does not, to the best of my knowledge, appear to have been followed up on, in any country or by any other organization, seems extraordinarily negligent. To my mind, this kind of a question should be asked at least yearly, and across as many countries as physically possible.

    After all, a society where the majority of inhabitants expect not just said society BUT THE ENTIRE SYSTEM OF GLOBALIZED PROSPERITY to end within their lifetimes (one of the follow-up questions explicitly concluded that most of the respondents who expect a collapse specifically see it as either a permanent reversion to great scarcity and hardship, or to a rationalized/idealized agricultural economy, with only a small fraction thinking industrialization would reestablish itself) would take VERY different decisions from a society where most people do NOT believe in these things. In a completely unsurprising finding, a French-only set of follow-up questions found that the respondents who did not believe in collapse were the most likely ones to support Macron, while strong majorities of Melenchon and Le Pen supporters believed in the impending collapse. I don’t think it takes a great deal of imagination to see how these dynamics would play out in the other countries surveyed in that poll.

    1. To clarify a point which might have become jumbled above – I do not think either Tainter or Diamond ever stooped so low as to endorse the last stage of that popular argument (the one I summarized as “the world will become Mad Max”). Both have decades of experience in anthropology and so would be well aware that a “collapse” as defined by historians has very little to do with the Mad Max/Fallout/etc. cultural memeplex. However, much like how The Terminator still has a MASSIVELY OUTSIZED role in how people think about AI (and Gladiator, Braveheart, etc. have an outsized role in the perception of their corresponding time periods), those works (and other works of popular culture in their wake) are FAR more accessible than the academic writings and equally happen to shape people’s understanding far more than anybody rational should want them to.

      1. I used to hang out with a woman whose parents migrated from Ukraine to Israel post-collapse of the USSR; and one of the things she talked about was how society actually *did* collapse in a very real sense, at least briefly, but also how low-key it was: It was harder to buy food so people relied more on their kitchen gardens, pots from the local factory became a kind of local currency for a while, etc. And the point is that while a lot of people noticed that things had gotten worse, they also kinda… got used to it? Just soldiered on? And because the collapse was relatively localized it bounced back eventually. But the point being that it’s not even necessarily that people will recognize something as a collapse: It’ll just be a slowly boiling frog situation. Where at some point someone realizes that “Hey, didn’t we used to have all that Stuff? What happened to that?”

        1. I was myself born in Russia around the time of the USSR collapse – before my family emigrated to Australia. The description you provided is… not inaccurate per se, but it misses A LOT. For starters, the political collapse of the USSR actually occurred at the peak of “stuff disappearing” – and that was one of the most obvious sources of discontent with the system. Conversely, the years after the literal collapse were instead the time of stuff beginning to flood the shelves after the first couple of years (both renewed domestic production after price controls were lifted, but also a lot of Eastern European imports of dubious quality, electronics from Asia, etc.) and more variety than ever before – but most people couldn’t afford any of it for the first few years due to rapid inflation. The U.S. was literally shipping leftover food supplies from the Gulf War in military aircraft to Russia and the other Soviet countries during that time – a total of ~25,000 tons in what was known as Operation Provide Hope. There is a reason that an iconic song of the period was 1993’s “Two Pieces of Kolbaska” by the “girl band” Kombinatsiya – where the narrator agrees “not to be proud” and go to a prospective boyfriend’s house, effectively selling herself, because at least he had two pieces of meat on his table when she had absolutely nothing to eat. Another symbol of the era were the chicken imports from the U.S. – the so-called “Bush’s legs”, which soon made up the majority of all chicken consumed in Russia.

          https://www.npr.org/sections/thesalt/2018/12/06/673806672/chicken-diplomacy-how-president-bush-went-for-the-gut-in-the-former-ussr

          In Russia specifically, things improved somewhat by 1995, when the U.S. aid was considered no longer necessary (and at that point most people were opposed to the introduction of food rationing, when many had been supportive in 1993) – only to receive another shock during the 1998 default. That entire decade saw a massive explosion in every sort of crime, and skilled occupations being devalued. Putin’s recently told tale of coming back from his Dresden post and being forced to drive a taxi for a while to make ends meet may or may not be apocryphal, but it certainly reflected the experience of many citizens who had formerly respected professions. And of course, there was the literal war in Chechnya (very small next to the Ukrainian War in terms of military casualties, but possibly exceeding it in terms of civilian deaths) and in Karabakh, where the 1990s conflict is thought to have displaced over half a million Azeris, as well as claiming 25k lives (while its recent conclusion did the opposite and displaced over 100k Armenians and cost an additional 7k lives). In short, the collapse might have started slowly for those people at first, but they definitely noticed it soon enough.

          Up until the war, the ultimate Putinist argument was as simple as “We had democracy in the 1990s and it brought us crime, chaos and decay. Putin built a “power vertical” and it brought us stability”. It was so common that a popular meme compared Maslow’s Pyramid of Needs with a Vatnik’s Pyramid of Needs which just had a “Physical needs” tier and a “Stability” tier – yet this approach consistently worked on the general public. Of course, the main reason he was able to “bring stability” was because the oil prices that were the foundation of the economy had recovered massively – and both Russia and the other Soviet nations were also able to benefit from trade with the places more prosperous than themselves. And as mentioned elsewhere in this thread, a lot of the more prosperous people simply skipped the whole thing by being able to leave the country to more developed places.

          Meanwhile, if a person believes in a scenario of a global collapse nowadays, they would also be very likely to believe in extreme consequences of climate change (often a lot more exaggerated than any published literature) and also treat something like The Limits to Growth as close to gospel. In that case, they would also believe that neither of the factors which mitigated the Soviet collapse would apply. That is, there would be no external, prosperous place to pull off anything like Provide Hope or even to dump cheap food from abroad, as climate change would devastate agriculture “everywhere”; the global economy would peak and never reach the same heights again; international borders would effectively shut to stem the flood of migration from countries becoming “literally uninhabitable” (mostly a misinterpretation of a handful of papers on wet bulb temperatures, popularly known through the widely exaggerated scene in The Ministry for the Future); the main source of migration in the developed countries would instead be internal, from endangered cities to “climate havens” like Duluth, MN or Buffalo, NY on the Great Lakes in the US; etc.

          The vision in the paragraph above is dubious in many respects (e.g., The Limits to Growth is extremely simple and has clear issues; climate change may not hit food supplies hard enough to even offset the impact of less-developed food producers undergoing the same sort of agricultural mechanization developed countries already experienced decades ago; some predictions are outright misinterpreted; etc.). YET, it is quite likely that a person who answered the question posed by the Jean Jaurès poll in the affirmative already shares most, if not all, of its elements when they think of the future – and as said above, it will impact their near-term decisions about the present. Hence, I believe it is an important responsibility for public-facing intellectuals to seriously engage with these narratives.

    2. We don’t have any polling results available from antiquity, but, as Bret has noted, educated lamentations of Roman decline, with at least implicit predictions of collapse, start at least as early as the first century BC. With the benefit of hindsight, there’s no reason for students of the past to take them seriously, and, having learned from history, there’s no reason for students of the present to take contemporary predictions of collapse seriously, whether they be elite or popular.

      1. There are two issues at hand here. People expecting a collapse may be right or may not be, but as YARD pointed out, the expectation itself will have actual consequences.

        Objectively, all previous empires and civilizations either came to a complete end (the great majority) or at least went through periodic collapses of large-scale order (China). You can of course claim that our civilization, and the American empire, are different, but I would be interested in knowing why you think so.

      2. Before dismissing Roman complaints of decline as nonsense in the fashion of “old man yells at cloud,” I find it more interesting to consider the possibility that they were partially true while not 100% accurate.

        Case in point: Rome and the lamentations from the first century BC. It should be noted that there was a lot of crisis and collapse going on during that period. After a period of strife and civil wars, the Roman republic was replaced by a different form of government. The empire did coincide with the height of Roman power, but from the point of view of civil and political life it was a decline. And it’s not like Roman history was perfectly smooth sailing afterwards.

        Second point: as the copybook heading says, there is a great deal of ruin in a nation. Perhaps it was possible to correctly observe some decline – or, more generally, to observe negative aspects of society contrasted against an ancient ideal that may not have been strictly historical. It is just that the negative aspects were not sufficient to bring about the downfall of Rome immediately, all at once – or, more likely, they were overshadowed by the phenomena that drove some centuries of Roman success; the overall success is not proof the issues weren’t there all along. But eventually there no longer was a Roman empire as we knew it. It is not implausible that contemporaries were aware of some of the issues they were facing – perhaps not with full understanding, or with some misunderstanding, but that is not the same as being fully mistaken either.

        Pick nearly any country at any point in time with free political discourse. Complaints of political partisanship and strife, administrative dysfunction, personal corruption and a lack of proper morals tend to be common all the time. No matter how much of a problem they actually are, people would like less of them. From an ant’s-eye view, the political and military scene of any country seems to consist of one crisis after another with little reprieve in between. Many countries and empires prosper, or at least survive the crises as they come without outright failure. Yet eventually they don’t: they face a crisis they can’t deal with, one that balloons into a catastrophe of some sort – and it often appears that the overabundance of problems and issues everyone knew of clearly contributed to that catastrophe.

    3. The Late Bronze Age Collapse seems like the most obvious analogue, especially if you’re not convinced that the modern world consists of a single global empire or civilization.

    4. Jared Diamond talks about modern countries (including industrialized ones like Japan and Australia) as individual countries with their own challenges, ecological and social constraints, and internal value systems, so I don’t think he thinks that a collapse would *necessarily* be global in nature (unless it’s the result of something like catastrophic global warming, or a nuclear war, that would affect everyone – even then, some would survive it much better than others).

  18. Why would a helmet be half the cost of a coat of mail? I was under the impression that mail was ridiculously labour intensive…surely a helm would be a much smaller fraction of the cost?

    1. I admit, I also find that odd, but those are the figures. It’s too early for it to include a coif or an aventail, so I’m not sure why the prices are so close, but they are.

        1. I imagine that the prices also reflect resale value. A hauberk with a hole in it is easily repaired, a helmet much less so. Also, a helmet is fashioned out of a few large pieces, and ancient and early medieval forge techniques meant that the labour needed increased disproportionately with size.

    2. This is the seventh century – you haven’t had the Black Death do a real number on the labour market yet, and making plate iron is extremely difficult. Put them both together and rigid armour components are much more proportionally expensive compared to mail.

      Both of these factors change substantially over the next 700 years or so, which changes relative prices.

  19. Typically, most adventuring parties tend to organize more along the lines of a pirate ship than a knight’s retinue – individuals operating far into the legal grey or black zone who trade any and all social protection for an egalitarian wealth distribution that allows rapid advancement. People absolutely did climb the social ladder by participating in piracy (or banditry) throughout all eras in ways that just weren’t possible otherwise.

    Of course, the economic conditions and social theory that let “egalitarian” piracy take off – which extended about as far as the decks of the ship and no further – still weren’t ubiquitous. But in general, outlaw bands displaying a more equal division of pay is a fairly common solution to the increased risk of going outlaw depressing labor participation.

    I still find the scale problem of costs to be ludicrous. Likely this comes down to people having no idea how effective a sword or spear is at killing things; every animal on the planet can be killed by a team of men with spears, even if at peril, up to and including seaborne megafauna like whales. This, combined with the fact that game designers rush to fulfill archetypes in high fantasy, means that even socially disadvantaged low-level characters run around with absurdly lethal and expensive kit, including heavy armor, whereas a real upstart peasant risking it all might have a knife, spear, and clothes, and count themselves lucky. DND in particular just throws what should be relatively advanced kit at you as a prerequisite to adventure.

    The only place to go then is far out of the range of historical realism and into magic, which is where you get conversions to hundreds of thousands to millions of man-years of labor, instead of the merely single-digit man-years that we can somewhat ground in reality (which is still impressive and fascinating!).

    Hence why level 7 equipment for a DND party might include a wand of fireballs that does a passable impression of a modern grenade launcher, and is about as expensive as developing one in 1400 would actually be, instead of a warhorse with barding that still represents a year’s wages and is the top end of nonmagical equipment. The designers can’t keep a lid on it, basically. I view that as a failing, ymmv.

    1. The problem with converting magic costs into man-years of labour is that no matter how many millions of normal men you have, they can’t make magic items.
      Magic item costs should be on a completely different financial system, not easily transferable into mundane money or vice versa.

      1. That is my own interpretation for how to make sense of it, but in some settings you absolutely can translate magic to labor… and in those, the entire concept of a preindustrial economy no longer functions, so it’s a fascinatingly complex discussion.

        Eberron, for instance, has trains, magic fertilizer, I think telegraph lines, magic mineral excavation and forging, and magical antiseptics. It’s an industrial economy, so prices should have absolutely no relation to history at all, and the idea that coinage is even in use versus the world being in the midst of a transition to promissory bank notes and paper currency is extremely silly.

        That might be expecting too much from game designers though.

        1. Promissory bank notes require trust in the issuer. I don’t know much about the Eberron setting, but there might not be much trust transferable between the endpoints of a typical player party’s journey.

          Coins can usually be traded for the value of the metal at least.

          Trust can change as well. See the scene at the beginning of Cryptonomicon, set in Singapore near the start of the Japanese invasion. All the banks issue paper, but they settle up with gold every few days, by carting the paper from other banks back to them.

          I’ve run into this problem in my homebrew science fiction setting. There’s interstellar travel via hyperspace, but it’s slow compared to most other settings.

          There’s no overarching Imperium to issue “credits”. So issued money is really only good in a given star system.
          You need goods in the hold of your ship to trade at the destination, or you need a letter of credit from a merchant who knows and trusts a merchant at the destination.
          Heavy metals are fairly compact and still valuable since transmutation still takes too much energy to be worth it and they’re relatively rare.

          1. That’s a good point… except everyone in Eberron has standardized coinage that traces back to the old imperial coinage anyway. Yes, they issue their own coins, but there’s a standard set of measures they use, so they’re fully fungible. At least on the main continent. Trade between continents does happen, and in that sense coinage is logical, but it’s not the focus of the economy.

            To engage with the setting more closely: there is a dragonmarked house (an international organization based on hereditary magic) that handles security and banking, and they absolutely do have promissory notes, and it’s a high-trust environment.

            My issue with the setting is that, despite the fact that this system actually exists, none of the major nations use it. They still issue coins as their currency, instead of copying the successful system used by bankers *within their own nation*.

            However, I might be missing something about the politics of fiat currency development, in that an NGO has cornered the market, meaning that if governments want to issue currency they can’t compete with the established trust of that organization. In our world, paper currency creation was driven by central banks and the inability of countries to repay in coins during and after WW1, so perhaps a neutral trusted international bank acts as a blocker to the establishment of central banks and therein fiat currency. Governments need money, so they mint coins; they can’t create a real central bank because people would just use the banking guild (which would take offense at the attempt!), so they have to fix everything to actual metal.

            There might be a parallel to developing economies and the American dollar, but that’s beyond me.

            I just feel that’s… a relatively weak ex post facto justification. Governments would have at least tried.

      2. Which is a terrific problem for a game system where characters have to be itemizable on their character sheets.

    2. “even socially disadvantaged low level characters run around with absurdly lethal and expensive kit including heavy armor, whereas a real upstart peasant risking it all might have a knife, spear, and clothes, and count themselves lucky. DND in particular just throws what should be relatively advanced kit at you as a prerequisite to adventure.”

      This is why I prefer WFRP; while D&D starts you off as a Robert E Howard character from the first session, ready to crush at least a small to medium sized throne beneath your sandalled feet, WFRP has a list of resolutely unglamorous starting careers that includes “Beggar” (you have a bottle of rotgut spirit, a knife, some rags, and a stick, and your core skills are “Beg”, “Street Fighting” and “Consume Alcohol”). Even if you start as a magic user, you’ll be an apprentice for at least your first year, and your magical abilities will be restricted to things like “creating a small flame on your hand” and “create a mysterious noise”.

      1. Agreed, the general humbling tone of many non-DnD RPGs is something I enjoy. Dark Heresy is particularly humbling…

      2. You and Dan and maybe others here might find the roguelike game Caverns of Xaskazien II (distributed as donationware) interesting. While it does not attempt to be truly realistic and is instead more of a stylized representation of all the difficulties an adventurer might encounter, a core part of its premise is that even though the characters take on the task of slaying an extremely dangerous titular demon, they need to start out really weak to avoid drawing excessive attention to themselves and having a tougher demon or banshee or ultra-skilled goblinoid assassin sent after them from the depths immediately.

        Thus, even the “Soldier” class starts with a Knife, Wooden Shield and (I think) Beer, plus very weak healing and electric Sparks that can only be cast 2-3 times a day with the typical starting amount of intelligence. The other classes are in a similar boat, and all have to rely on whatever they find in the dungeon (in addition to levelling up, obviously) to get ahead. And the scale of equipment is such that the first couple of floors have junk like paper hats for helmets and basket lids for shields, mail armour is more of a respectable mid-game thing, and plate armour only really starts appearing from floor 20/30. Similarly, the initial spear-type weapon is a pitchfork, and the first ranged weapons available are darts and slings. Bows and small pistol crossbows show up around floor 10, AFAIR – heavy crossbows are just as late-game as plate armour and require the skills to match. Magic staves and even scrolls not only are inherently limited in use, but often fizzle outright if you haven’t become an expert at using them – which is also hard to achieve before midgame due to stat requirements.

        It should also be said that in the lategame, trying to loot ANYTHING off the floor is EXTREMELY risky, due to the increased chance of that item being bait for a trap which can cause an outright instakill or do something little better, like destroying your most precious equipment or scattering it throughout the large dungeon floor. While there are countermeasures to traps, none are foolproof. Simply running to find the stairs to the next floor down as soon as possible is often the best strategy, given how difficult late-dungeon enemies are and how many resources they are likely to drain from you even if you win – leaving you badly weakened if you do reach Xaskazien when compared to just finding the shortest route to him.

  20. First, note that the way the system is set up, new commenters (or suspicious comments) go into a moderation bin for me to approve before they show up, which is why some comments may be delayed in appearance[.]

    Note that a comment with multiple links automatically gets flagged as suspicious, probably to combat spam. Most of the time when I saw someone complain about being censored, the actual reason was that their comment contained multiple links and thus appeared a bit later.

    The whole section about the comment policy in this post makes me wonder if it would be useful to have a shortened version of it somewhere around the comment section that shows up on every post, so that new readers get informed immediately.

  21. I’d be interested to see sometime an analysis on how much wealth elites were extracting from the lower classes and how it differed in different times and different societies.

    Did a farmer in a 300 BCE greek polis keep a lot more of the yearly wealth his family generated than a farmer in 1300 CE France did?

      1. Our Esteemed Host’s thesis in his upcoming book seems to be that a) the Roman Republic was unusually egalitarian in terms of the wealth and retained surplus of the average person and b) that this is why Rome Always Won. That is, most Roman men in the Middle Republic were ‘middle-class’ freeholders who were able to equip themselves to Rome’s unusually expensive standard of equipment out of their own pocket, giving Rome a huge advantage over its peers in both manpower and troop quality. By contrast, the Diadochi states tended to recruit from either a small extractive elite (who could afford quite a lot individually from the surplus they drew from others, but were never very numerous) or from the much poorer subject peoples (who were both not very loyal and not very able to equip themselves well).

      1. I wonder how much of “Rome Always Won” was a snowball effect. They won, so they got Fat Lootz, and thus got rich and were able to equip themselves that much better. Rinse and repeat.

    2. I think the question is complicated by the fact that in many historical societies, there is a public coffer, overseen by the state, at varying degrees of separability from the private wealth of “the elite”. When the Roman Republic drafted freeholding farmers to help fight the Second Punic War, was that labour extraction by “the elite”?

      1. I was going to say exactly that, yeah. One needs to consider, when what we generate is extracted by the state, what we are getting in return and what elites are getting in return.

    3. I’d very much like to see that too. With the caveat that wealth extraction isn’t always a good proxy for inequality and that it’s not always totally straightforward to define, it would still be interesting to see some estimates.

  22. About making the lance fournie work as an adventuring party, I think it could be done by reducing the prestige of the paladin, either by making him (very) young or by having him be disgraced in some way, and by increasing the prestige of the other members, either by making them very experienced or by giving them achievements in other fields. I think both of these can be adjusted to make them level. Maybe they are a knight’s old trusted (non-knight) retainer who’s tasked to support/watch his exiled son or something.

    1. Or the command decisions are made by the group and the Pally etc. executes them; that is, if the Pally is the leader.

    2. A lot of DMs enforce a party account system for all liquid assets, or all treasure that isn’t immediately equipped. In play, that isn’t really different from a single party account that’s nominally owned by one of the players, because either you’re playing a lance/ship/company/band that is mostly free of poisonous social dynamics or you have a D&D campaign that only lasts one session. If as a DM you wanted to say that working with some of the power structures in your campaign world requires filing paperwork declaring one of the players is the boss of the rest of the party, I don’t think it would really change gameplay. I also don’t think modern players could do more than pretend to be a lance, though; the parts of that structure that aren’t alien are also not escapist.

      1. “the parts of that structure that aren’t alien are also not escapist”

        I feel like that’s the core issue with implementing more realism into things like DnD. The players’ understanding of ‘escapist’ generally means ‘can do whatever I want, then I’ll play around with fun consequences’.

        I understand the appeal of that to many people, but it’s not the only form of escapism. Though I suppose it’s probably the most popular, which is likely why DnD is the most popular RPG. There are fewer people who find escapism in a really in-depth historical social situation like you can make in Runequest, for instance.

  23. I hit a minor rabbit hole reading up on those inscriptions. The work that must go into deciphering and cataloging (and then analyzing, synthesizing, …..) is melting my brain. I’ve been taking our host’s and other historians’ work way too much for granted, and I apologize.

  24. One thing that struck me, reading Josiah Ober’s Rise and Fall of Classical Greece, was his characterizing Greece, post Late-Bronze-Age collapse, as a society that had fragmented to the point that the old social relations (a small number of elite warriors supported by lots of workers) were interrupted. In this “dark age” there was room for the hoplites — armed less heavily, more affordably, and in greater numbers — to become a new warrior class. He did a really good job of showing how getting oneself armed — in that specific time, place and technology — led to a larger number of soldiers in the average polis involved in its governance, and from there on to Pericles. I’m sure I’m completely vulgarizing him, and I have learned to be suspicious of any single-cause explanation for *anything*, but it had a gorgeous explanatory elegance.

    1. For a very deep dive into the evolution of the polis (a result of contingent struggles between various contending impulses/socio-economic forces not just some ‘natural’ development), you may want to look into the densely reasoned and documented John Ma, Polis: A New History of the Ancient Greek City-State from the Early Iron Age to the End of Antiquity (Princeton U.P. 2024). Ober has lavishly praised Ma’s work.

  25. What does “under the axe” mean in the quoted inscription? Surely not the same thing it means today, given the context.

  26. > As an aside, a more interesting experiment might be to have the knight of the lance fournie be a GMPC, with the players only consisting of the retainers (and thus notionally equals), though I think making this fun would require playing the GMPC-Knight as something of a hopeless buffoon, since he needs to not steal the spotlight from the players, but that too could be fun and funny.

    I, uh, I have something of a reputation in my RPG group for Well Meaning But Hopeless Buffoon knightly types as both PCs (when I’m not DMing) and NPCs, so I find the archetype fun.

    Are you familiar with David J. Prokopetz’s game [To Serve](https://penguinking.itch.io/to-serve)? I haven’t played it myself, but as far as I can tell, it seems to essentially be custom-made to create this exact dynamic.

    1. I also was about to suggest checking out this game. The introduction certainly gives the right vibe:

      >To Serve is a game of credit where no credit is due, and vice versa. One player will take on the role of a questing Knight, and the other 2–4 players will assume the roles of the various Servants who make up the Knight’s retinue. Throughout your travels, many challenges will be overcome through the Servants’ careful preparation and occasional direct interference, and the Knight will – of course – receive all the accolades. The Servants’ players will tolerate this state of affairs because there are no rules for altering it.

      It’s almost GM-less, as the Servants share the responsibility of narrating with the Knight, and it has a collective character-building process. And hey, it’s free!

  27. > “Even a simple analysis shows that a high level party is walking around with the equivalent labour hours to the Great Pyramid of Giza on their backs, if you attempt to work out how many labor hours high level equipment represents in wages.”

    Is this really true? In 3.X, I believe a character was expected to have accumulated 200k gp worth of treasure by 20th level. I guess the quote specifies “a party”, not “a character”, so call it 800k gp for a four-person party. Still, at a 1 silver/day wage for a common laborer, that’s 8 million man-days. Didn’t it take tens of thousands of people many years to build the Great Pyramid? 8 million man-days might be a significant fraction of a Great Pyramid, but I don’t think it’s a whole Great Pyramid.

    Even if work on the pyramids was highly seasonal, 10,000 people working 90 days per year for 10 years is still 9 million man-days, and I’m inclined to bet on all those numbers being too low (though I might be wrong).

    Also, the real problem here is being willing to assign any gp value to a +5 vorpal sword, rather than assuming such a thing is a priceless artifact that a person would only willingly give up in exchange for another priceless artifact, or some service of truly incalculable value.

    1. In 2nd and 3rd edition at least, the full rules for ‘what is actually for sale in any given place’ were a supplement the DM had to buy separate from the DM manual. +5 anything should not be for sale anywhere, not even in a ‘by appointment only’ place in fantasy Constantinople.

    2. In 3.5, the expected WBL of a 20th-level PC is 760 000 gp, not 200 000 (DMG 135).

      Also, under 3.5 rules, fantasy Constantinople has +5 things for sale. GP limits for urban centres are on page 137. Anything over the gp limit is not available for sale, anything under the limit most likely is. The gp limit for a metropolis (a city with over 25 000 inhabitants) is 100 000 gp. So you should be able to buy +9 equivalent armour there (since they cost 81 000 gp + base armour cost), as well as +7 equivalent weapons (which cost 98 000 gp + base weapon cost). Generally speaking, it isn’t optimal to do so, but you can do it.
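      As a quick sanity check, here is a minimal Python sketch of the pricing implied by the figures above (armor enhancement costing bonus² × 1,000 gp and weapon enhancement bonus² × 2,000 gp, excluding base item cost); the function names are my own, not anything from the rulebooks.

```python
# Enhancement-bonus pricing implied by the figures quoted above:
# armor: bonus^2 x 1,000 gp; weapons: bonus^2 x 2,000 gp (base item extra).

def armor_enhancement_cost(bonus: int) -> int:
    """Cost in gp of an armor enhancement, excluding the base armor."""
    return bonus ** 2 * 1_000

def weapon_enhancement_cost(bonus: int) -> int:
    """Cost in gp of a weapon enhancement, excluding the base weapon."""
    return bonus ** 2 * 2_000

METROPOLIS_GP_LIMIT = 100_000  # the gp limit for a metropolis cited above

print(armor_enhancement_cost(9))   # 81,000 gp: under the limit
print(weapon_enhancement_cost(7))  # 98,000 gp: under the limit
print(weapon_enhancement_cost(8))  # 128,000 gp: over the limit
```

      So +9 equivalent armour and +7 equivalent weapons squeak in under the metropolis limit, but +8 equivalent weapons do not, matching the cutoffs described above.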

    3. Egyptologist John Romer’s best guess at the Great Pyramid estimates a bit over 32 million man-days in total (based on 300 days per year, working 10 hours per day). That’s for the direct labour – the families needed to support the labourers double that.

      Interestingly, it tails off sharply: the workforce is 24000 in Year 1, but 10000 by Year 4 and only 3000 in Year 14 (the last year of construction).

      So the party is only hauling around a quarter of a Great Pyramid.
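      Putting the thread’s numbers together in a hedged back-of-envelope sketch (assuming the standard 3.x conversion of 1 gp = 10 sp and the 1 sp/day laborer wage used above): the original 800,000 gp party figure does come out to about a quarter of Romer’s estimate, but with the corrected 760,000 gp per character the party is hauling nearly a whole Great Pyramid after all.

```python
# Back-of-envelope comparison from this thread. Assumptions: 1 gp = 10 sp
# (the standard 3.x conversion), a 1 sp/day unskilled wage, and Romer's
# ~32 million man-days of direct labour for the Great Pyramid.
PYRAMID_MAN_DAYS = 32_000_000

def man_days(party_gp: int, sp_per_gp: int = 10, wage_sp_per_day: int = 1) -> float:
    """Convert a party's wealth in gp into equivalent labourer man-days."""
    return party_gp * sp_per_gp / wage_sp_per_day

# Original estimate: 200k gp per character, four characters.
print(man_days(800_000) / PYRAMID_MAN_DAYS)      # 0.25 of a pyramid

# Corrected 3.5e wealth-by-level: 760k gp per 20th-level character.
print(man_days(4 * 760_000) / PYRAMID_MAN_DAYS)  # 0.95 of a pyramid
```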

    4. In 3.5 or 3.0 D&D, someone with 0 skill points and a 3 in every attribute could use untrained Craft to earn 3 gold per week, which is over 4 times the claimed 1 SP/day. This is cleverly hidden in the Craft rules in the PHB, which allow Craft to be used untrained and state the income untrained Craft produces, plus the take-10 rule.

      A human commoner who put any effort into earning a living was likely to be at over 10 GP per week, well over an order of magnitude more than the 1 SP/day.

      1 SP per day was for completely untrained day labor; most people have some skill in what they do for a living, and if you could commit to even 1 week at a time you’d get the rates above.
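      The arithmetic behind that claim, as a hedged sketch following the commenter’s reading of the Craft rules (take 10 on the check with no ranks and an Int of 3, then earn half the check result in gp per week, with 1 gp = 10 sp):

```python
# Untrained Craft income per the reading above: take 10, no ranks, Int 3.

def ability_mod(score: int) -> int:
    """The 3.x ability modifier, reproduced as floor((score - 10) / 2)."""
    return (score - 10) // 2

check = 10 + ability_mod(3)    # take 10 with Int 3 -> 10 - 4 = 6
weekly_gp = check / 2          # half the check result in gp/week -> 3.0
daily_sp = weekly_gp * 10 / 7  # ~4.3 sp/day vs the 1 sp/day labourer wage
print(check, weekly_gp, round(daily_sp, 1))
```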

  28. A good follow up to this would be: how much could a soldier hope to make from looting / ransoming / selling slaves in a successful war? Would doing this once be enough to pay for the equipment? I’m aware that your kit / social standing determined how much you could get from doing this (eg, cavalry got 3x as much as infantry in Roman triumphs, a medieval commoner wouldn’t be allowed to ransom a high noble, etc…). Would be interesting to view warfare as a capital intensive, high-risk-high-reward venture on the part of the individuals fighting.

    1. This was something I was about to comment on, but you beat me to it! I’m curious whether opportunities for making real money through looting and ransoms might make up for the low wages of a common medieval soldier, and might be an inducement to volunteer for service rather than be forced out in a levy.

    2. I’m also really curious; battlefield looting is often seen as something done by surviving locals or camp followers, but I’d be surprised if soldiers didn’t take part. I mean, they regularly do in the modern age, so I think it’s likely it happened in the past too.

    3. The same thought occurred to me. Most loot was in persons (up to the 11th century). Prices for slaves run around 10-20 Byzantine or Arabic gold coins (nomismata or dinars) at 4.5 gms each. That’s retail, and no doubt the slave merchant accompanying the army would give a good bit less. Still, 3-4 stout Irish monks would probably buy a mail coat, helmet and good sword, and set you up as someone worthy of a jarl’s regard.

    4. “a medieval commoner wouldn’t be allowed to ransom a high noble”

      Generally if you’re ransoming someone, you’re feeding and housing that someone until the ransom is paid. After all, if the person being ransomed dies, you’re not getting anything.

      It’s not so much that a peasant wasn’t allowed to as that there was no hope of being able to afford it.

      1. There are cases where a common soldier got a significant share in a ransom. His commander (or the king) would take over the prisoner but reward the commoner.

        1. So sort of like buying the ransom off the commoner? I’m glad this thread is getting some traction; the personal economics of looting seems like a very fruitful area of comparison between different places and times.

  29. Why is it assumed that mail got more expensive and not that the solidi lost worth? If coinage also represents trust in the state, I would assume a post-Roman state is going to have coinage that isn’t as robust as its predecessor’s.

    This all feeds into your thesis that only societies that can’t field heavy infantry use heavy cavalry, and that only Rome ever had large numbers of heavy infantry. This is a rather Romanocentric view of history and, while I’m not a trained historian, it feels like the very technological explanation of history you railed against in your previous posts.

    1. I don’t think Bret is making either the argument “only societies who can’t field heavy infantry use heavy cavalry” or the argument “only Rome ever had large numbers of heavy infantry”. Concerning the second, he’s been very clear about the amount of heavy infantry the Hellenistic successor states had, in the form of the sarissa phalanx, and he’s often mentioned in passing the also mainly heavy-infantry-based Chinese armies. And concerning the first, he’s mostly just listed facts without claiming there is some strict rule or value judgment, and it should be noted that most armies he talked about fielded *both* heavy infantry and heavy cavalry, but in different ratios and thus different roles on the battlefield.

    2. The comment isn’t about the price in terms of _solidi_, it’s about the relative price of the mail vs the horse. That ratio is currency independent.

    3. Another factor in whether heavy infantry or heavy cavalry was even developed I found in Firearms: a Global History to 1700 by Kenneth Chase.

      One thing he explores was that the Chinese developed gunpowder first, and even used it in warfare. But they didn’t develop it to nearly the extent the western Europeans did. Why?

      Early firearms were not useful against the enemies the Chinese fought. Matchlock weapons were not usable by cavalry; they had to be used by infantry. Even then, they were at first mostly useful in siege warfare.
      But infantry were not useful against *nomad horse archers*, which were the prime worry of every Chinese state.

      Cavalry ended up being much lighter in China than in the western European states, as lighter cavalry were more useful against nomad horse archers. Though only in a defensive manner, as the nomads could retreat far enough into the grasslands that the grain-fed cavalry couldn’t follow them.

      Eastern European cavalry ended up lighter than western, since they ended up fighting both nomadic horse archers and other heavier western types.

      The nomadic horse archers were limited however by the terrain as well. When they got into settled areas where they started fighting heavier troops in their supply areas, or coming up against fortresses, they were mostly limited to raid-and-run, unless they had either other settled types helping them, or lots of in-fighting among their enemies.

      Anyway, the idea here is that heavy infantry and heavy cavalry are products of the universal arms race in areas with conditions that favored them over lighter-equipped soldiers. Money is one factor, but being able to support the troops, and the troops being able to fight their enemies, are other factors.

      1. The Chinese did rely on heavy infantry in their wars against the nomads. Often the strategy involved extending lines of forts to restrict nomad grazing and provide bases for cavalry forays (the same was done by the US on the Great Plains and the Russians against the Tatars).

  30. ‘(As an aside, a more interesting experiment might be to have the knight of the lance fournie be a GMPC, with the players only consisting of the retainers (and thus notionally equals)’ – this is kind of how Dark Heresy, the Warhammer 40k RPG, does it. You are all working for the (NPC, not on deck unless things get really serious) Inquisitor while being basically a bunch of ordinary schlubs yourself.

  31. The idea of a party being based around a knight’s retinue likely has some accuracy. However, I don’t think it’s quite right.
    In a “real life” setting, the knight is generally the one fighting, with the others playing defense or support (guarding the knight’s back, etc.)
    In D&D on the other hand, the fighter is often out front tanking while the real damage is being done by the spell casters, especially the wizard.
    By “tanking” I mean mostly keeping the enemies from getting to damage range of the spell casters.

    The rogue did their best damage only if they got into the right position, which unfortunately in earlier editions generally meant out in front of everyone else: they got backstab bonus damage only if they were on the opposite side of an enemy from a friendly. Which means either in back of the enemies, or in the middle of them. Not conducive to survival unless you’re really hard to hit. Since rogues are usually easier to hit than fighters, this doesn’t work.

    Current editions have somewhat fixed that by making the backstab apply if the enemy is sufficiently distracted, which is mostly that they’re also in melee range of another friendly.

    So I think the party, if they are organized on the retinue system, ends up being organized around the *wizard*.

    Older editions required the wizard to accumulate more experience to go up levels, so I think the designers assumed that the wizard tended to get more experience. This is less visible now, since a lot of games either divide experience up evenly or use milestones instead.

    Ars Magica had a mechanic where the group built (I think) multiple parties. Each player made up a wizard and the others made support characters for that wizard. The group would rotate which party they played each session. So each player got a turn at the wizard for the session.

    My experience with a GMPC was in a couple of campaigns I ran using Pathfinder first edition. There were three players, and they tended not to want to play clerics: “All they do is heal. If they don’t heal, the game bogs down.”
    So I made up a cleric, who adventured with them. They did complain a bit about getting less experience, until I ran a couple of adventures without the cleric and they had problems due to the lack of healing.

    D&D 5th edition has changed this dynamic. There’s a lot more healing for each character, clerics don’t do as much healing with their spells, and it’s rather more difficult for a character to die in the first place. The result being that a party cleric is just not a requirement any more, though they are pretty nice to have – for additional offensive power.

    1. AD&D first edition, I played clerics. I sometimes used that as leverage in character creation because the DM knew how hard it would be to get any other player to make a cleric.

  32. Something that would be interesting to dig into is how D&D-style classes and magic would lead to a different aristocratic and military roles. The wizard in the back needs the armored fighter/knight to protect them, but the standard D&D wizard compensates for vulnerability with a much higher ceiling of destructive power. So it might make sense for the wizard to be the military aristocrat and the fighter to be their retainer. I imagine it would come down to the ‘cost’ of magic in terms of paying for magical education (and maybe needing high status to even have this education as an option) and whether magic-talented individuals are a rare subset of the population. If magic is a birth-lottery, I imagine the aristocracy would be very interested in finding and adopting/‘adopting’ potential magic-users, with attendant social consequences.

      > If magic is a birth-lottery, I imagine the aristocracy would be very interested in finding and adopting/‘adopting’ potential magic-users, with attendant social consequences.

      Or they could arrange marriages instead. That could also lead to interesting social consequences.

      For example, if a society is sufficiently patriarchal for male magic-users to be valued significantly more than female ones, then one could end up with a patriarchal nobility which values daughters above sons. This is because they can offer the hands of their daughters to powerful wizards of low birth in exchange for their loyalty.
      By contrast, in a not-that-patriarchal society, in which witches are trained for war from a young age, nobles might tell their first-born sons to ‘always respect your wife, even if she is of very low birth, as we don’t want to risk her accidentally burning down the house with fireballs if you make her angry.’

      I also wonder how such things would affect social hierarchies. A significant part of the elite would have been born in poor households and only got their position after their magic powers were discovered and then elite families decided they wanted them to marry their children.

      1. There was a period in Japan where high-rank nobles wanted daughters very much because there was the possibility of arranging a marriage with the emperor.

        Much depends on how quickly the children can be identified and how much training they need — and how much it costs.

        The real big divergence is that there is real power in the land that is not fundamentally social. A brilliantly mathematical-minded artillery officer can rise in the world, but not so much as a prince who can recruit such officers and have them trained.

        Imagine a world in which the crown prince holing himself up in his study and not playing politics is the danger sign that he is going to try to seize the throne early.

      2. In the Traveller TTRPG, the Zhodani Consulate is a human civilization where the middle and upper classes are supposed* to be people with psionic powers. Any prole children found to have psionic potential are taken away from their families to be raised in the families of the intendant middle class. The upper classes spin this as a way for the proles to aspire for their children to rise. We aren’t told how the prole families feel about this. Though any too obviously upset about it will attract the attention of the literal Thought Police.

        * Though there is never any mention of what happens to the non-psionic children of intendants and nobles. Probably just shuffled off into jobs not actually using psi powers.

  33. The Ars Magica RPG does the “one player is the boss, the rest are retainers” setup (except the boss is the Wizard, not the Fighter), by having each player create a wizard and a retainer and rotating whose wizard gets to be the spotlight each game.

  34. I am pretty sure that decades of collecting, painting, and playing with lead-tin toy soldiers has made me a lot stupider. Why else would I have spent so much money on them when I could have been living it up with champagne and lobsters?

    1. Well, it’s very difficult to play wargames with champagne and lobsters. The former falls over too easily on uneven ground, and the latter won’t stay in their place so the board keeps needing to be reset.

      1. Or you play with the lobsters after they’re cooked, but then there’s keeping track of what damage was done via play and what was done by hunger.

  35. A lot of times the “commander” role in a game is mechanically a support character, with their command abilities represented by buffs they hand out to others. You might get the ability to move other players around or grant them off-turn actions to represent how your tactics put them in the right place at the right time, or give them an offensive boost to represent your inspiring presence.

    I think it’s a good egalitarian way to handle this dynamic. The commander gets the fantasy of “I’m a big important leader who gets to point dramatically and have my minions stab a guy for me,” and the other players get nice buffs that make them appreciate having the commander around.

    There are also a few games that have rules where “having a retinue” is a class feature, deployed sort of like an equipment or summon ability – you have a blob of faceless NPCs following you around offscreen, and if you get into a situation where throwing minions at it would help, you activate your ability and the NPCs enter the scene and do something for you.

  36. Is that £21 for a heavy infantryman in 1304 denoting English pounds or French livres tournois? The former was still worth 258.9 grams of silver as late as 1351, while the latter was 80.88 grams of silver as early as 1262 and Louis IX’s monetary reform. That’s not a small difference. I guess the latter.

    Past that, converting the figures given into grams of gold as a mental exercise, you end up with something like this:

    Legionnaire infantryman: 27.24g
    Lex Ripuaria mailed infantryman: 113.5g
    Heavy infantryman in 1304: 141.54g or 487.41g
    Unarmored infantryman under Edward II: 5.8g
    Light armored infantryman under Edward II: 23.21g
    Knight’s Milanese armor (£8 6s 8d) in 1441: between 132.01g and 178.26g (the £ goes from 20.89g of gold in 1412 to 15.47g in 1464, and the Medieval Sourcebook price is smack in the middle)
    Squire’s armor (£5-£6 16s 8d): 77.35-105.71g at 1464 values; 104.45-142.74g at 1412 figures.

    Admittedly I find the variation interesting. An infantryman from the mid 14th century onwards in Italy is getting paid 3 ducats a month… nearly the same in silver as his Hellenistic counterpart 16 centuries earlier (27.38 drachmas, to be exact). But the cost of equipping him, if the value for Valens is correct, has jumped three to five times? This seems to indicate both a large upsurge in metalwork prices in the Middle Ages compared to late antiquity and a notable decrease as you go from the early 14th to the mid 15th century, despite the increase in wages after the Black Death. Not certain how much sense either makes…
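    The £sd-to-gold conversion above can be sanity-checked with a short script. A minimal sketch, assuming the gold contents quoted in the comment (20.89g of gold per pound sterling in 1412, 15.47g in 1464); the helper names `lsd_to_pounds` and `to_gold_grams` are illustrative, not from any source:

```python
# Convert pre-decimal English money into grams of gold, using the
# per-pound gold contents quoted above. Helper names are illustrative.
GOLD_PER_POUND = {1412: 20.89, 1464: 15.47}  # grams of gold per pound sterling

def lsd_to_pounds(pounds, shillings=0, pence=0):
    """1 pound = 20 shillings = 240 pence."""
    return pounds + shillings / 20 + pence / 240

def to_gold_grams(year, pounds, shillings=0, pence=0):
    return lsd_to_pounds(pounds, shillings, pence) * GOLD_PER_POUND[year]

# Squire's armour, priced at 5 pounds to 6 pounds 16s 8d:
print(round(to_gold_grams(1464, 5), 2))        # 77.35
print(round(to_gold_grams(1464, 6, 16, 8), 2)) # 105.71
print(round(to_gold_grams(1412, 5), 2))        # 104.45
print(round(to_gold_grams(1412, 6, 16, 8), 2)) # 142.75
```

    The squire's-armour figures reproduce to the hundredth of a gram (the last comes out 142.75 rather than 142.74, a rounding difference).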

    1. The increase from Late Antiquity to the middle ages seems logical to me: Supporting forges requires state power, so as state power wanes, forges get smaller and blacksmiths fewer, and thus the cost of metalworking rises.
      No idea about that 15th century drop, though.

      1. That could be the widespread growth of the blast furnace and related mechanisation like trip hammers. Although that’s really more of a 13-14th than 14-15th century thing.

    2. It’s possible that the medieval heavy infantry was wearing more maille than their Hellenistic counterparts? Which could help explain it.

      Then plate armour perhaps being cheaper to make explains the drop in prices.

  37. Raising your social status could be as simple as being good at looting or with a couple of friends, capturing someone valuable for ransom (and keeping other people from grabbing your prize). One recalls that nice fellow in Les Misérables who financed his inn on the proceeds of the pockets of the fellow whose life he inadvertently saved. Also, picking up nice weapons and bits of armor from people who no longer needed them could get you an upgrade as well.

    1. This was the plan of the two farmers in Kurosawa’s “The Hidden Fortress”:

      1) Sell farm
      2) Buy armor and weapons
      3) Join the army
      4) Get rich from loot
      5) Profit!

      Except they neglected step 3a) Join the army that is going to win.

      1. It’s a semi-common motif: Joining the army might be risky, but it’s one of the few ways a person can actually end up with a large chunk of money. Especially if they’re sneaky and/or ruthless enough.

        I’m reminded of Torstensson’s army in the Thirty Years’ War at one point starting to suffer massively from desertions… not because things were going badly, but because they were going well: they’d been rampaging through relatively untouched areas for a while, and a lot of soldiers figured they’d gotten enough loot that it was time to get out. Especially among the cavalry (who for obvious reasons had an easier time running away).

  38. 6 solidi would be worth 60-120 drachmae, going by weight and typical value ratio.

    I think I’ve read that making the links for a mail hauberk (or shirt? size matters) would take about 4 man-months of low-skilled labor, so that’s almost 50 drachmae (120 days, half-drachma/day if that, minus some rest days) right there, not counting the cost of the iron. Not sure how to estimate that: was iron more or less expensive than copper, which itself was around 100:1 with silver? At 100:1, 30 pounds of iron would cost 30 drachmae.

    1. Oops, forgot my other comment: regarding wartime social mobility, wouldn’t looting armor be a way of jumping from light infantry to heavy infantry classes?

      1. I suppose the question is whether you’re allowed to keep it, or whether you’re expected to hand looted items of a given value into one of your superiors (either as communal army loot, or to be pilfered by the upper classes).

        Quietly pocketing a gold coin and keeping schtum is one thing. Explaining why you suddenly have a spiffing new mail shirt is another…

      2. A lot of ancient and medieval societies are very stratified, and jumping from one class to another isn’t just a matter of how much money or other stuff you have.

        If you’re a peasant with a looted suit of armour, in many societies by law and to most people you’re still a peasant. You don’t get out of your feudal obligations to your liege lord just because you want to join a mercenary company.

        In the non-professional armies (see for example Bret discussing the army of Rohan) you fight with your mates. These are people who know and trust and support each other, which greatly improves your chance of survival. Sure, you might be better armoured now, but do you want to go off and risk fighting alongside a bunch of strangers you don’t know? (And do they want to risk fighting alongside you?)

        I’d think it would be easier in Greek city-states or the Roman army to be promoted, but even there it’s probably not going to happen until the next campaign season or census, and you’re still going to have to prove that you not only have the suit of armour, but also the required ancestry, wealth level, or whatever.

        On the other hand, in war-torn regions (France at times during the Hundred Years War, England at times in the War of the Roses) yeah it probably would be possible to jump class solely on loot.

        1. Rich commoner daughters marrying impoverished nobles has long been a thing. Can a peasant who gets ahold of a mail shirt use it to dower his daughter into marrying up the social scale?

          1. Wouldn’t they be likely to be the daughters of rich *merchants*, and not just any merchant, but only those involved in long distance trade of high value goods, i.e. people with orders of magnitude more capital than a peasant with a mail shirt?

            I suspect that they would be able to pay for an apprenticeship into some craft for one of their sons, which would still be an improvement of social status (but a much smaller one), but I have no idea whether they would have the social opportunity to do so.

          2. My understanding (more of the early modern than medieval period) is that buying an apprenticeship for a son into a profitable trade was very definitely a means by which peasant families upgraded their status. With luck, the son might make enough money to purchase an estate and become a member of the minor gentry.

        2. Medieval Sweden seems to have held regular musters to see who could show up with a horse and armour (which would make you tax-exempt, and basically give you access to the lowest rung of nobility), and there seem to have been at least a few people who kinda floated in and out of it.

          Now, these are obviously already fairly wealthy people, but the point is there is often *some* flex.

  39. One of the proposed reasons for at least some of that [Flynn Effect] movement is lead abatement in wealthy countries …

    This cannot be the case in general, because the Flynn Effect has been seen in test results going back to the beginning of the 20th Century in Europe (and, I think, almost as far back in the US). It’s an effect which predates both the rise and fall in environmental lead (the latter only beginning in the 1970s in the US).

    At most you could argue that post-1970s increases in IQ might partly be due to lead abatement, and might help obscure a hypothetical decrease in the general (non-lead-related) Flynn Effect over the same time period.

    1. IQ is, at best, a crude single-axis measure that gives a very, very rough estimate of a multi-axis system (I think modern psychologists recognize nine different types of intelligence). At worst, IQ tests have demonstrably been built to reinforce classist and racist ideologies.

      What all that means is, we should treat anything involving IQ testing with an extreme amount of skepticism – something along the lines of the skepticism you’d apply to a Young-Earth Creationist discussing sedimentology. Any “findings” should be treated as tentative and extreme caution should be used in interpreting the results.

      If we must consider causes, there are any number to choose from. The US Department of Agriculture’s Yearbook in 1919 discussed rampant malnutrition in the USA among farmers at the time–vegetables sell well, and anything you eat can’t be sold, and malnutrition is not kind to brainpower. Education was extremely limited to most lower-class people of the past (though it varied with geography and time), which is going to directly impact ability to comprehend, much less get high scores on, IQ tests. Sampling methods were extremely crude (it wasn’t until the 1990s that paleontology began using systematic, statistically defensible sampling methods!). Sociological pressures would influence both answers and interpretations. The list goes on.

      IQ from the past is more significant as a means to investigate those issues than it is as a way to measure intelligence!

      1. The most plausible explanation is probably something like Flynn’s: the increasing thoroughness and prevalence of standardized state education makes people better able to answer the kinds of generalized, abstract questions that IQ tests measure — and in particular more likely to adopt the abstract thinking the tests reward.

        Note that the Flynn Effect is extremely widespread (and was, I believe, first detected using analyses of Dutch testing of military draftees), so US-specific explanations are unlikely to be very relevant.

        I would strongly recommend Cosma Shalizi’s thoughtful review of Flynn’s book on the subject.

        “A number of explanations have been suggested for the Flynn effect, most of which Flynn swats down with little trouble. It is just too large, too widespread, and too steady, to be due to improved nutrition, greater familiarity with IQ tests, or (a personal favorite) hybrid vigor from mixing previously-isolated populations, all of which have been seriously proposed.”

  40. > By contrast, the average cost of a good quality longbow in the same period was just 1s, 6d, which the longbowman could earn back in just over a week.

    Do we have any idea how long a bow survived under campaign conditions? Would an archer have to pack spares if he expected to be on a three-month campaign in France? Would he need one every other year?

    > plate cuirasses are often quite a bit cheaper than the mail armor (or mail voiders) they’re worn over, though hardly cheap.

    Well, that makes sense to me. People made chain mail because they could not make a big enough piece of steel to create a meaningful armor from one piece anyway. Even in an environment where metal is far more expensive than labour, it is hard to outweigh the hundreds of hours of skilled labour needed to make those things.

    1. A lot of the work in making a mail shirt isn’t “skilled” labour – it’s drawing wire and making rings. Neither of those are trivial, but they’re pretty easily farmed out to apprentices who will broadly do an acceptable job without too much specific training.

      We mostly think about armours in a strictly linear progression: plate is the best, then mail, then leather or fabric. Maybe scale or lamellar between plate and mail depending how you want to slice it. And in the very narrow domain of purely “how much protection can this give”, it’s pretty much correct – but armour is a lot more than just that very narrow domain. In a lot of other areas mail shirts have substantial advantages over plate cuirasses: they’re way more flexible on size; easier to carry; easier to wear in combination with clothing; etc. A hypothetical real world D&D fighter would be very likely to want mail instead of plate regardless of price.

    2. On bows, my impression and experience is that they last for years but do eventually break. But there’s always the risk of accidents and unforeseen events. With a longbow I’d be confident that it would last a three month campaign in France, but I’d be a fool not to own a spare.

      We do have fairly detailed recruiting and logistics records for some of the 14th and 15th century English armies, especially the invasion of France in the 1340s. Edward III was buying bowstaves, arrow shafts, etc by the thousands and tens of thousands. Robert Hardy in the previously mentioned Longbow book estimates that, very roughly, there’s about one bowstave bought for every hundred arrows, and a hundred arrows is more than any archer will be able to shoot in a battle. I’m sure someone has done a more detailed analysis of these records, but I don’t know where.

      Mongols and other steppe nomads often owned three bows, but those are composite bows that are difficult to make. With longbows the bowstave is a rough chunk of suitable timber for one bow. You (the archer) go to the bowyer and tell him you need a new longbow; he sees how tall you are, maybe has a chat about how strong you are, then pulls out a bowstave and starts cutting it into shape. Can apparently be done in a day.

  41. My (sadly abortive) campaign had a GMPC that the PCs were retainers of. The way I played it was,

    1. He was a battle master, which meant a lot of his actions in combat were to enhance other party members, and I asked the players who wanted the buff.
    2. He was also the only non-spellcaster in the party, which meant his actions took very little time. For this reason, the 5e sourcebooks recommend that if the GM needs to add a GM-controlled character to the party, a fighter who is either a battle master or a champion and doesn’t have special feats is the best option.
    3. He was a knight banneret used to feudal monarchy, and then the party got dropped into a merchant republic city, so he couldn’t really participate in the intrigue beyond giving the PCs money and taking (some of) the credit because he found institutions like an elected mayor, open competition for control between factions, and a mercenary guild too alien.

    (Another NPC, a wizard of the same level as the party, joined for an adventure that didn’t complete by the time we broke due to scheduling difficulties. The way I played her was that she was inexperienced at adventuring and so would straight up ask the arcane spellcasters in the party what spells to prepare – essentially, she was there to provide extra spells for the arcane casters, and independent evidence of what the dungeon they were supposed to go to had for paying customs.)

    1. The Japanese RPG _Maid_ is based around the PCs being fairly powerful maids serving a feckless Master NPC.

      Ars Magica has tiers: mages, in charge; companions, who might be peers or high servants of the mages; grogs, who are servants/peasants/soldiers of the mages. But every player has a mage and companion, and creates 1+ grogs, and any particular game moment has each player playing their mage or their companion or one of the grogs, as the gameplay calls for. I imagine it gets awkward if both your mage and companion should logically be in a scene…

  42. An example of how a soldier might earn his arms is in the Ecloga of Emperor Leo the Isaurian.
    And it treats the case where the soldier’s arms are owned not by his superiors, but rather by his peers/inferiors: his brethren/co-heirs who are not soldiers.
    The provision of the Ecloga goes that if brothers wish to divide their common inheritance before ten years, then the arms inherited from the father and in use by one of the brothers should be regarded as part of the shared property, and on division the soldier’s equal share should be counted as including his arms. After ten years of common life, the savings the soldier brought to the common family inheritance should be counted as having bought out the arms, so the soldier brother should get a whole equal share of the property other than the arms. Note that the arms were enumerated as including a warhorse.

    Also, as for the social rise from unskilled worker/lightly armed infantryman to skilled worker with tools, like a heavy infantryman or a cavalryman: how much of the capital investment needed to be a skilled soldier lay in the cost of the kit, and how much in the cost of the training needed to be effective with those arms?

  43. I feel like the Flynn effect isn’t even the start of the issues with the methodology of that Roman IQ drop study. It’s pretty clear to me that they did not actually measure IQ according to its definition of “Score on a test with questions designed to measure abstract thinking”. Even if they did have a time machine and could test Romans of different eras (in which case they should lend it to Bret so he can get some army budget books from Carthage and Macedon for his book), you’d still have all the myriad of problems with IQ testing even today.
    But since they don’t, they had to somehow derive an IQ from some proxy indicators via math that just introduces more problems. Whatever relationship between crime rate, skull size or economic output they used firstly has its own issues with unresolved secondary uncontrolled variables, will have been derived from data collected in post-industrial countries (which makes its applicability to pre-industrial settings questionable), and will have error bars in its relationship that are going to drown out this 3-point drop they claim to see.
    Bringing the whole thing back to IQ feels to me like something designed for publicity, to make a good headline that will grab media attention, instead of something that is grounded on good science. If you have some correlation between lead concentration and some measurable indicators based on Roman skeletons or other archaeological data, then publish that, instead of adding on some spurious math to derive some “IQ” value from it!

    1. While I agree with you that it’s likely sensationalist, and having not seen the study myself, I expect they’d have taken a different tack.

      1. Establish the proportional relationship of atmospheric lead to average IQ drop (likely from prior research)
      2. Reason that Romans were unlikely to be more resistant to lead poisoning than modern humans (considering that both Romans and us are ‘modern humans’ biologically speaking)
      3. Establish Roman levels of atmospheric lead
      4. Imply anticipated IQ drop from the above

    2. There’s also the fact that 3 IQ points is 0.2-sigma (given the tests are normalized such that one standard deviation is 15 points). Even if we accept all the claims about IQ, that’s a tiny effect. I don’t doubt that the lead concentration values (or whatever they measured) were statistically significant, but converting them into IQ values gives a result that’s 10 times below the 2-sigma factor used in social science to establish significance (or so Wikipedia tells me; as an astrophysicist I’m used to a 5-sigma requirement).
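      The 0.2-sigma figure can be made concrete with nothing but the standard normal CDF. A minimal sketch (the percentile framing is my own illustration, not from the study):

```python
import math

def normal_cdf(x):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

iq_drop = 3          # claimed effect, in IQ points
sigma = 15           # IQ tests are normed to a standard deviation of 15
d = iq_drop / sigma  # effect size in standard deviations: 0.2

# Someone at the mean of the shifted distribution would sit at roughly
# the 42nd percentile of the unshifted one:
print(round(normal_cdf(-d), 3))  # 0.421
```

      In other words, even taking the claim at face value, the shifted and unshifted populations overlap almost completely.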

  44. Along with most of the original “Mad Max In Space” baggage that was quickly jettisoned from the Battletech background material, the use of the term “lance” for the smallest “unit” of Battlemechs was intended to evoke the lance fournie system: the Battlemech lance (of, say, Decision at Thunder Rift) was a knighted MechWarrior, his subordinate MechWarrior retainers, and their retinues of techs, astechs (assistant techs), “base security” forces, and the rest of the logistical tail.

    In the BT’85 rulebook source material sidebars Battlemechs are explicitly inheritable wargear, and petit noble houses hold their titles because of their possession of a ‘mech (or several mechs).

    Even in the current era (both in the Doylist sense of the fluff of 2024, and the Watsonian sense of the events of 3151), personal ownership of a Battlemech by a member of a regular major interstellar faction’s military is still something that happens. (Just ran into a reference to this, where someone’s stupidly rich family buys them a front-line assault mech to use after that character graduates from a prestigious military academy and is commissioned in the regular military of the Great House they owe allegiance to)

    1. The player character in HBS Battletech (which is set in the Succession Wars) is a noble who inherited their family’s ancestral Blackjack. Depending on your choice of backstory, you either lost your family or ran away from home, which leads to you going out to seek your fortune with your mech and eventually joining up with House Arano.

      1. Jordan Weisman was heavily involved in the HBS Battletech project, and the BT lore hasn’t completely retconned the BT’85 sidebar material. Just, limited its scope.
        That was just about the least surprising-to-me part of the story in the campaign.

  45. Breaking out a top-level comment for the discussion of armour for smaller humanoids specifically, as WordPress limits no longer allow replies to comments left by Dinwar, Ynneadwraith, ey81 and Matt Cramer. Some of the most interesting points, IMO:

    > There’s a great artist on IG going under the name didrikma that does bronze age fantasy, and he’s got a dwarf in a sort of ‘pinecone’ armour, reasoning that dwarves are more likely to be struck from above down on their heads or shoulders, so would armour that area more.

    I looked that pinecone up, and I suppose the rest of the commentariat here deserves to see it too.

    https://www.instagram.com/didrikma/p/DAn8ET3INZo/

    It’s an unusual design for sure, but I suspect that not only would it make it very difficult to move arms if the scales near armpits are as large as drawn by the artist, but it would probably render the wearer effectively unable to turn their head and look to the side without rotating the whole body. I also saw that artist drew an alternate design which had much more reasonably-sized scales around the shoulders, but chose to protect the head with a helmet as tall as the rest of the dwarf’s body. (Can’t link without sending the comment to spam filter.) Besides consuming a huge amount of metal, such a design would weigh greatly on the head and either fly off it immediately or risk severe neck injury once it receives a blow anywhere along its surface.

    This inspired me to think that the most reasonable armour for a dwarf to secure against blows from above might actually be a “cope cage” – i.e. something like a leather “backpack” put onto mail/scale armour which holds two wooden posts (with maybe a thin layer of metal to make it more difficult to cut through) which then support a flat metal plate (or even slats) above the dwarf’s head. For those in plate armour, the posts could potentially be attached directly to it with rivets and the like. Since they are behind the back, at shoulder distance, they should not impede situational awareness much. Additionally, dwarves generally being broad-shouldered ought to make it easier to distribute this weight. While, say, a goblin or a halfling would in theory benefit from that form of armour too, their thinner and narrower frames could make it more difficult for them to remain balanced and not top-heavy while wearing this. (Though the flip side is that being narrower would allow a smaller, lighter surface to fully protect them in the first place.)

    > You’d also want FAR heavier armor on your legs. Your knees would be more vulnerable, and someone who’s gotten both knees broken isn’t going to be doing much combat. Maybe plate armor from the waist down, maille for the torso?

    Probably not against humans or other opponents larger than yourself. Stabbing at a knee from such a height is going to be hard, particularly if the dwarf/goblin etc. carries a shield. There would be a considerable risk of missing entirely and blunting your weapon against the floor or even getting it stuck in softer ground. Against similar-sized opponents, the logic is more sound, although maybe still not sound enough for “legs in plate, torso in mail”, when considering how much plate impedes mobility. Perhaps compressed wool/boiled leather leggings with a knee-length mail skirt would be a reasonable compromise?

    > I’d have to think more about fighting dwarves. They’re not that dissimilar in stature to humans (compared to, say, pixies or dragons), so exotic weapons and armor aren’t necessary. But most tactics in the past were, for obvious reasons, built around attacking human-sized opponents. A spear that would hit me in the face would pose no threat to a dwarf, and it would take some training to figure out how to attack dwarves.

    I wonder if heavy cavalry would actually need somewhat longer lances in order for a charge to remain as effective as it is normally? (Horses crashing into the shields and weapons of the first line as riders’ lances go over their heads and stab the second or third one instead are an interesting visual, but probably not a path to victory.) If so, would that also reduce maximum charge distance due to weight, and/or impact tactics and logistics in other ways?

    > There’s also the metalworking aspect. It’s entirely possible that they’d not merely have more armor but functionally better armor. They’re master metalsmiths in most stories, so it’s entirely reasonable for them to be in some equivalent of mass-produce plate armor instead of maille. That likewise presents real problems, because defeating that sort of armor requires different tactics than defeating maille.

    The flip side is that dwarves are traditionally portrayed as effectively exclusive underground/mountain dwellers; goblins are often portrayed as living in forests too, but still trend in that direction. Even though this technically means they all live right next to ores, it also complicates actual metalworking greatly for the reasons already described in the “Iron, How Did They Make It” Collection on here. That is, you need to BURN so much combustible material to actually turn the ore into something useful, and UNLESS our dwarves/goblins are lucky enough to ALSO have easily accessible coal in the vicinity (far from guaranteed – coal deposits IRL come from former wetlands and a lot of them are not the sort of places normally thought of as dwarven country in fantasy) AND can coke it, that means a lot of wood, which they would either have to collect on the surface themselves, or get captured slaves to do it (as stereotypical goblins and WH’s Chaos Dwarves would have done) or just trade for it, probably at exorbitant rates, since the people selling them wood would likely be selling agricultural produce as well.

    After all, as the Collection here tells us, 1kg of iron produced with Roman methods requires 14.6kg of charcoal, and that requires ~105kg of wood. It appears near-impossible to produce charcoal underground, and even using charcoal to produce iron (and then actually smithing it into something useful) would also consume A LOT of oxygen in the process – thus, underground forges are probably not going to be workable unless they are actually in the side of the mountain or something. Dwarves/goblins/etc. would probably not be making much armour unless they can first lay claim to enough forested area on the surface to supply them with wood – and if that area is already occupied by humans or the like, such an effort would be extremely deadly for any individual warriors. After all, they would arguably be even less likely to have leather or wool/fabric armour than metal (can’t get either of those without pastoralism, and without trade that requires venturing even further from the caves to control the plains) and so would have to rely on either sheer numbers or superior organization to drive back human/orc opponents who would be far more likely to at least possess wool or leather. (Ironically, copper might be the easiest to obtain.)
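    A quick sketch of that fuel arithmetic, scaled up to a single mail shirt – the ~10kg of finished iron here is an illustrative assumption, not a figure from the Collection:

```python
# Fuel bill for the iron in one mail shirt, using the ratios quoted
# above: 1kg iron ~ 14.6kg charcoal ~ 105kg wood (Roman methods).
CHARCOAL_PER_IRON = 14.6        # kg charcoal per kg of iron
WOOD_PER_CHARCOAL = 105 / 14.6  # ~7.2 kg wood per kg of charcoal

shirt_iron_kg = 10  # assumed mass of finished mail, ignoring forging losses

charcoal = shirt_iron_kg * CHARCOAL_PER_IRON
wood = charcoal * WOOD_PER_CHARCOAL
print(round(charcoal), "kg of charcoal")  # 146
print(round(wood), "kg of wood")          # 1050
```

    A tonne of wood per shirt, before forging losses, gives a feel for why the forested surface matters so much.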

    If/when they DO obtain enough land, however, their logistics could derive VERY substantial benefits from the square-cube law. That is, a humanoid half a human’s linear size has only a quarter of the surface area to cover (and an eighth of the body mass), so outfitting it with equivalent armour at the same thickness takes roughly a quarter of the material. This could benefit not just metal armour, but even the other forms of it – i.e. I’m imagining goblin warriors who look “fat” because they wear extremely thick padded armours. (Though then, the main constraint on thickness would probably be heatstroke.) Whether the outcome is the elite warriors wearing armour superior in thickness and coverage to anything a human could have, or comparatively simpler armour being far more universal than in any human army, the implications for their neighbours seem clear – you either deny them forested land early, or suffer the consequences not long after.
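    The square-cube relations behind this are worth writing out: at linear scale s, surface area (and hence armour material at equal thickness and coverage) goes as s squared, while volume (body mass, and roughly food needs) goes as s cubed. A minimal sketch for a half-height figure:

```python
# Square-cube scaling at linear scale s relative to a human (s = 1).
def scaled(s):
    area = s ** 2    # surface area: armour material at equal thickness
    volume = s ** 3  # volume: body mass, and roughly caloric needs
    return area, volume

area, volume = scaled(0.5)  # a figure half a human's height
print(area)    # 0.25  -> a quarter of the armour material
print(volume)  # 0.125 -> an eighth of the body mass
```

    So the factor-of-eight saving applies to feeding the warrior, while armouring him saves “only” a factor of four.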

    1. Additional implication of warfare for dwarves/goblins/halflings. I was going to write that the societies of all three would likely also be at a disadvantage vs. humans/humanoids on the surface due to being able to cover less distance on foot, and thus possess less efficient logistics. I then realized this was just a stereotype, and chose to look up whether real-world pygmies actually walk slower than regular-sized humans. It appears that while they do, the difference may not be THAT significant. (Lleg in the paper seems to stand for leg length from foot to hip.)

      https://royalsocietypublishing.org/doi/10.1098/rspb.2018.1492

      > ….Consider three hypothetical male individuals foraging in the Malaysian rainforest. The individuals differ in stature: one is of typical American stature (Lleg = 0.94 m, mean American male; [17]), another of typical Batek stature (Lleg = 0.84 m, mean value for the Batek in the present study), and the third of typical Efé stature (Lleg = 0.76, male Efé of the Ituri forest in Central Africa, the shortest human population on record; [1]). All three individuals share the same preferred dimensionless walking speed in the open, v = 0.5. In the open, taller stature results in slightly faster absolute walking speeds (American: 1.52 m s−1, Batek: 1.44 m s−1, Efé: 1.37 m s−1), and longer foraging distances covered over the course of two hours (American: 10.9 km, Batek: 10.3 km, Efé: 9.8 km). Taller stature, therefore, is predicted to confer a weak benefit to foraging efficiency in the open landscape, as faster and/or farther travel increases encounter rates with prey.

      Now, a difference of 0.6-1.1km over 2 hours of walking would add up to more over a full day of marching – but it’s small enough to plausibly argue it can be more than offset through improved organization or other approaches to logistics (i.e. smaller size = lower caloric requirements = smaller supply train = the army needs to wait less for its supply train to get into position = it can actually spend more time on the road and make up for those several “missing” km.) There is also the advantage of MUCH faster movement through heavily forested areas, as described in the same paper.

      > However, the CSLM (Batek equation) shows the opposite is true in the rainforest, where speed (American: 0.61 m s−1, Batek: 0.95 m s−1, Efé: 1.38 m s−1) and distance travelled in two hours (American: 4.4 km, Batek: 6.8 km, Efé: 9.9 km) are both drastically reduced with taller stature.
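      The arithmetic behind both sets of quoted figures is just speed times time; a quick sketch (the small mismatches vs. the paper's km figures come from rounding in the quoted speeds):

```python
# Distance covered in two hours of walking at the speeds quoted from the
# paper (m/s). These are the paper's figures, not independent measurements.
speeds_open = {"American": 1.52, "Batek": 1.44, "Efe": 1.37}
speeds_forest = {"American": 0.61, "Batek": 0.95, "Efe": 1.38}

def km_in_two_hours(speed_m_s):
    """Convert a walking speed in m/s into km covered over 2 hours."""
    return speed_m_s * 2 * 3600 / 1000

for name in speeds_open:
    print(name,
          round(km_in_two_hours(speeds_open[name]), 1),    # open landscape
          round(km_in_two_hours(speeds_forest[name]), 1))  # rainforest
```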

      There is also the question of mounted mobility. For starters, the "goblin wolf rider" trope from The Hobbit (and other works in its wake, like Warhammer Fantasy or the Battle Brothers video game) is going to be very challenging for the same reasons that keeping carnivores in general is challenging, but probably not IMPOSSIBLE – after all, the Indigenous people of the Far North can afford to keep dog sleds in a very unforgiving and resource-poor environment. Assuming that our goblins have domesticated pigs, bovines, sheep, etc. and have lower caloric requirements than humans, keeping a stock of meat just for those could be manageable – albeit still near-certain to be a very elite way to fight. The more important question is how much value a wolf's maw really adds against infantry with any kind of (leg) armour – the fact that very few rulers are recorded to have tried deploying dogs in pitched battle suggests it's not much.

      Consequently, even the goblins might generally prefer riding ponies, donkeys and/or mules if they can support those – as would surface dwarves/halflings. While I don't see them getting much value out of close-quarters fighting on their backs, such mounts would still be extraordinarily useful both as pack animals and for dragoon formations. (While The Hobbit had dwarves & Bilbo on ponies, I don't recall any being used in war anywhere in the wider LOTR lore.) And in theory, a society of a shorter race which somehow managed to domesticate full-size horses (maybe with the help of cooperating humans/orcs/etc.?) would magnify these benefits even further. Although a halfling or a goblin atop a grain-fed horse would barely be able to do anything in melee, such a horse could well fit multiple dragoons, potentially resulting in very strategically mobile formations with a limited footprint.

      Last but not least, there is also the possibility of mounted archery – assuming they can develop the right tactics without the steppe-culture preconditions, that is. Shorter arms may prevent them from drawing a bow as far as humans can – but their dismounted archers would have the same issue, so it would probably only encourage them to mount as many archers as possible. Otherwise, the only way to avoid being comprehensively outshot – short of relying purely on their armour-production advantage to close the distance – is the production of advanced crossbows.

      Having said that, the biggest impediment to archery for these races might simply be eyesight. Any race which evolved underground would be extremely unlikely to tolerate sunlight well. They would likely be forced to act nocturnally on the surface, or even develop some form of "sunglasses" to operate during the day. The confines of the underground would also provide no evolutionary advantage to seeing far ahead and judging long distances accurately, further complicating any attempt at ranged combat. Taken to its conclusion, this logic could well suggest that no civilization of such a race could hope to win large-scale warfare outside its home turf once firearms are developed, unless they figure out a solution – perhaps using literal magic or the like.

      1. Naively, halflings might have (at a guess) twice the metabolism of humans (because smaller mammals) but be 1/8th the mass (because half the height). So 1/4 the food needed to support a human-level intelligence.

        What does warfare look like when the enemy is half your height and reach, and somewhat weaker, but there are 4x as many of them per land area, and they’re just as good at tool use as you are? Actually, possibly _better_ at coming up with tools, simply because there are more of them — more minds in close contact, more ideas. Oh, and a donkey can carry food for 4x as many soldiers, or 1 soldier 4x as far.

        Responding to another comment: the Silmarillion linked elves and orcs/goblins, and fantasy has done so ever since, but in lifestyle Tolkien's goblins live a lot more like dwarves. Oddly, he never recorded considering the idea that orcs _were_ warped dwarves, despite this solving many of his self-imposed theological problems…

        1. Great points! When it comes to the last one, I must also say it's remarkable that a modern reader is probably used to thinking of goblins as merely "shorter, worse orcs" – often somewhat smarter, but still obviously inferior as a group and playing second fiddle to orcs whenever both are around. Yet, while orcs at most get a few offhand mentions in Beowulf (where elves are apparently just as evil as they are, exactly as Dinwar was saying) before Tolkien, goblins have much deeper roots in folklore. Perhaps knowledge of that somehow prevented Tolkien from simplifying goblins too much?

          Either way, one of my favourite random factoids is that an interwar-era American plane was actually designated the Grumman Goblin in Canadian service. Sure, those planes ultimately never fired a shot in anger, but even so, can you imagine today's pilots willingly climbing into a plane named after what is now generally thought of as not just evil, but fundamentally a weak, "cannon fodder" creature?

          https://en.wikipedia.org/wiki/Grumman_FF

          1. Blame D&D. You need stat blocks that are worse than orcs so that the characters can level up.

      2. Naively, halflings might have (at a guess) twice the metabolism of humans (because smaller mammals) but be 1/8th the mass (because half the height).

        Your guess is pretty close – the "canonical" estimate for the way metabolic rate scales with mass is a 3/4 power law, so halflings would have about 1.68 times the specific metabolic rate of humans. That's assuming they follow the rule, which they might not – it's more a crude rule of thumb than anything exact.
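        The 3/4-power (Kleiber) arithmetic can be checked in a couple of lines – and note that it actually implies roughly 1/5 of a human's total food, a bit better than the naive 1/4 guess (assuming, again, that halflings follow the rule at all):

```python
# Kleiber's law sketch: total metabolic rate ~ mass^(3/4), so the per-kg
# ("specific") rate scales as mass^(-1/4). Half the linear size -> 1/8 the mass.
mass_ratio = 1 / 8

specific_rate_ratio = mass_ratio ** -0.25  # per-kg metabolism vs. a human
total_food_ratio = mass_ratio ** 0.75      # total calories vs. a human

print(round(specific_rate_ratio, 2))  # 1.68, the figure in the comment
print(round(total_food_ratio, 2))     # 0.21, i.e. roughly a fifth of the food
```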

    2. The usual solution for that comment-level problem is to scroll up and find the last comment that was one level higher and still allows replying.

      1. And if you are worried about ambiguity, just copy/paste the header from the comment you mean, like:

        “Mikko Saukkoriipi @ January 15, 2025 at 2:01 pm”

        Although come to think of it there could still be a time zone confusion problem there.

    3. “There would be a considerable risk of missing entirely and blunting your weapon against the floor or even getting it stuck in softer ground.”

      I was referring to defense AGAINST dwarves in my post, not defense FOR dwarves. But still, this is a valid consideration. It's also not enough of a problem that low-angle attacks were ignored – they (and their counters) were included in historical manuals. 80% of swordplay is in the feet, so if you stab a guy's foot he's going to be at a significant disadvantage (to say nothing of the femoral artery, my wife's favorite target).

      The real issue is angles. I’m well over 6′, and I’ve sparred against people less than 5′ tall. They can scrunch up and make it really, really hard to hit (especially in Olympic fencing with foils!). There’s just less to cover, and with a shield it’s pretty easy (if you’re an experienced fighter) to scrunch yourself up in a way that both allows you freedom of movement and precludes me being able to hit you.

      “I wonder if heavy cavalry would actually need somewhat longer lances in order for a charge to remain as effective as it is normally?”

      I suppose it depends on the army. For my part I'd probably opt for either aerial support (when dragons are an option, always use dragons!) or specialized anti-dwarf tactics. The Romans didn't revamp their cavalry for elephants; they developed new tactics against them. I'd have to think more about what those would be.

      All of that is assuming, of course, standard shield wall tactics. In fantasy there’s no reason to presume that those would be valid. I always liked the way a friend once described archers in Diablo II: “Think of them as mages who use money instead of mana.” Give dwarves access to magic and bows, and suddenly the equation changes pretty drastically! For my part, I like the idea of dwarves introducing more modern-equivalent firepower. A bow that launches arrows with explosive spells is functionally indistinguishable from artillery, and wands have been treated as general-purpose sidearms in fantasy before. Modern tactics would appeal to the engineering side of dwarvish psychology.

      “The flip side is that dwarves are traditionally portrayed as effectively exclusive underground/mountain dwellers…”

      I’ve never liked that, personally. Go back to the original myths and things are very, very different. You tend to find dwarfs (as in, the classic fairy-tale creatures, as opposed to dwarves, the modern fantasy creatures) in a variety of settings. Mines, sure, but they’re not EXCLUSIVELY in mines. “Snow White”, for all its flaws when it comes to dwarfs (can you tell I like Tolkien?), gets that right–they go to the mine, but they LIVE in a cottage in the woods. Some of the Nordic dwarfs lived near streams and in forests as well. For my part, I see no reason to restrict them to mines unless, as you say, the mines are just so rich that they can afford to import the necessary food and fuel.

      You don’t need to lay claim to the land in order to use it, remember. Trade is possible. Mongols didn’t have a lot of furnaces, yet they had iron tools, because they traded for them. Venice didn’t have a lot of gold mines, but it certainly had a lot of gold! And spices, and silks, and the like. If the economic conditions are right, you can obtain a lot of stuff that’s very, very much non-local. It would in fact have to be the case, as it’s impossible (for humans anyway) to live on rats and fungus alone, even if you include ketchup. And culture plays a role as well. Dwarves may have a culture where being armored is part of being an adult (though I should emphasize that life-cycles among fantasy beings do not need to be “Like humans, but elongated”; they can be much weirder, as fairy tales demonstrate!), and this would incentivize them to emphasize such trade.

      And again, you have the fantasy element to consider. Iron and coal are not often found in real life in the same areas (though it’s certainly not unheard of; see Alabama), but there’s no reason to presume that they follow the same rules in a fantasy setting. Nor would dwarves necessarily be limited to the same fuels as actual humans. The trick is to integrate it into the world cohesively.

      As an aside, the introduction of goblins to this discussion is very weird to me. I get where you’re coming from. It’s just, I always lump dwarves and trolls together; goblins are linked with elves in my mind. The Germanic literature on elves was not as…nice….as modern fantasy readers tend to think! This is entirely tangential and mostly musing on the evolution of fantasy creatures through time and my own personal idiosyncrasies.

      1. Great points all around! I guess I should note that speculating about the use of magic on the battlefield, for enchantments, etc. is going to be a completely separate discussion that depends heavily on the magic system and on the kind of questions often thought antithetical to the very idea of magic (how many magic wands can be produced in a year, etc.), so it will probably be too complex for now. The one concrete thought I had is that given the effort it took to heat forges via charcoal, any sort of enchantment which could enable a faction's forges to generate that heat on their own would have been quite literally worth its weight in gold. In fact, for any actual deep underground settlement, such a magic forge would probably be a prerequisite.

        The other consideration is that dragons would (in)famously have to be thoroughly magical creatures to be able to even take off, let alone breathe fire. Some settings did lean into that heavily (e.g. the dragons in the Japanese Drakengard, better known for its android-heavy spin-off Nier, were so magical they could not only talk and be enchanted to kill deities but also spit literally homing fireballs – and it wasn't just a player-character thing), but otherwise I might prefer something a little more plausible like the roc or even harpies (although those would have to have much smaller bodies and FAR larger wings than normally drawn to actually fly without magic) – just to drop stones or explosives onto enemies' heads.

        And yeah, it’s true that, say, Beowulf describes elves as akin to ogres and evil spirits. I am not sure if you have ever heard of a classic roguelike called NetHack*, but one of the many unusual things about it is that while it takes a lot from Tolkien (to the point that it not only has Mordor orcs and literally nine Nazgul, but if a player finds an elven dagger and names it Sting, it actually becomes enchanted), its elves are nevertheless considered chaotic, same as orcs and many other generally evil creatures. Hence, a player character who is an orc can actually see not just fellow orcs but also random elves in the dungeon be peaceful towards them (though it doesn’t work the other way around) – while a dwarf or a gnome character WILL always have to fight any orc AND elf they encounter.**

        *(Completely free, which is one reason it does not need to worry about copyright violations. It is also ludicrously hard to beat due to its unusual keyboard-heavy controls and extraordinary number of interactions – e.g. the player needs to routinely blindfold themselves to avoid blinding attacks or to use telepathy (and perhaps to avoid dying to Medusa), cockatrices are notorious for the myriad ways in which they can stone the player, whether alive or dead or even unhatched, and descending into Gehennom requires the player to perform a ritual with a literal bell, book and candle.)

        **Funnily enough, a neutral human character would be in one of the worst positions for a while – they would have to fight every dwarf (they attack everyone not lawful) and every gnome (who attack every player character besides themselves and dwarves), in addition to every orc and elf, and there are comparatively few enemies who are only peaceful towards neutrals to offset that. A lawful human would always see dwarves as peaceful, and a chaotic human would at least have some orcs and elves neutral. The flip side is that dwarves are always rare, and by the time orcs and elves are frequent, a player is usually strong enough to never lose to them under normal circumstances. Hence, gnomish and dwarven characters actually have a significant early-game advantage due to not having to kill everyone in the Gnomish Mines to get to the only (near)-guaranteed set of shops in the game. (Sorry for the digression.)

        1. “I am not sure if you have ever heard of a classic roguelike called NetHack*….”

          My best YASD was a Cursed Amulet of Strangulation. “Oh cool, found an amulet! Can’t find my pet to BUC test it, I’ll just put it on. Oh….Oh no…..OH NO……”

          “In fact, for any actual deep underground settlement, such a magic forge would probably be a prerequisite.”

          Either that, or a massive supply of coal, yeah. The problem with charcoal is that it’s really, really brittle. Without binders or modern equipment, transporting it more than a couple of days away would ruin it. Maybe they could import the wood, but that’s a LOT of wood to haul down into a mine.

          I kind of like the way Morrowind does it. The dwarves (Dwemer) were basically steam-powered, using the lava pools from a volcano (held inactive by a trio of ascended god-heroes) to generate heat. It provided a ready source of thermal energy for metalworking, as well as a ready source of power for all their fun traps and gizmos. The geologist in me isn’t thrilled (heat transfers to air and metal! mafic volcanoes don’t erupt the way Vvardenfell did!), but in-universe it makes a lot of sense. You’ve got to deal with the heat anyway, may as well make use of it.

          “The other consideration is that dragons would (in)famously have to be thoroughly magical creatures to be able to even take off, let alone breathe fire.”

          I always give dragons a free pass, mostly because I really like them. Plus, fantasy by definition requires a level of unrealism, and dragons are mostly taken as a given. It’s like FTL travel in sci-fi–it’s such a common trope that as long as you don’t make it something totally stupid (without reason; see Douglas Adams) it gets a pass as just part of the setting.

          But agreed, even something like rocs or harpies or the like – even relatively small dragons with the intelligence of a German shepherd – would have the potential to radically alter the battlefield in an ancient battle. Shield walls only work because, realistically, battles are 2D. (Attempts to fling people over shield walls do not go well.) Adding a third dimension radically alters the situation. This is seen in sieges, where specialized equipment is constructed to manage the higher shooting platforms. Trying to do the same thing in a field battle would be extremely difficult. The testudo formation isn’t really viable for a running battle!

    4. The square-cube law is for volume/mass as linear dimension increases, but for armour it is the surface area that matters and that’s four times, not eight.

      Even worse, dwarves etc are still flesh and blood, so if they’re going to swing axes as hard as a human, or block blows from a human, they’re going to need bones and muscles of about the same thickness.

      Dwarves are usually depicted as shorter than humans but with similar body, arm, and leg thickness. I think a more appropriate model is cylinders of similar girth, where a human who is twice as tall has about twice the surface area – not four times.
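      Both versions of the scaling can be made concrete (the numbers below just assume geometric similarity, or the same-girth cylinder model, respectively; the cylinder dimensions are purely illustrative):

```python
import math

# Geometrically similar body at half the linear scale: area ~ s^2, mass ~ s^3.
s = 0.5
print(s ** 2, s ** 3)  # 0.25 0.125 -> a quarter the armour area, an eighth the mass

# The cylinder model: same girth, half the height (illustrative dimensions, metres).
r, h = 0.15, 1.8
human_area = 2 * math.pi * r * h
dwarf_area = 2 * math.pi * r * (h / 2)
print(round(human_area / dwarf_area, 2))  # 2.0 -> only half the area, not a quarter
```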

    5. “Horses crashing into the shields and weapons of the first line . . . .”–Minor point, but as Bret has pointed out, horses are not battering rams (nor even football linemen). They do not “crash into” solid infantry ranks. John Keegan also discusses this issue. Cavalry are effective against (i) infantry in loose or disordered formation, which might include those whose formation has been disrupted by artillery, or (ii) fleeing infantry, including untrained levies or those with poor morale which break and flee as the cavalry commences a charge.

    6. Halflings are going to be inherently worse at concentration of force than regular humans, so the humans can probably smash their way through the halflings line without much difficulty.

      You are contemplating the equivalent of a lethal fight between adults and small children. That is not going to go well for the beings the size of small children.

      1. This isn’t a very good argument. As Ynneadwraith already noted above, “chimpanzees are about 2/3rds the weight of a human, but are just as strong….because they have significantly more fast-twitch muscle fibres than we do.” Likewise, children are weaker than full-grown adults not ONLY because they are smaller and their muscles are smaller, but also because they have significantly fewer fast-twitch fibres than adults. You can read about it here. (“Slow-twitch” and “fast-twitch” fibers are formally known as “Type I” and “Type II”, respectively.)

        https://onlinelibrary.wiley.com/doi/10.1002/mus.27151

        > The finding of a larger relative type II fiber area in the adult males than in the male children in our study suggest that the adult males have a greater local anaerobic potential. This agrees with earlier findings reporting that adult males as compared to male children had a more anaerobic profile, with both higher basal muscle phosphofructokinase activity and greater maximal muscle lactate accumulation in response to exercise. Moreover, a well‐known sex‐specific increase in muscle power from child‐to‐adult with larger increase in males could partly be related to the sex‐specific increase in relative type II fiber area. This fiber type is known to develop higher peak power than type I even after adjustment for size.

        Consequently, if an adult halfling has the same muscle fiber profile as an adult human, then they would in fact be (considerably) stronger than a human child who is otherwise of the same size. If we assume that their muscle fiber ratio is actually more like that of a chimpanzee for whatever reason, that would reduce the difference in strength between the two even further. Add a few more assumptions (i.e. generally outnumbering humans on the battlefield due to requiring less food per person, as argued by mindstalko above) and this would again get interesting.

        1. Modern contact “combat” sports such as wrestling and boxing all divide up contestants into weight classes, recognising that a fight between a 50kg boxer and a 100kg boxer is unlikely to be fair. Those are people with the same kinds of muscle fibre.

          Halflings and dwarves can be as strong, but with shorter arms they can’t do as much damage. For a given muscle movement, an arm twice as long has its end moving twice as fast. Kinetic energy is proportional to mass times velocity squared (KE = ½mv²), so that blow lands four times harder.

          (And yes, this progression isn’t perfectly linear: there are also control issues as weapons get longer, which can cancel out the benefit. The comparison here is beings shorter than humans vs. the average human. And if you’re going to say that combat rarely gives people time for a full swing – yes, and that makes it even more useful to have long arms that generate more force from smaller muscle movements.)
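          The toy version of the reach argument (a rigid swing at a fixed joint rotation rate; real swings are messier, as noted above):

```python
# Tip speed of a swing scales with arm length for the same angular speed,
# and kinetic energy KE = 1/2 * m * v^2 scales with the square of that speed.
def tip_kinetic_energy(mass_kg, arm_length_m, omega_rad_s):
    v = omega_rad_s * arm_length_m  # tip speed for a given joint rotation rate
    return 0.5 * mass_kg * v ** 2

short_arm = tip_kinetic_energy(1.0, 0.5, 6.0)  # halfling-length arm
long_arm = tip_kinetic_energy(1.0, 1.0, 6.0)   # human-length arm
print(long_arm / short_arm)  # 4.0 -> twice the arm, four times the energy
```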

          And pure weight/mass does count in combat too: the shield punch or shield press. If I as a human with 100kg combined body and armour step into shield to shield contact with a halfling weighing 30kg, that halfling is going backwards unless they’re literally nailed to the ground.

          Here’s Gimli at the (book) battle of Helm’s Deep after killing two orcs at the end of the sally against the Isengard ram: “but I looked on the hillmen and they seemed over large for me, so I sat beside a stone to see your sword-play”

            1. All good points. I have speculated at length about such considerations in response to ad9 below, who expressed a similar idea in a blunter fashion, although a lot of it centered on the assumption that a halfling would be a mere ~8-10kg – seemingly suggested by a raw application of the square-cube law, even if dubious in other ways. People more similar in size to modern pygmies and closer to them in weight (I think ~40 kg for actual pygmies; maybe down to 30kg if we assume that, say, a goblin is maybe a foot shorter still) would be less hopeless in unassisted melee combat, but also wouldn’t be able to ride canines as easily. (Real-world wolves, as opposed to wargs*, may be out, weighing 30-40kg themselves. The heaviest breeds like mastiffs weigh double that, though, and may still work.)

            * In fantasy, we can also assume the existence of almost any creature physically suitable for being ridden by such people, of course. Eragon had the “Feldûnost” with a completely ridiculous ability to jump 70 feet. DnD’s Rothes are about 4 feet tall in NetHack and can be saddled there. Although they actually move slower than the player character, they are also literally stronger in a fair fight until level 6 or so. Designing a physically/biologically plausible fictional creature that doesn’t implicitly rely on magic to stay alive is a whole other headache, though.

            I also couldn’t make comparison after comparison with real-world pygmies without looking up more of their own history. The best-described example is probably of the original inhabitants of Rwanda – the Twa pygmies of the forests, who were eventually conclusively defeated when the agriculturalist Hutus and pastoralist Tutsis arrived centuries later, clearing many forests and reducing them to a subservient, (borderline) enslaved caste long before the arrival of the Europeans.

            Granted, the Twa and other pygmies apparently have a life expectancy at birth below 18 years, and few individuals live beyond 25 – apparently largely due to the prevalence of diseases such as ebola, combined with the difficulty of obtaining caloric requirements in the forests, rather than any inherent part of their biology. One can see how a society so focused on basic survival would not produce an experienced and properly-fed warrior class (as opposed to hunters – in spite of their size, they are well-known for prowess in hunting elephants) and would therefore lose to a society which can. Their defeat could easily be seen as part of the same pattern whereby (contrary to the myth of their inherent “toughness”) such hunter-gatherer tribes generally lost to more socially organized neighbours whenever they did not have something like highly hostile steppes to retreat to (as explained by Bret in the “Fremen Mirage” series), regardless of whether they were full-sized or not.

            And for what it’s worth, we actually DO have a recent example of a conflict between the Twa and the regular-sized Luba people in the Democratic Republic of the Congo, where both started from a more level playing field – the Twa still in a subservient position (hence their rebellion), but also with the benefits of a more agricultural lifestyle compared to their past. This was also a conflict where firearms were apparently only really available to the Congolese military, who didn’t seem to participate very often, and the bulk of the fighting was done with machetes and arrows.

            There is not much information on which side ultimately suffered the most, but the conflict displaced 650,000 people (apparently more than the total population of Twa in the entire country) and effectively stopped the harvesting of fields in that region. This, and the fact the DRC government ended up making political concessions to the Twa after the peace deal was struck, implies they fought better than size alone would suggest. Granted, all references describe Twa tactics as a guerilla/terror campaign where they tended to ambush people (often unarmed villagers, sadly) and shoot them with arrows up close, so it’s only indirectly relevant to this discussion.

            https://en.wikipedia.org/wiki/Batwa%E2%80%93Luba_clashes

        2. To quote mindstalk0:

          “What does warfare look like when the enemy is half your height and reach, and somewhat weaker, but there are 4x as many of them per land area, and they’re just as good at tool use as you are?”

          What it looks like is that the enemy is inherently worse at concentrating force, and loses. The midgets end up as slaves that are occasionally used as scouts.

          We are talking about battles with 8kg people armed with four foot spears. Having a lot of them just means having one overwhelmingly one sided battle after another.

          At least until people invent mechanised (or, in a fantasy setting, magical) warfare. (This is ignoring the issues with half-scale brains and eyes, of course).

          1. You have a point, but eyesight doesn’t seem well-correlated with actual eyeball size – eagles and other birds of prey certainly don’t have larger eyeballs than humans, do they? With intelligence it’s more complex – on one hand, elephants are (probably) not more intelligent than humans in spite of much larger brains, but on the other hand, people with microcephaly are very lucky to retain normal intelligence (although it DOES happen occasionally). The most important factor in intelligence is the prefrontal cortex specifically, and you can make all kinds of assumptions there.

            Now, for the fun part (LONG comment ahead!) – 8kg is approximately the weight of two THIN adult CATS! (The healthy range for a single 2+ year old cat is apparently 3.6-5.4 kg.) I realize that assuming a halfling is literally a 0.5-scale model of a human and then applying the square-cube law to divide adult human body weight by 8 would actually result in similar numbers (not 8kg, as that would assume an average male weight of 64kg, but somewhere in the 9-10kg range), but that would still mean that, say, a typical WOLVERINE (weight range 11 to 18 kg, apparently) weighs more than a halfling would, and it just seems like there must be a mistake somewhere with these numbers. For comparison, the various real-world pygmy peoples, whom I cited earlier, apparently have a body weight of around 40 kg – though their body size is also 0.75-0.8 of a regular human rather than exactly half.
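            For what it’s worth, the square-cube arithmetic in question (a halfling as a literal 0.5-scale human; the baseline human weights are just illustrations):

```python
# Mass scales with the cube of the linear scale factor.
scale = 0.5

for human_mass_kg in (64.0, 75.0, 80.0):
    halfling_mass_kg = human_mass_kg * scale ** 3
    print(human_mass_kg, "->", round(halfling_mass_kg, 1))
# 64 kg -> 8.0, 75 kg -> 9.4, 80 kg -> 10.0: the 8-10 kg range discussed above
```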

            It should also be noted that square-cube law may not apply to armour weight vs. volume (as pointed out by scifihughf above) but it apparently does apply to body strength RELATIVE to body weight – which is the reason insects like ants are so strong, and it would also mean that halflings and other smaller people would be able to carry not just the perfectly scaled-down version of human equipment, but something which accounts for a greater fraction of their body weight.

            https://www.thoughtco.com/ants-lift-fifty-times-their-weight-1968083

            Thus, for one thing, “four foot spears” is something they might use to fight each other, but it’s not a limit on what they actually carry. Historical Macedonians carried the sarissa, which was 5.8 meters long according to Bret (i.e. 19 feet) – so even a perfectly halved version would still be nearly 3 meters long. While 0.8m of that length would be sticking out behind the wielder (again, this is just halving Bret’s estimate of 1.6 meter length of the real sarissa being used for balance) that still leaves us with a 2m distance between the wielder and the spearpoint that an attacker must overcome – and we may have slack to assume somewhat longer shafts due to the same square-cube law.
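            The half-sarissa numbers work out as follows (simply halving Bret’s figures, as above):

```python
# Halving the full-size sarissa figures quoted from Bret (metres).
sarissa_m = 5.8         # full-size sarissa length
behind_wielder_m = 1.6  # portion held behind the wielder, for balance

half_sarissa_m = sarissa_m / 2
reach_m = half_sarissa_m - behind_wielder_m / 2
print(half_sarissa_m, round(reach_m, 1))  # 2.9 m long, ~2.1 m of point in front
```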

            None of this overcomes the more fundamental issue with “true halflings” (as opposed to stocky dwarves or to goblins, whom I imagine to be more comparable to pygmies) in a battle of line infantry, of course – IF those weight calculations are right, true “half-sarissas” would still be coming into contact with a mass of opponents 8 times heavier than those holding them, so against any organized opponent, the front ranks would still be reeling from sheer force, and a shattered formation doesn’t last long.

            Now, I can IMAGINE a few attempts to work around this physical reality – e.g. chaining every member of this halfling phalanx to the others to make the shock spread more evenly through it could be one attempt. Outright bringing anchors of some kind onto the battlefield and attaching themselves to them (similar to scifihughf’s ironic suggestion of nailing themselves to the ground) could be another. There is also the option (probably a requirement, really) of a MUCH thicker layer of padding underneath body armour (up to 10 cm, perhaps?) than anything seen historically, since again, the square-cube law likely means they could handle that extra weight. A human would faint from the heat of all that padding – but smaller size also means improved thermoregulation, making it more plausible for them. (Same reason desert creatures are considerably smaller than their cousins elsewhere, and real-world pygmies traditionally live in hot, damp forests in the first place.) At the very least, all of the above could make for interesting-looking pitched battles from a purely fictional perspective – but still something they would probably choose to avoid if at all possible.

            At the same time, it’s interesting to think that IF these weight numbers are accurate, these ~10kg people would also be perfectly capable of riding dogs. A typical sled dog could seemingly do it – it weighs 20-25kg yet can pull at least 3x its body weight, suggesting it could support armour (on itself and the rider) and quite a bit of equipment as well. Breeds like St. Bernards or Great Danes would be considerably harder to feed and field in numbers, but they also weigh almost as much as adult humans, and the weight ratio between such a dog and a halfling would be the same as or even larger than the ratio between a horse (~500kg) and a human. (This obviously assumes they can efficiently domesticate large dogs in the first place – humans do domesticate and ride outright elephants, but elephants aren’t carnivorous and so are far cheaper to feed. Then again, most fantasy settings could just have domestication assisted by magic.)

            That is, if a 10kg halfling can be attached securely enough to a 60-80kg St. Bernard, it MIGHT actually eliminate the weight advantage of a human entirely. Humans’ reach advantage would still be there for any weapon besides those “half-sarissas”, of course, and the kinetic energy advantage would at most have to be negated through superior armour. (As would the obvious ability of the dog to bite, although that wouldn’t be much of a factor against a thoroughly armoured human.) Additionally, if dogs can pull sleds, they can certainly pull chariots – and those could in theory be used to mount halfling-scale “scorpions”. Even if we are being conservative and assume such a weapon would merely be comparable to a heavy late-medieval crossbow (e.g. the Genoese type), so as to retain mobility and avoid the limitations of a carroballista described on this blog – that is still likely superior to anything a single archer/crossbowman of their size could draw individually, so the factors which caused the decline of chariots in favour of horse archers would not apply to halflings. In numbers, such chariots could well replicate the “feigned flight” shooting-circle tactic, using the crossbow’s range to offset their inferior mobility next to true horse archers.
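            For what it’s worth, the mass ratios above are easy to sanity-check. A minimal back-of-envelope sketch in Python, using only the thread’s own assumed figures (~10kg halfling, ~80kg human, 20-25kg sled dog pulling 3x its weight, 60-80kg St. Bernard, ~500kg horse – all rough assumptions, not measured data):

```python
# All figures are the thread's rough assumptions, not measured data.
HALFLING_KG = 10
HUMAN_KG = 80        # rough adult human mass behind the "8 times heavier" claim
SLED_DOG_KG = 22     # midpoint of the 20-25 kg range
ST_BERNARD_KG = 70   # midpoint of the 60-80 kg range
HORSE_KG = 500

# How much heavier a human is than a halfling:
print(HUMAN_KG / HALFLING_KG)        # 8.0 -> the "8 times heavier" figure

# If a sled dog can pull ~3x its body weight, its notional load budget:
print(SLED_DOG_KG * 3)               # 66 kg: ample for a 10 kg rider plus kit

# Mount-to-rider ratios: St. Bernard/halfling vs. horse/human
print(ST_BERNARD_KG / HALFLING_KG)   # 7.0
print(HORSE_KG / HUMAN_KG)           # 6.25 -> the dog ratio is indeed larger
```

            So on these numbers the St. Bernard really does stand in roughly the same (or a slightly better) relation to a halfling as a horse does to a human.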

            So, in theory, we could have a settled people with a high-level organic economy’s capacity to produce armour and other implements of war, but one which would fight in the open (if forced to) with mounted-force tactics more reminiscent of nomadic peoples, due to its physical limitations. Still a force with a lot going against it, but not outright hopeless.

          2. I am all in favour of future artists (movie makers?) depicting the Shire Self Defence Force of chariots drawn by large, ferocious dogs and armed with post-mounted windlass crossbows.

          3. The problem with this assessment is that it presupposes that the Halflings are idiots. Intelligent beings play to their strengths while exploiting their opponents’ weaknesses.

            This is seen quite clearly in the final chapters of LOTR, where the Hobbits take back the Shire. They don’t attempt to meet the enemy in pitched battles, shield-wall to shield-wall. What we see is a number of things.

            First, the enemy underestimates them. Badly. The enemy assumes that small stature and generally unwarlike society mean they are easy pickings. They were correct to a large extent, until they quite suddenly weren’t–and their success up to that point made them over-confident and unprepared.

            Second, the Hobbits set ambushes that take advantage of their strengths. People tend to forget that Hobbits are hard to see, despite it being a recurring point in the books – they’re not invisible, they’re just smaller than humans and more adept at using natural concealment. This means that by the time the ruffians under Sharkey realized they were in trouble, they were surrounded and badly outnumbered.

            The combination presents a very different type of war from what Gondor practiced, but not an ineffective one.

            “But that’s the Shire! That’s home field!” you say. Okay. Look at Merry in the siege of Minas Tirith. He’s able to fatally wound a load-bearing enemy simply because the enemy didn’t notice him. And let’s be clear: this was not something the Enemy was unaware of. The whole point of taking Pippin to Minas Tirith was that Gandalf thought Sauron thought Pippin had the Ring, and the Nazgul had been spying on Rohan’s movements since that night with the Seeing Stone. This is not an out-of-context problem, but rather part of the reason Sauron pushed as hard as he did. We can assume that capturing Halflings was part of the mission. And the Witch-King STILL misses the Hobbit.

            I will grant that armies change the equation–hiding one person is very different from hiding armies. But still, let’s not pretend that there are no advantages here.

            There’s also the question of preferred weapons. There’s no reason to assume that Hobbits would opt to fight the Big Folk using the same weapons. There are elements of tradition in military equipment, after all, and while there are debates about the efficacy of archers, the fact that bows were used right up until guns took over the battlefield more or less demonstrates that they were effective weapons (tradition won’t keep a totally ineffective weapon around). Slings are also useful.

            But there’s another way they could fight: logistics. LOTR presented Halflings as effective farmers, which is no small thing where militaries are concerned. I may be 4′ tall, I may not be able to stand in a shield wall, but I CAN raise a lot of wheat for hardtack. I can also make rope, and forge metal (iron doesn’t care how big you are, and Hobbits had smiths), and raise livestock. Potatoes alone would revolutionize social and military logistics – we know this from history.

  46. I was wondering if Dr. Devereaux (or any of the readers) has read the book Fiefs and Vassals by Susan Reynolds. I remember he said in an older post that he and many other historians have become skeptical of the term “feudalism” because of how it conflates vassalage and manorialism, but she argues that the vassalage relationship did not exist at all in the way it is often conceived.

    1. Reynolds didn’t argue that. What she shows in her book is that the usual conception of feudal vassalage makes the relationship sound far more universal than it ever really was: in reality, “feudal” relationships between lords and vassals were much more idiosyncratic, varying significantly across the breadth and timeframe of medieval Europe, with only a minority (again, in both time and space) actually having all the features associated with the classical image of “feudalism.”

  47. “More broadly, I think a lot of the popular discussion (I won’t speak for the researchers) here is asking the wrong question: it’s focused on “why did Rome fall” as if empires normally last forever.

    “How did this person die?” “Hah, do you think people are naturally immortal?”

    No offense, but I hope this illustrates just how inane this quibble with the phrasing of the question is.

    It would make sense if (and probably only if) there was an established paradigm regarding the… well, decline and fall of empires, and the Roman empire fell in accordance with said accepted paradigm.

  48. Equating “man-at-arms” with “non-noble elite combatant” is fair, I guess, but it also obscures the fact that such men came from a mostly-hereditary elite class in their own right — the “gentry” or “gentlemen” in England before the definition of the class was broadened by successive Reform Acts in the 19th century. Also, the so-called “unmounted man-at-arms” sounds weird to me, since he wouldn’t really have been unmounted; he would still have had a mount to carry him on the march, just not one he was required to be able to use in combat. At least I don’t think I’ve seen any medieval documents using the term “man-at-arms,” “homme d’armes,” “uomini d’arme” or the like for actual full-time infantry with no mounted transport.

  49. > Even a simple analysis shows that a high level party is walking around with the equivalent labour hours to the Great Pyramid of Giza on their backs, if you attempt to work out how many labor hours high level equipment represents in wages

    It’s true that some of this is because the costs used in the game are modelled around the parts that affect gameplay the most, and some of it is because lots of things are just inconsistent – like the ball bearings thrown in because they’re an “adventure-y” sort of thing but make no sense in the world, or the inconsistent costs for mundane equipment.

    But the discrepancies might be most visible in starting-level parties, which inexplicably have plate. At high levels, weirdly, some of it might actually be true. In DnD there’s probably a bigger gulf between an average, minimally-equipped soldier and the most powerful individuals than in any real-world era. The extent to which a 15th-level character outclasses an average human is inconsistent, because the game tries to balance the world so that combat and non-combat activities both remain meaningful, but in some ways a 15th-level character can matter more than entire armies.

    > Now part of the reason for that is that the adventuring party is, itself, patterned off of a knight’s retinue

    That’s a great comparison. But is it true that was an influence? I thought adventuring parties mostly originated in DnD, which grew culturally from sword-and-sorcery books (and many other types of fantasy) and mechanically from wargaming. Sword-and-sorcery books mostly had a lone barbarian like Conan, or rogues like Fafhrd and the Gray Mouser, not a slate of different, equal characters. But DnD characters immediately specialised into different fantasy tropes to provide a varied play experience; I feel like the “one character of each discipline, each equally necessary to success” pattern was born there.

    > your D&D party’s fighter is probably walking around in equipment that represents a level of wealth your party’s rogue can never hope to have

    Presumably this is true in terms of the archetypes the characters resemble, but in an actual game they likely have the same income because of how the game works: it’s just that the fighter buys armour and swords, which are visible, while the rogue buys rings of invisibility (or high living), which aren’t.
