Collections: Life, Work, Death and the Peasant, Part II: Starting at the End

This is the second part of our series (I, II, IIIa, IIIb, IVa, IVb, IVc, IVd, IVe, V) discussing the basic contours of life – birth, marriage, labor, subsistence, death – of pre-modern peasants and their families. As we’ve discussed, pre-modern peasant farmers make up the vast majority of human beings in the past.1 Last week we started by looking at the basic building blocks of peasant society: the household and the village. We noted that rather than organizing as individuals, pre-modern people generally, and peasant farmers in particular, organized as households, a single economic unit based around a residence and landholding, which included both kin and non-kin members. Those households were then organized into villages, with each household often having scattered land holdings in different parts of the village’s farmland.

In this part and the next, I want to get into some of the demographic patterns that shape the life of these pre-modern farmers. While cultural practices differed quite a lot between different peoples in the past, the mortality and fertility regime that shaped the life of pre-modern farmers tended to be quite consistent between different pre-modern societies – at least in the sense that all of them were radically different from conditions now. This week, we’re going to begin at the ending, with mortality patterns, because they provide necessary foundational context for all of the other demographic patterns. The pattern of mortality among pre-modern people doesn’t exist that far in the past – everyone lived and died like this through the 1700s – but it is so different from our experience today as to be profoundly alien to us. It isn’t just that people lived shorter lives in the past, but that the structure of mortality and the expectations it created were different.

Naturally, I should note before we head in that this post is going to involve some pretty frank discussions of death – mostly from a statistical angle – including child and maternal mortality. I’ve also included some images of death-related artwork to help anchor the reader’s thinking about the culture of death in these societies. That may be somewhat distressing (I suppose it certainly ought to be), so read at your discretion.

Via Wikipedia, an example from Beram in Istria of the ‘danse macabre’ (‘the dance of death’) motif in fresco (1474). Death was a fairly common theme in pre-modern artwork, often presented, as here, quite bluntly as ‘memento mori’ – a reminder that all must die sooner or later.

But first, if you like what you are reading, please share it and if you really like it, you can support this project on Patreon! While I do teach as the academic equivalent of a tenant farmer, tilling the Big Man’s classes, this project is my little plot of freeheld land which enables me to keep working as a writer and scholar. And if you want updates whenever a new post appears, you can click below for email updates or follow me on Twitter and Bluesky and (less frequently) Mastodon (@bretdevereaux@historians.social) for updates when posts go live and my general musings; I have largely shifted over to Bluesky (I maintain some de minimis presence on Twitter), given that it has become a much better place for historical discussion than Twitter.

Death By the Numbers

It may seem odd to start with death, since after all that is normally the end of a story rather than the beginning. But if we want to understand the structure of life in pre-modern conditions, we really do need to start with death because mortality is, in practice, the ‘forcing function’ of a lot of how life is organized in peasant communities and the pre-modern world more generally. In particular, the pattern of mortality in pre-industrial life, combined with the drive to sustain population, functionally mandates certain family patterns among the vast majority of the population, which in turn drive labor patterns. This is particularly true for the peasantry: we often do see upper classes in pre-modern societies fall below population replacement, but that is no real problem for the society, because there is an effectively infinite supply of people from the lower orders perfectly happy to ‘move up’ into the aristocracy.2 By contrast, there is no ‘reserve’ population to replace the peasantry (understood here to include enslaved agricultural laborers) – instead it is their population growth which sustains the other social orders. So high mortality forces high fertility.

Of course that’s not quite how the causation picture works: high fertility is the result of human choices (and they are, as we’ll see, choices), but high mortality rates limit the range of fertility patterns capable of producing a stable population, not quite down to a single point, but to a sufficiently narrowly constricted range at the upper end of possible fertility. A society which moved out of this range at the lower end would rapidly shrink, creating space for societies (or families within a society) which maintained higher fertility. In practice, this doesn’t result in the sort of ‘rising and declining civilizations’ dynamic one sometimes sees discussed in moralizing, largely ahistorical ‘clash of civilizations’ terms, but rather a situation where all societies put a pretty high value on childrearing and so, absent serious disruptions which prevent it, maintain steady, modest growth at the upper end of the fertility range, with only modest variation. Again, that range isn’t a point – there are some different marriage and family formation patterns – but that range exists in the context of a mortality regime which is functionally unchanging for all pre-modern societies.

And I want to be clear and reiterate that last point, because it is the other reason we’re starting with death: the basic pattern of pre-modern mortality changes very little from one society to another or from one pre-industrial era to another. It also varies very little from one class to another: the fantastically wealthy aristocracies of pre-modern and early-modern societies could buy many fantastical things, but nothing they could spend money on would do much to preserve their children from shockingly high child mortality rates (they may have been somewhat better at surviving longer in adulthood, but only marginally). Until the 1700s, no society’s medicine could meaningfully change the pattern either. The major features of mortality are thus constant in a way that very few things are truly constant in studying historical societies.

Via Wikipedia, a Roman mosaic (first century BCE) from Pompeii, now in the Museo Archeologico Nazionale di Napoli, showing the wheel of fate which oscillates between wealth and poverty (represented by the cloth suspended on either side). The skull in the center on which the scales balance is a reminder that death in this balance is universal and never far off.

So what were the patterns? This is a point where the way we talk about pre-modern mortality tends to skew people’s understanding, because the statistic we lead with is life expectancy at birth (note that we’re not dealing with stillbirths or miscarriages this week). What you’ll hear is “life expectancy [at birth] was around 24 – somewhat higher for females, somewhat lower for males.” And that’s more or less true – the exact figure will wobble around depending on the society and the data set, but never gets far from the mid-twenties – but it can be somewhat deceptive. I remember realizing that someone had fundamentally misunderstood something when I was playing Children of the Nile: when your ruler turns 20 or so, the game informs you that you are old and must think constantly about death. But obviously a 20-year-old – peasant or aristocrat – did not think death was right around the corner, because it usually wasn’t. Instead, that basic life expectancy figure is the product of a set of more complex interactions.

We can think about these patterns through demographic modeling: we take the ancient or medieval data we have and look for a modern demographic model (often based on the study of populations in developing societies) that fits the patterns we see. The standard such model for the ancient world is Model West, usually L3, from Coale and Demeny, Regional Model Life Tables and Stable Populations (1983),3 because it fits the data out of Roman Egypt fairly well. One quirk to these model tables I want to note, because it sometimes confuses folks, is that they express ‘life expectancy’ not as a total expected age, but as average years of life expectancy from a given age, so a 25-year-old with 26.6 years of life expectancy is expected to live to about age 51 (25 + 26.6), not merely another year and a half.

What we see in these models is that life expectancy (female:male) at birth is very low, 25:22.8, but after the first year rises dramatically to 34.9:34.1 (note the gender gap narrowing) and by age 5 to 40.1:39.0 (remember to add the five years they’ve already lived). So life expectancy goes up quite a lot over the first several years of life, which is not, intuitively, the pattern we expect: we normally assume the more you’ve lived, the fewer years you have left.
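
To make that quirk concrete, here is a minimal sketch (in Python) of how remaining life expectancy e(x) falls out of a survivorship curve l(x). The numbers below are illustrative toy values in the rough spirit of a high-mortality life table, not the actual Coale-Demeny figures; the point is just the mechanic.

```python
# A minimal sketch: remaining life expectancy e(x) from a survivorship curve
# l(x). These l(x) values are illustrative toy numbers in the rough spirit of
# a high-mortality life table, NOT the actual Coale-Demeny Model West L3 data.
ages = [0, 1, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80]
lx   = [100_000, 70_000, 55_000, 50_000, 48_000, 46_000,
        40_000, 33_000, 25_000, 16_000, 7_000, 1_500]  # survivors at each age

def life_expectancy(x):
    """Mean remaining years at exact age x (trapezoidal person-years)."""
    i = ages.index(x)
    person_years = sum(
        (ages[j + 1] - ages[j]) * (lx[j] + lx[j + 1]) / 2  # trapezoid rule
        for j in range(i, len(ages) - 1)
    )
    return person_years / lx[i]

for x in (0, 1, 5, 15):
    e = life_expectancy(x)
    print(f"e({x:2d}) = {e:4.1f} years -> expected age at death ~{x + e:.0f}")
```

Even with these made-up numbers, e(1) jumps well above e(0): surviving the first year prunes away the deadliest stretch of life, so the average for those still alive improves.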

A quick chart showing a population under the assumptions of a Model West L3 life table. The blue bar indicates the percentage of a birth cohort that is surviving at a given age, while the orange bar indicates the percentage of a birth cohort that has died in each age ‘jump’ (the Coale and Demeny tables are incremented in five-year increments; I keep meaning to sit down with a demographer and produce a version with each year interpolated between the increments).
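
(For what it’s worth, a crude version of that interpolation is nearly a one-liner; here is a sketch assuming simple linear interpolation of survivorship between the table’s increments – a demographer would fit something smoother, so treat the per-year values as rough.)

```python
# Naive per-year survivorship from a table given in coarse increments, via
# linear interpolation. The grid values are toy numbers, not the published
# Coale-Demeny table.
import numpy as np

grid_ages = np.array([0, 1, 5, 10, 15, 20])
grid_lx   = np.array([1.00, 0.70, 0.55, 0.50, 0.48, 0.46])  # share surviving

yearly_ages = np.arange(0, 21)
yearly_lx = np.interp(yearly_ages, grid_ages, grid_lx)
for age, share in zip(yearly_ages, yearly_lx):
    print(f"age {age:2d}: {share:.3f} of birth cohort surviving")
```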

What is going on here is the substantial impact of absolutely staggering infant and child mortality. Under these assumptions, by age 10, fully 50% of all children born are already dead; only about 45% of all children make it to adulthood as we generally define it (around 18 years). And as a reminder, that is only for live births – we have not yet considered miscarriages, stillbirths or maternal mortality. This enormous child mortality rate is not an accident of a particular ancient society, but in fact an absolute constant for all pre-industrial societies, be they hunter-gatherers, pastoralists or farmers, be they urbanized or not, ‘civilized’ or not, ‘western’ or not. For all societies, everywhere at every time before about 1750 (and in most places for a long time after that) it was simply a fact of life that half, HALF of all children died. Our World in Data produced a famous chart of this (released under CC-BY) which I can just put here:

Youth mortality (deaths before age 15 or so) in modern developed countries today is a fraction of 1%; even poor developing countries are below 10% in all but the worst cases (the global average is 4.3%). That shift is, of course, part of the broader ‘demographic transition,’ which perhaps we’ll talk about one day, but I want to stress it here because it is one of the most striking ways in which “the past is a foreign country,” although in this case the past is even more alien than any foreign country on earth.

Adult mortality patterns can vary a bit more based on the society in question and by conditions, but the basic outline of mortality rates remains fairly constant. Generally early adulthood brought with it two major mortality ‘filters’ – despite young adults being the least vulnerable to sickness and such – military mortality (mostly for men) and maternal mortality (entirely for women). Military mortality for our peasants, as you might imagine, varied quite a bit based on society. In some societies, the peasantry was substantially demilitarized, ruled over by a warrior-aristocratic class that did much of the fighting or protected by a professional military class and so we might expect military mortality among the peasantry to be very low. On the other end of the spectrum, Nathan Rosenstein4 estimates that military mortality may have claimed something like a third of all Romans who served during the third and early second centuries BCE – and as we’ve discussed, nearly all Roman males served, at least in periods of high military demand. I should note that there are a lot of complications in this data, so take that military mortality rate as something like an absolute upper-bound.5

Casualties from pitched battles might average around 10% of all combatants,6 so a male peasant population that served as a militia in this sort of battle, like Greek hoplites (who are probably just the upper third or so of peasants, wealth-wise) might lose something like 4.5% of a male birth cohort (remember that child mortality) on average in a pitched battle. If we assume that a given age cohort might fight 2 or 3 such battles in their lifetimes, which seems a reasonable guess, we might expect something like 10-12% of the relevant military class’ birth cohort to die in battle. If I’m right in guesstimating that the hoplite class represented the upper third or so of the farming population (the rest being too poor or enslaved) we might suppose battle casualties would have consumed around 4-5% of the whole society’s male birth cohort. Expressed another way, that would mean that 10% of all males surviving to adulthood would perish in combat.7 Rosenstein estimates the average combat mortality rate of Roman legions during the Republic – legions that tend to win rather a disproportionate amount of the time – at around 5.6%.8 In short, for societies that field mass conscript armies, military mortality is a significant factor, but pales in comparison to child mortality. Under these conditions, individual battles or wars will almost never be demographically significant (that is, cause noticeable change in population) because casualties are a fairly small subset of the fairly small subset of men who serve. However, the sustained pressure of such warfare might female-shift a society slightly and lead to altered family patterns.
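
Since several rough estimates get multiplied together in that paragraph, it may help to see the chain laid out explicitly. Here is a sketch of the back-of-envelope arithmetic; every input is one of the guesses above, not measured data, and the rounding above is loose (to “4-5%” and “10%”).

```python
# The paragraph's back-of-envelope arithmetic made explicit. Every input is
# one of the rough guesses above; the prose rounds some results up slightly.
survive_to_adulthood = 0.45   # share of a male birth cohort reaching adulthood
battle_casualty_rate = 0.10   # share of combatants lost in a pitched battle
battles_per_cohort   = 2.5    # pitched battles an age cohort might fight
hoplite_share        = 1 / 3  # hoplites as a share of the farming population

# 10% of the adults who fight = ~4.5% of their birth cohort, per battle:
per_battle = battle_casualty_rate * survive_to_adulthood
print(f"per battle: {per_battle:.1%} of the hoplite class's birth cohort")

# Over a lifetime of 2-3 battles, ~10-12% of the military class's cohort:
lifetime_class = per_battle * battles_per_cohort
print(f"lifetime:   {lifetime_class:.1%} of the military class's cohort")

# Diluted across the whole society (only the top third serve as hoplites):
whole_society = lifetime_class * hoplite_share
print(f"society:    {whole_society:.1%} of the whole male birth cohort,")
print(f"            or {whole_society / survive_to_adulthood:.1%} of males reaching adulthood")
```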

Via Wikipedia, a sample of woodcuts by Hans Holbein the Younger showing the Danse Macabre (1538). One of the notable features of the danse macabre as an artistic motif is that it very frequently shows death striking a wide range of individuals, typically including the wealthy and powerful (kings, nobles) alongside the poor (farmers and peddlers, but also, as a frequent motif, a disabled man who has lost a leg and is clearly very poor), and secular figures alongside members of the clergy, both high and low, all to stress death’s universality: here we see death stalking the farmer, an abbess and a peddler.

The other factor for young adult mortality, pertaining in this case to women, was maternal mortality (death as a result of childbirth). This is something of a tricky topic because there is a tendency to substantially overestimate maternal mortality rates in the public imagination, which makes it hard to thread the needle of “higher than now but not as high as you think” without giving some sort of false impression. Maternal mortality is expressed as a fraction of live births (which I am going to convert into percents) and seems to have been very roughly in the range of 1-2% for early modern societies, falling to around 0.5% in the 19th century among western societies for which we have good data and then dropping down to its current level among developed countries of 0.001-0.025%. So we generally assume that a pattern of something like 2% maternal mortality is the right ballpark range for the ancient and medieval world, although there must have been variation we can’t see.

Now, 2% doesn’t, perhaps, sound like a lot, but in a society where something like half of all children are dying before adulthood, women are having quite a lot of children simply to maintain population at replacement. We’ll get back around to fertility rates later in the series, but for now, if we assume each woman in this society is having on average five live childbirths, a 2% chance of death at each stage adds up pretty quickly – a woman who gave birth five times with a 2% chance of dying each time has a total chance of dying at one stage or another of about 9.6%, which as you will note is not far off from the chance of dying in a battle (though, contra the House of the Dragon team, who do not appear to understand statistics, it is not that each birth has the fatality of a battle but that a lifetime of births does, which is not the same).
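
The compounding works like any repeated independent risk: to survive n births at per-birth risk p you must survive each one in turn, so the lifetime risk is 1 − (1 − p)^n. A one-liner, using the rough figures above:

```python
# Lifetime maternal risk from repeated births: survive all n, or die at one
# of them. Inputs are the post's rough ballpark figures, not measured data.
p = 0.02  # per-birth maternal mortality, the ancient/medieval ballpark
n = 5     # live births per woman, roughly what replacement demands

lifetime_risk = 1 - (1 - p) ** n
print(f"{lifetime_risk:.1%}")  # -> 9.6%, comparable to a lifetime of battles
```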

The great remainder of death, the background hum of it, as it were, was from sickness and disease. I should note that this is generally what we mean when we say someone died ‘of old age’ – they were not crushed to death by the prodigious number of candles on their birthday cake, but rather got quite sick while also being old, the latter factor reducing their ability to handle the former. While in modern society these causes of death tend to be the sort of things – heart failure, cancer – which will kill you at any age but mostly only happen to the very old, in pre-modern societies, the diseases that tear at the elderly population tend to be the endemic infections, fevers, pneumonias and such that the younger adult population might generally survive – age and nutrition diminish the immune response until these sicknesses become lethal at one age or another.

The patterns of these age-and-illness related deaths were markedly seasonal, precisely because they were made lethal by weakness of immunity, which corresponds with age but also with nutrition. The precise ‘dying season’ varied from one region to another based on climate and on the timing of key crops. In much of Europe it was in the colder months that death stalked, whereas in the Mediterranean it was late summer and early fall. Climate is certainly one factor here: individuals with fragile health are more vulnerable to excessive heat or cold, and either can kill.

But another factor is nutrition in the agricultural cycle: warmer climates often planted winter wheat (harvested early in the summer) and had their dying season in late summer and early fall, while colder climates planted spring wheat (harvested in fall) and had their dying season in the winter. It’s not hard to see the agricultural cycle at work: harvests are quite variable, ample in some years and short in others. Peasant farmers have all sorts of ways to protect against this risk (we’ll discuss some of them in this series), but a poor harvest still meant belt-tightening, and you didn’t wait for the food to run out to begin rationing. So if the harvest was poor, food for children and the elderly had to be limited (the working-age adults could not be short-changed; they needed to be fit enough to plant and bring in the next crop) starting at that harvest, and so a few months later, when the weather turned bad (too hot or too cold), the already malnourished elderly or very young began to die. Not of starvation, but of this or that disease or infection which, had they been fully fed, they might have fought off.

Via the British Museum (1926,0412.41) a modern sketch (by Paul-Albert Besnard, 1920) that was too on-point for me not to include, showing death embracing a woman while the snow of winter falls outside. I should note that while I have framed this series as being about ancient and medieval peasants, the basic contours of this pre-industrial life persisted in much of the world through the 19th century and into the 20th and some elements of it persist in some of the poorest countries today.

That interaction produces something of a balancing effect, assuming agricultural production remains constant: as a population approaches its food production limits, you get more bad years compared to good years and so mortality rises, with child mortality putting a cap on population growth (along with reducing overall adult life expectancy as older folks pass away earlier). Now, this interaction is very simple and leads a lot of folks into Malthusian purist thinking, but there are a lot of things that can disrupt this Malthusian interaction, even for peasants – there are a lot of ways for agricultural production to rise modestly enough (increased trade and specialization, bringing more land under crops, better farming techniques, selectively bred crops, etc. etc.) to sustain population growth for a really, really long time, and even some very long-settled parts of the world didn’t reach agricultural saturation, where basically all useful farmland was in production, until quite late.9

The basic pattern of mortality – massive child mortality, military and maternal mortality for young adults, seasonal mortality for everyone else, combined with the standard background noise of accident, misadventure and crime – collectively makes up the mortality picture of the peasant. I should note that there probably is a significant amount of regional and sub-regional variation here we’re not capturing, but the overall patterns are fairly consistent.10 We’re going to get to the implications that has for the structure of peasant households in just a moment, but first I want to talk about some of the implications for culture and in particular, the culture of death.

Attitudes About Death

While the patterns – very high infant and child mortality, high maternal mortality, strongly seasonal disease-related mortality – are broadly consistent across pre-modern societies, and thus the standard condition of life for our ancient or medieval peasants, the cultural responses to them varied.

The usual mistake here is to assume that, given the high rate of infant and early childhood mortality, parents and society at large were broadly detached from their children. As Patricia Crone points out (op. cit., 116-7), that is simply not the case: while written sources intended for public audiences often assume a polite silence about children lost very young, expressions of agony and grief are far more common in the source material than detachment. Pre-modern peasant parents went through the same cycles of excitement at a welcome pregnancy (and, as we’ll see, most were welcome) and the same terrible, wrenching loss at the death of an infant that modern parents would. People in the past experienced the same feelings we do, even in a far less kind world, something that comes out quite clearly when they do write to us. By way of example, I’m going to include in quote blocks a handful of epitaphs from E.A. Hemelrijk’s excellent Women and Society in the Roman World: A Sourcebook of Inscriptions from the Roman West (2021).

To Claudia Fortunata. Claudia Quartilla set this up for her sweetest daughter Julia Foebe, for her mother and Claudius Felix and Claudius Fortunatus for their most dutiful sister. Oh unworthy crime: a mother made a tomb for her daughter! (CIL 9,4255, trans. Hemelrijk)

To the spirits of the departed. For Euposia, who lived for one year, eleven months and seventeen days, and for Zosime, who lived for eight months. Their mother Zoe made [this tomb] for her sweetest children. (IGUR 544 = IG 14,1609, trans. Hemelrijk).

All sorts of cultural practices attest to the personhood of even very young children, like the Christian practice of infant baptism, performed as early as possible to ensure the salvation of the child’s soul in case – as was very often the case, note above – they didn’t live to adulthood. Likewise, the loving, careful burial of deceased infants and young children is common; practices like “eaves-drip burial” in medieval Christian cemeteries or the ’emergency’ baptism of stillborn infants in medieval Italy all attest to a deep and profound attachment to children, even those lost extremely young. It wasn’t that peasants cared less, but that they lived in a world with much more grief, albeit grief that might be channeled into non-public expressions or expressions that don’t survive to us.

To the spirits of the departed. Aemilia Donativa lived one year, four months and thirteen days. She lies here. She lived sweeter than a rose. Turbo, her father and Designata, her mother, made this for their daughter. (CIL 8, 16572 = ILAlg 1,3165, trans. Hemelrijk)

To the departed spirit of Cornelia Anniana, our daughter who was already babbling when she was not yet two years old. She lived one year, three months and ten days. The parents made this with their own money for the sweetest daughter. (CIL 14,2482=ILS 8488, trans. Hemelrijk)

On the other hand, that grief was, if not lessened, expected. In these societies, burying children was an expected part of being a parent. Likewise, burying a young wife lost in childbirth was, while not an inevitable expectation, hardly an unheard-of thing. That didn’t make it any less impactful. Funeral epitaphs like the following (also from Hemelrijk, op. cit.) are very common:

To the departed spirits and eternal memory of Blandinia Martiola, the most blameless girl, who lived eighteen years, nine months and five days. Pompeius Catussa, citizen of the Sequani, a plasterer, set this up for his incomparable wife, who was most kind to him. She lived with me for five years, six months and eighteen days without any foul reproach. He had this made during his lifetime for himself and for his wife and dedicated it under the axe. You who read this, go bathe in the baths of Apollo, as I did with my wife. I wish I still could. (CIL 13,1983 = ILS 8158, trans. Hemelrijk)

You who pass by, now stand still and linger for a while. Read the misfortune of a mourning man. Read what I, Trebius Basileus, her grieving husband, have written, so that you may know that the writing below comes from the heart. She was adorned with all good things, unoffending to her dear ones, guileless, a woman who never committed any wrongdoing. She lived twenty-one years and seven months and bore me three sons, whom she left behind when they were small children. Pregnant with her fourth child, she died in the eighth month. Stunned, now examine the initial letters of the verses and willingly read, I pray, the epitaph of a well-deserving woman. You will recognize the name of my beloved [Grata] wife. (CIL 6, 28753, trans. Hemelrijk; the Latin forms an acrostic spelling out the name of the deceased, Veturia Grata)

That said, the expectation meant that the social script around death in these societies was generally very strong. Cultural practices around mourning, burial (or cremation) and memory varied substantially, but anyone growing up to adulthood in a peasant village was likely to have experienced quite a lot of funerals, and so while death wasn’t any less sad, it was processed to a degree as a ‘normal’ part of life rather than a sudden disruption of it.

From Roman Egypt, a collection of mummy portraits, painted faces on wood slats which would be buried with the deceased to enable their spirit to recognize their body, a distinctly Egyptian burial custom (though note the very Roman cultural presentation of these men – they are both Egyptian and Roman; there was no necessary contradiction between the two).
All via the British Museum, these mummy portraits are:
Top row (from the left): inv. EA74715, dated 100-120 AD; EA 74718, dated 80-100 AD; and EA74704, dated 150-170 AD, all from Hawara in the Faiyum, Egypt.
Bottom row (from the left): EA63396, early second century, from Rubaiyat, Egypt; EA 63396, early second century, from Rubaiyat, Egypt; and EA74707, dated 70-120 AD, from Hawara in the Faiyum, Egypt. All six are now in the British Museum.

This isn’t the place for me to go into every sort of death ritual for every pre-modern peasant culture because there are too many and I don’t know them all. But I do want to note some common features that show up frequently (but not universally). One of the most striking is that funerary rituals often focus substantially on the continuing place of the deceased in the community. We’ll come back around to this, but, as Crone observes repeatedly, pre-modern societies were markedly less individualistic than most modern societies, understanding individuals primarily as parts of a household, a family, a community rather than as individuals. That didn’t change when those individuals died.

So, for instance, in addition to the remarkably individualistic tombstone inscriptions above (the Romans are, as ancient cultures go, unusually individualistic in their outlook), the Romans believed that, after the standard nine-day period of mourning, funeral procession and burial, the spirit of the deceased became part of the di manes (the ‘divine spirits’). The ancestor spirits watched over the family and community but also remained very much a part of it. Nine days after the death, the Romans would hold another funeral feast, which would include an offering to the di manes, of which the deceased was now a part. Likewise, of course, in medieval Christian contexts (both Catholic and Orthodox) the deceased were to be buried (if baptized) in consecrated ground to await the resurrection of the dead. This too represented not an exit of the individual from the community but rather something closer to a changing of their position in it: from the active, living part of the community to the waiting, expectant part. Ancient Egyptian belief represented the afterlife as a continuation of the living world, with individuals assuming the same roles in death as in life, while the remembrance and offerings of the living family of the deceased sustained their spirit – another form of continuation of the individual’s place in the community.

There is a tendency in this context, I think, for moderns to jump to asking, ‘well, what if the community ceases to exist, or many centuries pass and this memory is lost?’ And I’m not going to say no one ever had that idea, but by and large I think that is a very modern view that understands the world as impermanent and changing in a way that most pre-modern societies do not. From the perspective not just of the peasant, but also the peasant’s priest, humans are impermanent, fleeting things but the world, its structure and rhythms, its communities and families, these are far more permanent and lasting. This is a pre-modern world, after all, that changes slowly, often imperceptibly slowly, with children coming to inhabit the exact roles, working the same land, living in the same houses, as their parents, one generation after another. At least to my sense, planning for a radically different world 10,000 years in the future sits largely outside the pre-modern worldview which imagines that the world doesn’t change much and at most goes through a series of repeating cycles.

Implications for Households

The mortality pattern as laid out here has a few implications for the shape of the peasant households we see and the sort of families they form. We’re going to walk through some of these in more detail in the next part where we talk about nuptiality and fertility (marriage and babies) but I want to note some of the high points here.

First, a high proportion of the population in these societies at any given time were children, even by their standards of childhood (which often ended between 15 and 17, not at 18). Generally about half of the population at any given time under this mortality regime is going to be age 15 and below, whereas for a modern population close to replacement that figure is going to be 20-25%. Children were thus socially omnipresent in a way that they simply aren’t in any modern industrial society (but are in some developing countries).

Equally, for societies with very low productivity, the demand to feed that population means that pre-modern cultures do not have a ‘childhood’ as we understand it, as an extended vacation from work and adult life. Children were instead working in whatever capacity they were physically able as soon as they were physically able, because these societies simply lacked the resources to support half of the population on a non-working basis (which is also going to be true when it comes to labor and gender, but we’ll get to that later in the series). This, of course, was especially true for our peasants, at the bottom of the society, whose work was necessary for its basic subsistence, but one gets the sense that childhoods were short and the transition into work came early even among the higher rungs of society – for instance, the age for an elite boy to become a page attending a knight was seven.

This mortality regime also has implications for the ‘cycle’ that households went through. After all, if a married couple began having children in their late teens and continued doing so through their twenties (which they would have to do simply to make replacement; we’ll talk more about this next time), by the time their last children would be coming of age, the parents would be into their late thirties or early forties, and a casual glance at the graph way up above will tell you that many of those parents are, at that point, living out their last years. Under the assumptions of that Model West L3 life table, at the start of reproductive age the life expectancy – which is to say the average age at death for those who had survived that far – was around 50 for women and around 48 for men. So most children would reach adulthood while their parents were still living, but equally most children would lose their parents while their own children were very young. That interaction is further influenced by marriage patterns (particularly differential ages at first marriage), which we’ll talk about subsequently.
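
To put rough numbers on that, conditional survival falls straight out of a survivorship curve: the chance that someone alive at age x is still alive at age x+t is l(x+t)/l(x). Here is a sketch using the same toy table as earlier (illustrative values only, with the simplifying assumption that the two parents’ risks are independent):

```python
# Conditional survival from the toy survivorship table used earlier: the
# chance a person alive at age x is still alive at age x+t is l(x+t)/l(x).
# Toy values -- illustrative only, not the published Model West L3 data.
import numpy as np

ages = np.array([0, 1, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80])
lx   = np.array([100_000, 70_000, 55_000, 50_000, 48_000, 46_000,
                 40_000, 33_000, 25_000, 16_000, 7_000, 1_500])

def p_survive(from_age, to_age):
    l_from, l_to = np.interp([from_age, to_age], ages, lx)
    return l_to / l_from

# A parent aged 22 at a child's birth, still alive when that child turns 18:
one_parent = p_survive(22, 40)
print(f"one parent alive at child's adulthood:   {one_parent:.0%}")       # ~74%
print(f"both parents alive at child's adulthood: {one_parent ** 2:.0%}")  # ~54%
# The same parent still alive when a grandchild is ~5 (parent roughly 45):
print(f"alive when a grandchild is ~5:           {p_survive(22, 45):.0%}")  # ~65%
```

So even on fairly generous assumptions, nearly half of children would have lost at least one parent by the time they reached adulthood themselves, which is exactly the pattern described above.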

But the result is that while these households are often ‘multigenerational,’ and (as we’ll see) marriage and reproduction often begin very early, three is generally the maximum number of generations alive at one time, and even then usually not for very long. That also helps explain cultures where young men were expected to have property in order to marry and establish a household – the expectation is not that they are acquiring land (extremely hard to do in these societies) but rather that inheriting land is what will make them viable marriage partners (again, that differential age at marriage). Under this mortality regime, a young man in his early 20s probably does not have to wait very long to inherit the farm (especially because his father likely also married somewhat later in life).

Finally, and this will set up the next part of this series, it should already be obvious that the fertility regime this level of mortality demands in order to avoid the population shrinking into nothing in just a few generations is going to be one of very high fertility: simply maintaining population under conditions where half of all children die before adulthood and many adults die early in adulthood (especially women dying in childbirth) is going to necessitate a lot of children.

And that’s where we’ll turn next: marriage, family formation and childbearing (but not rearing just yet).

  1. And you may be thinking, “but wait, human beings lived as hunter-gatherers for a long time before farming.” True! But the total human population was much, much lower then. Estimates place all of the people who lived before the advent of farming at around nine billion, compared to some 100 billion who lived after the advent of farming (of which around 8 billion are alive today). Not all of those 100 billion lived in farming societies, but most of them did, as agriculture enables far higher populations than pastoralism or hunting and gathering. Consequently, pre-modern farmers represent almost certainly a majority – probably a large majority – of all humans who have ever lived up to this point.
  2. Except, of course, if someone was foolish enough to design a society which categorically blocked such new entrants into the landholding class. A society like that might suffer from a rapidly declining landholding class, leading to the collapse of its military system. But who would be so incredibly foolish to do that?
  3. Reproduced and used in Frier, op. cit. On the limitations of these models and efforts to produce better ones, note R. Woods, “Ancient and Early Modern Mortality: Experience and Understanding,” The Economic History Review 60.2 (2007). In practice, these models are models: rough guides to the plausible rather than exact fits, and they derive from early modern evidence rather than medieval or ancient evidence (which is generally insufficient to the purpose – while we have a lot of individuals whose age at death is known, women, the poor and children are systematically underrepresented, which hopelessly compromises the data in most cases).
  4. Rome at War (2004)
  5. In particular, the roughly 1/3rd figure assumes 12 years of average service, which is almost certainly too high, and the point Rosenstein is making is actually about the potential impact of a period of declining military service (meaning falling military mortality). More broadly, most of these deaths are from disease, which is awkward because while Rosenstein has ancient data for battle deaths, he does not have any for disease deaths and is forced to estimate from early modern data, and the effort is very messy. I should note that W. Scheidel has argued, in Measuring Sex, Age and Death in the Roman Empire (1996), that we might not actually expect much if any excess disease mortality among Roman soldiers compared to their civilian counterparts, because hygiene and sanitation in the armies might not be very different from that in civilian life.
  6. As estimated in Krentz, “Casualties in Hoplite Battles” GRBS 26.1 (1985)
  7. That figure is actually probably a reduction from military mortality rates prior to the development of agriculture and the state, which might have been as high as 25%. See A. Gat, War in Human Civilization (2006).
  8. op. cit., 124, but note the following pages’ discussion of legions with no known battles, wounded casualties, etc.
  9. Indeed, it was possible for this to happen long enough that the great global Malthusian Crisis simply…never happened. And never will! Instead, we now produce enough food to feed the world and indeed to feed the expected peak human population of c. 10-11bn people on the farmland we have. The demographic transition means that the human population will likely effectively peak at that point, rather than growing endlessly. Maybe if we invent immortality, we’ll get a different kind of population crisis, but in the meantime, Malthus will have been wrong – usefully wrong (which is the best kind) – but wrong.
  10. On some of the variation and its implications, particularly for populations for which we lack good demographic data, see R. Woods, “Ancient and Early Modern Mortality: Experience and Understanding” Economic History Review 60.2 (2007).

440 thoughts on “Collections: Life, Work, Death and the Peasant, Part II: Starting at the End”

  1. I know this is primarily about peasant and thus rural households, but do these mortality patterns meaningfully shift in what urban populations do exist? I get the impression they almost have to, given that cities tended to be demographic sinks and needed replenishment from the countryside to keep going. Do we have any evidence for such differences?

    Also, to stray into the even more morbid, how do we square the demonstrated care and regard of even infants whose odds of survival aren’t great with the also well attested widespread practice of infanticide? Those two things seem like they should conflict.

    1. Urban populations have a lot higher disease mortality through the 1800s because people are compacted. At least

      People care about the babies they want and don’t care about the ones they don’t? People get very attached to their babies in the womb all the time, yet the vast majority of people are also fine with abortion in at least some circumstances.

      Also it’s a much harder world with hard choices, and the young and sometimes the old get shafted. Like people absolutely knew giving most of the food to prime-age adults in shortage times had good odds of resulting in the rest passing. People are always people though and so like to distance themselves from the implications. Think of the Japanese tales of old people climbing the mountain to die, which technically was voluntary, but I’m sure there was often some pressure to do the ‘right’ thing. I remember a commenter on this blog a while back posting how during a Swedish famine in the 1800s, 2 of his great-great whatever Aunts were ‘lost’ in the woods. Not the most humane way to murder the kids you can’t afford to feed, but provides needed emotional distance. Probably why exposure was so common. Of course you can also sell into slavery the kid you can’t feed in some cultures, though I’m guessing famine is not the best time to do that.

      Imagine how desperate you’d have to feel to make some of those choices. Outside some developing countries (and even they have various advantages over pre-moderns) that level of desperation is extremely rare. Of course you also have people that it’s hard to feel sympathy for by modern sensibilities, like the Roman telling his wife in a letter to get rid of the baby if it is a girl. Though if you only have the resources to support a finite number of kids, girls marry out while boys can be your old-age support…

      1. That was crueldwarf on July 31, 2020 in response to a question from me.

        “Yes, it indeed happened. And it usually was not even limited to the older adults and elderly. Way too often it was children who were subjected to that. Two of my great grandfather sisters were literally ‘left in the forest’ to die because there was a famine. It happened in 1896 I think and both of them were less than 6 years old. Subsistence farming lifestyle is merciless sometimes.”

        https://acoup.blog/2020/07/24/collections-bread-how-did-they-make-it-part-i-farmers/comment-page-1/#comment-7976

      2. Hansel and Gretel, anybody? At least in the versions I read, nobody is blaming the father for “losing” the kids in the forest… Written down in 1810 by the Grimm brothers, not that far away from the pre-industrial times described in the blog post…

    2. While we’re mentioning it, it’s worth noting that infanticide was disproportionately done to female children (because they are less valued/more expensive in the long run – paying dowries and so on) and the responsibility for exposing infants usually fell on the mothers.

    3. I wonder if infanticide became less legitimate with Christianity? (It certainly still happened on occasion, as other posters have said.)

      1. Christianity from its earliest days was hostile to abortion and infanticide. Now of course in practice that was breached as much as followed once it became the nominal universal religion, but that is still a change…

        1. Christianity discovered widespread hostility to abortion in the 1850s and anti-abortion theology didn’t become accepted in Protestant sects until the 1970s & 1980s. There is a strong tradition of orphan charity that appears to run back to the early Church, but the natalist mania that has swept Christianity in the last two centuries is purely a reactionary response to the drop in fertility that echoed the drop in child mortality shown in the graph in the post.

          1. That’s very much not just a new thing. Some of the earliest church writings we have are anti-abortion. It seems to have been more acceptable in the Middle Ages pre-quickening and Protestants also tended to be more abortion-tolerant.

          2. @Endymionologist,

            It’s very much not a “post 1850s” thing. The Didache, the Apocalypse of Peter, the Letter of Barnabas and a bunch of other early Christian writings, as well as a bunch of early church fathers, Patriarchs and at least one quasi-Ecumenical Council all harshly condemn abortion, without making any distinctions as to stage, as far as I can tell.

            If you mean that attitudes relaxed somewhat in the 13th century or thereabouts, you would be right, but that isn’t really related to early Christianity in any way, shape or form.

          3. at least one quasi-Ecumenical Council all harshly condemn abortion

            To be a little more exact about that, the council in question was meant to be a sort of appendix, I guess, to the Fifth and Sixth Ecumenical Councils, and Catholics and Eastern Orthodox seem to disagree about whether it counts as an actual Ecumenical Council or not.

            https://en.wikipedia.org/wiki/Quinisext_Council

        2. It’s worth noting that the degree of that hostility varied very substantially over time. I.e. according to wiki, “Infanticide became a capital offense in Roman law in 374, but offenders were rarely, if ever, prosecuted.” (374 was some 50 years after Constantine the Great converted, but apparently ~25 years or less after Christians became the majority of the population.) Hence, the quote describes the status quo across much of Europe for roughly a century. Afterwards, the following happened.

          https://academic.oup.com/book/41267/chapter/351205634

          When the Roman Empire collapsed in the 5th century, the jurisdiction of infanticide was relegated to the church, which regarded carnal delicts a sin rather than a crime. The punishment – public penance of the mother for 7-15 years – was milder than that which the murder of an adult would incur. The Council of Florence decreed in 1439 that the souls of children who died without having been baptized descend to hell. This turned infanticide from a penitential sin to the most heinous of all crimes. The states passed laws that abominated infanticide even more than the murder of older humans and punished women with ever more cruel forms of execution.

          Towards the men, however, who usually abandoned the women they had impregnated, the laws were lenient. Churches and society continued to vilify illegitimate birth, thus enhancing rather than preventing infanticide. The Habsburg-German legislation of 1532 ordained to torture any woman who had concealed pregnancy and birth and claimed the infant was stillborn. Legislation developed similarly in other countries, albeit at a different speed. French (1556) and British (1623) legislation reversed the burden of proof and demanded the death penalty for concealing pregnancy and birth when a dead infant was found.

          I already mentioned this under a March post, but it’s probably worth repeating that one of our best “ground-level” insights into how the legal system worked in practice during the 16th century comes from the diary of Franz Schmidt, a Nuremberg executioner who ended the lives of 361 people over the 45 years of his service. According to the book based on said diary, The Faithful Executioner: Life and Death, Honor and Shame in the Turbulent Sixteenth Century, a substantial fraction of those were women convicted of infanticide. Nuremberg had apparently initially reacted to the Council of Florence by ordering them burned at the stake, but by the time Schmidt assumed his position, the standard practice was to drown them in a sack, which was considered more practical. Eventually, he successfully lobbied to make things easier for himself and simply perform beheadings. (The book even distinguishes between beheadings with the condemned in a sitting, kneeling and standing position – the first was the easiest and said to have been done most often, while the last was mostly done when the executioners really wanted to show off.)

      2. “infanticide became less legitimate with Christianity”

        Going by the Wikipedia page, the main difference was that medieval Europe started creating orphanages and foundling houses, because Christian women were still exposing their babies or throwing them into the Tiber river (for one specific example.)

        One wonders how many infants were saved by the orphanages, vs dying slightly later.

        Wiki also has ‘very high sex ratios were common in even late medieval Europe, which may indicate sex-selective infanticide.[58]’

        But I’d assume that Christianity meant people were less open about it. And sex-selection might happen from e.g. not feeding your girls as much, rather than explicit exposure or infanticide.

        1. My understanding, from the early modern period when adequate records survive (say 1600 to 1800) is that foundling homes and orphanages had appalling death rates. Babies need more than food, and if you leave them in a cradle all day, feed them a few times, and otherwise ignore them, they suffer “failure to thrive” and generally die.

        2. Christianity mostly meant that single and desperate mothers exposing their children to die were more likely to face punishment at the hands of patriarchal societal power.

    4. how do we square the demonstrated care and regard of even infants whose odds of survival aren’t great with the also well attested widespread practice of infanticide?

      It does seem to me that people are much more likely to abandon a baby than to smother it, even though smothering is arguably kinder than leaving it to die of starvation and exposure.

      But… if you abandon your baby then you can just tell yourself a little story that somebody else picked it up and is now caring for it, maybe somebody who lost their own baby and really, really wants another one.

      1. “you can just tell yourself a little story that somebody else picked it up”

        Which might not be that implausible, if you leave the baby where people are likely to find it. Crossroads, or a river where people will go for water or washing.

        Of course, someone picking up a random baby might raise it as a slave, unless they reeallly need an heir.

      2. This is certainly plausible. On the other hand, we also have the following, which doesn’t exactly seem to match this hypothesis, to say the least.

        https://en.wikipedia.org/wiki/Infanticide

        In Egyptian households, at all social levels, children of both sexes were valued and there is no evidence of infanticide. The religion of the ancient Egyptians forbade infanticide and during the Greco-Roman period they rescued abandoned babies from manure heaps, a common method of infanticide by Greeks or Romans, and were allowed to either adopt them as foundlings or raise them as slaves, often giving them names such as “copro-” to memorialize their rescue. Strabo considered it a peculiarity of the Egyptians that every child must be reared. Diodorus indicates infanticide was a punishable offence.

        (Lots more highly relevant paragraphs in the linked article besides that one, of course.)

        1. “seem to match this hypothesis”

          What hypothesis are you referring to? Your message seems to reply to C Baker, but I don’t see a hypothesis you’re contradicting.

            Well, I don’t think that people who at least implicitly hoped for their babies to be rescued (which seems to be the hypothesis) would choose a manure heap, of all places – especially since the timeline was apparently that of the Greeks and Romans settling in Egypt doing that on their own, unprompted, and then the locals noticing the pattern and reacting to it. Once it had become common knowledge that “the Egyptians willing to adopt will look for your baby at the heap”, then sure, you could at least begin to argue it makes sense, but the foundation for that seems to have been born of practically performative neglect.

            (And yes, I understand that manure was (often still is) a valuable resource, but it still stretches plausibility to think that there were no other alternatives. Moreover, Sean Manning (an actual historian who at times comments on here) appears to believe the same, and explicitly refers to this as “the Greek and Babylonian practice of tossing out unwanted newborns with the trash”. (in the context of pointing out the limitations of moral relativism in dealing with the ancient societies.))

            https://www.bookandsword.com/2021/12/04/child-abandonment-in-greek-and-roman-egypt/

        2. Manure heaps are warm (due to composting), which would give an abandoned infant a higher chance of not dying quickly from hypothermia. This may have been a way of trying to give an unwanted infant a higher chance of being found and adopted.

      3. > if you abandon your baby then you can just tell yourself a little story that somebody else picked it up and is now caring for it

        Indeed, this is the plot of one of the earliest Mediterranean novels, Daphnis and Chloe!

      4. On the other hand, if you’ve received a prophecy that your son will one day kill you, probably safer to smother the baby than expose it.

  2. I hope this is not too “political” a comment for this post, but when I read:

    > This is particularly true for the peasantry: we often do see upper-classes in pre-modern societies fall below population replacement, but that is no real problem for the society, because there is an effectively infinite supply of people from the lower orders perfectly happy to ‘move up’ into the aristocracy. Except, of course, if someone was foolish enough to design a society which categorically blocked such new entrants into the landholding class. A society like that might suffer from a rapidly declining landholding class, leading to the collapse of its military system. But who would be so incredibly foolish to do that?

    I cannot help but think of rich countries in the 21st century with low fertility and strict limits on immigration.

    1. Though most of the developed world is relatively open to immigration in the grand scheme of things and wealthy enough to attract it. Low fertility is rapidly becoming universal. It’s the marginal countries that will do much worse demographically, just like the UK’s population is holding up much better than Bulgaria’s. Incidentally, East Asia is *really* screwed going forward; it seems to be bottoming out in fertility lower than Western countries while simultaneously being less open to immigration.

      1. > UK’s population is holding up much better than Bulgaria

        I agree, the combination of low fertility and high emigration rate is killing countries that are less desirable as a place for living.

      2. Yes, and as I posted elsewhere here, what if it’s not just a one-time “correction” where we have a bunch of old people die off due to inadequate amounts of younger people because of this low fertility, but then a problem that continues to grow worse as younger people choose (or are pushed for various reasons) to continue having this low fertility rate which then perpetuates the cycle again?

        1. Low fertility is to a substantial extent imposed on developed nations by policy choices. Prioritizing childbearing during one’s years of peak fertility is generally a fiscally irresponsible decision that comes at high opportunity costs and will tend to pass those costs onto the child. For instance, the couple that have three children between the ages of 22 and 30 are going to have a very hard time putting those kids through college, and then the kids have indentured servitude, excuse me, student loans, to contend with.

          It would be possible to imagine a society which diverts a significant fraction of the resources of industrial civilization to make childrearing more feasible, it’s just that we don’t live there.

          Virtually no one bats an eye at the idea that the taxpayers in a relatively wealthy country like the United States should collectively invest half a million dollars in providing old-age pensions and state-subsidized health care and elder care for Joe Smith the retiree between the time he stops working and the time he dies.

          The idea that the taxpayers should collectively invest half a million dollars to set up an annuity to help newborn Johnny Smith’s parents defray the costs of raising and educating him would have at least half the voting population up in arms from the outrage.

          So long as that remains the case, it should not be any wonder that our natural birth rate is well below replacement rate and our population is graying. Where society puts its money may not reflect the priorities of the average citizen, but it certainly reflects the priorities of the people who are in charge.

          1. This is a straightforward problem of doing accounting wrong. Before the principle of depreciation/amortisation of capital was widely implemented, great swathes of entrepreneurs/managers kept shooting themselves and/or their principals and/or their business partners in the foot by very literally not accounting for its effects. Likewise, environmental pollution was curbed — partly by compulsory regulations, yes — partly by saying “OK, you are allowed to do the odious thing, but you have to pay”, attaching an invoice to the act of pollution. Congestion charge for cars, carbon tax, etc. The concept of “human capital” already exists, it shouldn’t be any great conceptual leap to attach a condition of “yes you are allowed to e.g. employ people in a way that interferes with their fertility, sometimes the economic opportunity really is worth it to society, but you are using up a shared resource from society by doing so and therefore you have to pay for it — which you should be able to do if the economic opportunity is real”.

            To look at it from a different angle, and making extreme simplifying assumptions to fit this on a napkin: both parents would each make $X over their careers if they had zero kids. (Yes, yes, this stands in for the NPV.) Presumably the mother gives up some fraction of this for the couple to have kids, each of whom will (let’s assume) potentially earn an adjusted-for-everything-but-the-kitchen-sink $X over their careers if they have no kids. Raising these kids is entirely real productivity by the mother; pulling it forward into the present (adjusting it down by the real growth rate you expect, however much that is) makes it directly comparable with the sacrificed fraction of her own career productivity.
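            In symbols (same kitchen-sink assumptions as above; take the notation as a sketch, not gospel):

            ```latex
            % X = career earnings with zero kids, f = fraction the mother gives up,
            % n = number of kids, g = real discount rate, T = one generation in years
            \frac{nX}{(1+g)^{T}} \;\gtrless\; fX
            \qquad\Longrightarrow\qquad
            \text{the kids ``pay'' when } n > f\,(1+g)^{T}.
            ```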

            But don’t take this too seriously, unless you take it extremely seriously. Since the first kid is the largest hit to the career, with the Nth one generally taking a lower fraction of the total potential career earnings than the (N-1)th, a simple optimization would suggest a disproportionation: couples should have either zero or many children. Probably this is not what people want, hence the “you shouldn’t take this seriously” part. However, if you do, then you should go all the way, where improvements to neonatal incubators and “boarding kindergarten” (if I can call it that) together could solve the problem. The former may well be developed within decades; examples of the latter existed during the 20th century (some of which turned out remarkably well).

          2. Housing + transportation policies also affect affordable fertility. If you can’t afford a big home, you’ll have few if any kids (at least with modern expectations.) If a home you can afford also means living someplace where you have to drive the kids around everywhere, then having many kids would be an increasing logistical burden, especially with modern US middle class expectations of ‘activities’.

            Combine affordable housing, walkable cities, and free health care+college, and you might get higher fertility. Even more so with free child care, or large stipends to make up for a parent (probably the mother) staying out of market jobs.

          3. Many countries have generous parental leave policies, subsidies for child-bearing, and heavily-subsidized higher education. It doesn’t seem to make much difference in birth rates.

            (The foregoing might be considered the “liberal approach.” There’s also a “conservative approach” of restricting abortion and tax breaks for child-bearing. That doesn’t work either.)

          4. “It doesn’t seem to make much difference in birth rates”

            I think it makes some difference; the European countries with such policies are at the higher end of the fertility range, though nowhere near replacement, sharing that rank with some developed countries that are less generous but more religious or more immigrant-y. But way higher than East Asian and many other countries. Compare France to South Korea, say.

          5. > The idea that the taxpayers should collectively invest half a million dollars to set up an annuity to help newborn Johnny Smith’s parents defray the costs of raising and educating him would have at least half the voting population up in arms from the outrage.

            This. You see people point out that benefits programs for mothers/additional births/etc don’t seem to move the dial, but that’s because they’re really not that generous compared to the foregone income and opportunities.

            Realistically, to bump TFR in some place like the US by another point, you’d probably have to offer $250,000/additional child. High enough so that people can afford to start households and buy homes earlier, which seems to be a Big Deal in American fertility in general – a big part of why the Postwar Baby Boom defied a longer trend of declining fertility rates was because stuff like the GI Bill and 1950s economic boom allowed people to form households and buy houses earlier, and thus start having kids earlier.

          6. “and buy homes earlier”

            Or have public policies that pursue cheaper and abundant housing, rather than the status quo of keeping housing scarce and expensive.

          7. > However, if you do, then you should go all the way, where improvements to neonatal incubators and “boarding kindergarten” (if I can call it that) together could solve the problem

            @BasilMarte,

            Serious question: can you flesh out what you’re thinking a little bit more? I feel like I’ve heard a similar proposal before, and the guy who suggested it certainly meant it seriously, but I’m wondering if it’s exactly the same thing.

          8. > Realistically, to bump TFR in some place like the US by another point, you’d probably have to offer $250,000/additional child.

            250k over, say, 20 years would be like 12.5k a year, right? That certainly seems like an attractive option to me.

          9. @Hector: the example that turned out well is the childcare system used by the Kibbutz movement. Though as it says right in the article, after two-ish generations parents chose to discontinue it.

            Other setups are also possible; the successive generations of foster-care reformers have produced heaps of literature proposing and evaluating a variety of schemata. While “none of them work” (hence the successive reforms), we have to account for the fact that the kids aren’t a representative sample of the population, and furthermore that because currently they aren’t deployed as “fixed” age-based tracks (e.g. 1-6 or 0-12) but are explicitly intended to be temporary⁂, there’s a significant “churn” which causes rebalancing, i.e. even kids in the system long-term get repeatedly sent from one foster family to another, and their “sibling-set” keeps changing even if/when/while they themselves happen to stay in one place.

            (Foster parents burning out and leaving also, I understand, causes their “household” to be scattered, since it’s not likely to coincide with new parents signing on, especially if new parents systematically aren’t given as many/young(?)/“difficult” kids. Likewise, despite the system mostly trying to keep pre-fostering-event siblings together, it systematically chokes on “large” groups and breaks them up.)

            IIRC it was mentioned somewhere on this blog that the US army discovered that it had a morale/cohesion/PTSD-shaped problem when it continuously replaced casualties one at a time (“repple depple”), but the problem went away when they moved to replacing casualties in larger batches and giving the unit some time to shake together.

            ⁂: Until such time as the kids can be adopted, whether by the original family, or relatives, or strangers — or else they age out of the system substantially without connections or money. “Instant homeless, add 18th birthday”. This ties into a lot of things mentioned upthread — there is a vast problem if people need parents to buy/lend them a car, housing is eyewateringly expensive per square foot, downmarket options (microapartment/SRO/etc.) have been systematically removed by regulations (if it doesn’t have a kitchen it doesn’t count as residential for zoning purposes, maximum occupancy, minimum parking), and you finding yourself in this situation correlates with not knowing anyone you could be roommates with.

            And of course, kindergarten itself is a late-19th century creation along exactly these lines. Let only a few women care for all the children, so the rest of the women are free to take factory jobs during the day, which is to say, they are willing to have children because it interferes less with their income.

          10. After several looks in this thread at the material aspects of the decision to have children, let’s have a look from a social point of view. In particular, let’s contrast children with the popular “alternative” of getting a dog instead (“fur baby”).

            First, just as men in a shieldwall fear the shame of cowardice more than they fear death, almost all people are terrified of being seen by others as having children when it is not respectable for them to. Refer to variants of the expression “there should be a license for having kids”. In very modern societies, this judgment takes the concrete shape of CPS (and its equivalents) taking children away from parents if there is a disagreement between CPS and the parents as to what constitutes good parenting. (I phrase it this way because this can happen even when the parents are right.) But probably even more important is the factor of having more children than similarly-situated people in one’s social circles. Anyway, the bar is higher (or perhaps just more specific) than for dogs, and “falling short” of it (including by disagreement), or having “too many”, earns much sharper disapproval. By the way, this force gets stronger when taxpayers invest more into Johnny Smith. On the other hand, zero-cost interventions such as the Patriarch of Georgia personally baptizing every third+ child of Georgian families can address some aspects of this concern.

            Second, most cultures used to have trivial explanations for why one should value one’s neighbor’s kid, even if they aren’t going to be net economic contributors. If the culture is heavily conformist, which most cultures were, they are one more person to carry and enforce the culture. If the culture is militarized out the wazoo, more meat for the meatgrinder. If the culture has the right religious belief, they are yet another soul in communion (a spiritual soldier for Good, if you will). Contemporary culture has a hole here: we know we are supposed to congratulate new parents, but why? Many seem to consider it a matter of merely “reflecting” the new parents’ joy (we are happy only because we like the parents and they are happy) rather than going “yay, one more person, thank you for making them”. In other cultures, this was a reason to have children rather than dogs; that difference has disappeared.

            Third, there’s a colorable argument that given the existence of no-fault divorce, people (particularly women) have to continue optimizing for marriageability even after they marry, because their husbands’ workplace, and other public places, potentially harbor rivals. And an apt comparison about the effects of pregnancy I’ve read is as if men lost about an inch of height for each kid they had. No such concern with dogs. (Past cultures being communal and heavily conformist solves some issues here, with major side effects. The limit on the relationship turning sour was not that one party could walk, but that because there wasn’t enough privacy for issues to stay hidden, half the village would have vocal opinions about which spouse should do what to bring the state of the marriage back to the normative ideal.)

            Fourth, related to the previous, but focusing on status goods. We evolved to seek a threshold “marriageable quality of life” so that when the children inevitably came, the family could withstand the drop in quality of life and remain alive. Contraception made children not inevitable, thus confusing people’s intuitions: they aim to end up above the threshold after choosing to have children. This is as if, having followed a lighthouse to find the port, they did not change course to sail into port, but instead crashed into the lighthouse. (Maybe this is just a restatement of the first.)

          11. The way Social Security works is that the government takes an alarmingly large slice of your income during your working life on the understanding that you will be supported in your old age.
            Where will the money to support infants come from? Somebody has to pay; TANSTAAFL.

          12. @Roxana
            The way Social Security actually works is that currently employed people are taxed to pay the Social Security benefits of elderly and disabled people who cannot work at the present time. The money you paid into the system isn’t actually being stored in a vault somewhere. While it would be a grave injustice to tax someone for Social Security for all their lives and then not pay them benefits when they retire, the answer to the question “who will pay for your benefits” is not, in fact, “you yourself.” It’s “whoever happens to be employed during the years when you are retired.”

            If society collectively decided to set up six-figure savings accounts for every child, the money to pay for it would presumably have to come from taxing those who presently control most of society’s wealth: millionaires and corporations. Millionaires and corporations currently profit greatly from the continued existence of an educated workforce and a population that keeps having new babies added to it. But they pay relatively little on a per-child per-worker basis for the privilege of continuing to benefit, even as they shape the economy in ways that depress the fertility rate and thus gradually undercut the nation and its people’s future over time.

        2. It does not look like a one-time correction to me. Just for fun, I’ve been doing a few graphs of fertility rates for a number of countries using some World Bank data (1960 to 2022) and, while rates vary, it looks like fertility rates are declining in just about every one of the 15 or so countries I’ve looked at.

          More developed countries usually have a fertility rate below replacement, but a lot of surprising countries have nose-diving rates. For example, it looks like India went from ~6.0 to ~2.1. I’m too lazy today to pull the actual figures.

          Countries with increasing populations such as the UK or my country of Canada look like they’re doing it by immigration, though the UK’s rates seem to be hovering around 2.0.
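          For anyone who wants to poke at the same series, here is a minimal sketch of the sort of throwaway script I mean (a reconstruction, not the exact one I used; it assumes the World Bank’s public v2 API and its total-fertility indicator SP.DYN.TFRT.IN):

          ```python
          # Pull total fertility rate (births per woman) from the World Bank API
          # and plot a few countries against the approximate replacement level.
          import requests
          import matplotlib.pyplot as plt

          COUNTRIES = ["IN", "GB", "CA", "BG"]  # India, UK, Canada, Bulgaria

          def fetch_tfr(country):
              """Return (years, rates) for one country, oldest year first."""
              url = (f"https://api.worldbank.org/v2/country/{country}"
                     f"/indicator/SP.DYN.TFRT.IN?format=json&per_page=200")
              _meta, rows = requests.get(url, timeout=30).json()
              pairs = sorted((int(r["date"]), r["value"])
                             for r in rows if r["value"] is not None)
              return [y for y, _ in pairs], [v for _, v in pairs]

          for code in COUNTRIES:
              years, rates = fetch_tfr(code)
              plt.plot(years, rates, label=code)

          plt.axhline(2.1, linestyle="--", label="replacement (approx.)")
          plt.ylabel("Total fertility rate")
          plt.legend()
          plt.show()
          ```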

          1. > For example, it looks like India went from ~6.0 to ~2.1. I’m too lazy today to pull the actual figures.

            That is correct, yes, at least if you trust the official Indian figures. Actually, as of this year, India is even lower, at 1.9.

            https://www.thehindu.com/news/national/indias-population-reaches-14639-crore-fertility-rate-drops-below-replacement-level-un-report/article69679518.ece

            India is a poor country, but I think they probably have enough state capacity to track births and deaths – even if the numbers are a little off, the trend is absolutely real. It’s not unique to India either – Bangladesh next door has achieved similar progress, and so have Southeast Asia, Latin America, and even much of the Islamic world. Thus far it’s only sub-Saharan Africa, and a few of the poorer Muslim countries like Pakistan, which still have significantly above-replacement TFRs. Even they are making progress, though more slowly than one would hope. The one African country where I lived for several years, for example, had a TFR of about 5.6 when I was there in the mid-2000s, and is down to 3.75 or so today.

            I don’t think there is anything unique about Africa that will make it a holdout against the general pattern in the long run; I expect it will just be a matter of time.

            India’s drop to below-replacement TFR (replacement in a poor country is more like 2.3 than 2.1) is great in general; the main problem is that it hasn’t happened evenly. Some states (and ethnolinguistic groups, since states are generally based on linguistic boundaries) have seen much more drastic drops than others, which is of course going to heat up ethnic anxieties.

            https://en.wikipedia.org/wiki/List_of_states_and_union_territories_of_India_by_fertility_rate

          2. Complicating this picture is demographic momentum. In modern industrial societies, the majority of deaths are concentrated in the upper handful of age brackets, so it can take decades for demographic effects to percolate up the population pyramid.

            South Korea has the lowest fertility rate of any country in the world and takes negligible immigration, but its population is (for the moment) stable.

            If you go from a generation of 10 million people to two generations of 20 million people each, the population doesn’t instantly collapse even if birth rates decline and the next generation is only 10 million. The oldest generation that died in the 2010s and the 2020s is about the same size as the youngest generation being born over that same timeframe.

            The population only starts to collapse when those big 20 million people generations start to hit their life expectancies. Push that picture one more generation into the 2030s and the 2040s, and the top generation that is doing most of the dying will be 20 million people, while the bottom generation being born might only be 4 or 5 million people.
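            A toy cohort model shows the same arithmetic (all numbers invented for illustration; one step is roughly a 25-year generation):

            ```python
            # Cohorts in millions, youngest first: a small young cohort,
            # two big middle cohorts, and a small old cohort.
            cohorts = [10, 20, 20, 10]

            for newborns in [10, 5, 5, 5]:
                dying = cohorts.pop()        # the oldest cohort reaches life expectancy
                cohorts.insert(0, newborns)  # a new cohort is born
                print(f"born {newborns}M, died {dying}M, total {sum(cohorts)}M")

            # The total holds at 60M in the first step (births match deaths),
            # then falls 60 -> 45 -> 30 -> 25 once the 20M cohorts start dying.
            ```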

            A lot of post-colonial middle- or high-income countries, especially in Asia, have age structures like this. Of course, *South Korea* is an extreme example, but many countries have increasing or stable populations despite having sub-replacement fertility rates because the age structure of their populations still reflects the high fertility that they *used* to have in the not-too-distant past.

          3. @ Russ Kaunelainen
            Thanks.
            The Japanese population plot seems to show exactly this.

      3. > Incidentally East Asia is *really* screwed going forward, seeming to be bottoming out in fertility lower than Western countries while simultaneously being less open to immigration.

        Depends on how one thinks about such things. Let’s take Japan, generally the go-to example in these conversations.

        https://www.populationpyramid.net/japan/2024/

        From the above (referenced to the UN), we can see that the country is expected to go from its present-day population of ~123 million to ~77 million by 2100. That is, an archipelago the size of Germany or Italy would go from having a population that would make it second only to Russia if it were a European country, to having a population that would still place it in the top 5 most populous European countries, whether today or in 2100.

        Now, the situation of both China and South Korea is scarier – but not necessarily because the drops are steeper (both are expected to lose a little over 50% of their population, going from about 1.4 billion and 51 million to circa 630 million and 20 million by 2100, respectively). After all, their populations in 2100 would still be roughly the same size as they were in the 1950s, which I don’t think is a disaster. The more challenging aspect is that come 2100, roughly a quarter of the remaining population of both countries would be composed of an 80+ age category, which seems to require massively optimistic assumptions about economic productivity/mechanization/what have you to be supported. (Interestingly, Japan’s 2100 population pyramid appears quite a bit better – though maybe not truly realistic either.)

        However, I don’t think these countries’ resistance to immigration is necessarily as immutable as many assume. After all, I live in Australia – a country which had a legally-enforced apartheid approach to immigration (as well as in its approach to the Indigenous people) in what was quite literally called “White Australia”. Then, one day in the 1970s, that disgrace was eliminated with the stroke of a pen – and now, though things may not be perfect, Australia is certainly in no danger of a demographic crisis. None of the East Asian countries’ immigration policies starts from a baseline as low as “White Australia”, so they have fewer steps to take from their current position to a fairly open immigration policy.

        1. The problem isn’t population decline, at least not any time soon. The problem is the rapid inversion of the demographic pyramid, where each successive generation has to bear a heavier burden of caring for a proportionally increasing cohort of dependents. Which leads to a vicious cycle where each successive generation is poorer than the one before it because a higher share of its income and time goes to caring for dependents, and so is encouraged to have even fewer children because people feel like they have neither time nor money for children of their own.

          Time is often neglected in this kind of analysis, but even with home health aides and nursing homes, dutiful families still have to spend several hours a week ensuring that their elderly/disabled family members are properly cared for.

          Immigration isn’t a solution. First, because fertility rates are rapidly falling globally, so the number of countries we can drain is rapidly shrinking. Second, because parasitizing other countries (US immigration policy, for example, encourages migration of skilled/educated/wealthy adults even if it never makes use of their skills, because it never bothered to set up a pipeline to do so) doesn’t seem like an ethical approach to solving the results of our poor choices. The Australia/US solution of importing workers might do for another generation, but then we’ll have to make some hard choices, such as whether as a society we should continue to support a small class of oligarchs who eat a giant share of economic wealth.

          1. One path through that demographic trap would be greater automation of the services provided by home health aides or nursing homes. Which, now that I think on those lines, might be part of what’s driving Japan’s focus on humanoid robots with realistic facial expressions and conversational interaction.

          2. This is probably going to be a huge problem in the Arab and South Asian countries that still have very low female labor force participation rates and weak social support systems. You end up with one guy facing the prospect of supporting his parents, his wife, any kids they have, etc – if he has brothers they can spread the burden but also get way less of an inheritance from their parents.

            Very much a conducive situation for folks to just have fewer kids or put off marriage for a long time, especially when coupled with bad job markets and slow economic growth.

          3. @ad9

            Retiring in the mid-late 60s, doing a bit of light work (house, garden, baby-sitting, volunteer work…) until the mid-late 70s, then maybe requiring full-time care for a short period (the median length of stay in care in Australia is 22 months) does not strike me as an intolerable burden on society. Certainly not more than 3 years of child care plus 15 years of schooling.

          4. @Brett,

            > This is probably going to be a huge problem in the Arab and South Asian countries that still have very low female labor force participation rates and weak social support systems.

            I think the lowest-fertility equilibrium seems to occur in situations where you have both patriarchal or semi-patriarchal value systems *combined with* expectations on women to work. I remember seeing a study of fertility rates in Hungary, and while Roma women in general had substantially higher fertility than ethnic Hungarians, Roma women *with a high school or trade school education* actually had lower fertility than their similarly trained/educated Hungarian equivalents did.

          5. @LG: So, yes, I understand your point, and I think I have already alluded to it here with “The more challenging aspect is that…” Now, you probably think that undersells the issue; however, Peter T. makes a very valuable point that young children are dependents as well – and I already commented in this thread that medical studies seem to rather strongly indicate old people are generally less dependent, and for a smaller fraction of their lifespan, than children are.

            Moreover, once you recognize that both children and old people are dependents, the picture becomes much more complex. Our World in Data (cited in the very post we are discussing) had, amongst other things, produced a chart of age-dependency ratios for every country with sufficient data – both the historical, 1950-present charts, and the derivation from aforementioned UN scenarios. Below is my attempt at placing a sufficiently representative group of countries on that grapher.

            https://ourworldindata.org/grapher/age-dependency-ratio-projected-to-2100?country=OWID_WRL~CHN~ITA~UKR~KOR~RUS~JPN~USA~FRA~GBR~IND~EGY~AUS~DEU~NER~ZAF~COD~BRA~MEX~CUB~ARG
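            (For reference, if I read the OWID definition correctly, the quantity that grapher plots is the standard age-dependency ratio: dependents, meaning children under 15 plus adults 65 and over, per 100 people of working age.)

            ```latex
            \text{age-dependency ratio} \;=\;
            \frac{\text{pop}_{0\text{-}14} \;+\; \text{pop}_{65+}}{\text{pop}_{15\text{-}64}}
            \times 100
            ```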

            From there, you can see that some of the countries with the highest (~100%) age-dependency ratios today are actually…Niger and the DRC – i.e. those with the highest fertility rates, not the lowest. In fact, around 2010 Niger had a peak dependency ratio of ~107% – very similar to the ~110% ratios both China and South Korea are expected to have in 2100. (Though before they get there, they would first have to navigate 120-130% ratios in the 2080s.) Interestingly, China and South Korea’s current dependency ratios are some of the lowest in the world – in fact, for South Korea, this ratio is exactly half of what it was in 1950.

            The above does not mean population pyramid inversion is not a significant economic problem, of course – however, it does imply that in pure economic terms, it and its impact may not be as unprecedented as some assume. Granted, using the past as prologue, no country on the graph that ever had a ratio above 80% seemed a particularly nice place to live at the time. On the other hand, those countries clearly did not collapse either. Perhaps, in that sense, one might view it as more a problem of shoring up support systems with the windfalls from the currently low ratio (due to the low number of children) before it inevitably spikes due to the high number of elderly.

            P.S.

            > The Australia/US solution of importing workers might do for another generation, but then we’ll have to make some hard choices, such as whether as a society we should continue to support a small class of oligarchs who eat a giant share of economic wealth.

            This sounds nice, but in this case, who exactly is “the society”? If it’s the specific (developed) countries then, well, isn’t the glaring flaw that said oligarchs are practically defined by their ownership of private jets, and so can move away more easily than practically anyone else, and likewise have access to ways of transferring wealth most will never even think of? If it’s somehow global, then, well, good luck coordinating this and dealing with the massive prisoner’s-dilemma temptation any given (desirable) country would face of becoming the oligarchs’ escape hatch and having all their money flow into it.

            And that’s not to mention that for a good number of these people, a massive chunk of their wealth is fairly virtual, only existing as long as the owner never has to spend more than a relatively small fraction of it at a time. I.e. Musk did not get to supposedly have >$300 billion because he sold an amount of cars, satellites or service contracts equivalent to that figure after operating expenses. He has this much wealth because a critical mass of investors (still) have ludicrous faith in Tesla’s future earnings potential, and so each Tesla share is valued at dozens of times what its existing earnings (the fundamentals) ought to suggest (this is what the P/E ratio represents).

            That faith is in many ways no more rational than crypto and requires one to buy into all the hype. If Tesla were suddenly nationalized in its current state, it would trigger a massive rush to sell its shares, which would cause their price to collapse back to fundamentals. This would deprive Musk of (most) of his billions, but also yield relatively little money for the government. It could still be said to be worthwhile, but it’s unlikely to do much to solve the demographic issues.
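            To put hypothetical round numbers on that repricing (purely illustrative, not Tesla’s actual financials):

            ```latex
            % hypothetical figures for illustration only
            P/E = \frac{\text{market capitalization}}{\text{annual earnings}}
                = \frac{\$500\,\text{B}}{\$5\,\text{B}} = 100;
            \quad \text{repriced at } P/E = 20:\ 20 \times \$5\,\text{B} = \$100\,\text{B}.
            ```

            The same earnings then support a fifth of the old valuation, and four-fifths of the paper wealth evaporates without a single asset changing hands.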

          6. @YARD,

            > If Tesla were suddenly nationalized in its current state, it would trigger a massive rush to sell its shares, which would cause their price to collapse back to fundamentals. This would deprive Musk of (most of) his billions, but also yield relatively little money for the government.

            Given how much chaos Musk personally has caused, and also given how much harm income and wealth inequality does (both to America and globally) it would be worth it even if it didn’t put a single additional penny into government coffers.

          7. “Retiring in the mid-late 60s, … then maybe requiring full-time care for a short period … does not strike me as an intolerable burden on society. ”

            Perhaps not, but it is certainly a burden. And how intolerable it is will rather depend on how many people bearing that burden there are to tolerate it.

        2. “After all, their populations by 2100 would still be roughly the same size as they were in 1950s, which I don’t think is a disaster.”

          Depends on what you consider to be a disaster. As LG points out, the inversion of the demographic pyramid has the potential to dramatically reshape the social landscape. The economic landscape is no less threatened as ever greater resources (public and private) are expended on the support of the elderly.

          And the impact on culture can equally verge on the catastrophic.

          A person I follow on BlueSky, who is well versed in Japan, believes it will be unrecognizable within a few decades… Because so much of what is “quintessentially Japanese” (the small sole proprietor restaurants, bars, shops, etc…) is run by aging ‘Boomer’ generation Japanese with no successors.

          We’re all familiar with the accounts of small villages and towns evaporating under the relentless pressure of migration to larger cities and the aging population… But I’ve also read accounts of shotengai in larger cities that are becoming ghost towns as the proprietors retire or die and their businesses are not replaced. (Which feeds into and off of the habits of the younger generations patronizing chains rather than local establishments.)

          Shinto shrines (fundamental to Japanese culture) are facing a similar pattern… in which there are both fewer ujiko (parishioners, for lack of a better term) and ever fewer trained priests (a group that is already strongly inverted demographically) to serve at the shrines.

          1. As economist John Quiggin points out, the elderly are less of an economic burden than the young, and typically for shorter periods (the young require care from birth to their mid-20s if educated to the standard needed for industrial societies, while the aged are rarely in care for more than a few years).

          2. Peter T, if people retire in their 60s and live into their 80s, that is still 20 years in which they are absorbing resources, not contributing to them.

          3. Both children and elders are partially abled for extended periods of time. The effects of them working are different, though.
            In classical societies, children under 5 or so have always been a burden. And there were a lot of them, considering the number who lived for a while but did not end up surviving.
            But as noted, for poor societies childhood was not an “extended vacation” – children aged 5 to 20, although only partially abled, were kept busy with chores that matched their capacities.
            Likewise their elders.

            Now if children aged 5 to 20 are babysitting their under-5 siblings, picking cacao or sewing Nike shoes instead of learning skills for their future adult jobs, this has long-term effects on their skills as adults – making them unskilled rather than skilled adult workers.
            If, however, elders who end up living to 85 retire at 80 rather than 65, this is a quality-of-life issue but does not impact their future skills.

          4. > A person I follow on BlueSky, who is well versed in Japan, believes it will be unrecognizable within a few decades… Because so much of what is “quintessentially Japanese” (the small sole proprietor restaurants, bars, shops, etc…) is run by aging ‘Boomer’ generation Japanese with no successors.

            This. It’s just too labor-intensive in a society where labor is increasingly scarce – they need rapidly rising service sector productivity to offset the economic weight of an aging population, and that probably doesn’t happen without more of a focus on larger chains and stores.

            > As economist John Quiggin points out, the elderly are less of an economic burden than the young, and typically for shorter periods

            I was not familiar with him before, but a brief search turned up a remarkable 2017 paper along these lines from a British team.

            https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)31575-1/fulltext

            > We have shown that from age 65 years, older people can expect to spend on average between 4 years and 7·8 years with low dependency (care less than daily), 1·1 years with medium dependency (care at set times daily), and between 1·3 years and 1·9 years with high dependency (24-h care). Current older people spend between 1·7 years and 2·4 years more with low dependency and between 0·9 years and 1·3 years more with high dependency than the generation 20 years ago.

            Now, here are their definitions.

            Classification of dependency

            High dependency (24-h care)

            At least one of the following: unable to get to or use the toilet (self-report), bed bound or chair bound (interviewer observed), needs help feeding (self-report or proxy rated), be often incontinent and need help dressing (self-report or proxy rated), or have severe cognitive impairment (mini-mental state examination score <10).

            Medium dependency (care at regular times each day)

            Either needs help preparing a meal (self-report) or putting on shoes and socks (self-report).

            Low dependency (care less than daily)

            At least one of the following: needs help to wash all over or bath (self-report), cut toenails (self-report), shop (self-report), or do light or heavy housework (self-report), or have considerable difficulty with household tasks, for example making a cup of tea (informant report).

            This may not sound very popular, but I think it’s pretty clear that, based on this data and this scale, child dependents (formally defined as <15, rather than <18) spend a lot more years being significantly more demanding than old people do! Furthermore, this data may also explain why “have children so that they’ll look after you in old age or help look after your own parents” is clearly not a compelling argument in developed economies – at least, not compelling enough to meaningfully elevate TFRs.

            > Shinto shrines (fundamental to Japanese culture) are facing a similar pattern… in which there are both fewer ujiko (parishioners, for lack of a better term) and ever fewer trained priests (a group that is already strongly inverted demographically) to serve at the shrines.

            Notably, last year, there was a Hong Kong film called The Last Dance, which depicts something very similar, but with Taoism. (The titular “Last Dance” is a ritual, formally known as “Breaking the Seven Gates of Hell”, which Taoist priests are supposed to perform at funerals: based on what I got from the film, they break the seven ceramic tiles representing the gates barring the soul’s path with a ritual weapon while performing a complicated dance.) So, proper traditional funerals have to have a priest doing that ritual, but the aging main supporting character of the film (only referred to as Master Man, or with an ironic nickname) is depicted as practically the only one who still truly “believes in his own religion”.

            In contrast, the protagonist, Dominic, is a former wedding planner who went deep into debt trying to keep his agency afloat when Covid shut down all gatherings; after accepting the inevitable, he had to switch to running the business side of a funeral agency, with the ritual side belonging to the priest. The film performs a remarkable bait-and-switch where at first, we are encouraged to see Master as the moral core of the narrative, while Dominic is practically comically money-grubbing in his attempts to wring as much profit out of each funeral by selling merchandise and the like. As the narrative progresses, however, we not only get to appreciate just how deep a hole he must dig himself out of, but also see a series of funerals where Dom chooses to go against Master and bend or disregard tradition for the sake of clients’ emotional wellbeing – and this is portrayed as the right thing to do, since the film ultimately sees only instrumental value in religion.

            After all, not only are the other priests said to disregard vegetarian prohibitions in private (i.e. after Master is obviously offended when Dominic orders a roast piglet for his family at the restaurant, the latter defends himself with “I thought you were only vegetarian at work like the other priests”), but there’s a focus on the inherently patriarchal structure of that tradition. Master’s daughter Yuet still believes fairly strongly, but the tenets forbid any woman from serving in this role because menstruation makes them “impure”*. (To which she retorts with (paraphrased) “Wasn’t so impure when all of you were born.”) Thus, Master’s son Ben is the designated successor of the business, even though he very clearly views it as simply a day job. He had already married a Catholic years ago, and in a pivotal moment, he is revealed to have undergone baptism in secret from Master and his sister – purely so that his own son could have a better chance of getting accepted into the prestigious Catholic school. Once that fails anyway, he offers to rescind it, but such cynicism sparks an argument that gives Master a severe stroke.

            After that, Master becomes dependent on his daughter (the son soon packs up and leaves with his family for Australia – the final argument between the siblings is remarkably well-written and both-sided), and we see her washing him, etc. This is also not the only time the film grapples with the question of dependency – Dominic is himself in his early 50s, and so is his wife. About halfway through, she confronts him about having children – and he retorts that it wouldn’t be fair to leave a child burdened with caring for parents in their 70s right as s/he is entering adulthood. (She’s made up her mind to do this with or without him, though, and he ultimately gives in.)

            That’s probably enough detail about that film for this blog, but let’s just say there is a very good reason why in Hong Kong, it became the highest-grossing film of all time. (I definitely found it superior to practically everything nominated for Best Picture from last year.)

            *Interestingly, it wasn’t the only film from the region in the past year to make the topic of menstruation a focal point of the narrative. Around the same time it came out, a Shanghai film, Her Story, featured a scene where the three main characters – divorced mother Tiemei, her preteen daughter Moli and their circa-20s neighbour Xiao Ye – get to talk at a restaurant table about the experience of a first period for several minutes while a couple of male characters sit and listen without objection.

            This is probably the most striking example of how feminist it is, but it’s far from the only one – there is a lot of conversation in the film over what the role of a working mother should be, what does (and doesn’t) make for a good role model for a girl (i.e. Moli enrols as a drummer in a school band specifically to combat the perception that it’s an unwomanly instrument), etc. Even the ex-husband namedrops “male privilege” and “structural oppression” whenever he visits, thinking it will win Tiemei back (she is unimpressed, believing he didn’t love her but her income, freeriding in their married life both financially and in childcare). I am sure there have been American films like this too – I am much less sure whether any of them also topped their domestic box office for two weeks straight, the way this one did. I think this result is a good reality check on the notion that PRC society is significantly more socially conservative/anti-feminist than Western ones.

          6. The folks citing John Quiggin are thinking in terms of a conventional demographic pyramid, one where retirees and the aged represent a relatively small portion of the population.

            Things change dramatically when the pyramid inverts and retirees and the aged represent a large proportion of the population, if not a majority. When it’s 1 in 10,000 who requires high-dependency care, it’s one thing. When it’s 20 in 100… it’s an entirely different thing.

            Ivory tower academics are irrelevant to reality. And reality is that Japan is already facing significant and increasing labor shortages. Reality is that care for the elderly will be impacted as those shortages increase in the coming decades.

        3. Politics is downstream from culture. I think both Japanese and Korean cultures are much more permeated by racism and ethnocentrism than Australia ever was. You might compare Sweden or Denmark: totally non-racist in their politics, but very ineffective at integrating immigrants due to the ethnocentrism of their cultures.

          1. Wait, isn’t “Politics is downstream from culture” mutually exclusive with “Sweden or Denmark: non-racist politics…ethnocentric cultures”? Not to mention that according to demographers, at least, Sweden is expected to grow, if relatively gradually, until 2070 or so (and Denmark also mostly manages to keep its population fairly stable throughout the century), so if China and Korea “merely” manage to replicate what those Nordic countries have done, that ought to be considered a massive success, at least in those terms.

            Moreover, it also does not explain how according to the 2019 edition of MIPEX (Migrant Integration Policy Index), Sweden has one of the absolute highest scores in the world in terms of a whole lot of indicators which I suppose you would call “political” (Labour Market, Mobility, Family Reunion, Education, Health, Political Participation, Permanent Residence, Access To Nationality & Anti-Discrimination), while Denmark is markedly lower, when the “downstream” argument would expect them to be placed around the same position. That is, Sweden is ranked ahead of the U.S., Belgium, Australia or New Zealand by double digits – and all of them are “top 10” ranked countries. Denmark’s score is much lower, to the point it’s actually placed below Korea – whose own score is surprisingly equivalent to the UK and France.

            https://www.mipex.eu/sweden

            Granted, it’s true that those indicators do not necessarily correlate with actual amounts of immigration very well – i.e. Saudi Arabia seemingly has by far the worst ranking in that dataset, but nearly half its population are immigrants (and circa 10% do not share the religious background either). Likewise, the UAE, Russia and China all have ratings within a couple of points of each other – yet their immigration/demographic situations are extremely different: the UAE is already almost 90% immigrant and its population is expected to double, while Russia is receiving significant (primarily Central Asian) immigration and is expected to have a relatively limited demographic decline as a result.

            > I think both Japanese and Korean cultures are much more permeated by racism and ethnocentrism than Australia ever was.

            I would like to believe that to be true: as Peter T. points out, Australia nowadays is certainly quite the counterargument to “migration necessarily leads to ethnic strife” claims. However, knowing the details of our history makes me doubt the “ever was” part a lot. Suffice it to say that for much of the 20th century, it was literally official state policy to separate mixed-race children from their Aboriginal mothers into state institutions and segregate them until they grew up, to ensure the next generation of their descendants would have another white parent, until the non-white traits would be “bred out” by the fourth generation – with the express goal of making the native population disappear entirely in this manner. I don’t think anything we have heard of, say, the Xinjiang internment camps, ever went this far.

          2. As I understand the expression, “politics is downstream from culture” means that winning elective office and implementing official policies doesn’t really produce social change in a country unless those political acts are rooted in that country’s culture. So the official non-racism of Scandinavian countries does not lead to actual integration of immigrants, because the culture is profoundly insular and ethnocentric.

          3. @YARD,

            While I’m going to stay away from the ‘racism’ term, which I think is massively overused in Anglo-American discourse, I think you’re right to differentiate between Sweden and Denmark here. While the cultures as a whole may be fairly similar, the political cultures (especially around immigration, the national question, etc.) seem to me, as an interested outside observer, to be quite different. Denmark seems to have incorporated nationalist and immigration-restrictionist sentiments into the political establishment to a much greater extent than Sweden.

      4. “The UK’s population is holding up much better”

        At the cost of ethnic strife slowly boiling over; while it’s mostly been suppressed by more and more authoritarian means of control, there’s a reason Reform is the fastest-growing party.

        I wouldn’t be surprised if we began to see pogroms aimed at Muslim communities, or attempts at an ‘Islamic revolution’, in Britain.

        1. At the risk of dragging the comment section even further offtopic, I don’t recognise the link you’re drawing between “ethnic strife” being suppressed by “authoritarian control”.

          The current erosion of civil rights noticeably accelerated post-9/11 with “antiterrorism” laws given broad powers to basically target whoever they wanted. This was followed up by post-Brexit legislation cracking down on protestors and demonstrations now that we no longer had to deal with the optics of the EU asking us what the heck we’re playing at viz. Hungary.

          What are people mostly protesting about? Oil, Climate Change, Water Pollution, Gaza, and of course the erosion of civil liberties itself. Those are the peaceful protestors being arrested and given jail terms under the new legislation.

          There was also a fun spate of riots last year attacking asylum seekers in hotels, but given that those were violent, and involved setting fire to buildings and attacking police officers and people’s houses, none of that needed authoritarian legislation to prosecute – that kind of behaviour has been illegal since basically forever.

          That said, I don’t disagree that we have very ugly racial tensions simmering that will only get worse, but I blame that primarily on the majority of our press and media being owned and operated by a single oligarch with a toxic, sales-generating agenda of division and outrage that taps into the worst of British exceptionalism, on top of post-Thatcher resentment and blaming the EU for all of our problems.

          “Attempted Islamic revolution” is… unlikely. To say the least.

    2. The discussion under last month’s Fireside eventually got me to dig out the projected median population trajectories for most of the world’s countries throughout this century. The comment is far too long to quote here, but I found the exercise rather eye-opening.

      Having said that, my first thought upon reading this paragraph actually wasn’t this (although I soon saw that aspect too) but rather what some have described as the present-day situation of “blue states” in the USA. As in…(like you, I apologize in advance if the following excerpts are “too political” for this blog.)

      https://archive.md/EZ8Xv

      > As California goes, so goes the nation, but what happens when a lot of Californians move to Texas? After the 2030 census, the home of Hollywood and Silicon Valley will likely be forced to reckon with its stagnating population and receding influence. When congressional seats are reallocated to adjust for population changes, California is almost certain to be the biggest loser—and to be seen as the embodiment of the Democratic Party’s failures in state and local governance.

      > The liberal Brennan Center is projecting a loss of four seats, and the conservative American Redistricting Project, a loss of five. Either scenario could affect future presidential races, because a state’s Electoral College votes are determined by how many senators and representatives it has. In 2016, after her loss to Donald Trump, Hillary Clinton argued that she’d “won the places that are optimistic, diverse, dynamic, moving forward”—an outlook that she contrasted with Donald Trump’s “Make America Great Again” slogan. But now Democrats’ self-conception as a party that represents the future is running headlong into the reality that the fastest-growing states are Republican-led.

      > According to the American Redistricting Project, New York will lose three seats and Illinois will lose two, while Republican-dominated Texas and Florida will gain four additional representatives each if current trends continue. Other growing states that Trump carried in this month’s election could potentially receive an additional representative. By either projection, if the 2032 Democratic nominee carries the same states that Kamala Harris won this year, the party would receive 12 fewer electoral votes. Among the seven swing states that the party lost this year, Harris came closest to winning in the former “Blue Wall” of Wisconsin, Michigan, and Pennsylvania—at least two of which are likely to lose an electoral vote after 2030. Even adding those states to the ones Harris won would not be enough to secure victory in 2032. The Democrat would need to find an additional 14 votes somewhere else on the map.

      > Population growth and decline do not simply happen to states; they are the result of policy choices and economic conditions relative to other states…California, New York, and other slow-growing coastal Democratic strongholds have taken an explicitly anti-population-growth tack for decades. They took for granted their natural advantages and assumed that prosperity was a given. People willingly giving up their residencies in these coastal areas is a sign of how dismal the cost of living is. While the media are likely to pick up on anecdotes about wealthy people complaining about tax levels and political norms in liberal states, data show that population loss is heavily concentrated among lower-income people and people without a college degree. In an analysis of census data, the Public Policy Institute of California found that more than 600,000 people who have left the Golden State in the past decade have cited the housing crisis as the primary reason. When people vote with their feet, they’re sending a clear signal about which places make them optimistic about the future. What does it say about liberal governance that Democratic states cannot compete with Florida and Texas?

      > Remarkably, none of this happened by accident. A hostility toward population growth and people in general has suffused the politics of Democratic local governance. The researcher Greg Morrow meticulously documented the political effort in Los Angeles to stop people from moving to the city over the back half of the 20th century. In the early 1970s, the UCLA professor Fred Abraham pushed for growth limits, arguing, “We need fewer people here—a quality of life, not a quantity of life. We must request a moratorium on growth and recognize that growth should be stopped.” Morrow also points to comments from the Sierra Club, which recommended “limiting residential housing … to lower birth rates.” Such arguments preceded a now infamous downzoning in the ’70s and ’80s, which substantially reduced the number of homes that could be legally built, slashed the potential population capacity of Los Angeles from an estimated 10 million people to 4 million, and spurred one of the nation’s most acute housing and homelessness crises. Self-styled progressives and liberals in blue communities across the country have taken similar approaches, all but directing would-be newcomers to places like Texas and Florida.

      I understand that a lot of these arguments are now entangled with the so-called “Abundance Agenda”, driven by a book whose contents often seem to be pure hackery, but the highlighted numbers appear indisputable. I also think it’s a better parallel than “rich countries in the 21st century with low fertility and strict limits on immigration”, because the latter may not be facing a military collapse in relative terms at all if their biggest military threats are also shrinking at similar rates or if they still have faith in a hegemon and/or other deterrents protecting them. On the other hand, under the current American system, it really does seem like the choices made in “blue states” directly reduce their power in a zero-sum game that could lead to them being effectively permanently led at a national level by administrations they are strongly opposed to.

      1. > On the other hand, under the current American system, it really does seem like the choices made in “blue states” directly reduce their power in a zero-sum game that could lead to them being effectively permanently led at a national level by administrations they are strongly opposed to.

        India faced the same issue in the 1970s: they desperately needed to reduce fertility but faced the problem that any state which reduced its fertility would be outbred (and suffer declining political influence in the legislature) relative to states which didn’t. They solved that issue by freezing legislative apportionment, so that districts today aren’t proportional to the current population but to what the population was in 1976. Which tends to favor the states in the south and east.

          1. Forbids it in the lower chamber, but enacts it in the upper chamber; there is even a clause in the section defining the Senate that makes it immune from the normal amendment process, so having a rational Senate would require two distinct rounds of the constitutional amendment process.

        1. Lebanon in the 1980s offers a cautionary example of where such a policy can lead, when demographics and apportionment cease to align.

          1. Well, the freezing of legislative districts was supposed to last for 50 years, which means that theoretically it expires…next year, and the “Hindi Belt” states are suddenly going to receive an influx of extra political influence (while the south and east will lose it). Maybe that will get postponed for some reason, but if not, expect some major turmoil and infighting in 2026.

          2. I should also have said “states and ethnicities”, since most Indian states are drawn on ethnolinguistic grounds. Which is just going to make the issues of shifting political influence more…visceral, I guess, in a way that they wouldn’t be in America.

          3. > Maybe that will get postponed for some reason, but if not, expect some major turmoil and infighting in 2026.

            @Hector: This makes me wonder; could it be connected with what appears to be Indian nationalists’ sudden concern about the stability of the “chicken neck” connecting those eastern states* with the rest of India, the Siliguri Corridor? All in the face of the new Bangladeshi leader Muhammad Yunus apparently being remarkably pro-China and even mending ties with Pakistan, with a trilateral summit recently taking place between the three.

            The argument that the three countries could cooperate to cut off those eastern states from India sounds crazy given what led to the founding of Bangladesh (not to mention the size of Indian military and its nuclear deterrent) – yet, it is apparently at least plausible enough that an “Air Marshal and former Vice Chief of Air Staff” is willing to go on record saying this. In a relatively fringe publication, granted, and perhaps his reputation is no different to someone like Michael Flynn in the U.S., but it still appears remarkable in this context.

            https://www.eurasiantimes.com/defended-by-rafale-brahmos-s-400-can-bangladesh-threaten-indias-chicken-neck-with-chinese-help/

            * To clarify for other readers, at least some of those eastern states have already experienced significant insurgencies and heavy-handed crackdowns: i.e. the notorious case of Constable Thounaojam Herojit in Manipur state, who confessed to personally executing over a hundred suspected militants in “staged encounters” without trial.

          4. @YARD:

            * To clarify for other readers, at least some of those eastern states have already experienced significant insurgencies and heavy-handed crackdowns:

            yes, and to clarify even further for readers, there’s a reason for that. The northeastern states (well, six of the seven, at any rate) are very… different from the rest of India, to a greater extent even than the differences between the south and north. Many of the peoples in that region are ethnoracially Southeast Asian rather than South Asian; speak languages either distantly related to Vietnamese and Cambodian or distantly related to Tibetan and Burmese, rather than to either of the two major language families in north and south India; and in many cases are either animist or, now, Christian, rather than Hindu or Muslim. The fact that they’re part of India at all is, like many things, an accident of British colonial policy.

            I’ve known a few people who have spent time there – one as a zoologist doing field research, one as an Evangelical Protestant missionary, one as a military officer – and the sense I get is that these peoples are just very different from the rest of the country, that they know it, and that it’s no surprise or accident why the insurgencies exist.

      2. Except that rural red states (and the red rural parts of blue states) are losing population to the cities – which are blue. It’s not a state thing, it’s an urban/rural thing.

        1. This view is rather incomplete; i.e. as mentioned above, Florida had seen the largest number of arrivals, which in practical terms means the city of Miami first and foremost. That is, the city which the current President effectively won last year.

          https://archive.is/NPO2b

          1. And that, in turn, is also incomplete. Miami is a consolidated city-county, putting suburban and even some rural land in the city voting statistics.

      3. Due to how America’s 2 party system has historically worked, particularly the tendency of its parties to fully switch sides of the same argument (if you want to ask which party is pro-war or anti-racism, for example, you really do have to specify a decade, even if that decade is ‘this one, obviously, I hate talking about the past on a history blog’), I suspect what will actually happen is that California and New York will become less influential when it comes to actual policy choices, but the presidency will keep flipping as it always has. Democrats will have fewer choices that appeal to New Yorkers, and more that appeal to Floridians (who were, after all, famously a swing state in 2000), but they’ll not be shut out of government.

        1. This sounds logical (especially considering that Florida was a knife’s edge state as recently as 2018, with the GOP winning both the gubernatorial and the Senate election by less than one percent), but we’ll see.

          I know it’s very early days for nominee polling (though ignoring such results in both 2017 and 2021 proved unwise, since at this point in the cycle they already showed the future 46th and 47th Presidents leading in their respective parties), but to me at least, these results do not really portray a party that is ready to make such changes.

          https://emersoncollegepolling.com/june-national-poll/

    3. I cannot help but think of rich countries in the 21st century with low fertility and strict limits on immigration.

      Sorry, I don’t think that’s a good analogy at all. In the 21st century, there are strong norms against cross-border aggression and responses to states which break the norms. Even if today’s Russian leadership (and apparently now the government of the US as well) are trying to violate those norms, the norms have held up well enough since 1945 that the level of cross-border invasions and international wars has been much, much lower than it was in the past. We don’t have the same kind of violent interstate competition through warfare that we had in the past (certainly not the same level of it).

      It’s not actually that much of a problem for Bulgaria if they have, say, half the population they do today, if neither Russia nor Turkey nor anyone else is going to invade them. To the extent that Russia wants to try and violate the norm against wars of aggression, that’s the problem that needs to be dealt with, not Bulgaria’s declining population.

      I’d say that the bigger military concern today is civil wars rather than international wars, and that immigration, whatever its other costs and benefits, would tend to make civil war more likely. A big part of why countries like Bulgaria *aren’t* fighting wars against Turkey or Serbia or, for that matter, Austria today is because that part of the world has been mostly (largely) sorted out along ethnic lines. In places like Bulgaria, we are closer to the ideal of “X state for X people”, after the extended ethno-national conflicts of the last 200 years, than we have been for a very, very long time, and I think that’s a good part of *why* civil conflicts along ethno-national lines are rarer than in the past. Re-importing diversity and changing the demographic makeup would just re-ignite ethnic conflict all over again.

      1. That’s the case if a tightly-defined ‘ethnicity’ rules. But up to c. 1850 most people were in states where loyalty to a dynasty or religion or some other ideal ruled. There is no reason why that cannot emerge again. Australia is now very ethnically diverse (roughly 50% of the population being first or second generation immigrants) but has low (though not zero) ethnic conflict, because Australian-ness is not defined ethnically in mainstream politics (and the results of the last election showed that waving that flag was very counter-productive).

        And nearly 10% of Bulgarians speak Turkish.

        1. @PeterT,

          I’m aware, but I think it proves my point: that kind of pre-1877 past is exactly what it seems to me a lot of people in Bulgaria (and for that matter elsewhere in Eastern Europe, and in other parts of the world that used to be the subjects of large multinational empires) are very glad they escaped from.

          You’re correct that Bulgaria has about a 10% Turkish minority today (as well as a substantial number of Roma), but it was much higher in the past. There was substantial emigration of Turks after the 1877 war, and then after the Balkan Wars in 1912-1913, and then again in the late Communist period.

          1. It took a series of wars to break the connection. The Turkish minority seems to be getting along fine. More to the point, there is a slowly-growing sense of attachment to the EU alongside ‘national’ identity. The EU as a modern version of the HRE or the Danubian Monarchy?

          2. It took a series of wars to break the connection.

            yes, that’s my point! The relative ethnic homogeneity of countries like Bulgaria today is a hard-won achievement that took centuries of conflict (military and otherwise) to achieve; it seems to me that they would be insane to throw that away now.

            (then again, people have done a lot of things I consider crazy before, in the past and even in the present).

            More to the point, there is a slowly-growing sense of attachment to the EU alongside ‘national’ identity

            I really don’t think that’s ever going to happen to the extent you want it to, certainly not in countries like Bulgaria. (Since we are talking about Bulgaria specifically, it’s worth pointing out that unlike in many western countries, youth and education seem to be anticorrelated with liberalism: the ultranationalist party, at least last time I checked the polling at Europe Elects, was doing better among young and educated people).

            I think it’s more likely that cosmopolitan western societies will eventually devolve into ethnic conflict, rather than that nationalistic societies in South Asia, the Middle East, Eastern Europe etc. will eventually become cosmopolitan. But, I’m sure some of that comes down to our different presuppositions, and what we expect human nature and human societies to be like as a “default”.

          3. The wars were very recent (mid-late 19th century), and the ethnic cleansing even more recent. A useful saying is Eric Hobsbawm’s ‘identity is not like hats – you can wear more than one at once’. One can be Bulgarian/Bulgarian Turkish and European all at the same time, and see no conflict between them. After all, the various ethnicities of Europe were thoroughly mixed for many centuries with no more than the occasional riot.

          4. The wars were very recent (mid-late 19th century), and the ethnic cleansing even more recent.

            I mean, as is being discussed elsewhere in the thread, the Pill (and subsequent, even better, refinements of hormonal contraception) is pretty recent too. And yet, I’m absolutely certain that the Pill is here to stay, in both the medium term and the long term. I’m just about equally certain that the modern ethnonational state, whatever form it takes and whatever its ideological basis, is here to stay as well.

            You can argue, and you would be right, that there were certainly precursors to contraceptives and abortifacients in the pre-20th century era (various plants that had contraceptive or abortive properties, etc.). But, by the same token, I’d argue that there were precursors and ‘raw material’ for nationalism and national identities as well.

    4. I think the article talks about a lack of upper-class, not lower-class population. And if rich countries only allowed upper-class immigration to supply their native upper-class, wouldn’t it still be strict limits on immigration?

    5. Those strict limits get regularly skirted and are indeed more of a legal barrier to labor rights at the bottom of society than they are to actual migration – after all, having a population of essentially rightless non-citizens who can be deported at will creates a ready-made pool of low-cost labor that is at considerably less risk of unionizing or insisting on its rights as employees.

  3. > At least to my sense, planning for a radically different world 10,000 years in the future sits largely outside the pre-modern worldview which imagines that the world doesn’t change much and at most goes through a series of repeating cycles.

    >Likewise, of course, in medieval Christian contexts (both Catholic and Orthodox) the deceased were to be buried (if baptized) in consecrated ground to await the resurrection of the dead. This too represented not an exit of the individual from the community but rather something closer to a changing of their position in it: from the active, living part of the community to the waiting, expectant part.

    These two statements do not seem consistent with each other.

    1. I don’t think so? The point is that the expectation is the world will remain the same until it ends, so the concern that your community will change so much that it doesn’t maintain your graveyard isn’t implicated?

      1. But the expectation that graveyards will (and should) be maintained in perpetuity is if anything stronger in America today than it ever was in medieval Europe, where bones were dug up and deposited in ossuaries after a time.

        1. Right, because your community is going to continue to need to use that graveyard indefinitely, so the bones need to go somewhere else. You can’t just keep building new graveyards, sooner or later you’ll have more of them than people!

        2. I don’t know about America, but here in Europe, you rent a grave for thirty years, and then … I honestly don’t know what they do with the parts of the dead person that were in the grave, but since ossuaries aren’t a thing, I assume they just throw the remains away, or put the other corpse on top.
          The graveyard itself may be maintained, but the grave itself will be removed once the relatives of the deceased stop paying.

          If that resurrection of the dead happens, I really hope god has a good plan for how to deal with that, plus the many people who got urn burials …

      1. I think you mean to say it was a “Good Thing.” At least that is how I learned history.

        (This is from the book “1066 and All That,” for those who don’t get the allusion.)

    1. Why? It wasn’t industrialization that cut the death rate from infectious disease. What cut that death rate was discovering how those diseases spread, and vaccination.

      1. Discovering something is only half the picture; you also have to be able to implement it. Mass vaccination requires mass quantities of the vaccine to be available. Mass use of antibiotics similarly implies the ability to produce mass quantities. And mass use implies not only significant manufacturing capability, but also transportation infrastructure for distribution.

        1. And even water and sewage systems require a lot of work and materials to build, operate, and maintain them. Which has to be supported with resources from the rest of the economy. Which must exist and have enough political and economic support.

        2. Even the discoveries themselves depended on industrialization – scientific research is expensive, and depends on the surplus that an industrial economy generates.

        3. Which is a point that the late Hans Rosling made very eloquently in one of the chapters in the last book he co-wrote before his death.

          Many types of medicine, including several key vaccinations, can only be delivered through a cold chain – that is to say, they spoil *very* quickly, and so must be constantly refrigerated from the point of manufacture until the needle goes in the patient’s arm.

          This requires you to have transportation infrastructure that can handle refrigerated containers. It requires you to have substantially electrified your country, or at least to have enough backup diesel to keep the fridges in rural clinics operational 24/7. It requires you to *have* all those refrigerated containers and fridges in the first place.

          Consider how much all of this costs, and consider what a truly remarkable achievement it is that global measles vaccination rates are now at 85%. It is a triumph of industrialisation, of global trade, and of technological progress.

          Modern industrial society is seriously dysfunctional and faces many severe problems. This is not, however, a cause for doomerism. We draw confidence in our ability to solve these problems from the fact that we have *seen* what miracles are made possible by human diligence and ingenuity.

          Sure, it is seriously dysfunctional and faces many severe problems. But as a species, we should be proud as all hell that we built it.

      2. Maybe because research equipment produced in factories rather than in village workshops was better, thus enabling better research?

        1. Not better. Cheaper. The mage/alchemist in his high tower / castle is a trope for a reason – buying laboratory glass and filigree metal parts was prohibitively expensive when they were all hand-made.

          Cheaper equipment means more researchers, means better chances of discovering penicillin via forgetting a petri dish over the weekend (which wouldn’t have happened to your old-timey alchemist, because he lived in his workshop. And also because he was only one guy.)

          Industrialization may also have enabled more people to dedicate their lives to research, since fewer peasants could feed more aristocrats.

          1. Cheaper *and more consistent* – and available by the gross lot. And that goes for all material inputs, e.g. chemicals, as well as hardware.

      3. You can check the artwork with which we commemorate this accomplishment, for example this one. That is a personification of Commerce, wielding a Caduceus (the symbol of Hermes, god of commerce) that is holding off Death. If it were a personification of Medicine, then it would be wielding a Rod of Asclepius (one snake and zero wings) instead.

        The transition from subsistence farming to market farming, with different geographic areas being connected with bulk food transport, was a key improvement in curbing nutrition-associated mortality (both its peaks in years with bad harvests, as well as the background chronic malnutrition). Moving from downside-risk-minimization (maximizing the, say, 5th percentile yield) to maximizing the average yield increases the size of the margin between the average yield and the “minimum yield” below which the family starves. It meant that each piece of land could be used to grow whatever it grew best, rather than a mix of that and other things, thus the total food supply increased. Much as peasants tried to use different parts of the village’s lands as backups for each other (“the valley may flood, the hill won’t”), entirely different regions — including areas on different continents altogether — which had really uncorrelated weather, could serve as backup for each other.
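
        To make the “backup regions” arithmetic concrete, here is a toy simulation (all numbers, and the helper name harvest, are invented purely for illustration) showing that averaging two harvests with uncorrelated weather raises the worst-case (5th percentile) outcome even though the mean yield is unchanged:

        ```python
        import random

        random.seed(0)

        def harvest():
            # One region's yield in one year: mean 100, noisy weather
            return random.gauss(100, 30)

        # Estimate the 5th-percentile ("1-in-20 bad year") yield over
        # 10,000 simulated years: one region alone vs. the average of
        # two regions with independent weather.
        solo = sorted(harvest() for _ in range(10_000))
        pooled = sorted((harvest() + harvest()) / 2 for _ in range(10_000))
        print("5th percentile, one region: ", round(solo[500], 1))    # ~50
        print("5th percentile, two regions:", round(pooled[500], 1))  # ~65
        # Averaging two independent harvests cuts the standard deviation
        # by sqrt(2), so the bad-year floor rises while the mean stays 100.
        ```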

        Incidentally, some of the key advances did not actually depend on correct understanding. Famously, London’s sewer system — which worked perfectly fine at reducing mortality — was built based on the theory that e.g. cholera is a mass poisoning by bulk pollution, rather than a contagious infection. (It turns out, miasma theory is a description of industrial accidents and/or malfeasance.) Of course, the discovery of germ theory did help with further improvements, such as chlorinating drinking water, while understanding diseases of nutrient insufficiency (goiter: iodine, scurvy: vitamin C) allowed them to be solved by supplementation — assuming that the industrial capacity to make that feasible existed. (Though scurvy is a notable case of a solution becoming undiscovered in the modern era, contributing to the deaths of polar explorers a century after an adequate preventative/cure was in routine use.)

        Some parasite eradications relied largely on convincing extremely poor people to wear shoes and to buy shoes for their children to cut off the route by which the parasite infects people (the region in question being subtropical, the climate does not force the wearing of shoes). Shoe factories, agents of public health.

        1. Yeah, the commercial revolution had changed a lot before the industrial revolution. England’s last famine was in the 1620s. It might not have been an industrial society but it was a very well-run and economically vibrant pre-modern one.

          1. Yes, although England specifically seems like it would have climatic advantages (extreme maritime climate, so not that much temperature variation) that countries further east in Europe would have less of or not at all.

          2. Just to Peter here because I can’t reply directly – it’s worth noting that Scotland and Ireland’s famines weren’t due to bad economic management, but Evil economic management. There was still enough actual food, by the by. Also, India had many famines and was similarly part of the same empire, and similarly ‘not geographically in England’.

        2. The transition from subsistence farming to market farming, with different geographic areas being connected with bulk food transport, was a key improvement in curbing nutrition-associated mortality (both its peaks in years with bad harvests, as well as the background chronic malnutrition). Moving from downside-risk-minimization (maximizing the, say, 5th percentile yield) to maximizing the average yield increases the size of the margin between the average yield and the “minimum yield” below which the family starves. It meant that each piece of land could be used to grow whatever it grew best, rather than a mix of that and other things, thus the total food supply increased. Much as peasants tried to use different parts of the village’s lands as backups for each other (“the valley may flood, the hill won’t”), entirely different regions — including areas on different continents altogether — which had really uncorrelated weather, could serve as backup for each other.

          That’s true, but I’d add that market trade is one way to get that kind of specialization, but not the only one. The Inca Empire found a different solution (which also made possible specialization, complementation of resources, etc.).

      4. To reply to the above: During the classic period of the Industrial Revolution, the only vaccines were smallpox and cowpox. Pustules were removed from someone with a relatively mild case (think about what “mild” means for smallpox), and used to infect other people, from whom further pustules would be taken. The only tool needed was a small, sharp knife.

        Changes to the drainage system also did not require new technology, although steam powered pumps were helpful.

        But the changes could have been implemented, if with a little more effort, with Roman technology. That’s enough to get from 50% of children dying to 10%. That’s most of the improvement. Further improvements required new technologies – but those were developed after people had already invented the term “Industrial Revolution” to describe something that had happened in the past.

        The point is that public health measures required the Scientific Revolution, not the Industrial one.

        Insofar as the Industrial Revolution led people to move from the countryside to the cities, it almost certainly made public health worse. I’ve seen estimates for Victorian British life expectancy (for the poorer social groups) of 38 in the country, and 19 in the towns.

        (I should add that people moved to the cities for a reason, and it was not to make their lives worse. I am happy to go with their assessment that it made them better off. But not because it made them more likely to live.)

        1. They moved partly because life (for the rural underclass) was getting worse, what with enclosure and stricter charity laws. But people are really bad at estimating risk and often surprisingly indifferent to the prospect of death if the possible reward is great enough. People went to India (or, even worse, Africa) in the hope of making their fortune. The survivors often did – but were few. They still went.

      5. Believe it or not, that’s not completely true! Or at least no-one is completely sure what factors drove the decline in infant/child mortality, because while it started in the 18th century (smallpox vaccination) it doesn’t smoothly track medical knowledge. See Kenneth Hill, The Decline of Childhood Mortality (https://scholar.google.com/scholar?cluster=14943748495547396980&hl=en&as_sdt=0,31). IMO industrialization was a factor as well as the scientific revolution, because of the way they worked together to give people the expectation that helpful things could be understood & made uniform. Variolation against smallpox was known for centuries before Jenner; his discovery of vaccination created a medical preventative that could be relied on and spread throughout armies, and then whole populations – but that step took industrialization.

      6. And also a comprehensive system of medical care both available to and affordable for a large majority of the population, significantly supported by social security systems and socialized/public medical insurance schemes.

        But we don’t talk about that.

  4. I think people often fail to appreciate just how vastly different things were. We live in a world where poverty is associated with *obesity* in developed countries. Even places like Niger have much higher life expectancy than anywhere pre-1800. There, less than half of kids 7-16 are in school, average class size is 36, and the literacy rate is 38% – terrible by modern standards, but impressive by pre-1800 ones. There is still a decent bit of malnutrition in some places, but outright famines are mostly an artifact of wars with siege-like contexts, like Yemen, Tigray, or Sudan.

    Which I think is one benefit of putting things in terms of GDP and dollars. Yes, it’s going to be hard to measure, but the idea of living on the equivalent of $1 per day helps convey, for us in developed countries, the sheer incomprehensible poverty of the past. Though that can make us fall into the failure mode of overestimating how horrible their lives were, given that the vast majority of people of all times considered their life worth living.

  5. In engineering classes they taught about these mortality rates by analogy to artificial systems, through the “bathtub curve”. Failure rates for an object are high at the high end of its lifetime, but also at the low end, creating a “mortality curve” that looks like a bathtub with highs at the left and right sides.

    e.g. a new ship in its shakedown cruise is much more failure-prone than one that’s made it through a year of service, with the failure rates rising again as the ship ages.

    For artificial systems and for modern humans, those failures are both preventable (through early QA inspections, or better hygiene) and curable (through repair, or emergency medicine); but without great effort and ingenuity that bathtub curve tends to reemerge.
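
    A minimal sketch of that shape, assuming the usual reliability-engineering decomposition into a decreasing early-failure term, a constant random-failure term, and an increasing wear-out term (the function name and all parameter values here are invented for illustration):

    ```python
    import numpy as np

    def bathtub_hazard(t, early=0.5, const=0.02, wear=1e-4, wear_exp=2.0):
        # Decreasing "infant mortality" term + flat random-failure term
        # + rising wear-out term = the bathtub shape
        return early * t ** -0.5 + const + wear * t ** wear_exp

    ages = np.array([1.0, 5.0, 20.0, 40.0, 60.0, 80.0])
    print(np.round(bathtub_hazard(ages), 3))
    # [0.52  0.246 0.172 0.259 0.445 0.716]: high early, dipping in
    # mid-life, and rising again with age
    ```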

  6. You mention men dying directly from military violence, but we hear so often that the majority of casualties on campaign are from disease. What I never hear in those discussions is “compared to what?” Do you have a sense for how much the risk of death by disease increased for those on military campaign, compared to the background rate for similarly aged villagers or city-dwellers?

    1. I have no citations, but my intuition is that men on campaign were at higher risk than men in the village. For most of human history, increased population density led to poorer health. (You can see this until very recently in trends of average height, though density isn’t the only factor there.) Men on campaign, like men in cities, were more closely packed and thus more at risk of disease transmission; even in cases where an organized latrine area was set apart from camp, they were probably still more exposed to human waste; and they were (often) further removed from food sources, etc.

      1. Armies also, by their very nature, tended to be a bunch of people from all over coming together: a perfect place for infections to get around. (I remember seeing this even at university: every semester had a few weeks at the start where seemingly everyone was sick, as people had been home and collected all the germs from their families…)

      2. I think that this is probably the biggest mistake our host makes here. Battles are comparatively safe. It is the campaigning that is deadly.

        Of course, this is anecdotal evidence, but J.E.O. Screen’s excellent history of the Finnish Guard Battalion is illustrative. The battalion took part in three campaigns during the 19th century: Poland, the Crimean War and the Turkish War. Only in the Russo-Turkish War was it involved in major battles. However, in all these wars it lost about 75 per cent of its strength to disease, mostly as deaths. Vomit and loose bowels are deadly.

        So, I would assume that military mortality was probably worse than could be inferred from “butcher’s bills”.

        1. Yeah, we have the famous case of Bygdeå 1620-1640 where 250 households provided 255 soldiers (lowering the total population from 1900 to 1700 in the period) out of which 210 died.

          There’s another example of an East Geatish regiment of 1,086 men being transferred to Germany in 1631; when they arrived in Erfurt there were 750 still alive. By the time they returned in 1633 they’d suffered 710 deaths, but only 181 of those were deaths in battle or from wounds.


      3. Also, young men (and that’s what armies are made of) are feckless and, if there are no women around, frequently indifferent to hygiene.

        1. While true, I think that can be alleviated by strict discipline. I don’t think medieval monasteries had higher mortality rates than the male mortality in the nearby villages. (Modern monks have a significantly lower mortality than the average man, but they’re self-selecting to be monks, so that doesn’t mean much.)

          Of course, such discipline would require a knowledge that hygiene is important. That factor would have varied among premodern societies.

          1. Medieval monasteries actually did have notably higher mortality rates than male mortality in nearby villages, but that’s more to do with communal living (everyone sleeping in the same couple of dormitories vs. every family having its own house) and the effect that has on the spread of infectious diseases than worsened hygiene – nunneries would have had similarly increased mortality vs. laywomen for the same reasons.

        2. “Also, young men (and that’s what armies are made of) are feckless and, if there are no women around, frequently indifferent to hygiene.”

          Serious historian: “We need to remember that the conditions of military service, even in the absence of combat, promote the spread of communicable disease, which generally causes more casualties and deaths than actual combat does.”

          Non-serious person: “yeah because boys are icky and smelly”.

            1. I had a teenaged brother. He was grungy and so were his friends. They got better.

      4. I get the impression that in these battlefield numbers, “disease” usually refers to communicable diseases like cholera. Where do deaths from infected wounds appear in the numbers? Do we even have enough data to distinguish “died immediately from a wound” from “died three days later from sepsis”?

        1. My understanding is that in the early modern period, for every 3 killed in action, about 2 later died of wounds. Well over half of deaths, though, would have been just from disease.

  7. In terms of the actual causes of death, the top 3 causes of death in premodern societies generally look like this (often but not always in this order):
    1. Tuberculosis or Malaria
    2. Other Respiratory Illnesses
    3. Dysentery

    Childbirth, injury infections, other illnesses, accidents, epidemics, and the conditions of old age killed plenty of people, but generally weren’t individually in the top 3. That is still true in a number of developing countries.

    Most of the difference is not so much medicine in the narrow sense as things like clean water, less crowding, better nutrition. And a huge chunk of what is medicine is fairly basic stuff: IV fluids, antibiotics, and wound care are probably the most important. Things like treating cancer or complex procedures are a pretty small percent of the total improvement over the last 200 years. Life expectancy in the modern world varies from 54 to 84 years (a big improvement even at the low end), and most of that 30-year span is either not medical or is simple medicine.

    1. Malaria of course is a warmer climate thing. Tuberculosis was worse in colder climates, because sunlight and air circulation are bad for transmission and crowding is good. I suspect most transmission would have occurred in winter, like with other respiratory illnesses (another reason for higher winter mortality, in addition to nutrition), though it is slow-moving enough that one could become symptomatic or die at any time.

      1. “Malaria of course is a warmer climate thing”. This is not completely true. Up until the 1920s malaria was endemic in quite a large portion of the Netherlands, together with a range of other Moeraskoortsen (swamp fevers). We (the Dutch) managed to both “tame” the wild swampy areas and institute public health practices to reduce them, and eventually eliminate them. Recently, however, they seem to be on the rise again.

        1. Malaria’s range used to be much broader, e.g. it is probably the ague in Little House on the Prairie in Kansas, but it was less demographically significant at the northern end of its range, even in swampy areas. In the Mediterranean, though, some areas were considered basically uninhabitable outside the winter because of it.

          1. Yes. David Crockett (yes, that famous coonskin fellow) noted suffering from malaria in his autobiography, contracted in Tennessee of all places, in the 1830s or earlier. This was definitely a problem in Dixie in the olden times.

          2. Finland also had a mild form of malaria until the end of the 1800s. Finland has been the most northern country with indigenous malaria. Apparently what made this possible was living in houses without chimneys.

          3. “Malaria’s range used to be much broader”

            Mann’s _1493_ said there are two major kinds of malaria, vivax and falciparum, and the deadlier kind has a higher requirement for warmth. (Or maybe the mosquitos do, I forget.) The milder kind stretched into England and New England. In the Americas the deadlier form stretches from the Mason-Dixon line to the southern border of Brazil, roughly — also the range over which American societies were dominated by slavery of Africans (with some degree of genetic resistance to malaria), rather than societies with some slavery mixed in.

          4. Mindstalko has it. There’s two different variants of malaria, with the warmer-climate variant being more virulent.

            Interestingly, both variants do exist in sub-Saharan Africa, but sub-Saharan Africans are genetically immune to the less virulent version (plus, it exists at lower frequencies).

            IIRC it’s the virus itself that has difficulty replicating itself at lower temperatures.

          5. @Ynneadwraith,

            IIRC it’s the virus itself that has difficulty replicating itself at lower temperatures.

            Parasite, technically. The malaria organism isn’t either a virus or a bacterium.

            I’m not sure how much this matters epidemiologically, but the malarial mosquitoes actually seem to have trouble surviving and especially reproducing at *hot* temperatures as well. I remember looking up the upper thermal limit for mosquito reproduction one time and it was lower than I was expecting.

    2. I would be cautious about forager demographics. Most of our anthropological evidence comes from groups adjacent to settled societies, pushed to marginal land and so prey to the diseases that brings as well as the stress of being an exploited group. The few accounts from truly undisturbed foragers suggest fairly low rates of reproduction, low infant mortality, low-ish rates of death in childbirth and a high rate of death by violence among males. Living in small groups, moving around and eating a varied diet mean less malnutrition, less exposure to waste and no zoonotic diseases.

    3. “the actual causes of death”

      My impression is that smallpox was a major, if not the top, killer in recent centuries. Conversely, pre-Columbian Americas didn’t have smallpox or malaria — did they escape the 50% child mortality rate? I don’t know, but I didn’t think so.

      In the endemic case, I wonder how significant any one disease really is, vs. “if X doesn’t get you, Y will”. Especially if the underlying cause is weakness from malnutrition or bad weather or exposure to domestic smoke[1]

      [1] another major cause of impaired health, presumably worse in the winter, at least in colder regions, and for women doing the cooking, or elderly close to the fire for warmth… I said ‘woodsmoke’ at first, but it applies to dung, peat, or coal smoke just as much.

      1. Different fuels have very different smokes. In particular, mineral coal has sulfurous fumes, which are rather more noxious than woodsmoke (bad as it is), in addition to a number of further differences in behavior. All of them put together drive major changes in just about everything, at least according to this book [review].

        Wood-fueled homes often didn’t have a chimney; the smoke just gradually filtered out through the thatch roof. It filled the interior space “from the top down”, under favorable circumstances creating a distinct transition level, thus you would want to stay as close to ground level as possible. (Today legless chairs and very low tables, particularly kotatsu, are best known from Japanese culture.) This cannot be done with coal: you absolutely need a chimney to remove as much of the smoke as possible, which causes much more ventilation of the room (i.e. draws in cold air); consequently, while the “standing-up” part of the room is now habitably smoke-sparse, the ground is now cold.

        This is just one of the several differences mentioned.

        1. ‘according to this book [review]’

          The review is fascinating and y’all should read it, and maybe the book too.

          To me it suggests re-evaluating the apparent poverty of a one or two room house with little furniture; you don’t need or want big furniture if you’re sitting on the ground for health reasons. Meanwhile the coal fire and chimney enable (or force) smaller rooms but also force you to get bedsteads, chairs, and tables just to be comfortable.

          Then there’s coal smoke being greasier and stickier, moving cleaning from “sweeping, sand-scouring, and some use of wood ash” to “soap needed, and twice as much labor too”, with shifts in the desirability of tapestries vs. wallpaper.

          1. I read an article a while back on the neolithic roundhouses in Orkney where they posited that their odd compressed stone ‘bed’ structures were for propping people upright while they were sleeping as the woodsmoke would have impacted their breathing so heavily they wouldn’t have been able to sleep lying down.

            I mainly remember thinking ‘right, so these are people who are farming a remote Scottish island with scratch-plows and grinding their own grain by hand….but they can’t breathe well enough to sleep lying down’.

            Though perhaps they were sleeping quarters for the older inhabitants who *were* suffering from chronic smoke inhalation, but this seemed to suggest that it was everyone in the household.

          2. Pain is a bigger problem when you’re trying to sleep than it is when you’re working.

          3. @Bullseye While that’s true, it’s not really pain that was the argument (from what I understood). It was lung function (i.e. sleeping horizontally with damaged lungs causes drops in blood oxygen severe enough to wake you).

            That’s true…but people with lungs that badly damaged would find it *very* difficult to survive a heavy-exertion neolithic lifestyle. And they’re a common feature in most, if not all, of the houses on Orkney.

            Perhaps there’s a window of time between ‘can’t sleep’ and ‘can’t work’ in lung function I suppose. But it seemed implausible to my (lightly) uneducated mind! Especially as this sort of bed is restricted to Orkney AFAIK and doesn’t particularly show up elsewhere. You’d think lung damage would be fairly common to most cold areas pre-chimneys.

            Though, to be fair, I don’t know of many more plausible explanations floating around. So perhaps that is it.

          4. There are a lot of “sleeping in an almost sitting position” traditions. Late medieval Western European box beds. 17th/18th c. Central/Eastern European nobility (otherwise “normal” beds being remarkably short), e.g. the preserved stuff of a Rákóczi.

          5. Interesting. The wiki article linked states this (though without a reference, unfortunately):

            “Lying down was associated with death, and therefore sleeping was done in a half-upright position.”

            If that is the case, it does go to show what an uphill struggle we’re on trying to determine what the folks in Orkney were doing over 5200 years ago…

            Also, interesting that the box beds seem to be clustered around Celtic Britain/Brittany, plus the Low Countries (ex-Belgae country?). Exceptionally tenuous to draw a parallel between Orkney upright-sleeping and later post-Celtic practices…but I’m going to throw it out there anyway.

            Probably nothing but mindless speculation.

    4. Hand washing, sterilization, and antiseptics are probably the most basic and taken-for-granted parts of both modern medicine and public health. Childbirth deaths from dirty hands causing puerperal fever were a significant percentage of maternal deaths. Many deadly diseases can be transmitted by the fecal-oral route, as with Typhoid Mary (her dormant typhoid was transmitted through her job as a cook, but it’s because of her bad hygiene that the food got infected).

      The sad thing is the top 3 you list are still major killers, mostly in poor countries, despite being preventable or curable (TB is curable for instance, and malaria has been eliminated from rich countries like the US and Italy, but also middle income countries like Cuba and Mexico).

  8. The way I have phrased it when the “average life expectancy” discussion comes up is thus:

    The arithmetic average of a bimodal distribution is not a useful number.
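
    A toy illustration (the numbers below are invented, not real data): with heavy child mortality the mean lands in a “valley” of the distribution where almost nobody actually dies.

    ```python
    # 100 invented lifespans: 30 die at 1, 10 at 3, 40 at 55, 20 at 70
    ages_at_death = [1] * 30 + [3] * 10 + [55] * 40 + [70] * 20
    print(sum(ages_at_death) / len(ages_at_death))
    # 36.6 - yet nobody in this toy population died anywhere near age 36
    ```

    Something like life expectancy conditional on surviving childhood tells you far more about what a typical adult life looked like.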

  9. “a woman who gave birth five times with a 2% chance of dying each time has a total chance of dying at one stage or another of 9.67%, which as you will note is not far off from the chance of dying in a battle (though, contra the House of the Dragon team, who do not appear to understand statistics, it is not that each birth has the fatality of a battle but that a lifetime of births does, which is not the same).”

    Medieval Europe had convenient control groups for both sexes!
    Nuns were mainly ladies and those who went to nunneries as virgins (as a lot did) simply averted the risk of dying in childbirth. A large part of clerks were from nobility, whose fathers and brethren were liable to fight.
    For women, menopause or say age 50 places a clear upper limit on the liability to die in childbirth. For men, death in battle is hazier – Antigonos Monophthalmos fought and fell at age 81. Still: what were the chances of a well-born 15-year-old nun surviving illness and accident and living to 50? How did it compare to the chances of her married sister dying as a direct or indirect complication of her pregnancies? And now compare the chances of their brother who was a clerk living from 15 to 50, and of the brother who was a knight falling in battle.

    1. Well I didn’t expect this blog to make me cry today. Those burial inscriptions are heartbreaking, though.

      1. Thought the same thing.

        It’s the counting of the days for the 1-and-a-bit yr olds that did it for me.

        Seems to be a broader cultural practice judging by the other inscriptions, but it certainly humanises it that little bit more.

      2. I didn’t cry… but that’s only because I’ve previously spent an entire evening sobbing over the sheer amount of human suffering that “roughly half of all people born didn’t reach adulthood” represents. It gets worse when you consider just how many social traditions boil down to “try to make it hurt less when babies die”.

        It’s a miserable topic, with the only upside being how sharply the infant and child mortality rates dropped as soon as we actually had the means to do something about it.

    2. I think the estimate of women giving birth only five times is an underestimate – with no reliable contraception, women could be pregnant about twenty times if they married in their twenties and entered menopause in their forties. And a woman would have to be pregnant more often than five times to have even just three surviving children. (5/2 equals 2.5.) Since many women would have died in their first childbirth, and some became nuns, I don’t think 2.5 would have been replacement level.

      A woman with a rapist husband who gets her pregnant against her will and against the midwife’s advice whenever he can, would have a maternal mortality risk of at least 2*10 = 20%, and that’s assuming she cannot get pregnant while breastfeeding, but as many women have found out, breastfeeding is not a safe method of contraception.

      I’d be really interested in the results of research using nuns as control group, because those would give us something closer to the real numbers, since they would include female mortality from malnutrition (triggered by infectious diseases) after too many pregnancies. (Nuns might of course have suffered iron deficiency from fasting, but the average peasant woman wouldn’t have been able to eat meat every day, either, so that should even out.)

      Comparing fighting and non-fighting men, likewise; although it must be said that monks live longer than the average man even in the modern world with zero war mortality. (And since no one is forced to enter a monastery in modern times, we can’t really tell whether monastic life improves men’s longevity, or if men who are more health-conscious than average self-select to be monks.)

      1. A woman with a rapist husband who gets her pregnant against her will and against the midwife’s advice whenever he can, would have a maternal mortality risk of at least 2*10 = 20%,

        Higher, actually – if one has a 2% chance of dying each time, and 20 pregnancies, that’s a combined mortality risk of 33%.

        for 10 pregnancies it would be about 18%.
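
        For anyone who wants to check these figures: with per-event risk p across n independent events, the cumulative risk is 1 - (1 - p)^n rather than n * p. A quick sketch (the helper name is made up for illustration):

        ```python
        def cumulative_risk(p, n):
            # Chance of dying at least once across n independent events
            return 1 - (1 - p) ** n

        for n in (5, 10, 20):
            print(n, "births:", round(100 * cumulative_risk(0.02, n), 1), "%")
        # 5 births: 9.6 %, 10 births: 18.3 %, 20 births: 33.2 %
        # (the naive n * p would give 10 %, 20 % and 40 %)
        ```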

      2. “women could be pregnant about twenty times”

        Poor Queen Anne had 17 pregnancies, 5 live births, and zero adult children.

        OTOH, methods of family planning and contraception were quite inferior but weren’t non-existent, even beyond nursing. Varying by society, like Roman silphium or pessaries vs. more medieval ignorance. I wonder if we have any good data on birth rates across societies.

        1. The birthrate of the French peasantry crashed after the revolution, as land tenure encouraged small families. This was in the early 1800s, so some combination of pre-modern methods could work.

        2. Pre-modern peasant fertility tended to be in the 4-6 kids range. Long before modern birth control people have controlled their fertility. Judging from the early modern period, abstinence once you had as many kids as you wanted was probably pretty common and there were various other strategies of varying efficacy.

        3. Silphium’s primary use was simply as a spice for food, no? I can’t help but wonder if, after its extinction became widely rumored, some standup comedian remarked “if you only have sex after eating silphium, you won’t have kids” with too straight a face, and some hapless source wrote down that it used to be a contraceptive, what a pity it went extinct.

          That said, some dirt common spices such as rosemary are abortifacients, and perhaps people just did not care too much about the distinction before (very recently) science cleared up fertilization and embryology? To them, the important marker may have been the quickening. (Something soul something.)

          1. Largely this, yes.

            https://talesoftimesforgotten.com/2020/01/04/no-the-ancient-romans-didnt-overharvest-silphium-to-extinction-because-it-was-a-highly-effective-contraceptive/

            TL;DR: some of the only mentions of silphium being used as a contraceptive appear in Pliny the Elder – but in a context which effectively calls it a miracle cure – akin to, say, the contemporary claims of ginger being able to cure Covid. Moreover, it’s unclear if it actually went extinct, or if people simply forgot which kind of fennel the Romans meant by that. (Similar to the way people forgot what the third shaker in 19th century restaurants was used for, which clearly doesn’t mean some mythical spice disappeared.)

            Granted, the blog mentions that the book which started it all does cite research in rats which shows that extracts from some of those obscure fennel species may have ~50% contraceptive efficiency (and one species was even claimed to be nearly 100% effective in some study, but it’s very inconsistent, with other studies I found showing no effect); but again, even if silphium is one of those, there is no evidence it was consistently used for that purpose, in contrast to the mountain of sources praising its value as a spice.

          2. Silphium is the most famous herbal abortifacient, but there’s tons of others about. For Europe you have pennyroyal, tansy, safflower, scotch broom, angelica, mugwort, feverfew, herb-of-grace, white hellebore and wormwood as a literal starter for 10.

            All of those were known abortifacients named in medieval literature as being known to common herbalists as well as the burgeoning medical professions.

            Most have primary uses other than as an abortifacient (angelica is quite tasty, and 5 guesses as to what feverfew is for), but these are all readily available wild herbs. I’ve got mugwort, feverfew and pennyroyal growing in my garden in the South of England as we speak.

          3. Yep. The same here. I think there are at least three or four abortifacient herbs growing in my garden. However, I would never recommend that anyone try an abortion with them, as most of them also tend to have a wide range of other effects: liver damage and nervous symptoms are common, and the lethal dose is often very close to the dose of the desired effect.

            There was a very good practical reason for old church fathers to oppose abortion: ancient methods were extremely dangerous for the mother as well.

          4. @Finnish reader Oh yeah absolutely!

            AFAIK the actual process works similarly to chemotherapy. Poison yourself just so much that your body aborts the pregnancy, but not quite enough to kill you. That window is bigger with some herbs than others, but it’s still a window you can readily miss!

            Especially if you don’t have generations of traditional knowledge telling you how strong to make a medicinal tea.

      3. I think you’re missing consideration of the widespread knowledge and availability of abortifacients in most places in the world. There’s absolutely tons of them, and most of them are readily available wild-grown herbs and pose a low risk to the mother.

        Silphium is the most famous (mainly for being harvested to extinction by the Romans), but there’s tons of others about. For Europe you have pennyroyal, tansy, safflower, scotch broom, angelica, mugwort, feverfew, herb-of-grace, white hellebore and wormwood as a literal starter for 10.

  10. One of the things that I ponder is the change from the past, where old people tended to “disappear” (to make more room for the people in the prime of their lives who were still having children) when times got tough throughout most of history, to the modern era where we have people typically living record long lives.

    Most of the world (regardless of economic system) is experiencing a severe fertility crisis that is going to cause a problem, because we’ll have an older generation that just can’t get the support they were expecting, as there won’t be enough young people to provide services for them (Japan being among the worst in this regard). This has been pretty well discussed recently, and it goes beyond financial things like Social Security, as there *literally* won’t be enough young people to provide things for older people.

    But what happens if those older people still monopolize the labor of that small number of young people, such that the young don’t have the time and resources to comfortably raise children, and then the problem becomes worse within another generation or two? Do we enter a (literal) death spiral?

    1. But what happens if those older people still monopolize the labor of that small number of young people, such that the young don’t have the time and resources to comfortably raise children, and then the problem becomes worse within another generation or two? Do we enter a (literal) death spiral?

      I mentioned this in another comment, but I think this projected figure deserves being linked to on its own, since it really is quite striking.

      https://www.populationpyramid.net/china/2100/

      That is, I do not think the idea of China dropping to a “mere” 630 million people is that dramatic. It had the same population in 1960, and it was obviously lower still before that. It was also one of the most populous countries in the world with those numbers then, and it would stay that way in 2100 if that were the whole story.

      What makes things much more interesting is the part of the projection where ~25% of the country would be 80 and older by then – and around half the population would be older than 60. (This seems to be the most prominent example of such an “inverted pyramid” structure I could find – though places from South Korea to Ukraine, Belarus and (North) Macedonia are not expected to be far off in this regard.) This blog at times features quite interesting comments by those who know more economics than I do, and I now wonder if they would be in a better position to find serious works attempting to explain how such a societal structure could work, since I’m at a loss right now.

      1. Don’t those figures assume that TFR stays constant from now on? Which seems unlikely, if it has fallen by a factor of around 3 within living memory. Think about what happens if current trends continue.

        1. Yeah, UN population projections for a lot of countries are too optimistic, in that for countries with low fertility they generally assume plateauing or even a slight increase, even if there is no sign of that. A lot of developing countries are going to go through the same transition Japan did, just faster, further, and poorer. China is a good bet to become the go-to example, given its sheer size and the speed at which its population will start aging (especially if its birth numbers have been inflated in recent decades, as some suggest).

          Japan is in some ways a sobering reminder. The 1990s were very rough, but over the last 25 years it has actually seen nearly as fast growth as the US – in terms of GDP per working-age adult, that is; i.e. nearly stagnant in absolute terms.

          I do worry what happens when the world as a whole is a Japan (especially since many societies, e.g. Spain and Italy, haven’t handled their transition as smoothly), between a declining prime-age population and research suggesting older societies have less dynamic economies. Especially since I feel technology is slowing down quite a bit anyway outside computer-related stuff. It is now nearly as long since the last landing on the Moon as it was from the first heavier-than-air flight to the landing on the Moon! In a lot of ways, I think a household in 1970 would be more recognizable to us than a household of 1915 would have been to them.

          1. Concerning your last paragraph, I think that is a transient artefact of global development.

            The *median* household in 1970 lived in a pre-industrial house built of traditional materials and enjoyed *just barely* enough wealth to not go hungry after a bad harvest. The median household *today* lives a solidly industrial lifestyle, making several times the subsistence income.

            Looking at the world as a whole, the pace of economic growth and technological development 1970-2025 looks just fine.

        2. I really don’t think you can say that.

          https://population.un.org/wpp/definition-of-projection-scenarios

          In addition to the projection for the medium scenario based on probabilistic methods, which provide 80 and 95 per cent prediction intervals, the 2024 Revision includes thirteen different deterministic projection scenarios that convey the sensitivity of the medium scenario projection to changes in the underlying assumptions, and to explore the implications of alternative future scenarios of population change. The results for these additional projection scenarios are available from the Download Center (for the standard projection results in Excel file format as extra worksheets in each workbook file), or CSV file format. Probabilistic projection results are available as a separate set of files, see Download > Probabilistic Projections and Figures.

          Medium scenario projection: In projecting future levels of fertility, mortality and international migration, probabilistic methods were used to reflect the uncertainty of the projections based on the historical variability of changes in each variable. The method takes into account the past experience of each country, while also reflecting uncertainty about future changes based on the past experience of other countries under similar conditions. The medium scenario projection corresponds to the mean fertility and mortality and median net migration of several thousand distinct trajectories of each demographic component derived using the probabilistic model of the variability in changes over time. Prediction intervals reflect the spread in the distribution of outcomes across the projected trajectories and thus provide an assessment of the uncertainty inherent in the medium scenario projection.

          Fertility scenarios: Eight of those scenarios differ only with respect to the level of fertility, that is, they share the same assumptions made with respect to sex ratio at birth, mortality and international migration. The eight fertility scenarios are: low, medium, high, constant-fertility, instant-replacement-fertility, no fertility below age 18 years, accelerated decline of adolescent birth rate (ABR), accelerated decline of ABR with recovery. A comparison of the results from these eight scenarios allows an assessment of the effects that different fertility assumptions have on other demographic parameters. The high, low, constant-fertility and instant-replacement scenarios differ from the medium scenario only in the projected level of total fertility. In the high scenario, total fertility is projected to reach a fertility level that is 0.5 births above the total fertility in the medium scenario. In the low scenario, total fertility is projected to remain 0.5 births below the total fertility in the medium scenario. In the constant-fertility scenario, total fertility remains constant at the level estimated for 2024. In the instant-replacement scenario, fertility for each country is set to the level necessary to ensure a net reproduction rate of 1.0 starting in 2024 (after taking into account the survival up to reproductive ages). Fertility varies slightly over the projection period in such a way that the net reproduction rate always remains equal to one, thus ensuring the replacement of the population over the long run.

          For the 2024 revision, three new scenarios were introduced to consider the potential impact of changes in fertility rates among adolescent women and girls: (1) The no fertility below age 18 years scenario assumes that fertility rates at ages below 18 years will immediately fall to zero in 2024 and remain at zero throughout the remainder of the century. (2) The accelerated decline of adolescent birth rate (ABR) scenario considers that fertility rates at ages below 20 decline by 20 per cent annually, beginning in 2024, until the adolescent birth rate falls below 10 births per thousand women aged 15 to 19 years. (3) The accelerated decline of ABR with recovery scenario assumes the same accelerated fertility decline as in the second scenario, but also that half of the reduction in fertility among women and girls younger than 20 years is recovered once those cohorts have aged 10 years (i.e., half of the reduced fertility among women aged 17 is recovered 10 years later among women aged 27).

          I’ll also note that, quoting the main Methodology page, the raw data for this 2024 revision comes from:

          1,910 population and housing censuses for 237 countries or areas; Information on births and deaths from civil registration and vital statistics systems for 169 countries or areas; 3,190 surveys, including demographic and health surveys, conducted in 237 countries or areas, among which 483 were administered in 2015 or later; Official statistics reported to the Demographic Yearbook of the United Nations, and to Eurostat (European Commission);

          In addition to the national data sources described above, the 2024 revision has considered international estimates from: Refugee statistics from the Office of the United Nations High Commissioner for Refugees (UNHCR); Estimated time series of adult HIV prevalence and coverage of antiretroviral treatment from the Joint United Nations Programme on HIV/AIDS (UNAIDS); Estimated time series of infant and under-five mortality from the United Nations Inter-agency Group for Child Mortality Estimation (UN-IGME); Estimates of international migration flows and stocks of foreign-born persons from the United Nations; Various other series of international estimates produced by international and regional organizations, and academic research institutions.

          Apologies to Our Gracious Host for taking up so much comment space with this quotation, but considering the multiple posts on here elaborating just how much people underestimate what it takes to be a professional historian, I hope he will understand the need for this demonstration that professional demographers employed by the UN are professional for a reason, and are most certainly not going to be stumped by the absolute basics of their field!
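
          (And to illustrate why those ±0.5-birth bands matter so much, here is a deliberately crude, single-number sketch of my own – assuming a replacement TFR of ~2.1 and ~30-year generations, nothing like the UN’s actual cohort-component method:)

```python
# Crude illustration (NOT the UN's method): how +/-0.5 births around a
# medium total fertility rate compounds across generations. Assumes a
# replacement TFR of ~2.1 and ~30-year generations.
REPLACEMENT = 2.1

def relative_cohort(tfr: float, generations: int = 3) -> float:
    """Each generation scales the birth cohort by tfr / replacement."""
    return (tfr / REPLACEMENT) ** generations

medium = 1.8  # hypothetical medium-scenario TFR
for label, tfr in [("low", medium - 0.5), ("medium", medium), ("high", medium + 0.5)]:
    print(f"{label:6s} (TFR {tfr:.1f}): relative cohort after ~90 years = "
          f"{relative_cohort(tfr):.2f}")
```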

      2. @YARD,

        North Macedonia, interestingly, may be the first country in Europe to lose its demographic majority, as fertility rates among “Macedonians” (I guess some linguists don’t consider them to be separate from Bulgarians) have declined faster than among Albanians (though I think both groups are below replacement now).

        In terms of the elderly, I do think that with health improvements you’re likely to see people actively contributing to the economy (and/or being required to) until more like 70 rather than 60, so we’re talking about 10 years of dependency rather than 20. On a slightly more morbid note, I also think you will see things like “Medical Assistance in Dying” becoming more acceptable and widely practiced – especially in places like Japan and India, which subscribe to non-Abrahamic religions that don’t consider life to be sacred in the same sense, but also in the US and Europe as they de-Christianize.

        1. About that last point, a look at the countries which have legalized assisted suicide and/or euthanasia (apparently Australia, Austria, Belgium, Canada, Luxembourg, the Netherlands, New Zealand, Portugal, Spain and Switzerland) suggests that a common denominator is the proportion of religiously unaffiliated reaching ~40%, with Portugal and its 80% religiosity as the only big exception. (And perhaps Italy and Ecuador, with their ~16% and ~7% irreligiosity, if we are counting them in this column, in spite of their governments seemingly being unwilling to formalize the decisions of their supreme courts on the issue. Germany and Estonia are similar, but they also have the same ~40% fraction of the unaffiliated.)

          On the other hand, all the key East Asian countries have similarly high fractions already, yet none have formally moved in this direction. For that matter, neither has the Czech Republic – quite possibly the least religious country in Europe. I suspect that reasonably high trust in government/social institutions (i.e. ~50% or higher) is likely another factor required for a formal move in this direction, as people have to believe the state/hospital admin would have their best interests in mind and not just kill them or their loved ones to save costs as soon as they can get away with it. At first, I was thinking that this theory wouldn’t explain the reluctance in Japan, often (perhaps stereotypically) known for such trust – but apparently, a range of European countries actually score higher on this metric.

          https://worldpopulationreview.com/country-rankings/trust-in-government-by-country

          1. I’m not an expert on Japanese culture, but I can easily imagine it being the kind of place where no one wants to be the person to suggest physician-assisted suicide to an elderly person.

  11. I do think a lot about how this model changes in the stereotypical pseudo-historical fantasy setting with magical healing but few signs of greatly increased agricultural production (except insofar as the creators have simply forgotten about the need for farmland near their cities). Say you assume medicine roughly equivalent to that of the mid-1950s US (which I use because that’s when my parents were born). Maybe vaccination/inoculation isn’t as common, but this is made up for by better treatments for viral infections. What happens?

    Probably population density doesn’t massively increase. Maybe more marginal land comes under cultivation, but if people don’t voluntarily limit family size – and by hypothesis magic’s effect on food production is at most a modest boost – at some point you *do* hit the Malthusian limit. Better treatment for infectious disease can raise how far malnutrition can weaken your immune system before you are more likely than not to die of infection, but at some point starvation kills you directly.

    Probably average number of people living under one roof is higher, but not massively so. Peak number of related individuals under one roof might be six? Maybe seven? But I’m not even sure about seven. Might depend a bit on the extent of famine-related child mortality.

    Going from 50% of your population under 15 to 20-25% of your population under 15 probably has interesting effects though. A lot of education probably still comes in the form of apprenticeships but on the margin kids probably have a lot more time for stuff like “learning to read even though strictly speaking you don’t need it to be a successful farmer”.

    1. Presumably either (A) magical birth control exists but is not mentioned, with the result that the overall population is about the same (because it’s still constrained by the food supply), but with fewer births and fewer deaths. Demographics relating to age, nuptiality, etc. would look more like the modern world: basically an otherwise modern society practicing agrarianism. Most pseudo-historical fantasy settings seem to meet this description.

      Or (B) the number of births is the same, but people die of natural causes a lot less, meaning a much greater incidence of violent conflict as way-too-many healthy adults fight over limited food and farmland. This would explain why in every fantasy video game the population is mostly bandits.

      1. A few fantasy writers mention magical birth control – Mercedes Lackey, for example, notes that the Healers in her Valdemar series can make contraceptive medicines.

        And there were quite a few claimed birth control methods in real-life antiquity – some of which actually worked without too many side effects (unless your name was Onan), some of which were extremely dangerous, and some of which were total baloney.

        1. Besides the demographic implications, these can also help justify societies with less rigid gender roles and double standards about sex.

          1. I’d say child mortality is a big factor too. If your society needs a high birth rate to maintain population, there’s going to be pressure to push women into motherhood.

      2. I would imagine a world with access to cheap and effective magical birth control would also lead to improved social status for women, just as the birth control in our world led to the sexual revolution…a pretty good explanation for those medieval fantasy settings that inexplicably have modern social norms?

        1. Ready access to ‘magic’ for everyday purposes, then low infant mortality, therefore much less or no patriarchy. What then imposes the limit? Few fantasy writers think this through consistently (a lot seem enamoured of mud, blood and poverty, even where this is unnecessary).

        2. I think women’s improved social status stems from early feminism and from the fact that men died a lot in World Wars I and II (and women didn’t die in childbirth as much as previously), giving women the advantage of numbers.
          With patriarchy still firmly in place, women wouldn’t have been able to get the contraceptive pill without their husbands’ knowledge and permission.

          We observe a severe backlash against feminism right now, which makes sense seeing as the generation whose males died in World War II has pretty much died out by now.

          That said, with healing magic, women at least wouldn’t have the disadvantage in numbers they’d otherwise certainly have if expected to fight in wars in addition to having children.

          It would depend on whether men want to / are able to hoard the magic all to themselves. (If you look at Afghanistan, where men deny women access to modern medicine … I think maternal mortality will soon be at medieval levels, perhaps even worse, exacerbated by widely accepted rape of little girls, malnutrition, and Vitamin D deficiency, among other things.)

          Many a fantasy RPG explains its modern social norms with the fact that, in addition to an unending supply of 100% reliable contraceptive plants, women have the same fighting ability men have, and can get pregnant and survive pregnancy *additionally*. (Which is somewhat unrealistic, seeing as men’s massive advantage in almost all sports from ice skating to weightlifting comes from their narrow hips and from very cost-intensive muscle mass that would likely mean they’d starve if confronted with a pregnancy and a famine at the same time. And women have a lot of muscle mass in the uterus – very strong muscle, which is sadly completely unusable for punching bad guys in the face; they’d have to have the arm muscle for fighting in addition to the uterus!)
          Playing a female character just wouldn’t be much fun in those heavily fight-based games otherwise, so that’s totally understandable and I wouldn’t dream of complaining about the lack of realism.

          Of course, once you have women who can do magic, which is not impacted by a physical ability to survive pregnancy and birth, that can be used to explain any and all changes you make to the social structure.

          I wouldn’t be able to name any fantasy universe that has completely thought through the implications of women having magic, though – the fantasy settings with their inexplicably modern social norms are just that – modern. They imply a very recent patriarchal past instead of a world where patriarchy was never a thing.
          It is, I suppose, very hard to imagine such a world.

          1. Great points!

            And yes, I’d be interested to read a fantasy that took the concept of a world without patriarchy seriously. I actually think – speaking as someone trained as a biologist – that the invention of contraception was really one of the biggest changes in human history, and will turn out to be one of the most lasting; arguably more important than the development of capitalism, communism, liberal democracy, most of our religions, or almost anything else.

          2. All of that and more. It is hard, and harder when one considers the inevitable variations due to culture. I’ve had a stab at it in my own writing, but keep thinking of more angles.

          3. Feminism and WW1, yes — but for slightly different reasons. As explained here, earlier the household was the natural unit of production, and it was not particularly artificial to identify it with the paterfamilias. Work was also strongly gendered due to the sheer amount of time that had to be spent spinning thread (see the post on this blog) and the upper-body strength needed to till the soil. With urbanization and industrialization, spinning was taken over by machines (and the productivity of farming increased), allowing women to work in factories and clerical jobs — largely the same jobs as men did — while the constituent unit of production became the individual, not the household. Thus feminism predates WW1, though the latter was a great boost to it: women working in factories making artillery shells (or just generally replacing in various jobs the men who went to the front) were now crucial to the war effort.

            (“Numerical advantage” is not, in this sense, something you necessarily want! Pass out some number of green and yellow sashes to a group of people, offer $100 to yellow+green “couples” but nothing to singles, allow people to split the $100 however they want. If there are more yellows than greens, some pairless yellows will approach existing couples and offer a more generous split to the green if they switch. Soon the equilibrium becomes that the green partners get nearly all of the $100, the yellows next to nothing.)
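
            (The undercutting dynamic in that sash game is easy to sketch in a few lines of code – this is a toy model of my own, not a rigorous matching-market treatment:)

```python
# Toy model of the sash game above: while yellows outnumber greens, an
# unmatched yellow can always poach a matched green by offering a slightly
# better split, so competition bids the greens' share up toward the pot.
def green_share_at_equilibrium(n_yellow: int, n_green: int,
                               pot: float = 100.0, step: float = 1.0) -> float:
    share = pot / 2                        # start from an even split
    while n_yellow > n_green and share + step < pot:
        share += step                      # a pairless yellow undercuts
    return share

print(green_share_at_equilibrium(12, 10))  # -> 99.0: greens get nearly all
```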

            One way to have nonpatriarchy (for certain values…) is to limit population density by something other than food availability (e.g. tropical diseases). People can use more land and less labor, i.e. food is cheap, single women can mostly feed themselves and their children with little help from men. (The men collectively clear patches of forest for shifting cultivation hoe agriculture.) This can accommodate a mixture of relationship types according to individual taste: serial monogamy, polygyny, and, err, promiscuity. In fact, you can crank up the social complexity and population density, and the combination of large (I would say multinuclear, but then they aren’t quite that) households with matrilineal descent and (unusual sense!) matrilocal residence is a pattern that exists and works. (You need a large household to even out birth sex ratio noise. In small households, not every man will have a sister and nieces/nephews to take care of, instead he will use his resources to pursue e.g. women who don’t have a brother, collapsing the system into the patriarchal setup.)

          4. In fact, you can crank up the social complexity and population density, and the combination of large (I would say multinuclear, but then they aren’t quite that) households with matrilineal descent and (unusual sense!) matrilocal residence is a pattern that exists and works.

            @BasilMarte,

            I’ve read about the Mosuo, yes. Before I clicked on the link, I thought you might be thinking of the Nairs of Kerala, who also had a matrilineal society (although their sexual morality, though quite permissive in some ways, would probably not be exactly to the taste of American progressives). As you allude to, they also had the expectation that men invest resources in their sister’s offspring rather than in their own. Which, in a highly, errr, promiscuous society makes evolutionary sense: you can be absolutely certain that your sister’s children are your genetic relatives, while you can’t be sure about your sexual partner’s children.

            https://caravanmagazine.in/vantage/what-end-kerala-matrilineal-society

          5. Depends on what the magic does. To really escape patriarchy you need more than contraception. How about male lactation, the possibility of gender change, magic providing strength and speed? I think people would still mostly pair off, but other bonds would be more important than they are for moderns, as children need continuity.

          6. “To really escape patriarchy”

            One idea: magic potential follows the actual magical power of the mother, in a Lamarckian way. (Maybe the mother’s magic field influences the fetus.) Powerful women have powerful children. ‘Actual power’ means you can’t just use untrained women as breeding stock.

            Might get you avunculate matrilineality and matrilocality as a standard social form.

          7. Why assume men would oppose contraception? With better child survival rates they’d have every reason to want to limit the size of the family, and they’d certainly want to prevent unfortunate evidence of their philandering!

          8. @Hector, Graydon Saunders’ “Commonweal” series might be interesting; here’s a review of the first book: https://jamesdavisnicoll.com/review/a-world-of-ancient-magics

            (To give a flavour of the series’ stance on a) patriarchy, b) infodumping: it can be difficult to know for sure what gender a character is as gendered pronouns are used as intimacy markers – and you don’t find that out until the second book because the protagonist of the first book is single.)

      3. Either that or it’s the whole ‘everyone identifies with the elite’ situation all over again.

        What we see in fantasy worlds is a snapshot of the upper classes, while the overwhelming agricultural majority (who live without regular access to fancy stuff like extremely effective healing magic) give birth, live, and die in precisely the same way as they do in the real world.

        The story just never acknowledges that they exist (see the utter lack of farmland surrounding cities as a prime example of this erasure).

    2. That’s going to depend on how common magical healing is. Fantastically rare? No demographic impact (oh sure, the King’s family might have better survival rates, but that’s it).

      Rare, but not unheard of? The nobility’s children will probably have a vastly higher survival rate, and their elders might live much longer, but anyone below them probably isn’t impacted much. Even if you have wandering clerics who visit each village once a month to impart a bit of healing, their impact is probably lost in the noise.

      Magical healing would probably need to be about as common as modern medicine is today (i.e. 70%+ of the population in a wealthy society has access to magical healing on a regular basis and, most importantly *on demand*) for it to really impact anything.

      1. Healing, yes. But if magic were applied to food storage or clean water or crop enhancement or such-like, the society that did so would have a significant edge.

    3. If this pseudo-medieval society had lower infant mortality rates but comparable birth rates, the resulting higher population density would place more pressure on food production and other factors. This would mean a lot more war and conflict: both a significant increase in the baseline level of banditry and constant low-level skirmishing between neighbors, and probably also apocalyptic magic-augmented mega-wars and widespread social collapse roughly once a saeculum, as populations hit a hard limit and get culled back down again.

      Which, come to think of it, maps pretty well onto the average RPG setting. Lots of bandits, lots of wild space, lots of ruins and abandoned treasure everywhere. A world that is permanently either post-apocalyptic or barely-pre-apocalyptic without ever truly stabilizing.

      1. I’ve sometimes thought that in a world where magic worked, humanity would still be hunter-gatherers. Many cave paintings and petroglyphs are interpreted as magical invocations for successful hunts. If those worked, I could easily see agriculture never developing. I can also imagine tribes getting stuck in a boom-and-bust population cycle, where their numbers increase until they eat all the game in reach of their spells, then crash until the wildlife recovers.

        1. If women had magic, and men therefore couldn’t oppress them, those hunter-gatherer societies would not have population explosions. Very few women want more children than replacement level, and those would be counterbalanced by women who don’t want any children.

          Factor that in, and you have the hunter-gatherer elf societies that many fantasy settings feature.

          1. Very few women want more children than replacement level,

            I don’t think that’s quite true *yet*, but you can certainly argue that desires for large families are a persistent legacy of Abrahamic religions and other patriarchal institutions, and that they will go away with time (on the large scale, I mean; obviously there are and always will be a small minority of women who just really want tons of kids for personal/affective reasons). And that those attitudes (again, at the large-scale cultural level) would never have evolved in the first place in a world without patriarchalism.

          2. “Very few women want more children than replacement level, and those would be counterbalanced by women who don’t want any children.”

            That should lead you to expect sub-replacement fertility.

            It’s quite a justification of patriarchy, to say that humanity will inevitably go extinct without it.

          3. “It’s quite a justification of patriarchy, to say that humanity will inevitably go extinct without it.”

            But low female and infant mortality combined with low fertility is pretty much the forager pattern. They spread over the world rather than going extinct.

            Patriarchy in the sense that men were almost always higher in the pecking order, yes. But control of fertility was pretty much a female thing.

          4. “But low female and infant mortality, low fertility is pretty much the forager pattern.”

            Why do you say low mortality? My impression is that forager infant mortality is still quite high. Fertility is potentially lower with a typical 4-year spacing due to nursing and mobility, vs. a potentially 2-year spacing for sedentary mothers. But as other comments have said, farmer mothers don’t always pump out as many babies as they can.

          5. @Mindstalko I think the argument is that the hunter-gatherer mortality rates we have access to through modern anthropological studies are heavily skewed by the fact that modern hunter-gatherers are entirely pushed onto *very* marginal lands by agriculturalists, and are affected by disease transmission from agricultural populations and other such factors. As such, they’re not representative of hunter-gatherer populations in the past.

            I don’t personally know the studies that attempt to untangle all of that (presumably using archaeological data), but maybe Peter has them to hand.

          6. “But low female and infant mortality, low fertility is pretty much the forager pattern.”

            But not sub-replacement fertility. If no one wants above-replacement fertility, the average has to be sub-replacement.
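
            (The arithmetic, spelled out with purely illustrative numbers:)

```python
# If replacement (~2.1) is the *ceiling* of desired family size and even a
# small fraction of women want zero children, the mean is sub-replacement.
desired = [2.1] * 8 + [0.0] * 2     # 80% want replacement, 20% want none
print(sum(desired) / len(desired))  # -> 1.68, below the ~2.1 needed
```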

          7. As Ynneadwriath said, but, on reflection, qualified. The great killers of children are dysentery and infectious diseases. Most of these require either static populations or considerable numbers of hosts to remain viable (cholera needs a few tens of thousands). But the exception is Africa, where we evolved and the diseases evolved with us. E.g., malaria kills a lot of infants, and does not discriminate between foragers and farmers. Elsewhere, we are basically a feral species.

            Parasites are another concern. Again, Africa has most, but there are some elsewhere. Some transfer from livestock, others from wild animals.

            The best anthropological evidence on undisturbed forager life is from Australia (often from convicts who joined aboriginal groups). It may not be entirely typical.

    4. Yeah, nicer ‘medieval’ fantasy tends to have population control (implicit or explicit) and low mortality — I think more from a simple absence (or low prevalence) of disease rather than from explicit magic or blessings. (One could imagine an early blessing doing the work of childhood vaccination, but don’t think I’ve ever seen that.) And thus ‘modern’ demographics and (sometimes) gender roles — ‘modern but agrarian’ as someone said.

      I feel a common depiction and plausible consequence is that most women still tend to be mothers (and peasants/farmers, like most people), but they have more spare time after the 2-4 children have passed early childhood, and there’s more tolerance for women who want to do something else.

      Tolkien elves have their own biology (naturally limited birth rates and immunity to disease.) Humans, no clear explanation, but Gondor is said to be able to cure almost any disease, and there’s also a lingering blessing over Numenorean populations. The Shire, I dunno. Some fans think the dead kids are simply left out of the family trees, but there’s no mention of dead kids anywhere.

      Jo Walton’s _The King’s Peace_ has land gods which provide ubiquitous magic for contraception and healing (especially of wounds; you can even re-attach severed limbs if you have the weapon that dealt the wound.) She has a higher rate of women in warfare, and the magic was in large part to justify that.

      Twelve Kingdoms is pretty out there, but people grow on trees by prayer and divine grace, and disease mostly happens when the emperor has offended Heaven. Also Heaven chooses emperors and empresses roughly equally. So pretty different society makes sense, even if it’s still rooted in wet rice farming.

      Some settings have magic that makes _growing_ crops easier — optimal yield magic (and implicit suppression of weeds and pests), weather prediction, stuff like that. But you still have to bring the harvest in the old fashioned way, and plow and sow the land. I’ve never seen a setting that went “no guns, but let’s have horse-drawn harvesters”, even though I suspect the Romans could have built harvesters if you gave them one to copy.

      Lois Bujold is more biology-aware than many SF authors, yet her Five Gods universe seems to fall into “nice fantasy” mode without clear explanation (the gods are real, but have spiritual rather than material power.) Her Lakewalkers (basically Tolkien’s Rangers with well-defined powers), OTOH, are fully covered, from universal birth and healing magic to a magic crop that minimizes food production labor (but needs some magic to germinate, so the mundane farmers can’t benefit from it.)

      Pern is a pre-modern agrarian society without any relevant magic and hardly any war, but seems to be in the nice-fantasy mode too. In this case it’s possible the colonists left most diseases behind, or benefit from some past genetic engineering.

      Bujold’s Barrayar, another lost colony, probably had realistic population dynamics before being contacted again, but we don’t see it that far back.

      1. Barrayar traditionally had an explicit policy of infanticide of any “mutant”* children. And I’d suspect that what qualifies as a mutation might vary a lot depending on local economic conditions.

        One story in the series, “The Flowers of Vashnoi”, deals with an outcast woman who collects and raises the babies abandoned at the edge of the fallout zone she lives in.

        * In practice any kind of deformity, regardless of cause.

      2. Bujold does mention “the red dysentery” and other gritty bits of pre-industrial life on Barrayar. Which of the many Twelve Kingdoms books are you referring to?

        My take is that general, mostly utilitarian magic, with the crucial exception of bulk transport, leads to rough gender equality, less hierarchy (medieval historian Chris Wickham points out that in better times the peasants have more advantage over the lords, because they can keep a larger share of the harvest), less control of sexuality, a less sharp divide between high and low cultures. Not sure how urban-rural relations work out. Towns would not be hungry for immigrants, but specialist crafts tend to be where people are concentrated.

        1. “Which of the many Twelve Kingdoms books”

          AFAIK there is only one series called “Twelve Kingdoms”. But I refer to the books by Fuyumi Ono.

      3. Behold: the Roman harvester. Probably what they needed even more than that was a heavy plow (such as the caruca) that turned the land, for the wetter provinces north of the Alps. The ard is fine for dry Mediterranean areas. Other minimal inventions: spinning wheel, (hemp) paper, pound lock for canals, horse-drawn buses for huge cities (100k+). Unfortunately, another key invention — the threshing machine — seems to have been technically difficult. Even a version that isn’t overcomplicated by attempting to imitate flailing, but is just a revolving cylinder, is simultaneously subject to enough force and precision requirements to probably require steel. It’s not that the Romans couldn’t build one — they had the Antikythera mechanism! — but the craftsmen capable of building them were too valuable to spend their time building agricultural implements.

      4. Bujold’s Barrayar, another lost colony, probably had realistic population dynamics before being contacted again, but we don’t see it that far back.

        I recall Komarr had a passage where Ekaterin and the Professora are hiding from the conspirators in a secure room and, to make the time pass, discuss various things, including the Barrayarans apparently nostalgic for the “Isolation”, some of whom gather to stage reenactments. Both agree those reenactors never seem to roleplay death in childbirth or from an easily preventable disease.

        Unfortunately, people are already forgetting that point in the real world.

        https://www.cidrap.umn.edu/measles/us-measles-cases-reach-new-post-elimination-high

        The US Centers for Disease Control and Prevention (CDC) today reported 21 more measles cases from the past week, pushing the year’s total above a record set in 2019 for the most cases since the disease was eliminated in the United States in 2000. So far this year, 1,288 cases have been reported from 39 states, and 88% have been part of 27 outbreaks. Among confirmed cases, 92% occurred in people who are unvaccinated or have unknown vaccination status…Measles activity has increased globally, including in North America, where the virus is spreading in communities with large numbers of unvaccinated people—including Mennonite communities linked to large outbreaks in the United States, Canada, and Mexico. Canada has reported 3,703 measles cases this year, the most since it eliminated the disease in 1998.

  12. The part about grief reminded me of an inscription on the wall of San Pietro in Vincoli in Rome. Under a formal epitaph carved professionally with uniform letters and refined words, one can see a much older engraving of meandering letters without proper gaps between words, carved by an unsteady hand – a message for the ages to come, in plain Latin with a few misspellings: “To the worthy, innocent, peaceful Aselia, their most sweet daughter, her parents made (this epitaph), who lived nine years, eleven months and seven days”, along with a simple engraving of two doves and Chi-Rho symbols. The epitaph is dated to the 4th-5th century.
    In a baroque church where some popes are buried (if I’m not mistaken), among grandiose artworks, many of which are about death (e.g. I remember a relief showing the Grim Reaper reaching out, scythe in hand), the contrast of this humble inscription is a striking reminder of the humanity of people of the past. One can imagine the hands of the grieving parents shaking as they carved these simple words onto a wall, trying to create a memento for poor Aselia.

  13. Indeed, it was possible for this to happen long enough that the great global Malthusian Crisis simply…never happened. And never will! Instead, we now produce enough food to feed the world and indeed to feed the expected peak human population of c. 10-11bn people on the farmland we have.

    I don’t think it’s at all safe to assume that the Malthusian crisis will never happen. From a 2015 review paper:

    However, this shift in funding may be myopic in the face of current global population and food consumption trends. Notably, the global population is expected to increase from just over 7 billion today to 9.5 billion by 2050, a 35% increase (USCB, 2015). An increasing proportion of the population will be urban, resulting in diets shifting increasingly from staples to processed foods, fortified with more meat and dairy products, which require large amounts of primary foodstuffs to produce. For example, 10 kg of feed is required to produce 1 kg live cattle (Smil, 2000). Thus, an increase in urban population will result in an increased demand for high-quality animal products, requiring an increase in crop production that is substantially faster than that estimated based solely on the projected population growth. This trend is expected to continue, and it is predicted that the world will need 85% more primary foodstuffs by 2050, relative to 2013 (Ray et al., 2013).

    So is our current rate of increase in crop yields sufficient to meet this rising demand? It doesn’t seem to be the case. If current rates of crop yield improvement per hectare are simply maintained into the future, supply will fall seriously below demand by 2050 (Figure 1; Ray et al., 2013). The resulting rise in global food prices may have the largest impact in the poorest tropical countries, which have the highest population increases. A compounding factor is that improvement in subsistence crops in these tropical countries is even slower than in our four leading crops. For example, the global average increase in yield per hectare of cassava, a major staple for sub-Saharan Africa, between 1960 and 2010 was 63%. This is less than half of the 171% increase for wheat over the same period (Figure 1). The problem is further compounded by the fact that the rate of improvement in yield of even our major crops in some areas of the globe is stagnating or even moving into reverse (Long, 2014; Long and Ort, 2010; Ray et al., 2012). Indeed, China, India, and Indonesia are the world’s largest producers of rice, where yields per hectare across these countries increased by an average of 36% between 1970 and 1980 but only by 7% between 2000 and 2010 (Long, 2014). When faced with such numbers, one may rightfully ask: why are yield improvements stagnating?

    https://www.cell.com/cell/fulltext/S0092-8674(15)00306-2?_returnURL=http://linkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867415003062%3Fshowall%3Dtrue&mobileUi=0

    Also worth pointing out that global warming, potential phosphorus shortages and other environmental issues may reduce agricultural production *separately* from the issues that this review is talking about.
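
    A back-of-envelope sketch of the amplification mechanism the review describes, using the quoted Smil 10:1 figure (the diet-shift quantities are made up purely for illustration):

```python
# Why diet shifts amplify primary-foodstuff demand: swapping a small amount
# of directly eaten staple for grain-fed beef multiplies the grain required.
FEED_PER_KG_BEEF = 10.0  # kg feed per kg live cattle (Smil 2000, as quoted)

def primary_foodstuffs(staple_kg: float, beef_kg: float) -> float:
    return staple_kg + beef_kg * FEED_PER_KG_BEEF

before = primary_foodstuffs(200, 0)  # all-staple diet: 200 kg/yr
after = primary_foodstuffs(195, 5)   # replace 5 kg of staple with beef
print(before, after, f"+{after / before - 1:.1%}")  # 200.0 245.0 +22.5%
```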

    1. This trend is expected to continue, and it is predicted that the world will need 85% more primary foodstuffs by 2050, relative to 2013

      Worth noting that this is an older figure. More recent research arrives at rather lower levels.

      https://www.nature.com/articles/s43016-021-00322-9

      Quantified global scenarios and projections are used to assess long-term future global food security under a range of socio-economic and climate change scenarios. Here, we conducted a systematic literature review and meta-analysis to assess the range of future global food security projections to 2050. We reviewed 57 global food security projection and quantitative scenario studies that have been published in the past two decades and discussed the methods, underlying drivers, indicators and projections. Across five representative scenarios that span divergent but plausible socio-economic futures, the total global food demand is expected to increase by 35% to 56% between 2010 and 2050, while population at risk of hunger is expected to change by −91% to +8% over the same period. If climate change is taken into account, the ranges change slightly (+30% to +62% for total food demand and −91% to +30% for population at risk of hunger) but with no statistical differences overall. The results of our review can be used to benchmark new global food security projections and quantitative scenario studies and inform policy analysis and the public debate on the future of food.

      1. Six years more recent, but, yes. What were the assumptions that they changed, for them to arrive at the lower figure? Because it seems pretty clearly correct to expect that in 2050 people are going to be eating more animal products, per capita, than they do today, and that therefore food demand will rise more than population itself does.

        1. But if food demand exceeds supply, wouldn’t people cut down on expensive animal products and eat more cheap things like beans on toast?

          1. You try getting half of the internet to consider beans on toast as anything other than an abomination.

            Clarification: I’m British. I’m aware myself that beans on toast is just fine, especially with Lea and Perrins (and people say the Brits don’t like flavour!).

        2. Well, I was able to find the full text in a public repository – and it certainly accounts for dietary changes.*

          https://edepot.wur.nl/551559

          We selected 57 global food security projection studies for further review…Nearly all studies include assumptions on future population (n = 57) and income growth (n = 52), which are key drivers of food demand, and assumptions on technical change (including total factor productivity growth, crop yield increase and adoption of advanced inputs) (n = 52), which is the main driver of food supply (Fig. 2c). Other drivers that are covered by more than half of the studies include land availability (such as protected areas and land degradation) (n = 41), diet change (n = 39), trade (n = 37) and climate change (n = 32).

          …Nearly all SSP scenarios project an increase in per capita and global food consumption in comparison with the 2010 levels, but the relative sizes of these increases differ. In future worlds that are characterized by fragmentation (SSP3) and inequality (SSP4), per capita consumption will increase by 4% to 7%, while in scenarios that assume sustainability (SSP1), business-as-usual development (SSP2) and rapid growth (SSP5), the increase is 12% to 16%. Taking population growth into account, total food consumption increase is the lowest in SSP1 (+41%) and the highest in SSP2 (+51%).

          And technically, the 85% figure is not actually only 6 years older, since it is not truly original to that Cell review, but is cited to a 2013 paper. Moreover, that paper was in PLOS One (generally decent, yet still distinctly “second-tier” journal compared to Nature), and it wasn’t the primary source for the figure either. Instead, it cites a 2011 paper (Tilman et al.) to say “the only peer-reviewed estimate [3] suggests that crop demand may increase by 100%–110% between 2005 and 2050”. (85% is apparently an adjustment made by the review for starting from 2010-2015 rather than 2005.)

          Either way, that is a difference of a full decade. Notably, the Tilman estimate, and the discrepancy, is discussed in the full meta-analysis.

          Comparison with the prevailing discourse on future global food demand.

          The expected increase in food production and associated impacts on land use change, biodiversity and climate change depend heavily on projections of global food demand and consumption. The most cited figure, originating from an FAO briefing paper, states that world food production needs to increase by 70% to feed the world population in 2050. Although this number was reduced to 60% in a revision of the original study, it continues to be used as a reference point by companies and scientific papers. Another widely cited paper is Tilman et al., who present a much higher increase in global food demand of 100–110% between 2005 and 2050.

          …The projections in Tilman et al. cannot easily be compared with most of the studies in our review because of differences in approach. In contrast to most other studies, which use diet projections, food consumption is approximated by total crop calories (including food and feed), resulting in much higher per capita projections. The study implicitly assumes that food and feed have the same relationship with income per capita. This deviates from most model studies, which assume an increase in feed-to-food conversion efficiency rates and hence a lower relative future demand for feed. Not accounting for potential efficiency improvements in the livestock sector might explain why the food consumption projections in Tilman et al. are nearly twice as large as those in most other studies.

          Now, I suppose being very pessimistic about the future of animal husbandry could make the Tilman figure more accurate…but considering that the greatest space for efficiency improvements generally lies in implementing already existing “first-world” techniques in places which are currently too poor to afford them, rather than in inventing something “sci-fi” that may never exist, I find that somewhat unlikely.

          *I should note that unlike the first quoted part, the full figure includes the 95% confidence interval, so 41% to 51% becomes 35% to 56%. Having said that, the lower figure is arguably not relevant, since the SSP1 is the one scenario which actually does assume people on the whole would start eating (considerably) less meat per capita than today. This, combined with the fact that stopping the warming below 2C is largely impossible outside of SSP1, is unfortunately one of the reasons why that temperature goal is mostly hypothetical nowadays.
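
          (For what it’s worth, the re-basing arithmetic is easy to reproduce – this is my reconstruction, not necessarily how the review actually did it:)

```python
# Annualize Tilman et al.'s ~105% increase over 2005-2050, then compound
# over 2013-2050 only (my reconstruction of the review's "adjustment").
total_growth = 2.05                    # +105% over 45 years
annual = total_growth ** (1 / 45)      # ~1.6%/yr
rebased = annual ** (2050 - 2013) - 1  # compound over 37 years
print(f"{annual - 1:.2%}/yr -> +{rebased:.0%} over 2013-2050")  # ~+80%
```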

          1. @YARD,

            Well, Tilman (and Long et al. in the 2015 paper) are both extremely esteemed people in their (separate) fields. And I think the importance of journal rank is sometimes overstated (I’ve found some really interesting results in obscure places like the “Czech Journal of Food Sciences”; with modern search engines it isn’t that hard). But I’ll actually agree with you that assuming no improvements in the feed-to-food ratio is a weak assumption I definitely wouldn’t make. I also think that “meat” isn’t a homogeneous category, and the composition of what goes into that category matters, too.

            To take one example, I’m sure demand for non-vegetarian foods in South Asia will increase a lot, and while I have no time for the lazy assumption that most Hindus are vegetarian (they are not, and most would happily eat more meat if they could afford it), it is certainly true that most Hindus don’t eat beef (and pork is mostly uncommon, probably because of Muslim influence), and I don’t expect that to change. I would expect the increased food demand to come mostly in the form of poultry and fish, both of which have much better feed conversion ratios than ruminant mammals. So, even if the feed conversion ratio in poultry and fish is as good as it gets, and even if demand for animal products continues to go up, one can expect the overall feed conversion ratio at the global level to improve as South Asia develops.

            Also, even if food demand by 2050 may be hard to meet, any such quasi-Malthusian crisis would probably be temporary anyway. The human population will not stay at 11 billion forever; the expectation is (and if we are lucky, this will be borne out) that after peaking it will start to *decline*. Even if food constraints exist by 2050, I don’t think they will still be there in 2150 or 2250.

          2. @Hector

            Great points! In particular

            Human population will not stay at 11 billion forever, the expectation is (and if we are lucky, this will be borne out) that after that it will start to *decline*. Even if food constraints exist by 2050, I don’t think they will still be there in 2150 or 2250.

            It’s actually a fascinating thought that after 50-70 years, we might be seeing a complete inversion of the Malthusian framework. That is, instead of a race between the growth of human population and the growth of agricultural production, the race could be between the demographic decline of the former and the primarily climate-driven decline of the latter. (Though of course, limitations on phosphorus and perhaps even on mechanization/industrial distribution due to a potential shortage of rare earths, rubbers, etc. may also play a role.)

            While I have already pointed out that the apocalyptic narratives are really overblown for the next 30-50 years in particular, it might still be difficult to avoid a net decline in the agricultural output circa 22nd century as these effects accelerate without, say, “bulldozing the equivalent of the entire Amazon” (bulldozing all of the actual Amazon would backfire, since such a move would cause most of the rainfall over the area to disappear and wreck the soil.) The most ambitious projection I have seen yet went up to 2500; however, it had to sacrifice detail to do that and painted in fairly broad strokes.

            https://onlinelibrary.wiley.com/doi/10.1111/gcb.15871

            Our analyses suggest declines in suitable growth regions and shifts in where crops can be grown globally with climate change (Figure 4). By 2100 under RCP6.0, we project declines in land area suitable for crop growth of 2.3% (±6.1%) for staple tropical crops (cassava, rice, sweet potato, sorghum, taro, and yam) and 10.9% (±24.2%) for staple temperate crops (potato, soybean, wheat, and maize), averaged across crop growth-length calibrations…By 2500, declines in suitable regions for crop growth are projected to reach 14.9% (±16.5%) and 18.3% (±35.4%) for tropical and temperate crops, respectively…By contrast, if climate mitigation is assumed under RCP2.6, a decline of only 2.9% (±13.5%) is projected by 2500 for temperate crops, and an increase of 2.9% (±3.8%) is projected for tropical crops.

            Now, as I said, this is relatively simple – thus, it uses “suitable area”, which isn’t yields, and its results would probably be quite different if it could use proper crop models like the shorter-term papers do, instead of simply plugging in crop requirements from the FAO database. (They also fail to account for things like the salinization of coastal aquifers from sea level rise – which would likely be in the 5-10 meter range under the realistic scenarios by then.) On the other hand, it’s worth noting that RCP 6 is a relatively plausible outcome for a world where countries wage war and are too riven by conflict to ever properly decarbonize, so the scenes of 2500 that paper depicts (i.e. that the U.S. Midwest would become practically tropical, three-quarters of the Amazon would desertify, and much of India may require “stillsuit” equivalents to go outside during the hottest months) are probably worth keeping in mind.

            In that case, its 2500 figures would almost certainly mean lower total output than today (at least assuming some countries don’t go crazy wrecking their remaining wilderness – which they probably won’t if their own supply is sufficient and it’s only the exporters who are at risk), without potentially impossible crop varieties, etc. Yet, if the 2500 population had long since figured out a way to stay around replacement at some level lower than today’s (or at least lower than the projected 2050-2080 peak), it also shouldn’t matter quite as much as if those changes were happening to today’s population.

          1. Agreed, but they certainly won’t be happy about it (and if demand for meat drives up prices enough, they may not even be able to afford rice), and that kind of unhappiness can lead to wars, chaos and revolutions. In 2047, India (or its successor states, if the current state no longer exists with its current borders) will be celebrating 100 years of freedom from the British, and they are not going to want to be told “after 100 years, you still have to be eating rice and beans”.

    2. > I don’t think it’s at all safe to assume that the Malthusian crisis will never happen. From a 2015 review paper:

      > An increasing proportion of the population will be urban, resulting in diets shifting increasingly from staples to processed foods, fortified with more meat and dairy products, which require large amounts of primary foodstuffs to produce.

      Basic economics says that this sort of thing can’t cause a Malthusian crisis.

      If people are rich enough, they can afford to feed grain to cows and eat beef.

      But consider a world where the food supply is getting tight, and prices are going way up. It’s not a Malthusian crisis yet.

      In that world, beef, and other more luxury foods, are becoming fairly unaffordable.

      That is, there is a sliding scale of food availability. On one end, you’re eating whatever delicacy you feel like. On the other end, everyone starves. But in the middle, you’re eating whatever is cheapest, and paying half your income for it.

      (This being a modern context, a food shortage may have people eating all sorts of synthetic biochemical creations)

      There is a reason that pre-modern societies didn’t starve via feeding all their grain to cows. And it wasn’t that those people didn’t like the taste of beef. It’s that they couldn’t afford it.

      Rising demand means people are rich enough to afford beef, and choose to spend their income on beef not something else. This implies that beef is affordable. Which, if the cows are fed grain, implies that grain is very cheap.

      1. “In that world, beef, and other more luxury foods, are becoming fairly unaffordable. ”

        Except that market incomes vary widely across the world. Given rising grain prices, beef might continue to be affordable for developed nation people, but not to people in the poorest countries. People could starve because they can’t outbid beef consumers for grain, even with their lives on the line.

        1. I should note that this entire subthread seemingly presupposes that cattle mostly eat grain – which is far from the truth.

          https://www.sciencedirect.com/science/article/abs/pii/S2211912416300013

          Beef production, in particular, is often criticized for its very high consumption of grain, with cited figures varying between 6 kg and 20 kg of grain per kg of beef produced. The upper bound of this range is, however, based on feedlot beef production, which accounts for 7% of global beef output according to Gerber et al. (2015) and FAO (2009), and 13% according to this analysis. It does not apply to the other forms of beef production that produce the remaining 87–93% of beef…86% of the global livestock feed intake in dry matter consists of feed materials that are not currently edible for humans.

          Granted, at the same time,

          Livestock consume one third of global cereal production and use about 40% of global arable land…Livestock use 2 billion ha of grasslands, of which about 700 million could be used as cropland.

          So, on one hand, the situation where “people could starve because they can’t outbid beef consumers for grain” is arguably already happening: as the paper points out, ~800 million people were malnourished in 2010s in spite of there being up to 700 million hectares that could have theoretically been used to feed them instead of livestock. Following that logic, you could attribute many of the modern famine deaths to such unwillingness.

          On the other hand, it is also unclear if this situation would necessarily worsen in the future: again, according to the previously cited review, the total number of malnourished people (let alone their fraction of a global population) is generally expected to fall by a few hundred million people over the next 25 years in all but one scenario (SSP3) – or rather, a subset of that particular scenario. (Though to be fair, the pandemic and its aftermath did seem to make the world resemble that subset a lot more than the other scenarios – at least in this specific regard, as say, the global population still doesn’t seem to be on track to hit its projected 12 billion in 75 years.)

          1. @YARD,

            Yes that’s correct, cattle don’t *mostly* eat grain, nor do sheep and goats. Poultry and pigs are largely/mostly fed on grain though. As are farmed fish like tilapia and catfish (although, being cold blooded, both of them have much better feed conversion ratios than a chicken or a pig would).

        2. @Mindstalko,

          This, exactly. Corn-producing countries in Africa, which really need export income, might face the choice between exporting corn for use as livestock feed in China or maybe Europe (America produces more than enough corn that I doubt they’d be exporting it there), versus sending it to victims of food shortages somewhere else.

          Also, I did say ‘quasi-Malthusian’: food shortages and insufficient food production could lead to other issues (lower quality of life, poorer health, revolutions and other forms of political chaos) that make a world population of 11 billion unsustainable even if actual famine doesn’t happen.

          Though as I said, the expectation is the world won’t *stay* at 10-11 billion for very long, it should eventually start decreasing.

  14. Indeed, it was possible for this to happen long enough that the great global Malthusian Crisis simply…never happened. And never will! Instead, we now produce enough food to feed the world and indeed to feed the expected peak human population of c. 10-11bn people on the farmland we have. The demographic transition means that the human population will likely effectively peak at that point, rather than growing endlessly. Maybe if we invent immortality, we’ll get a different kind of population crisis, but in the meantime, Malthus will have been wrong – usefully wrong (which is the best kind) – but wrong.

    I would say that the bolded part appears to present an unwarranted amount of confidence – all the more so considering it’s not backed up with any link, unlike many other statements throughout the post. In contrast, I would present this figure for your consideration.

    https://ars.els-cdn.com/content/image/1-s2.0-S0959378016300681-gr4.jpg

    It’s from a very long and complex paper, The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview, by Riahi et al., 2017. Said paper describes the assumptions behind the scenarios (titular Shared Socioeconomic Pathways) which are effectively state-of-the-art in climate science; every projection done with the current generation of models uses one or several of these scenarios.* In short, the important thing is that as this graph shows, the amount of future cropland and pasture is believed to increase, to the tune of several hundred million hectares, in every scenario but the greenest one, which we are not really following (and which, according to the rest of the paper, seems to assume even more aggressive demographic transition than what we are currently seeing, with the global population peaking somewhere around 9 billion and then declining to ~2000 levels by 2100.*) Predictably, the graphs for cropland + pasture extent and for forests + “Other Natural Land” are effectively mirror images of each other.

    * Combined with the radiative forcing (i.e., to give a simplified, non-technical explanation, the amount of heat trapped in the atmosphere relative to preindustrial levels) expected under each scenario’s assumptions plus global climate policy assumptions, and represented by a second number; e.g. the median, most likely scenario is SSP2-4.5. (Median in the sense that it sits roughly between the “worst-case” scenario, which effectively assumes every country chooses to emit as much as possible, and the goals we actually hope to hit; at nearly 3 degrees of warming from preindustrial levels by the end of the century, it does not represent a good outcome by any means.)

  15. they were not crushed to death
    (stupid brain, immediately jumping to The Crucible: “More weight”)

    Interesting stuff, although I imagine there’s a huge variable between the richer places and the poorer ones, and the even more important issue of whether you’re in the path of armies fighting near you.

    There’s also the whole ‘in war, disease kills the most’ thing.

    If it’s 10% for a big battle on both sides (how much is generally expected of the defeated versus the victor? and how about not focusing solely on the Romans?), then how many die of disease in all the rest of the time is a curious question.

    From the start of a war to its end, how much of the army survives all the way through?

  16. “ this is generally what we mean when we say someone died ‘of old age’ – they were not crushed to death by the prodigious number of candles on their birthday cake”

    Leave it to Bret to have the most casually morbid sense of humor (this is a compliment, if that isn’t clear)

    1. You laugh, but when I was 17 I put 50 candles on my mother’s birthday cake and nearly burned the house down. It was only an ordinary-sized cake, and there were only three of us. (Me, Mommy, and my sister – our father had died.)

      1. For my late grandfather’s 80th, my dad baked him a normal 9″ round cake with 80 candles. The heat from the flames was melting the frosting by the time we finished singing, and he needed help blowing all the candles out. My child self found this delightful. Thank you for bringing back this memory 🙂

        (He died 10 years later of causes unrelated to the prodigious quantity of candles.)

    2. Re: death of old age – I kind of suspect that there might be fewer people who “officially” die of old age today than a hundred years ago, because medical science knows more about the specific illnesses old people die of than it did back then, and therefore doctors today might be more likely to put something more specific than “old age” under “cause of death” on the death certificate.

      Any doctors here who could tell me whether I’m right or wrong on this?

      1. I’m not a doctor, but from what I’ve heard *no one* officially dies of old age anymore. There’s always some specific ailment that kills them.

  17. This has me wondering whether Bret got annoyed at Theoden’s “no father should have to bury their children” line at Theodred’s graveside in The Two Towers. It’s a good scene and consistent with the emotional themes of the grave inscriptions here, but it sounds like a preindustrial parent would not understand their child’s early death as a disruption of the natural order of things.

    1. We actually see this exact sentiment, that it is a horrible breach of the natural order for parents to bury children, expressed in Roman epitaphs (including one of the ones I included), but it usually shows up for *adult* children. Parents expect, we might say, to bury infants, but not children, if you take my meaning. Theodred died as a young man, so I think Theoden’s response that his funeral represented a violation of the natural order makes a lot of sense and is consistent with the sentiments we see historical people express.

      1. Reminds me of a quote I saw on a loading screen in Total War: Rome 2, of all games:

        “In peace, sons bury their fathers. In war, fathers bury their sons.”

    2. No explanation is given for populations other than elves or Dunedain, but high child mortality does not seem to be a thing in Tolkien. So the line isn’t out of place (pretty sure it’s movie-only, too.)

  18. Well, I think you are wrong about Malthus, and not usefully wrong either. Every now and then Malthus takes a little holiday; humans extended their range, or developed a more productive technology, or recovered from a mass dying event. But until about 1870 or so, the pace of productivity improvement was not fast enough to escape the potential pace of population growth.

    So Malthus chose a bad time to publish, but has been mostly right for a billion years or so and the critical question is whether the sea change of 1870 can be sustained or will turn out to be just another Malthusian holiday, but on a global scale.

    1. So Malthus chose a bad time to publish, but has been mostly right for a billion years or so

      This statement itself is factually wrong for one extremely simple reason. That is, Malthus could not possibly have been right (or wrong) “for a billion years or so” when the human species had only existed for a mere three hundred thousand years! And if this is meant in the sense that he was right about the relationship of life in general with food supply, not just humans, then the dates would be also wrong, though by a lesser magnitude and in the other direction – since life on Earth appears to be at least 3.5 billion years old. (Multicellular life is somewhere around 1.5 billion years old, to be fair, but arguing that these dynamics only apply to them and not to bacterial colonies seems strange.)

      I would additionally note that the second interpretation arguably betrays ignorance of what Malthus actually wrote, since his essay didn’t merely point out that population exceeding food supply starves until it depletes to an earlier level, but rather specifically argued that human population grew at a geometric rate, while food supply grew at a much lower arithmetic rate, and so population growth results in famine. Since no other known species appears capable of controlling their own food supply in the way humans can (the closest comparable example is probably the way common ants are known to “shepherd” aphids), this argument seems inapplicable to them.

      Finally, I’ll just quote our host’s earlier writing, which explains why he considers Malthusianism overly primitive and insufficiently representative of history.

      https://acoup.blog/2020/08/21/collections-bread-how-did-they-make-it-part-iv-markets-and-non-farmers/

      Moreover, if a large, interconnected zone of monetized markets should form and stick around long enough (as under the Roman Empire, for instance), long distance sea-trade can cause local economies to reorganize around comparative advantage, specializing in producing cash-crops for other regions, while importing missing essentials…Since certain areas of the world are better suited to some kinds of agriculture over others, the end result of this was increased production efficiency overall. This actually leads back to a point that has come up in the comments a number of times: why I don’t ascribe to the automatic assumption that there is an inherent Malthusian trap in the pre-modern world.

      While certainly there is some population figure that would trigger a Malthusian crisis, the normal assumption being made here is that agricultural production is fundamentally static. But it isn’t. What we see instead are agricultural systems capable of operating at multiple equilibria (the plural of equilibrium). You can imagine a low-equilibrium society, where trade and monetization are minimal. In this environment, it is very hard for small farmers to get access to productive capital (plow-teams, manure, mills) and so agricultural productivity is low. Because agricultural productivity is low, it is hard for the society to support many specialists, which in turn means fewer tools, plow-teams, manure and mills. The system is in a stable equilibrium, but at a relatively low level.

      But take the same society and increase trade and monetization. Access to capital gets easier through monetary means and increased trade means increased agricultural specialization, which increases overall output; the trade compensates for the added risk of pushing closer to mono-cropping in each region by evening out prices. Because agricultural productivity is high, the society supports many specialists. Some of those are freeloading aristocrats and large landholders who do little but extract rents and live lavish lifestyles, but many are productive specialists who produce the capital necessary to improve yields, or maintain the trade systems that support everything.

      This society is operating at a higher, stable equilibrium. Without changing any farming technology – that is, we haven’t invented anything, although existing technologies are more available in our high-equilibrium society – or the amount of land available, or the quality of the land, the second society is going to support far more people, potentially at a significantly higher standard of health for the decades or centuries it takes for population growth to catch up to the increased production ceiling.

      Even once the total population pushes against that ceiling, it may benefit from the availability of produced goods (tools, textiles, buildings, infrastructure) which continue to be produced by the specialist non-farmers, even if food once again becomes tight. It is better to live in danger of starvation in a well-made house with decent clothes, good tools, fine poetry and fun civic festivals than it is to live in the same danger of starvation, but alone in a mud-hut with none of those things.

      (As an aside, this is the crux of my Grief and Loathing argument against Sparta. Instead of allowing the helot surplus to create a class of specialists who might improve life for everyone, Sparta redirected that surplus to the most unproductive class of individuals ever to live, whose sole product was violence against the helots. Effectively the Spartan system created a uniquely low equilibrium society, even under conditions where every other neighboring Greek polis was moving to a higher equilibrium under the influence of coinage, urbanization and specialization. It is not that the helots were poor – many farmers were poor – but that they were artificially kept poor, by a uniquely exploitative and useless ruling class.)

      We can actually see the effects of a society moving from a lower equilibrium to a higher one and then back down again in the Roman world. Because the nature of the Roman economy is such a long-standing debate, a tremendous amount of archaeology and scholarship has gone into charting it; what they tend to show is that economic activity increased significantly in the Mediterranean from the second century BCE to the first century CE, before holding steady at a relatively high level into the second and possibly even third centuries. Population expands, urbanism increases, evidence suggests that diets, even among the poor, seem to improve. Life appears to have – slowly, fitfully, and in ways that while significant compared to other ancient societies would seem tiny in comparison to modern economic growth – gotten better, in an absolute sense.

      And then the fragile systems of trade and monetization that created that prosperity begin to break down as the Empire collapses. Bryan Ward-Perkins documents the archaeological evidence for real decline in living standards (which, by the by, runs counter to what was often supposed – that regular people did better once their imperial masters were out of the picture; in this case, it turns out they did not, though in other cases they may have). Population contracts and the loss of specialist non-farmers leads in some cases to the loss of key productive technologies, perhaps most famously, ceramics – that is, the making of pots (I struggle to communicate how important a technology this is, or how fundamental) – becomes a lost technology in post-Roman Britain, when the cities where the professional potters lived faded away as the trade which sustained them broke down.

      (You can obviously map the above onto the present-day arguments about trade and tariffs. Though I now genuinely wonder if our host ever stated a position on the Biden-era tariffs/trade restrictions, with the national security rationales behind them. That, and his contemporary position on the decision to mostly leave the tariffs implemented during 2016–2020 in place. If the rationale for that was the fear of upsetting “Rust Belt” voters, then knowing what we know now, we can fairly safely say it didn’t work – but predicting this ahead of time was certainly not trivial.)

  19. Those funerary inscriptions giving the lifespan of the deceased to the day are very interesting, because the impression I’ve always had of the past is that people didn’t care much about birthdays (since there are so many historical figures where we know when they died but not when they were born). And then it’s easy to come up with a rationalization, like “Oh, well, if half your kids are gonna die before they’re a few years old, why bother remembering their birthdays?” It’s nice to have that misconception corrected.

    1. If the Ancient Romans charged by the letter like the moderns do, then those ancient epitaphs must’ve cost quite a bit to put up. I don’t know if that changes things.

    2. I should also say that this is more than just your impression. When I first discovered the blog, one of the first things I saw was the post arguing why the Industrial Revolution could not have occurred outside of England for (largely) coal mining-related reasons. It spurred me to do some searches, which then led me to The Handbook of Cliometrics (again, this was before I found out our host takes a very dim view of that discipline!) – and one of its most-favoured suggestions for why the Industrial Revolution could not have occurred earlier was the idea that literacy and numeracy simply weren’t high enough until the 1600s and beyond.

      A core piece of evidence for that was the analysis of gravestones and legal documents suggesting that a large fraction of even the Roman nobles could not count well enough to properly record their own ages. The accuracy of these childhood gravestones may be seen to contradict that – but then again, it might simply be that in this case the numbers were low enough, and recent enough in memory, to be accurately recorded. I’ll repost only the most directly relevant quotes to provide context, and link to that comment for the rest.

      https://acoup.blog/2022/08/26/collections-why-no-roman-industrial-revolution/comment-page-2/#comment-74338

      A prosperous landowner in Roman Egypt, Isidorus Aurelius, for example, variously declared his age in legal documents in a less than 2 year span in 308–309 AD as 37, 40, 45, and 40. Clearly, Isidorus had no clear idea of his age…Another feature of the Roman tombstone age declarations is that ages seem to be greatly overstated for many adults. Thus, while we know that life expectancy in ancient Rome was probably in the order of 20–25 at birth, tombstones record people as dying at ages as high as 120. For North African tombstones, for example, 3% of the deceased are recorded as dying at age 100 or more. Almost all of these 3% must have been 20–50 years younger than was recorded. Yet, their descendants did not detect any implausibility in recording these fabulous ages.

      1. “the numbers were low enough”

        But Bret’s inscriptions include a dead 18-year-old wife, with both her age and the 5-year duration of her marriage given down to the day. I went and looked at Hemelrijk, and an inscription there says “She lived fifty-three years, five months and three days.”

        1. Well, The Handbook of Cliometrics did not argue that absolutely nobody knew their true ages – only that a lot more people failed at it than modern people would expect, and that the ability to do this gradually grew after the end of the quote-unquote Dark Ages. The part I quoted from it last year also included the line “Age awareness did correlate with social class within the Roman Empire. More than 80% of officeholders’ ages seem to have been known by their relatives.” All in all, it appears inarguable that such highly accurate tombstones coexisted with the tombstones claiming ages of ~120.

          P.S. I should also correct my earlier statement to note that our host objected to cliodynamics, rather than cliometrics – no surprise, given that the former is orders of magnitude more far-reaching!

          https://acoup.blog/2021/10/15/fireside-friday-october-15-2021/

        2. For such exact counts, I doubt the parents/husband did the counting themselves. I assume they knew the birth and death dates and had a priest or other professional calculate the age. When Isidorus was filling out those legal documents and suddenly needed his age, he just did a quick calculation/estimation.

          I know that whenever I need to calculate my age it takes a long moment while I remind myself which year I was born, compare it to this year, and then try to figure out which side of my birth date today falls on. I have never tried to calculate my age down to months since I got past single digits, and if I need to know something to the accuracy of a day I go to online date calculators.
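
          (For what it’s worth, that ‘which side of my birth date does today fall on’ step is exactly how one would compute it; here is a minimal Python sketch, with made-up example dates, purely as illustration:)

            from datetime import date

            def age_in_years(birth, today):
                # Completed years of age: subtract the birth year, then
                # knock one off if today falls before this year's birthday.
                years = today.year - birth.year
                if (today.month, today.day) < (birth.month, birth.day):
                    years -= 1
                return years

            # Hypothetical dates:
            print(age_in_years(date(1990, 6, 15), date(2024, 3, 1)))  # -> 33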

      2. One also wonders if some of the inaccuracies are the result of having to deal with Roman numerals. Good night, can you imagine trying to do calculus in the Roman instead of the Arabic system? The mind boggles.

        1. You don’t do calculus with numbers but with algebraic formulas. The actual numerical operations are usually trivial.

          However, doing multiplication and division using Roman numerals is very difficult, and that is why the mathematicians of late antiquity used Greek numerals and, if necessary, Babylonian base-60 fractions for quantities smaller than unity. And ordinary people tended to have units of measurement and currency in so closely spaced a progression that the numbers you used remained suitably small.

          1. I don’t know anything about Greek numerals. I’d always assumed ancient math was done via abacus: put numbers on abacus, do the calculations, write down result.
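
            That division of labor – written numerals for recording, an abacus for computing – is easy to illustrate; here is a minimal Python sketch (my own, purely illustrative) of reading a Roman numeral back into a computable value:

              # Roman notation records a value but gives you nothing positional
              # to compute with; the calculation happened on the abacus and only
              # the result was written back down.
              ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

              def roman_to_int(s):
                  total = 0
                  for i, ch in enumerate(s):
                      value = ROMAN[ch]
                      # A smaller numeral before a larger one (IV, IX, XL...) subtracts.
                      if i + 1 < len(s) and value < ROMAN[s[i + 1]]:
                          total -= value
                      else:
                          total += value
                  return total

              print(roman_to_int('MCMXC'))  # -> 1990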

      3. I mean…

        To pick a modern equivalent, that’s like saying that modern people are bad at counting because women often under-report their age without being called on it. Or that John Doe is confused about his age when he tells a bartender that he’s 21 and the draft officer that he’s 17.

        1. I was going to say that, yes. Can we be so sure that some of this haziness about ages in classical antiquity wasn’t just about people fudging their age for vanity’s sake (or for some concrete purpose), just like they do today? (And people do it today *in spite of* the fact that our ages are recorded on government documents).

        2. Well, a little sanity check (which also goes to everyone who “liked” this comment): if you, presumably a non-historian, can think of this, then just how plausible is it that the people who spent years specializing in this intersection of economics and history never once considered such a simple idea? This kind of unthinking dismissal of specialists’ painstaking efforts brings to mind our host’s recent observations, on his linked profiles, regarding what he terms “vibe history”.

          In fact, the work I quoted from explained pretty well why they ruled out this idea. Since I already quoted it once, I decided to keep the quotation short this time and provide a link to the longer quote. Predictably, it doesn’t appear like many (any?) have followed it, so here’s their extended explanation again.

          A lack of knowledge of their true age was widespread among the Roman upper classes as evidenced by age declarations made by their survivors on tombstones. In populations where ages are recorded accurately, 20% of the recorded ages will end in 5 or 10. We can thus construct a score variable Z which measures the degree of “age heaping,” where Z = (X − 20) × 5/4 and X is the percentage of age declarations ending in 5 or 10, to measure the percentage of the population whose real age is unknown. Z measures the percentage of people who did not know their true age, and this correlates moderately well in modern societies also with the degree of literacy.

          Among those wealthy enough to be commemorated by an inscribed tombstone in the Roman Empire, typically half had unknown ages. Age awareness did correlate with social class within the Roman Empire. More than 80% of officeholders’ ages seem to have been known by their relatives. We can also look at the development of age awareness by looking at a census of the living, as in Table 9. Some of the earliest of these are for medieval Italy including the famous Florentine Catasto of 1427. Even though Florence was then one of the richest cities of the world and the center of the Renaissance, only 68% of the adult city population knew their age. Medieval England had even lower age awareness. The medieval Inquisitions post mortem, which were enquiries following the death of landholders holding property where the King had some feudal interest, show that the exact ages of the heirs to property was known in only 39% of cases.

          In comparison, a 1790 census of the small English borough of Corfe Castle in Dorset with a mere 1,239 inhabitants, most of them laborers, shows that all but 8% knew their age. In 1790, awareness correlates with measures of social class, universal knowledge among the higher-status families, and lower age awareness among the poor. But the poor of Corfe Castle or Ardleigh in Essex had as much age awareness as officeholders in the Roman Empire.

          …For North African tombstones, for example, 3% of the deceased are recorded as dying at age 100 or more…Almost all of these 3% must have been 20–50 years younger than was recorded. Yet, their descendants did not detect any implausibility in recording these fabulous ages. In contrast, the Corfe Castle census records a highest age of 90, well within the range of possibilities given life expectancy in rural England in these years.

          So, any “they knew and were just lying about it” theory also has to explain this statistical trend over the centuries. I suppose one can presume the desire to lie about age became less socially prevalent at this exact rate, but let’s just say it does not seem to pass the Occam’s Razor test. Moreover, this is really not a new concept at any rate – it had apparently been established in general since the 1950s, and in relation to Rome in particular since the 1990s.

          https://www.sciencedirect.com/science/article/abs/pii/S0014498309000357

          A half-century ago, an influential study by Bachi (1951) and Myers (1954) investigated age-heaping and its correlation with education levels within and across countries. Thereby, Bachi (1951) was able to analyze the degree of age-heaping among Jewish immigrants to Israel in 1950 and among Muslims in Mandated Palestine in 1946, finding, amongst other things, that the increasing spread of education resulted in a better knowledge basis according to lower age-heaping. Myers (1976) found a negative correlation at the individual level between age-heaping and income. Another innovative example is the study by Herlihy and Klapisch-Zuber (1978) who used successive Florentine tax enumerations and found distinct heaping on multiples of five for adults, which declined substantially in the period from 1371 to 1470. Furthermore, they showed that age-heaping was more prevalent among both women in rural areas and small towns, and among the poor. Duncan-Jones (1990) employed this technique to study age data from Roman tombstones.

          Mokyr (1983) was the pioneer who established the age-heaping measure as an explicit numeracy indicator in economic history. He employed the degree of age-heaping to assess the labor-quality effect of emigration on the Irish home economy during the first half of the 19th century, as emigrants from pre-famine Ireland were less sophisticated than those who stayed behind. Thomas (1987) considered the slight but discernible improvement in the accuracy of age reporting as evidence that numerical skills in England had improved between the 16th and 18th century. Budd and Guinnane (1991) studied Irish age-misreporting in linked samples from the 1901 and 1911 censuses. They found considerable heaping on multiples of five in the 1901 census, which was also more frequent among the illiterate, poor, and aged. More recent research was conducted by Long (2005, 2006), who analyzed age data from the 1851 and 1881 British population censuses, identifying urban migrants in Victorian Britain as being educated beyond average. By exploiting repeated observations, he was able to show that individual age discrepancies (another measure of missing age awareness) had a significant negative impact on socio-economic status and wages. De Moor and van Zanden (2006) studied the relative numeracy of women during the Middle Ages, and Clark (2007) has recently reviewed the evidence.

          From the literature cited above, we can conclude that demographic evidence exhibited significant age-heaping at least until the turn of the 20th century, and that the degree of heaping varied across individuals or groups in a way that makes age-heaping a plausible measure of human capital. The correlation of age-heaping and the prevalence of illiteracy among the population was explored in more detail by A’Hearn et al. (2006) who found in their analysis of 52 countries or 415 separate regions that the level of age-heaping is indeed correlated with illiteracy. The authors also concluded that the probability to report a rounded age increases significantly with regional and personal illiteracy.

          In sum, we would argue that age-heaping is a proxy for basic numerical skills. As such, it is an important component of human capital and a precondition for more advanced skills…The age-heaping technique can be applied to many sources of age data such as census returns, military enrollment lists, legal or hospital records, and tax data. However, care must be taken as to by and to whom the age question was posed, how it was formulated, and whether self-reported ages were compared to birth registers. Double-checked age information does not normally reflect any age-heaping besides minor, random fluctuations and therefore cannot be used.
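
          To make the arithmetic of that score concrete, here is a minimal Python sketch (my own illustration, with made-up ages) of the Z measure as defined in the quote above:

            def age_heaping_z(ages):
                # Z = (X - 20) * 5/4, where X is the percentage of reported ages
                # that are multiples of 5. With accurate reporting, X is about 20
                # (two of every ten final digits), so Z is about 0; if everyone
                # rounds to a multiple of 5, X = 100 and Z = 100 (clamped at zero).
                x = 100 * sum(1 for a in ages if a % 5 == 0) / len(ages)
                return max(0.0, (x - 20) * 5 / 4)

            # Hypothetical tombstone ages, heavily heaped on multiples of 5:
            print(age_heaping_z([70, 75, 80, 100, 41, 60, 50, 23, 65, 90]))  # -> 75.0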

  20. Out of curiosity, how did you derive a 9.67% chance of death after 5 births (“woman who gave birth five times with a 2% chance of dying each time has a total chance of dying at one stage or another of 9.67%”)?

    I would have guessed the formula to be 1-(1-x)^y for an event with probability x on one trial to happen at least once over y trials assumed to be independent of each other. This would yield, with x=0.02 and y=5, a value of 1-(1-0.02)^5 = 9.60792032% according to my calculator.

    [My reasoning for the formula is that (1-x) is the probability of surviving one childbirth. So the probability of surviving y births is (1-x)^y if all births are statistically independent, and so the probability of not surviving all y births is 1-(1-x)^y.]

    Was your calculation more complicated or did I make a subtle mistake in my reasoning?
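
    (For anyone who wants to check the arithmetic, a minimal Python sketch of the independent-trials formula above:)

      def cumulative_risk(per_event, events):
          # Chance of the event happening at least once across
          # independent trials: 1 - (1 - x)^y.
          return 1 - (1 - per_event) ** events

      # 2% risk per birth, five births:
      print(cumulative_risk(0.02, 5))  # -> ~0.0960792032, i.e. ~9.61%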

      1. Thank you for letting me know. The change certainly doesn’t make any difference for the point you are making.

        And thank you so much for all of the articles you have written and made available to the public. I have learned a great deal from them and I very much appreciate all the time you have dedicated to making history available to the public on this site!

  21. Typo hunt

    > the pattern of mortality of pre-industrial life combined with drive to sustain population functionally
    Should be combined with the drive? drives?

    > This is a point where the way we walk about pre-modern mortality
    talk

    > 26.6 years of life expectancy is expecting to life to age 51
    live to

    > weakness of immunity, which correspond with age but also nutrition.
    Should be weaknesses OR corresponds

  22. “when your ruler turns 20 or so, the game informs you that you are old and must think constantly about death”

    https://i.redd.it/onr9q12cyxra1.jpg

    It’s a somewhat accurate portrayal of a teenager’s life crisis though. “Wait, I didn’t think I’d live to be this old! What do I do now?”

  23. I’ve heard a claim that midwives were actually better than doctors, especially in the 18th and 19th centuries, and this squares with both facts: “doctors were absolutely bad at their job, not even washing their hands” and “18th-century people somehow didn’t go extinct”. Looking at your maternal mortality data, this sounds about right, because the horror stories of doctors from the 19th century sound like they’d result in at least >30% maternal mortality. Is it actually true? I found the reasoning intriguing but I haven’t found it anywhere else.

    1. Yep, it is true! Among other things, midwives were more likely to wash their hands! Professional doctors’ performance at routine medicine in general took a while to clearly pull ahead of more traditional medical practice.

      Incidentally, it is still very far from clear that midwives with access to a hospital are worse than doctors even today. Doctors tend to overmedicalize birth pretty badly; e.g. the C-section rate in the US is a good 2X what research suggests is ideal, which adds risk both in the moment and to future pregnancies.

    2. I don’t know if midwives were more likely to wash their hands between patients, but their last patient was presumably more likely to have been another expectant mother, rather than someone with an infectious, potentially lethal disease.

      There is something to be said for specialization.

      1. Yeah, but I suspect midwives did wash their hands between patients.

        Heck, the Bible (Old Testament, iirc) advises washing your hands after touching a corpse. People may not have known WHY it was a good idea, but they knew THAT it was a good idea.
        So even if the midwife also washed the dead (and that could well have been the case; this was considered a women’s job, too), she would have washed her hands afterwards.

        Then came modern doctors who said that there was no scientific proof that infections could be transmitted via unwashed hands, and may have intentionally not washed their hands between patients to prove how modern and scientific they were. (And perhaps also because men wash their hands less than women in general, for whatever reason.)

        There’s also the fact that a midwife would likely have had at least a couple of days between patients, and would certainly have eaten meals in between, so if she washed her hands AT ALL, she would have done so between treating two patients, just as part of her normal everyday hygiene.

        1. > There’s also the fact that a midwife would likely have had at least a couple days to get through between patients

          Not if she’s working at a hospital ward.

          This story about midwives and doctors is not just fiction – it’s an early scientific study on the death rates among patients in two different wards at the same hospital, one staffed by midwives and one by doctors.

          1. Yep. The “midwife” is still, in Europe, a protected professional title. There are standardised educational requirements that define the theoretical and practical studies needed. In Finland, all midwives are registered general-practice nurses (Bachelor of Nursing), with additional education for midwifery. The education takes four and a half years.

            Here, a healthy woman with a normal pregnancy will not see a medical doctor at all during a delivery, if everything goes well, except perhaps for epidural anesthesia. I have seen this myself in practice, having been present during the births of my own children.

            There are more modern, contemporary studies showing that deliveries handled by midwives tend to be safer than those where a doctor is present. The midwife is trained to assist in the delivery, while doctors are trained in medical intervention. If physicians are present at a normal, healthy delivery, they will, at a statistical level, commence unnecessary interventions.

    3. Ignaz Semmelweis was the man who noted that the ward in his Viennese hospital in the mid-1800s that was staffed by midwives had fewer deaths due to sepsis than the ward staffed by doctors. And he did recommend – to little avail – that the doctors wash their hands in a chlorine solution in between attending patients and dissecting corpses.

      Even today, doctors aren’t very good at remembering to wash their hands. (The easy availability of hand sanitizer doesn’t help. Hand sanitizer just isn’t as effective, not unless it’s used after handwashing.)

  24. I would love to see this analysis applied to ancient Sparta, where we know so little (?) about inheritances, about how full citizens might fall out of homoioi status, and about how a falling birthrate combined with higher-than-normal military deaths among citizens fed into the decline of Spartan military power.

    1. I think one of the links in the post takes you to an earlier post on the Spartan demographic failure.

  25. I’ve followed this blog ever since your Helm’s Deep/Siege of Gondor analyses years ago, but never commented until now. Unlike other commenters, I have no clever insight or relevant information to contribute; I just want to say I was touched by the raw universal *humanity* in those epitaphs you shared: parents grieving for children, spouses grieving for their lost loved one. Sorry to get sappy, but across the millennia the obvious love these people had for their families leaps out of their words. Thank you for including those.

    I always love the information you give, but while you often remind your readers that your posts are about *real people,* it’s often easy to get caught up in the interesting facts and forget. Today was an exception: I was able to stop and think a bit about Claudia, Euposia, Zosime, Aemilia Donativa, Cornelia Anniana, Blandinia Martiola, and Veturia Grata. Makes me realize I shouldn’t take for granted that I live in an era where death is not quite so omnipresent.

    Looking forward to your next post!

    1. Hemelrijk’s book – which is where the translations are from – is spectacular in this regard. Reading it, you are struck so fiercely by the deep humanity of the people in those inscriptions, both the dedicators and the dedicatees.

    2. Seconded. Long-time reader, first time poster to say thank you for humanizing the people of the past (especially those that so often don’t write to us).

  26. Among the early Slavic people, there was a custom of the first haircut (Polish: postrzyżyny). Once a boy reached 7 years, his hair was cut for the first time, he received a name, and he passed from his mother’s care to his father’s.

    I used to think living for nearly 7 years without a name was weird. Now I think it was a reflection of high child mortality: they didn’t want to get too attached.

    1. This custom also existed in Medieval Rus, at least in aristocratic families, but as far as I know it didn’t include giving a name to the child. In Christian times the name was obviously given during baptism. We do not know a lot about Slavic customs in pagan times, but at least the Norse people had the custom of Ausa Vatni, somewhat similar to baptism, which included naming the child and sprinkling them with water, and that happened very soon after the child’s birth. I doubt that the Slavs had higher child mortality than the Norsemen, so it seems more likely that they had similar customs and also gave names to children soon after they were born.

    2. Eh, I bet the boys were given names by their mothers.

      In Ancient Rome, women didn’t have names AT ALL … officially. But of course, unofficially, they would have, as names were invented for a reason.

      I cannot imagine a mother with three sons under the age of 7 wouldn’t have named them for practicality if nothing else.

          1. A Polish guy told me once that the surname “Trzeciak” (I used to live in a part of America with a lot of Polish surnames, and was asking what they meant) literally meant “third” and probably originated as a nickname for a third child which eventually became a surname. Not sure if that was really the origin, but it might be?

          2. More common than you think. Practically all of Mesoamerica had day-names based on the Mesoamerican calendar.

            That’s where all of those Mesoamerican ruler names like ‘8 deer’ come from (who was a Mixtec ruler, incidentally). That’s the day they were born, and forms the first name they received.

      1. From what I’ve read, the “women didn’t have names in Ancient Rome!” thing is actually a misunderstanding – it’d be more accurate to say “women in Ancient Rome lost their vestigial praenomens faster than men did” (the famous trinomial system was actually a transitional state between two binomial systems). It also doesn’t help that the Romans had social taboos against using the names of “respectable” women in public.

        (OGH can correct me if I’m wrong – I just like learning about naming practices, and it’s entirely possible that I’m going off of incorrect or out-dated information).

        1. Basically no Roman had an individual name; they were labeled with the family names. Even the use of praenomens was limited by tradition to one to three out of a limited choice. The Claudii, for example, favored Appius, Caius and Publius; the Julii, Sextus, Caius and Lucius. Life would have been hopelessly confusing without nicknames.

  27. Fascinating as always!

    My question is: why did no ancient society mimic the changes that made mortality fall between 1800 and 1900?

    This was before antibiotics and Haber-process nitrogen fertilizers. It seems to have mostly been down to sanitation and hygiene. But we all know the Romans had plumbing and aqueducts. How come nobody figured this stuff out when it would have been such a colossal benefit?

    1. Sanitation costs labor, so it’s harder when society is using most of its labor to grow food.

      Rome had sewers, but was still filthy. Public toilets were openings into the sewer, but private toilets were just pots. You were supposed to dump out the pot into the public toilet, but a lot of people didn’t want to carry it that far and just dumped it into the street. Sewer access in every home would have fixed this, but wasn’t feasible.

      1. That makes sense for Rome, but as he points out, most people are peasants, not living in big cities. If an 1850AD farming village has a much lower infant mortality rate than a 50AD farming village, it seems strange that nobody figured out how to do the same thing, and then got imitated.

    2. The infrastructure / cultural changes that led to lower mortality in the 19th C farming villages were not originated by those farmers. They were absorbed (or sometimes imposed) from the outside, the industrialised towns and cities. A pre-industrial society, as noted elsewhere above, cannot afford to support many specialists such as the equivalent of our sanitation engineers, let alone medical research scientists.

      Even a farming village is a very complicated “system” and it’s very hard to sort out cause and effect. If the harvest was bad, could be insects, could be soil exhaustion, could be those blasphemers who didn’t show up for the spring ritual.

      And these are subsistence farming villages where making a change that doesn’t work is likely to kill people. “Why don’t we try this and see what happens” is a lot less attractive when the consequence might be you and your family dying from malnutrition. Peasant farmers are conservative for a reason.

      1. They are not that conservative – new crops spread quite rapidly, as do new tools. You can experiment on a small scale (let’s plant this in the kitchen garden and see what happens) with minimal risk. A whole range of plants and techniques reached Europe from the east between 600 and 1000.

          1. The short answer is that we don’t know in detail, but the general impression is more people and a somewhat healthier population – a slow rise in per capita GDP.

        1. I’d note that ‘try this new crop that the village over has also successfully used’ is a comparatively easy sell, all told. It fits well within existing agricultural schemas.

          No-one wants to be the *first* person to try the new crop, but if it’s already been demonstrated to work well then that’s a different kettle of fish.

          1. Somewhat off-topic, but you remind me of just how inventive — or desperately hungry — people had to be to figure out how to eat cassava, or any of dozens of other foods that are poisonous if not carefully prepared. To say nothing of mushrooms…

          2. There’s always someone adventurous, whether it’s running off to seek their fortune (or with someone else’s partner) or trying a new crop. People are like that.

          3. I suspect in some cases the person trying out the possible new food item was under coercion.

          4. Just to clarify things, there are fairly simple processes you can go through to determine whether something is likely to be poisonous or not. I remember an old Ray Mears survival video walking through them.

            It starts with things like rubbing it on a patch of skin and waiting a while to see if there’s irritation, then progresses up through things like tapping your tongue against it, rolling it around in your mouth for a while, chewing a bit then spitting it out, trying to eat a tiny little bit, and various other steps. All leaving a decent amount of time between each step to see if there’s a reaction.

            I sincerely doubt they were just picking straws and getting someone to eat the random mushroom they’ve just come across. Human cranial capacity peaked sometime during the last ice age, and if there’s anything that humans are it’s pattern-recognition machines. We’re smart, and we’ve always been smart.

            This isn’t a foolproof method, of course. There are foods that kill you through gradual accumulation (e.g. vitamin a poisoning from consuming carnivore livers), or foods that are only poisonous after certain events (e.g. eating mussels after algal blooms). Animal behaviour can help with some of those (e.g. native Americans in the Pacific Northwest paid attention to when bears stopped eating mussels to know when they were poisonous).

            Though there’s always likely to be some risks that you can’t particularly avoid, there’s a lot of very practical ways to minimise those risks when testing if something is edible or not.

          5. Animal behaviour can help with some of those (e.g. native Americans in the Pacific Northwest paid attention to when bears stopped eating mussels to know when they were poisonous).

            I’ve actually worked with semi-literate small farmers before and they were pretty smart about drawing connexions between what animals would eat and what was likely to be poisonous (e.g., to the extent of hypothesizing, “well, this yam grows far enough underground that animals wouldn’t be able to reach it anyway, so it’s unlikely to be poisonous.”) I wish American kids in high school science classes were that thoughtful.

          6. This isn’t a foolproof method, of course. There are foods that kill you through gradual accumulation (e.g. vitamin a poisoning from consuming carnivore livers), or foods that are only poisonous after certain events (e.g. eating mussels after algal blooms).

            And of course there are a lot of poisonous species where a small amount will make you sick without killing you, or that have less poisonous relatives, so that you can learn about the poison through exposure to small doses (or, as you indicate, maybe by feeding it to a pig or chicken first). There’s a big gray area between “totally benign” and “immediately, lethally toxic”, and lots of space for people even in a premodern, pre-scientific era to learn about toxicity.

    3. How much of that rising life expectancy was just better access to food, as richer societies are more able to buy food from elsewhere when the harvest is bad?

      The British reaction to the Irish famine is often and rightfully maligned, but 200 years earlier it probably would not even have been possible to import corn from the Americas into the affected regions.

  28. “But who would be so incredibly foolish to do that?“

    Warhammer Fantasy’s Bretonnia comes to mind. Which, while marketed as the token medieval faction, comes across to me as less that and more Arthurian-flavored Sparta with horses.

    Though the older lore was less ridiculously oppressive.

    Still, who knows? Presumably the Blessing of the Lady includes a low infant and maternal mortality rate, for noble and peasant women alike.

    On that note, one wonders if the Pedant will be inclined to grace us with an examination of Warhammer Fantasy someday.

    1. One usual problem of fantasy: far too many sentient species coexisting and fighting in too small a geographic area. You can’t start a world from the early neolithic and end up with this. Even if we don’t have an extensive paleolithic with expansion and extinction, at the dawn of agrarianism, one species (let’s say, the elves) will beat the rest in the race to the grain. Then they expand and bulldoze everyone else into extinction within the feasible range of the agrarian subsistence system.

      Another usual problem of fantasy: what even do they eat, and how do they grow it?
      – Skaven live underground.
      – Dwarves live underground.
      – Chaos dwarves IIRC live in a giant city in the middle of a smoke-clouded desert.
      – “The Greens” (orcs, goblins, snotlings) seem to live in a thoroughly barren (by aridity) place, and have occasional mushroom theming.
      – Elves seem not to have any peasants.
      – Dark elves at least have slaves, but probably not nearly enough (and they keep killing them).
      – The multiple Lizardman species live in a jungle. Go on; and…?
      – Chaos-men live on tundra (or perhaps an icecap)? Beast-men live in taiga (boreal forest)? Somehow I don’t imagine the proud WARRIORS OF KHORNE, clad head to toe in metal armor, being reindeer herders.
      – The undead of Nagash cackle at the puny notion of “subsistence system”.
      + Humans and halflings have one!

      The reindeer-herding heavily-armored chaos-men turning up in “agrarian-sized” armies (i.e. with the same points budget as the Imperial player) is the crux of the problem. This is a battlefield tactics game, this is the element that needs to be balanced. That as a matter of worldbuilding, it should be categorically impossible for this to happen for multiple reasons? Too bad. This is much the same reason monstrous megafauna (dragons, trolls, squigs, rideable-size spiders, etc.) are nonsensically extant.

      1. Well, yes. And in detective fiction small villages have a murder every month, solved by amateurs, and the laws of physics are cruelly treated in much SF. There are more thought-through fantasy worlds out there, but not everyone wants to read them, or write them. Sometimes you just want to ride with the orcs.

        1. Theory: cosy village murder mysteries appeal to humans because the natural human condition for 99% of human existence was to live in a small rural community at least one of whose members was violently killed every week.

      2. “One usual problem of fantasy: far too many sentient species coexisting and fighting in too small a geographic area. You can’t start a world from the early neolithic and end up with this. ”

        But the Warhammer world is, IIRC, not a world with a normal prehistory; it is a) an explicitly creationist world which was settled and shaped by precursor aliens who created quite a lot of its inhabitants, and b) a world in which various other sentient races do not exist as reproductively isolated self-sustaining populations, but are constantly created ex nihilo by the action of, essentially, two large doors leading into hell, one at each pole.

        There are no Chaos Warrior peasants because you don’t get new Chaos Warriors when a boy Chaos Warrior and a girl Chaos Warrior love each other very much and give each other a special sort of hug. You get new Chaos Warriors because a normal human decides to become a Chaos Warrior.

        1. I know the crisp winter wind on your face makes you feel alive, but that doesn’t actually solve the problem of getting calories into your body.

          A normal human decides to go Chaos Warrior. He gets a stupendously expensive set of full armor somehow. (It sure doesn’t look like the sort of armor any playable human faction/culture makes, so he isn’t even taking it off an assassinated nobleman or suchlike.) He walks off into the snow to group up with all the other Chaos Warriors. They all starve to death. The end.

          1. If I remember correctly, the literal actual demon gods of chaos source the armor from the same place they get their infinitely spawning demon armies.

            A Chaos Warrior and a zombie are basically the same thing – they might have been a human at some point, but they aren’t anymore and human logistics no longer apply.

          2. “A normal human decides to go Chaos Warrior. He gets a stupendously expensive set of full armor somehow (It sure doesn’t look like the sort of armor any playable human faction/culture makes, so he isn’t even taking it off an assassinated nobleman or suchlike.)”

            Per the rules as written, he is generally granted it by his patron deity, along with various other mutations and special powers. (And not only isn’t he taking it off an assassinated nobleman, he generally isn’t taking it off at all, ever.)

            “He walks off into the snow to group up with all the other Chaos Warriors. They all starve to death. The end.”

            Rules also say that Chaos Warriors don’t need to eat or drink. Or indeed sleep.

          3. Erm…my knowledge of Warhammer isn’t that great, but I thought it was common knowledge amongst those who can already name even the smaller factions like Chaos Dwarves and Lizardmen that Chaos Warriors are basically the closest counterpart to 40K’s Space Marines (even the regular ones, let alone the Traitor Legions). That is, they get transformed from the most elite of normal humans aligned with Chaos, in a pathway that kills the overwhelming majority of those attempting it – just like the selection process for every Loyalist Marine Chapter/Traitor Legion/warband.

            In fantasy, those normal humans are the Chaos Marauders, and numerically, they make up the overwhelming majority of the Chaos forces. However, Chaos Warriors are not only literally superhuman, but like ajay said, they are literally sustained directly by their deity, to the point they don’t need sleep and can march 24/7, which logically means that they can be present on the battlefield a lot more often than their numbers alone would suggest. I was also going to say that their armour is forged directly in the Warp, but apparently that is too 40K of me – it seems that the lore is that such sets do exist, but are in the minority, while most of these sets are forged by Chaos Dwarves, paid for in furs, meat, steel and slaves.

            Now, you might assume from their name that the Marauders are the setting’s Dothraki (even before we get into the whole “worshipping the primal representations of bloodlust, disease, depravity and mutation (and knowledge)” thing), but a quick scan of the setting’s wiki reveals it’s a little more complicated. For one thing, the lore at least gestures towards the “Atreides gaze” aspect (amongst their own people, they are simply known as warriors, and in times of peace, they often serve the clans as guardians and hunters.) Apparently, they consist of the Hung, the Kurgan and…literally the Norse, and while all three raid raid if they can get away with it even when Chaos doesn’t care (and all send their forces whenever the gods wish for a campaign), there is a spectrum of savagery. For the latter, “many deal with their neighbours honestly and fairly…part of this “civilised” behaviour stems from Marienburg’s efforts to… cultivate a buffer state against the Kurgan tribes and the more savage Norsemen….As a result of this new era of prosperity the Norse have slowly returned to the ports of the Old World, signing on as mercenaries or even as merchants, selling whale oil, ivory and lumber among other goods.” (Note, this is text from the setting’s wiki, which is apparently expressly forbidden from directly quoting the written lore, so it’s all a paraphrase which might introduce some of the editors’ own interpretations.)

            The Scythian/Sarmatian equivalents, the Kurgan, are clearly more violent yet still a significantly better representation of horse nomads than the Dothraki (admittedly a drastically low bar). After all, they are at least said to actually master horse archery and spear/horseback combat, unlike the arakh-centric nonsense of ASOIAF, and “the Dolgans, one of more southern Kurgan tribes, had once lived in relative peace with the Ungols of Kislev. Today they have become less ambiguous, and now form together under Zars in imitation of the Tzars of Kislev…the tribes have goats to provide milk and cheese and bison and cattle for meat whilst wild roots and vegetables are also gathered” – which is rather a step-up from the Dothraki somehow subsisting on horses alone and slaughtering sheep “whose hooves are unfit to tread the Sea of Grass”! They are even said to have some women fighters and chieftains (in keeping with their closest historical inspirations).

            Unfortunately, the depiction of the Mongol equivalents, the Hung, appears very similar to Martin’s. (In fact, it might have been directly drawn from it – since apparently GW’s lore mirrors/parodies the Daenerys-as-khaleesi plotline, with the Dark Elf “Hag Queen” Morathi arriving in their lands with a ton of Daemonettes and making them (re-)embrace Slaneesh.) Either way, the wiki summarizes their lore as “Perhaps out of all the people living within the Far North, the people of the Hung are perhaps the most primitive of them all. Even though their brethren in the West have built themselves a selection of settlements and massive fortresses, the people of the Hung can amount themselves no better than primitive hunters and gatherers, with little to no concept of even the barest forms of civilization…Though they owe their allegiance to the greater tribe, the Hung honor no promises and abide by no pacts. They are famous for their treachery and for their willingness to kill each other as well as others they meet. They are a sly people, cunning in their dealings and quick to double-deal. For example, they might encircle a town and promise to leave the town unscathed if the people give over their daughters. Once the town complies with their wishes, the Hung butcher the townspeople and burn down all the buildings simply because they can…They violate them [the prisoners] in every conceivable way before giving the broken captives to their slaves as playthings… They have a taste for fine things, so they snatch up gold, silks, even gaudy ornamental rugs, which they display proudly whenever they settle in to camp. Despite their pretense, they know nothing of civilization and are an unsophisticated lot. In truth, they are not much more than simple hunter-gatherers, and the hunting aspect forms a cornerstone of their culture.”

            Remarkably, even with all of the above, their depiction might actually still be a little ahead of Martin’s/HBO’s. I.e. their sustenance is depicted as: “The Hung’s territory does not produce much in the way of food, so their diet can be macabre. They readily devour game and fish, but when the hunting is scarce, they feed on rats, insects, even the lice on their bodies. Some witnesses report seeing these savages devour the afterbirth from a mare’s foaling. And failing that, they drink the blood from their steeds and even turn to cannibalism if necessary” – with at least the blood-drinking being historical, even if adulterated with ridiculousness like lice. YMMV on how this line “The Hung see all members of their tribe as equals and make no distinction between men and women.” compares in historical authenticity to “every night watching women dance [before getting raped] and men die” and khals sharing their wives with bloodriders, but it’s certainly different. And it definitely deserves some points for recognizing the whole difference between grass-fed and grain-fed horses Dr. Devereaux kept mentioning in this blog’s first year (“they breed tough, small horses on their cold mountain slopes which would survive where larger southern warhorses would starve”).

            https://whfb.lexicanum.com/wiki/Chaos_Marauder

P.S. Looking at the wiki, it’s interesting to see that while their current depiction obviously prioritizes “verisimilitude” (“As befit their barbaric reputation, these axe-wielding barbarians often charge into the fray within great howling mobs without care for armor or tactics. They have little fear of dying in combat, for to fight and die under the gaze of their gods is considered the greatest death one can achieve in their lifetime.”), it was a bit more varied earlier on. In the 2nd edition, they wore chainmail and carried shields by default, and could use crossbows as ranged weapons! (Which is admittedly worse than their current, bare-chested-by-default version in terms of logistics.) In the 3rd edition (and seemingly the 4th and 5th too?), Marauders were basically non-supernatural Chaos Warriors, with heavy armour and the same weapon options (either a shield, or two weapons, or a halberd, or a greatsword/axe/hammer), while their current role was filled by Thugs – who still had the default “Light Armour” and had access to pistols (in addition to bows and throwing axes, of course). The 6th edition (which was in 2002) is apparently what settled on their current role, where they are bare-chested by default, cannot use any ranged weapons, and their “upgrade” options are light armour and/or either shields, or two-handed flails, or greatswords/axes/hammers.

          4. @ajay (also @mindstalko here)

            That solves an internal coherence problem by creating an external analysability problem. The Chaos Warriors join the undead of Nagash and the global pop cap of the elves in this class.

            Loosely following an idea our host laid out in the Rings of Power series, there is something of a hierarchy here:
            – Gladiator, 300, and innumerable other works erase peasants that should obviously be there. There is an unambiguous “right answer”; the clear and shallow mistake can be straightforwardly fixed.
            – Elves living in a forest have the right fundamentals for a solution to exist; what we know and have already seen do not uniquely specify which one. On the one hand, if the author doesn’t realize there is a question here, that is worse than having answered it incorrectly; on the other hand, what is required is more a clarification than a fix.
            – Folks living in a desert/tundra/cave are, under standard assumptions, missing some fundamentals required for any solution to exist. The hole in the world is deep enough to reach the elephants. It could in principle still be resolved, but it would require some change to the nature of the setting, and finesse to not demolish other things already built on top while doing so. In the process, it would tend to obviate more superficial “mistakes”. If the dwarves do turn mined coal into food, if the infantry moving 50 miles a day do ride trucks, of course it is pointless to ask about peasants and food (as opposed to fuel) endurance.
            – Nagash, etc.: the author throws the puzzle out the window, says that there isn’t any solution to be sought, please ignore this question. Fair enough, if this really is irrelevant to enjoyment of the work — which in this case it is, unlike Rings of Power. But jumping to this step does not solve a problem, it admits defeat in face of the problem, in the process removing the constraints which would otherwise allow inferences to be made beyond what we get to directly see.

          5. I suppose I’m a tad late to the party but I didn’t think my dig at Bretonnia was going to generate such a long discussion.

            On the subject of Warhammer, Chaos, etc. Discussions of this nature are exactly why I’d love for the Pedant to grace us with his usual in-depth research and analysis.*

Moreover, as with most things, I think it’s important to consider the source material and especially those who created it. Warhammer Fantasy was written, at least in part, by people who had a background in history, and this trend was continued in the later editions and the novels. William King’s Gotrek and Felix short stories are a prime example of this.

            As such, things like sustaining a population to generate the amount of troops needed in such a setting were taken into account.

Now granted, the tabletop tactical gaming side of things might not display that very well, but then in a game where you’re averaging about 1,500 points worth of miniatures, that’s not really its purpose.

            However, a deep dive into the lore, canon, background, worldbuilding, whatever you wish to call it, can definitely provide some insight into the logistical and infrastructural aspects of the world. From my understanding, which is cobbled together from different lore videos (ones that bothered to cite their sources) and the setting’s novels, we can glean some relevant information.

And of course, the Warhammer world is a massive place. The Empire of Man alone is apparently the size of real-life Europe and, as such, has the appropriate population and number of peasants needed to sustain some truly massive armies when it needs to.

The history of every faction often reflects a great deal of uplifting and development, usually from an outside faction. The Elves “aided” Bretonnia and the Dwarves gave the Empire a technological leg up. The world itself was created by the Old Ones, and they exerted a great deal of influence on the world before they vanished.

The Lizardmen, while a shadow of their former power, are still a force to be reckoned with. They benefit from living in a temperate and verdant environment, where they’ve plenty to subsist on and dedicated labor castes for things like farming. These castes are spawned from magical growing pools whenever their Slann needs more of a certain variant of Lizardperson.

            There are allusions to both High Elves and Dark Elves having dedicated agricultural areas and some form of labor force/magical means to sustain their caloric intake. Even the Dark Elves still need to eat and have manual laborers.

Likewise the Dwarves figured out ways to cultivate crops in their mountain holdfasts and had a steady stream of trade at their high-water mark.

            As for the Skaven, they can subsist on nearly anything. Much like the meme, the answer to that is basically, “Yes.”

            Bretonnia of course has no shortage of peasants to farm and labor for their lords. But apparently, much like the French after disasters like Agincourt or Pavia, they can still replenish their pool of knights without too much trouble.

Chaos is a faction that I don’t think should be evaluated through our conventional and modern understanding of logistics. Though the bulk of their forces come from nomadic tribes and Norscan warriors, as was stated earlier, by the time someone reaches the status of a Chaos Warrior, they are often more daemon than human.

Their arms, steeds and armor can be gifted by their god(s), found on a quest, or may simply be a daemon in sword, steed or armor form. In addition, they can get a great deal of their harness from the forges of the Chaos Dwarves, a faction which also has a dedicated slave population, composed primarily of goblins, to do the bulk of its manual labor and agricultural work.

C.L. Werner’s Blood for the Blood God does provide some insight into how a tribe living below the Wastes might sustain itself, and in context, even with the tribe’s empire long gone, it makes a certain amount of sense.

            So then, especially with “blessings” of Chaos magic, it makes sense that Chaos can field large forces and sustain them as well.

The undead of course benefit from being, well, magical, and from not having to worry about getting sick or tired, or needing to sleep and eat. Though some Nehekharan cities still farm and have living members of the population.

            Cheers!

            – Finn

            *With the caveat that whenever you tackle anything related to Warhammer Fantasy (40K fans are even worse), you can expect the “Um Ackshually” tribe to come crawling out of the woodwork in droves to tell you how wrong and stupid you are.

      3. “Even if we don’t have an extensive paleolithic with expansion and extinction, at the dawn of agrarianism, one species (let’s say, the elves) will beat the rest in the race to the grain. Then they expand and bulldoze everyone else into extinction within the feasible range of the agrarian subsistence system.”

        I’d say we have extremely limited actual evidence that this is the way it would play out, considering that we have a sample size of zero to check this against in the real world.

I suppose you could see the expansion of neolithic farmers from the Levant into Europe as a potential model, but even then that was *way* more complicated than the broad overview suggests (and is then promptly thrown for a loop with the Indo-European migrations). Similarly there’s considerable fluidity between hunter-gatherer and agriculturalist populations in Mesoamerica and the United States.

You could say that it’s likely that the disparities between different species would shake out to a definitive advantage for one over the others (it certainly would be more likely than equilibrium). But even if there wasn’t equilibrium, would the disparity be enough to cancel out other potential factors? Even if it was, over what timescale?

        Hell, we can’t even agree whether humans outcompeted neanderthals, or whether we simply moved in after environmental factors stressed a formerly healthy population into terminal decline (or a combination).

        To be fair, the sheer quantity of different species makes it significantly improbable, but improbable things happen a fair bit in the real world.

        1. “One usual problem of fantasy: far too many sentient species coexisting ”

          I’d say fantasy is very broad and any talk of “usual problems” is suspect. There are lots of human-only fantasies. Or multi-species ones where competitive problems don’t arise.

          Tolkien: well he’s weak on non-human food production, but that aside, elves and dwarves both have low birth rates — elves probably designed to level off to a fixed population, rather than growing exponentially. And looking at the main texts, one might guess elves prefer “managed forest” sustenance and don’t want to compete with humans for farmland. Likewise, however dwarves eat, they prefer living inside mountains, which again gives them a niche separate from most humans.

          “Elves seem not to have any peasants.”

          Possibly justifiable if they’re living in a magic-enhanced orchard.

          These elf/dwarf comments can apply to a lot of other fantasies. I note dwarves tend to not have visible peasants either: elves all seem to be nobles and/or artisans, dwarves all seem to be smiths… plus nearly all the males being warriors, which might solve some inequality problems.

Glorantha: lots of species, lots of gods taking an interest, lots of niches, and also the world is only 1500 years out from the invention of Time, so even if there’s a stable monoculture in the future, they can avoid having gotten there yet.

1. Tolkien has pretty detailed descriptions of non-human food production, especially if the drafts are counted as well. The Elves were mostly hunters (and presumably gatherers), but in many cases they also had orchards, fields and herds. The Dwarves were initially hunter-gatherers as well, and later developed some primitive agriculture, but they mostly preferred to buy food in exchange for their crafted goods, skills and labor, trading with the Elves, Men and Hobbits. And the agricultural economy of the Hobbits is also described in some detail.

          2. “especially if the drafts are counted as well”

            I didn’t want to get into the weeds, thus my saying “main texts”. Yeah, there’s one old poem with a brief mention of Nargothrond orchards, and some obscure linguistic notes about elf occupation, or notes about dwarves pulling plows until they could trade with humans. But if you read Hobbit/LotR/Silmarillion, you don’t get any of that. Various mentions of elves hunting, yeah, but enough to feed armies? No mention of what dwarves do when they can’t trade (as Thorin described); how did Balin’s colony feed itself for 5 years? (I reject trade; no obvious neighbors who’d trade with them.) No mention of farms or even orchards surrounding elven settlements, unlike Minas Tirith or Laketown, or of elves being peasant farmers.

            I did say “non-human” so that’s on me, but I group hobbits in with humans on this issue. For just about every human or hobbit population in Tolkien we _can_ say, or make a good guess, how they’re getting food. Shire has its farms, Smeagol’s people were fishers, most human populations have farms or fishing described. Not sure if the Lossoth are more like Eskimos or reindeer herders, but either would be plausible. Wainriders probably livestock nomads, or at least it’d fit.

But for elves, dwarves, and orcs we don’t have that, and Rivendell doesn’t _feel_ like a place with a bunch of elven peasants supporting Elrond as the lord of the manor.

            Plus, for most of the First Age they all somehow got by without more than starlight in Middle-earth!

          3. “The Dwarves were initially hunter-gatherers as well, and later developed some primitive agriculture, but they mostly preferred to buy food in exchange for their crafted goods, skills and labor, trading with the Elves, Men and Hobbits. ”

            Yes, I remember being struck by this even as a child reading The Hobbit; there’s a large human town, Esgaroth, just downriver from the Lonely Mountain, which is suffering severely, not just because of the dragon but because of the loss of their profitable trading relationship with the dwarves.
            Similarly Moria is upriver from the elf kingdom of Eregion/Hollin and it’s made very explicit as the Fellowship arrive at the gate that the two nations were on good trading terms; there are holly bushes planted near the gate, and the writing on the gate says “Celebrimbor of Hollin wrote these signs”.

            The question of where the dwarves got their food is very clearly answered!

          4. ” the loss of their profitable trading relationship with the dwarves”

            More explicitly, Thorin had already boasted about the dwarves not having had to grow food, buying it all from Dale or Esgaroth, presumably. Conversely, Lake-town _does_ have farms and such on the shore, though only mentioned after Smaug’s attack.

            “where the dwarves got their food is very clearly answered”

            Except that Eregion was destroyed around 1600-1700 Second Age (which ended around 3500 SA), and Moria kept going until 2000 Third Age. So that’s nearly 4000 years of Moria’s history _without_ friendly Noldor in Eregion (and how did _they_ make food?)

            Lothlorien is also nearby, but was less friendly to dwarves, at least with Sindar from Doriath (like Celeborn) and their long grudges in charge.

            Humans in the Anduin valley would be more likely long-term trading partners, though the rise of Dol Guldur should have put a crimp in that. And in any event, Moria was a major city of the dwarves; the Lonely Mountain was kind of rump by comparison.

          5. I think there were a lot of other trading partners for Khazad-dum, Eregion definitely was not the only one. Considering that Khazad-dum existed many centuries before the founding of Eregion, it was not the earliest one either, and probably not even the most important one.

            Before Eregion was founded, the Dwarves of Khazad-dum traded with the Northmen (distant ancestors of the Rohirrim), who inhabited the Vales of Anduin (and maybe other areas as well). The Northmen (and maybe other Men) were herders and agriculturalists, and the chief providers of food to the Dwarves. These early Northmen were scattered soon after the War of the Elves and Sauron, but some of them survived and could continue to trade with Khazad-dum afterwards.

            Since the early Third Age (at least) there were Woodmen (presumably the descendants of the early Northmen mentioned above), inhabiting the Vales of Anduin and the eaves of Mirkwood. Mannish populations in the Vales of Anduin continued to exist throughout the Third Age, even by the end of it there were “few Men”, who dwelt on the shores of Anduin between the Gladden and the Rauros. Men of Dunland also weren’t hostile to the Dwarves (Thrain and Thorin lived there for some time), so some trade could be possible there.

            I suppose Lorien might have some trade with the Dwarves of Khazad-dum. They weren’t completely hostile to each other, and the elves surely needed some metal weapons and armor. Rivendell was another possible trading partner, considering that Elrond was generally quite friendly to the Dwarves.

            Throughout the Second Age the Gardens of the Ent-wives existed in the area later known as Brown Lands. It is said that men learned agriculture from the Ent-wives, so I suppose there was a mannish population there as well. Khazad-dum could easily trade with these Men using the Anduin as the trade route. The destruction of these Gardens by Sauron might be one of the reasons why the Dwarves of Khazad-dum joined the Last Alliance.

The Numenoreans started colonizing Eriador and Enedhwaith in the early Second Age. We do not know when exactly Tharbad was built, but it seems like a good trading post between Numenor and Khazad-dum. We know that Tar-Telemmaitë loved silver and mithril, so I suppose he would be willing to pay a good price for it. I doubt it would be reasonable to transport food for such a long distance, but the Dwarves could offer skilled labor (as road-builders, bridge-builders and stone-masons) to the Numenoreans, and of course these laborers would be fed. Later in the Second and Third Age Khazad-dum would be able not only to continue trade with Arnor (via Tharbad), but also to trade with Gondor (down the Anduin). We know that the crown of Gondor and the helmets of the Guards of the Citadel were made out of mithril, so there are at least some indications of that trade.

Lastly, the Halflings settled in the Vales of Anduin (at least) in the early Third Age. The Harfoots lived in the foothills of the Misty mountains and “had much to do with Dwarves”, presumably selling their agricultural products to them. The Stoors, who crossed the Redhorn Pass (so literally going above Khazad-dum) around 1150 T.A. and settled between Tharbad and Dunland, were another group of Hobbits that could provide Khazad-dum with food.

            Considering that we have a pretty vague understanding of Middle-earth history, with entire centuries described by a couple of sentences, I suppose there might be other groups that were completely unmentioned in our sources. Generally speaking, Khazad-dum was pretty well-situated and had easy access to food both from Enedhwaith and Eriador and from the Vales of Anduin.

            ***

Balin’s colony was rather small, as far as I understand, so they could survive by hunting and gathering (and maybe some herding and agriculture as well). However, I don’t think that trade was impossible – as I said, there were “few Men”, who dwelt on the shores of Anduin between the Gladden and the Rauros. There also were the settlements of the Stoors in the Gladden Fields (according to one version, at least), and boats of some Northern Men (likely Woodmen or Beornings) reached Osgiliath, so trading with Balin’s colony wouldn’t be very hard for them. Of course it likely wasn’t regular trade, but some occasional exchanges were possible. Balin exchanged messages with Dain, so his colony wasn’t entirely isolated from the outside world.

            ***

            As for the Elves, by the late Third Age they were presumably mostly hunter-gatherers. You can easily support an army by hunting and gathering if your army is a militia that consists of hunters (which was likely the case for the Elves of Mirkwood, and probably with the Elves of Lorien as well). Although we have a mention of the orchard of Galadriel (“In this box there is earth from my orchard”), so presumably the Elves of Lorien had orchards as well, and considering the magical effects of this earth, Elven orchards might be extremely productive.

            Rivendell is probably the most mysterious in that regard (since we know both a lot and very little about it). Surely, Rivendell isn’t a manor of Elrond. Elrond was the chief of the elves of Eriador (and the Northern Dunedain/Rangers as well, to some extent), and Rivendell was his residence. The Elves and the Rangers wandered across Eriador, surviving by hunting and gathering, or maybe had small homesteads hidden here and there (like the house of Arador “in woods near the Hoarwell north of the Trollshaws”), but came to Rivendell (“the Last Homely House”) semi-regularly.

            So the population of the valley itself probably was rather small, and could easily survive by hunting and gathering. However, they likely had some grain-fields (Arwen knew how to make lembas, and although there was no true lembas-grain in Rivendell, some less magical analogue seems likely – Gildor’s company had “bread, surpassing the savour of a fair white loaf to one who is starving”), orchards (Gildor’s company had fruits, particularly apples) and bee yards (miruvor was made out of honey).

            It is quite likely they had flocks of sheep as well – Elrond provided the Fellowship with “thick warm clothes”, which almost certainly were woolen (“woolen hose” are mentioned later). They also had horses (like Asfaloth, the horse of Glorfindel) and ponies (like the one Elrond lent to Gandalf in The Hobbit), and some other livestock also seems likely. The paths in the empty lands south of the valley were well-known to the people of Rivendell, so maybe these lands served as their (summer?) pastures and/or hunting grounds.

There were Elvish smiths in Rivendell (who re-forged Anduril), alongside other craftsmen (such as cloth-makers and boot-makers). So Rivendell was not just a house, but a self-sufficient farm as well, and Elrond was probably less similar to a late Medieval lord of the manor, surrounded by Elven serfs, and more similar to an early Medieval hersir – a wealthy farmer who owned land and a longhouse and had the status of a leader of his community.

            And for the First Age (and likely the Second Age as well) we have the description of Elven economy in the drafts – Sindar who lived on the plains east of Doriath had sheep and cattle and grain-fields, Nargothrond had orchards, Gondolin was definitely self-sufficient (and had lembas) and so on. Considering that Eregion was settled by Noldor from Gondolin and/or Nargothrond, it clearly had its food production as well.

            ***

            Mordor had slaves working on the fields of Nurnen and tribute from its tributaries to feed its Orcish armies. Isengard also had slaves working the fields around the city, and it imported food from the Shire (and maybe from Dunland as well). The Orcs of the Misty Mountains also had some slaves, but likely survived mostly on hunting and gathering (as well as fishing and occasional raiding). The only group which is really mysterious to me are the Orcs of Angband from the First Age. I suppose they also were mostly hunter-gatherers, inhabiting the vast cold lands of Dor Daedeloth. Orcs are sometimes described as “orc-hunters” or “Morgoth’s hunters”, so hunting definitely wasn’t unfamiliar to them.

        2. Sample size of several.
          – The agricultural Austronesians overran the Aborigines (Australo-Melanesians) everywhere except Australia (wait, what?) and the uplands of PNG.
          – The agricultural Bantu overran the Khoisan everywhere except straightforward deserts and the southern tip of the continent (just slightly outside the growing range of their crops).
          – As you mention, European farming; additionally, the IE expansion shows that even things other than “agriculture, yes/no” can drive these upheavals. (Horses?)

          Of course, in reality most of these are somewhat questionable on the “extinction” front, since the invaders and the locals could — and being humans, did — interbreed. (Though watch for divergence between somatic chromosomes, Y chromosomes, and mtDNA.) However, in fantasy universes we usually have separate species that either can’t produce offspring at all, or sometimes they relate as horses and donkeys, producing a distinct mule that is infertile.

Speaking of Neanderthals and Denisovans: we know that human populations picked up a little genetic material from them. (Seriously, some people… I suppose it does beat sheep, especially since sheep weren’t available back then.) So the “environmental stressors” look an awful lot like “humans”.

          On small geographic scales, if there isn’t an equilibrium, then the population ratios shift in favor of the previous round’s winner. The process is unstable, it actively snowballs away. To keep a long-term viable dwarf population in the face of the flood of elvish farmers, you either need a large area totally unsuitable for agriculture (“nobody wants to be an eskimo, not even the current eskimos; the farmers don’t bother turfing out and replacing them merely because the RoV (return on violence) of trying to take their neighbor’s farmland is better”) or the ultimate Spiteful Dwarf Move of laying a rune magic curse on their land, one which is clearly harmful to the dwarves, but is even more harmful to the elves (dwarves have higher Constitution, right?).
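To make the snowballing concrete, here is a minimal sketch (Python, with numbers invented purely for illustration) of two species that differ only in growth rate competing for one shared farmland niche; the slower grower doesn’t stabilize at a smaller share, it declines toward zero once the land fills up:

```python
# Two species, one shared farmland niche: identical death rates, slightly
# different growth rates, one common land cap K. All numbers are invented.

def step(elves, dwarves, r_e=0.05, r_d=0.04, death=0.02, K=1_000_000):
    crowding = 1 - (elves + dwarves) / K   # shared niche, shared cap
    elves *= 1 + r_e * crowding - death
    dwarves *= 1 + r_d * crowding - death
    return max(elves, 0.0), max(dwarves, 0.0)

elves, dwarves = 10_000.0, 10_000.0
for generation in range(5000):
    elves, dwarves = step(elves, dwarves)

# Once the total population nears the cap, the dwarves' growth no longer
# covers their deaths, and the decline compounds generation after
# generation: competitive exclusion, not a stable minority share.
print(f"elves: {elves:,.0f}, dwarves: {dwarves:.2g}")
```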

And why exactly couldn’t the Dwarves be miners and smiths, producing tools and weapons for the Elven farmers in exchange for food? Sure, eventually the Elves might conquer the Dwarves (or vice versa), but exterminating the conquered population might be a stupid decision. And the process of assimilation might take thousands of years or even be completely impossible due to biological or cultural reasons.

          2. “Mixing in different species makes that situation *more* complicated, and makes a difficult thing to predict with any certainty *even harder*.”

            I dunno. If they’re actually different species, then they can’t mix genetically the way human populations did. In the long run, we do expect one species per niche: competitors will either go extinct or find their own niches to monopolize. Under the assumptions of “elves invent farming first by a good margin, and have unlimited population growth, and no qualms about sweeping others before them”, we would expect elves to fill all the farming space.

Of course, that’s a lot of assumptions. Fantasy elves often worry more about stagnant or declining populations than exponential growth. Silmarillion shenanigans aside, Tolkien elves are Good People and unlikely to murder you for your land (or to have to, because again, declining birth rates.) Even if elves grow, they may do so slowly enough that other populations learn agriculture from them and then grow faster until the unfarmed land runs out. Elves may prefer forest permaculture to spending immortality behind a plough. (And depending on forests might give them a disadvantage in warfare, without the right magic — burned forest grows back slower than burned cereal fields.)

            My vague idea of stereotypical D&D-ish fantasy is more of elves living in forests and high-magic areas, humans farming the plains, dwarves maybe living among humans as artisans plus sustaining their mountain communities somehow (goats and potato terraces?), orcs leaving their caves for nocturnal raids, drow living on underground farming sustained by Underdark magic fields… Not exactly solid worldbuilding but at least pointing toward niches.

          3. “Of course, in reality most of these are somewhat questionable on the “extinction” front, since the invaders and the locals could — and being humans, did — interbreed.”

            That’s kinda what I was getting at. Not that the agriculturalists didn’t handily win out, but that when you look past the overall trend you see considerable ebb and flow from different populations over time, and for a number of different reasons. Presumably for us, because different cultural packages suited the environment better at different points in history.

Take the neolithic farmers. Although the general trend was for replacement of Western Hunter Gatherers (WHGs), there were both prolonged periods where the populations survived alongside each other (enough that most European peoples of the time were some proportion of admixed WHG and EEF), and time periods where hunter-gatherers experienced significant resurgences in certain areas.

Similarly, although the overarching simplistic narrative of the Indo-European migrations was that of large-scale genetic replacement (especially of male lineages), when you look a little closer it gets a lot more complicated. For a basic example, the initial waves of Indo-European migrations reached all the way to Iberia, but skipped the British Isles. Then, the Bell Beaker phenomenon developed in the Iberian mixed EEF/WSH (Western Steppe Herders) peoples, which then back-migrated out of Iberia into Western Europe and up into Britain (part-migration anyway).

Mesoamerica is another fascinating case, with very well entrenched and developed agrarian societies being successively overtaken by migrating hunter-gatherer ‘chichimec’ groups from the Bajío (the Nahuas, of Aztec fame, were the last in a long line of these groups).

I know less about the development of agriculture in other places, but I note that the Khoisan, Aboriginal Australians, and Highland Papuans all still *exist*.

            My point was really that the whole thing is extremely complicated and contingent on *way* more things than just ‘whoever discovered grain first’. If that was the case we’d all be Sumerians by now.

            Mixing in different species makes that situation *more* complicated, and makes a difficult thing to predict with any certainty *even harder*.

4. “The agricultural Bantu overran the Khoisan everywhere except straightforward deserts and the southern tip of the continent (just slightly outside the growing range of their crops).”

Overran the Pygmies, for the most part, as well, outside of the actual rain forest (where agriculture tends to be less productive, particularly before they got access to bananas from Asia). Apparently the ancestors of the Pygmy peoples used to live across a much larger area of central Africa than they do today.

Then again, different sentient species might have much stronger niche separation even than different human groups do (and it’s also not like one group overrunning the other *always* happens; you point out some counterexamples yourself).

4. C. S. Lewis in his Space Trilogy had three sentient species coexisting amicably on Mars, and actually devoted some thought to how they would interact, but his solution to the problem you suggest was that this was an unfallen world: they didn’t have original sin, so *of course* they would be able to peacefully and amicably coexist. (For similar reasons, his solution to the Malthusian problem was that at least one of the sentient species, and the one he clearly thought was “best” in some sense, was just practicing extremely strict Christian chastity.)

        If you don’t share his theological presuppositions, of course, those solutions are going to seem less plausible. That said, I don’t think it’s at all implausible that three sentient species could coexist, as long as there’s some kind of niche specialization. We often see different human groups coexisting in the same environment, and not having too much conflict except at the margins, if they can focus on different economic/occupational niches. There are fishing and herding cultures that coexist fairly amicably with neighboring farmers, for example.

  29. “Indeed, it was possible for this to happen long enough that the great global Malthusian Crisis simply…never happened.”

It seems to me that much of the point of this post is that the great global Malthusian Crisis is what people lived with through almost the entire history of agriculture. It is, after all, claiming that the death rate of children and the elderly was heavily dependent on the food supply, and that depended on the quantity and quality of land available for food production.

1. No, that’s not how it worked most of the time. Population growth mostly wasn’t constrained by food supply (there was usually more land that could be brought under cultivation), but food supply was always uncertain: what you’d get is a few years of steady population growth, then an epidemic/bad harvest that sets you back a couple of years. But population is pretty constantly growing (albeit extremely slowly by modern standards).

You sometimes get a local Malthusian crisis, and sometimes you get close (Europe seems to get pretty close to its limits in the 1300s, but then the Black Death hits), but you never get a point where global population is constrained by food supply.

      1. Ancient Greek colonization is usually described as avoiding a Malthusian crisis (Malthus usually isn’t named.) Population pushing the boundaries of Greek agriculture, and invading other lands.

        1. Yes, but that’s not the global malthusian crisis that Bret is talking about. There was never a point where “There’s not enough food so population became stagnant” in that sense. (What you get is an ebb-and-flow, and as mentioned, people moving elsewhere)

2. Let’s assume the food per person goes down as the population goes up, as seems likely in this kind of peasant society. If you have a society in which the food supply varies from year to year, but a lot of people die of hunger in the bad years, your population is near the limit allowed by the bad years. If that is not Malthusian, then no society could be.

        And if every society in the world is like that, then the world is Malthusian.
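A minimal sketch of this in Python, with numbers invented purely for illustration: the population grows in good years but gets knocked back to the bad-year food limit whenever a famine hits, so in the long run it sits pinned near that bad-year limit even though the good-year limit is much higher.

```python
import random

# Invented numbers: good years could feed 120k people, famine years only
# 100k, and population grows 1% per year when the food supply allows it.
random.seed(1)
GOOD_CAP, BAD_CAP, GROWTH = 120_000, 100_000, 0.01
pop = 50_000.0
for year in range(500):
    cap = BAD_CAP if random.random() < 0.1 else GOOD_CAP  # ~1 famine a decade
    pop = min(pop * (1 + GROWTH), cap)

# The long-run population hugs the *bad*-year limit, not the good-year one.
print(f"population after 500 years: {pop:,.0f} (bad-year limit {BAD_CAP:,})")
```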

        1. To add to the above: Even if *no one* died of hunger, but the population was limited by people deciding not to have children they could not afford to feed, that would still be Malthusian, so long as food per person went down when the population went up.

2. That doesn’t really happen, though: population increases usually just mean new land is taken under cultivation.

The point isn’t that these kinds of societies are population-food limited in that sense; rather, what happens is an ebb and flow: you get years of growth followed by years of contraction.

Malthus comes in just at the point where the agricultural revolution means that (in western Europe) these contractions become a lot less severe (though they still happen), which means population starts to grow much faster. That’s what he’s worried about, but it doesn’t really hold for previous eras.

The pattern over time is generally slow population growth, but still population *growth*. You get a few global setbacks: the time around the fall of Rome is pretty dire, and so is the Black Death. But you never reach a “cap” on population.

        3. The safety valve is emigration – to towns where deaths outnumber births, to foreign wars or just on the road. Young men (and to a lesser extent women) are pushed out – more in bad times. In that sense it resembles a lot of animal populations, where the young who cannot obtain a territory get shunted around until they either can or die. Baboons, possums and crocodiles all fit this pattern.

  30. “the basic pattern of pre-modern mortality changes very little from one society to another or from one pre-industrial era to another. It also varies very little from one class to another: the fantastically wealthy aristocracies of pre-modern and early-modern societies could buy many fantastical things, but nothing they could spend money on would do much to preserve their children from shockingly high child mortality rates”

    One obvious thing they could spend money on was FOOD.
    What did that “33% of children die before age 1” consist of?
    Did it mean “33% of babies died every year, good year and famine year, all families, rich families and poor families”?
    Or did it mean “In a good year, 25% of babies died, but in a famine year 75% did, and averaged over the few famine years and majority of good years, the average comes out as 33%”?
    Also, did the rich show seasonal mortality? And did the rich die of famines?
If lean seasons caused death more by decreased resistance to infections than by complete starvation, did it mean that during lean seasons and famines, although the rich continued to eat well, they were exposed to more infections from their poor neighbours, who circulated more infections than in good times, and, despite being as well fed and resistant to infection as always, were prone to dying of infections contracted from their starving neighbours?
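For what it’s worth, the second reading is easy to sanity-check with some back-of-the-envelope arithmetic (a sketch in Python; the 25%/75% rates here are invented for illustration, not taken from the post):

```python
# If good years saw 25% infant mortality and famine years 75%, what share
# of famine years would produce an observed long-run average of 33%?
good, famine, target = 0.25, 0.75, 0.33
f = (target - good) / (famine - good)  # solves good*(1-f) + famine*f = target
print(f"famine-year share needed: {f:.0%}")  # -> 16%, roughly one year in six
```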

    1. “One obvious thing they could spend money on was FOOD.”

      Well, yes. But the children are mostly dying of infectious diseases, not starvation as such. So if the children of the rich are better fed they will be a little less likely to die of disease.

      But if an immunologically naïve person catches smallpox, or malaria, or TB, or even measles, they are not guaranteed to survive just because they are not hungry.

2. The rich spent more time in urban areas, where infectious diseases were rife. They also often married younger (so were more prone to die in childbirth), and often had less healthy diets (fewer greens, more meat).

  31. Charon, cloaked in darkness,
    Before you row death’s ship
    Through the reeds to Hades,
    Steady the ladder.

    Reach out a hand for the son of Cinyras.
    Help him aboard.
    He is too young to walk well in sandals
    And frightened to touch the sand with his bare feet.

  32. “We’ll come back around to this, but, as Crone observes repeatedly, pre-modern societies were markedly less individualistic than most modern societies, understanding individuals primarily as parts of a household, a family, a community rather than as individuals.”

    As an aside: is there any good writing that explores what this means? I don’t doubt it in the slightest, but I feel like fish that someone is trying to explain land to. What does it mean to say that societies are less individualistic? What does it mean for how they think about themselves and others?

1. So, the first in the series of posts spurred by Ridley Scott’s Gladiator II had actually been one on Roman names, and it contained the following:

      https://acoup.blog/2024/11/22/fireside-friday-november-22-2024-roman-naming-conventions/

      This naming convention actually indicates something quite important to know about the Romans which is that this is not a strongly individualistic culture, at least by modern standards. Romans are defined, in their names, mostly by being one more iteration of a family tradition and even the most individualistic part of their name – the praenomen – is a cookie-cutter nod to family traditions. The ideal Roman family was, in effect, one Appius Claudius after the next, each one quite a lot like his father, on and on forever.

      Maybe some of the earlier posts have shed more light on this as well – I still cannot claim to have read every single one of them.

      Having said that, I find it interesting that in this post, there is now a seemingly new claim “the Romans are, as ancient cultures go, unusually individualistic in their outlook“, with relatively little elaboration. I certainly wonder who had actually compared these ancient cultures on any sort of an individualism-collectivism scale. So far, I cannot readily locate actual scholarship directly addressing this subject.

33. Hi Bret, I think you’ve misplaced the decimal point in your figures for 21st-century maternal mortality. Going by the Wikipedia table you link, figures of 1-25 per 100,000 should be 0.001% to 0.025%. Hope you don’t mind the peripheral correction; thank you for an excellent piece!

34. Girls being pregnant and giving birth and nursing before they are fully mature has a bad effect on the girl’s health, and on her subsequent health if she lives through it. It also has a negative effect on actually bringing a birth to term, which has a negative effect on her subsequent health and ability to conceive. It also has a negative effect on the infant’s ability to survive childbirth and, if it survives birth, to live to the first year. These effects compound over the girl’s, now woman’s, reproductive life. This is a big reason so many women died early from childbirth.

Factor in a famine while pregnant and/or nursing, or nutrient deprivation for other reasons, and we have more reasons why so many women perished from being pregnant.

    Factor in that the girl-woman had to continue doing difficult and hard manual labor during all that — again, why so many women perished from being pregnant, giving birth, nursing an infant.

Historians who work in the field of African-Atlantic slavery see this starkly. It was only in North America that this kidnapped labor force was able to replace itself through natural reproduction, and even to increase. Everywhere else the work (sugar!), along with malnutrition, tended to kill the average laborer within 7-10 years. The few women who arrived to the sugar fields were so malnourished they never got pregnant at all, or did not carry to term, or died. On the plantations of the most cruel and greedy and selfish and stupid here in North America this was seen as well, with the work force denied minimal nourishment, clothing and protection from the elements.

    ~~~~~~~~

When speaking of rice and other non-meat sources being such easy replacements for food in our challenged future — already drought has shrunk Japan’s and other parts of Asia’s rice crops. Nor is rice the only global staple suffering this kind of climate-caused shrinkage.

    1. “Girls being pregnant and giving birth and nursing before they are fully mature, has a bad effect on the girl’s health and her subsequent health if she lives through it.”
      https://pmc.ncbi.nlm.nih.gov/articles/PMC11698334/
      Please note Figure 2.
      Yes, being a young mother is a risk factor… for ages under 15. 15 is already in bracket 15-19, which is the safest age to give birth.
      And while the maternal mortality for mothers up to 14 – the whole age bracket, including mothers 13 and under – IS higher than the mortality for ages 15-39, it is lower than for all ages over 40. It seems that a girl is better off giving birth at 14 than at 40.

1. It should be noted that one should be careful about taking modern statistics as read for historical societies: one of the things malnutrition often does is delay puberty (and reduce fertility in general).

      2. That’s with modern nutrition and health. The age of menarche in pre-industrial times was later (around 16), reflecting this plus hard labour. The ‘safest age’ might then be a bit later too.

        1. Or maybe pre-industrial girls became fertile when their bodies were ready for it, and modern lifestyles mess up the timing. Becoming fertile at an age when pregnancy is likely to fail and ruin future fertility seems like a pretty serious flaw – something I’d expect evolution to fix unless it were a recent problem.

          1. Evolution is not an optimizing process, and in particular isn’t built around optimizing things that we might value. There is no shortage of serious flaws all over the natural world that you would expect a smart and benevolent designer to have fixed (this is actually some of the best evidence for why evolution is true).

          2. > Evolution is not an optimizing process, and in particular isn’t built around optimizing things that we might value.

            The thing we’re talking about is the #1 thing evolution values. If a trait destroys your fertility, that trait won’t get passed on to the next generation.

          3. Evolution optimizes for the transfer of genetic material and it may not be obvious what kind of scheme would be optimal. Or just good enough for now.

Last weekend I was reminded of the horrifyingly stupid way hyenas give birth, through an elongated clitoris. Hyenas probably have a higher mortality during delivery than humans, and I doubt giving birth older would give much advantage. Evolution has painted itself into a corner with hyenas; switching to a different method of birth would be a major change that evolution is unlikely to be able to make. But the overall hyena package seems to be successful enough in its ecological niche that one fatal major flaw doesn’t kill the whole idea.

What if humans normally gave birth for the first time around the age of ten, with a 50% chance of the mother’s death? This would be obviously bad for the genes of the mother’s line. But in this kind of situation the man would likely have 3 or 4 spouses during his life, and mixing his genes with so many different females would give him a great genetic advantage, maybe enough to offset the evolutionary disadvantage suffered by the women. But for this scheme to work humans would probably need to be born in a 1:2 male:female ratio, and it would be difficult to switch to that, since the XY chromosome system seems to naturally give an even split.

            There is also the human culture which can provide a very powerful counterforce to mess with evolution’s plans. It’s hard to know what would be the optimal reproductive system for humans when we can invent all sorts of weird alternatives and be able to enforce them. Very few of these alternatives would be stupid enough to die out naturally.

          4. “What if humans normally gave birth for the first time around age of ten, with a 50% change of mother’s death. This would be obviously bad for the genes of the mother’s line. But in this kind of situation the man would likely have 3 or 4 spouses during his life and mixing his genes with so many different females would give him a great genetic advantage, maybe enough to offset the evolutionary disadvantage suffered by the women.”

            I think the confusion here is that you aren’t considering that daughters also inherit 50% of their genes from their fathers, and ending up in a body that has a 50% chance of dying aged ten is not great from the genes’ point of view. Further, ending up in a body that has a less than 50% but still large chance of growing up without maternal care is also not great from the genes’ point of view.

2. I recall a college intro biology course (60 years ago, admittedly) where the professor said that it was more about winters without fresh food than nutrition overall. In northern Europe you spent several months of the year eating mostly “cornmeal mush”* and puberty tended to come around 16. Down around the Mediterranean it came around 12.5 and had been that way since ancient Roman times.

          * Note that corn doesn’t necessarily mean maize in Europe.

      3. 15-19 is a very wide bracket. Perhaps it’s grouped that way due to limits of data collection but those few years of puberty bring rapidly dropping risk while it increases much more slowly at the tail end. 14 and 40 are both terrible extremes. Instead of “15-19 is safest”, I’d say “19 is safest” or whatever number.

1. It takes a few years after menarche for the reproductive system to kick into full gear. Before then, the chance of getting pregnant is much lower and pregnancy is more likely to end badly. But by the late teens, even pre-modern, you are good to go. Which of course people were well aware of.

Even nobility tended to marry in their late teens. With young marriages, they often seem to have waited several years to consummate. The only clear exceptions I believe among English kings are Edward I (also young, and a loving marriage, so it may fall in the category of consensual but unwise) and King John (an utter creep).

          The reproductive system of course also deteriorates fast at the other end of the age spectrum. Fertility, miscarriage rates, maternal death rates all increase fairly rapidly over age 35.

          It is interesting that fertility in humans is skewed early relative to intelligence. A lot of higher level brain development is happening up to 25 or so. Something like half of prime fertility years for women is *before* their brains are done developing.

          1. ” is happening up to 25 or so.”

            AFAIK “your brain doesn’t mature until 25” is a myth. The study found brain changes up to 25, _and didn’t look at older ages_. Basically, your brain keeps changing, period.

            But wikipedia Adolescence#Improvements_in_cognitive_ability says many parts of the brain are mature by 15, with pruning of the pre-frontal cortex leveling off.

            “15-19 is a very wide bracket.”

            True, it’d be nice to see finer resolution.

            But wikipedia Teenage_pregnancy says the risk premium has basically vanished by age 16, _if_ you control for socioeconomic status and health care access. The risks of mid-late teen pregnancy in a developed country come more from being the sort of teen and family background who gets pregnant than from the sheer biology of it.

            How steep the biological risk curve is from 12 to 14 to 16, I dunno.

2. What mindstalko said. Actually, there **are** studies that show intense synaptic pruning happening after 40 years of age.

          3. Your brain keeps changing until the day you die. As does every other part of you.

            I gather fluid intelligence peaks in the mid-20s.

            Note the word “peaks”. It is all downhill after that.

          4. @Mindstalko,

“AFAIK ‘your brain doesn’t mature until 25’ is a myth. The study found brain changes up to 25, _and didn’t look at older ages_. Basically, your brain keeps changing, period.”

            As far as I remember from hearing various talks on related topics, personality doesn’t crystallize until around 30, and emotional intelligence doesn’t peak till about 50.

        2. @xellos,

Yea, on a general note, I always get annoyed by the “15-19” bracket on these sorts of surveys. 15 and 19 are very different, in all sorts of ways!

4. Ya, well, by age 40, having first been pregnant at 14-15, the woman has probably been pregnant multiple, multiple times, with more and less difficult and more and less successful outcomes, which makes recovery from each pregnancy and childbirth less and less successful.

Moreover, equating menarche with full physical (and emotional) maturity is incorrect. When a girl’s menstruation begins, she still has a lot of growing to do.

These pooh-poohs are all prompted by male vision, which unfortunately, as proven over, and over, and over, is insanely ignorant of what it means to be pregnant, give birth, and the rest — even of how reproduction takes place. Certainly ignorant of the incredible health risks that pregnancy and childbirth pose to the woman.

        1. The data for the analysis of age risk was for this century when women were mostly not pregnant at 14-15 and generally weren’t having multiple, multiple pregnancies.

2. “When speaking of rice and other non-meat sources being such easy replacements for food in our challenged future — already drought has shrunk Japan’s and other parts of Asia’s rice crops. Nor is rice the only global staple suffering this kind of climate-caused shrinkage.”

      Beware of the “If it bleeds, it leads” principle. Droughts tend to get reported on much more than record harvests do – especially if they are foreign (why would most readers want to read about some farmers they never met doing well?) While “all models are wrong but some are useful”, etc., etc., it’s probably still worth noting that a fairly recent combination of twelve of the most advanced crop models and five similarly advanced climate models found that even under the absolute highest-warming scenario (with its emission growth rate only ever accelerating, in contradiction to what we have seen even in recent years, it’s practically hypothetical), global rice yields would be more likely than not to increase slightly. (Though much less than the earlier generation of models assumed they would increase under the same conditions, to be fair.)

      https://ris.utwente.nl/ws/portalfiles/portal/268928774/s43016_021_00400_y.pdf

“The SSP585 ensemble estimates for soybean are revised downward from +15% (IQR: −8% to +36%) to −2% (IQR: −21% to +17%) and for rice from +23% (IQR: +1% to +33%) to +2% (IQR: −15% to +12%)…Ensemble median soybean and rice productivity peak midcentury and decline towards the end of the century at the global level. The soybean response exhibits late-century negative TCIE (year 2096) under SSP585; rice, on the other hand, shows early positive TCIE (year 2030, SSP585) but late-century declines are not projected to reach the level of negative TCIE at the global level (38% of GCM × GGCM combinations under RCP8.5 indicate negative TCIE by 2099; Supplementary Fig. 4). Rice is the only crop in this study that indicates positive TCIE in the tropics, which drives early net global gains before productivity is simulated to decline again by about 2060.”

      The biggest, most conclusive findings of the paper, though (the only ones it placed in the abstract) were to do with maize and wheat.

“Wheat results are more optimistic, while maize, soybean and rice results are decisively more pessimistic. For maize, the most important global crop in terms of total production and food security in many regions, the mean end-of-century (2069–2099) global productivity response is ∼10% (SSP126) and ∼20% (SSP585) lower than in GC5. This shifts the SSP585 estimate from +1% (interquartile range (IQR) of crop–climate model combinations: −10% to +8%) to −24% (IQR: −38% to −7%) and for SSP126 from +5 to −6%. For wheat, the second largest global crop in terms of production, the SSP585 ensemble estimate is shifted upwards from +10% (IQR: −1% to +15%) to +18% (IQR: −2% to +39%), and under SSP126 from +5% to +9%.”

      Again, models are not perfect, and the studies may also not be using them perfectly – but there is still a lot more to the science than what makes it into most headlines.

      1. [” … (why would most readers want to read about some farmers they never met doing well? …” ]

        Because we depend on those crops and harvests for feeding ourselves. The price of the rice I get at our local Japanese market was increasing already due to the droughts, BEFORE stoopid tariffs.

        Get real, get some idea of how globalization works, particularly for the food supply for everyone just about on the planet, and particularly here in the USA.

        1. Firstly, you seem to assume I’m in the US; I’m not.

Secondly, you misunderstand; my point was that mainstream news – particularly in the West – generally only seem to write about crops when they have bad stories to tell, or maybe when there’s something good in their own countries specifically. (Or perhaps when they have some speculative start-up snake oil like vertical farming to pump up.) Good news about conventional agriculture globally or in distant countries does not appear to interest them at all.

Your own comments seem to prove this perfectly. You say “already drought has shrunk…Nor is rice the only global staple suffering this kind of climate-caused shrinkage.” – meanwhile, just three weeks ago, the UN’s Food and Agriculture Organization posted the following:

          https://www.fao.org/worldfoodsituation/csdb/en/

          Global cereal production projected at all-time high, but uncertainties persist

          FAO’s latest forecast for global cereal production in 2025 has been lifted by almost 14 million tonnes (0.5 percent) in July compared to the previous month, and is now pegged at 2 925 million tonnes. The revised outlook is driven by improved prospects for wheat, maize and rice (in decreasing order of magnitude) and puts the forecast of the global output 2.3 percent above the previous year’s level, marking an all-time high.

          Global wheat production has been raised by 0.7 percent in July over the previous month’s level, now standing at 805.3 million tonnes in 2025, up 0.9 percent year on year. The monthly increase primarily reflects recent official data from India and Pakistan pointing to better-than-expected yields, with a record output forecast in the former country. Global coarse grain production is also revised marginally higher this month to 1 262 million tonnes, now standing 3.5 percent above the previous year’s level, and remaining the main driver behind global production growth this year.

          Most of the coarse grains quantity is comprised of maize and the improved prospects this month are driven by stronger maize yields in Brazil and an upgrade to the maize production outlook for India, where robust domestic demand for feed and industrial use is seen encouraging a larger area in 2025 compared to preliminary expectations. These positive revisions more than offset cuts to production forecasts in Ukraine where, in addition to the effects of the conflict, dry-weather conditions are weakening yield prospects, and in the European Union, owing to a minor revision to the area.

          (It keeps going on like that for several more paragraphs, but you get the point.)

          Now, when I tried searching for this in English, on the first page I could only find links from websites operating in…India, Ghana, Ukraine and Kazakhstan. All nations which play a significant role in global agriculture as either producers or consumers, of course, but certainly nothing someone like you would have heard of, or would have considered “reliable” – yet it is completely real, and the “gold standard” Western press does not seem to care.

          1. Japan is barely in the top 10 of rice-producing countries, responsible for only 1.5% of the global annual supply. (7.48 million tons out of 502 million.) Why should its struggles be treated as a bigger story than an all-time high in global output?

            https://www.statista.com/statistics/255945/top-countries-of-destination-for-us-rice-exports-2011/

            The other thing is interesting, but it has notable limitations. For instance, consider this quote:

            factoring in adaptation and income growth, they find that the yield losses fall to 7.8% in 2050 and 11.2% in 2098 under moderate emissions.

            That projection does not account for the beneficial effects of increased CO2 on plant growth at all. According to the source paper’s supplementary data, an attempt to account for it effectively halved the impact under that scenario – to a 5.9% decrease in 2098. (And rice specifically increases by 4.9% – the only crop to do so.)

            Here is an even more revealing quote from the article you linked:

            Dr Jyoti Singh, a climate-crop modeller at Columbia University’s Center for Climate Systems Research, tells Carbon Brief that the dataset assembled for the new study is its “most noteworthy strength”. Singh, who was not involved in the new work, adds that it “significantly contributes to empirical agricultural impact modelling”.

            However, she says, there is a “big limitation” of the study in that empirical models are based only on past data – they cannot account for the full range of potential futures. Therefore, the results from the new study cannot be compared directly to results from models that more explicitly represent the processes that influence crop growth, she says.

            This is in contrast to the paper I linked in an earlier comment, which used the most advanced crop models to date. To clarify how advanced:

            Twelve process-based global crop models participate in this study: ACEA, CROVER, CYGMA1p74, DSSAT-Pythia, EPIC-IIASA, ISAM, LandscapeDNDC, LPJmL, pDSSAT, PEPIC, PROMET and SIMPLACE-LINTUL5… The full ensemble, therefore, consists of roughly 240 future crop model simulations per crop plus one historical reference run for each crop and climate model and one historical reanalysis run per crop model.

            Whenever the two contradict each other, I think the paper with more detailed modelling should be given precedence. For now, at least, that more detailed modelling seems to produce substantially more positive results. (I.e. under the same scenario, they have literally diametrically opposite results for wheat – as I quoted above, that paper finds an 18% increase for wheat yields, while the one from the article you linked has an 18% decrease by the same year.)

            Lastly, I should mention that both papers calculate changes in yields, which is not equal to food supply, since they both seem to assume no changes in crop area. This makes sense (after all, decisions on what to do with land are sociopolitical and there is no way to say for certain which simulated parcel of land will get cleared by which year), but as I already mentioned above, it is believed there are still a lot of forests and grasslands which are likely to get ploughed if/when the yields on current cropland become insufficient.

          2. Whenever the two contradict each other, I think the paper with more detailed modelling should be given precedence

            I think I would side with @Foxessa more here. I’m not a ‘modeling’ person so I can’t judge the accuracy of the models, but the new paper which you cite runs against what seems like the consensus of many previous studies (that crop yields, especially for plants like wheat, are going to come down). I am going to go with the consensus, until enough new papers come out challenging it that it breaks down.

            Yes, higher carbon dioxide would be good for many/most plant species and yields *on its own*, but higher temperatures are very bad for them, and my general sense is that for most plants, on the whole, and in particular for wheat, the temperature effects are expected to outweigh the carbon effect. Though it has been quite some years since I looked at the actual studies.

          3. my general sense is that for most plants, on the whole, and in particular for wheat, the temperature effects are expected to outweigh the carbon effect.

            Well, the “new” paper (technically, it’s from 2021; the one Foxessa is referring to is from the other month, but also appears to ignore its findings entirely) still expects overall effects across crops to be negative – just not as negative as some of the others, in large part because it simulates more benefits for wheat in colder areas. (In contrast, the other paper seems to estimate more benefits for rice once it accounts for CO2, but to a much lesser extent than the 2021 paper does for wheat.)

            I’ll note that after doing a few searches, I actually found quite a few recent papers suggesting agricultural effects would be more limited than earlier projected. Now, they are usually smaller-scale and have their own limitations, but it is still worth keeping in mind that the research isn’t all “It’s worse than we thought”, as one might assume when looking at just the media headlines. In fact, there are too many to link them all here, considering the spam filter, but here is an interesting one which seems to give (one of) the reasons why the statistical paper is giving more negative results compared to the modelling one.

            https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2024EF005072

            In effect, it argues that the warming to date has often coincided with increases in ground-level ozone pollution – which is quite harmful to plants as well. However, the statistical models to date did not explicitly consider that impact, and so all of that damage effectively ended up attributed to warming. In the future, however, ozone pollution will not only not grow at the same rate as warming, but is actually expected to decrease under most scenarios. Again, accounting for that does not make the warming positive for crops (the ozone-specific effect is apparently in the low single-digit percentages, while the warming impact goes into double digits for the worst-affected crops), but it is something which deserves to be better known.

            P.S. It’s probably worth noting also that in theory, solar geoengineering (usually through spraying sulfates in the stratosphere) would enable the plants to benefit from greater carbon dioxide while not experiencing greater warming. Of course, it would also reduce the amount of sunlight they receive. I believe it’s accepted that the sunlight effect, being limited to a couple of percent, will not be devastating, but I’m not sure there is a consensus on whether (agricultural) plants would actually benefit relative to now, or simply not get hit worse than they would have been under the unmodified warming.

            More importantly, once these things start, they cannot stop until/unless carbon dioxide is reduced enough to undo the warming they have been offsetting. Mathematically, that will take centuries even if things go well. Realistically, it doesn’t appear very congruent with history to imagine this intervention would manage to function uninterrupted for so long (and the “snapback” from any interruption lasting longer than several years could have truly unprecedented impacts.)

        2. P.S. It’s probably worth noting also that in theory, solar geoengineering (usually through spraying sulfates in the stratosphere) would enable the plants to benefit from greater carbon dioxide while not experiencing greater warming.

          Yes, that’s exactly right. One of the reasons I like the idea of solar geoengineering measures, at least as a stopgap until we can actually get carbon dioxide levels down. You’re already starting to hear talk along these lines from scientists, at least in private, since it seems increasingly clear that it will be a long while before America, at least, starts getting serious about carbon reductions.

          A cool, high CO2 environment would really be the best of both worlds for a plant like wheat. I’m not sure to what extent reduced light would offset the effects of higher CO2 + cool/optimal temperature. For a leaf at the top of a wheat canopy, not at all: those leaves are already at near-saturating light but are very far away from saturating CO2. For leaves lower in the canopy, though, lower light might make a difference, but those contribute less to whole-plant productivity in any case. It wouldn’t be hard to come up with back-of-the-envelope estimates for how a wheat plant might do under solar geo-engineering scenarios, with the help of a few citations plus an hour or so in Excel; a rough sketch follows below.
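          To make that concrete, here is a minimal sketch of such a back-of-the-envelope estimate, in Python rather than Excel. It assumes a simple saturating (Michaelis–Menten-style) response to both light and CO2, and every parameter value is an illustrative guess, not a figure from any cited wheat dataset:

          ```python
          # Toy back-of-the-envelope model of leaf photosynthesis under solar
          # geoengineering. All parameter values are illustrative guesses,
          # not taken from any cited wheat dataset.

          def assimilation(light, co2, a_max=30.0, k_light=300.0, k_co2=400.0):
              """Net assimilation (arbitrary units) with a simple saturating
              response to light (umol/m2/s PAR) and CO2 (ppm), treated as
              independent of each other."""
              return a_max * (light / (light + k_light)) * (co2 / (co2 + k_co2))

          # A sunlit leaf at the top of the canopy today: near light saturation.
          baseline = assimilation(light=1800, co2=420)
          # The same leaf under geoengineering: ~2% less light, doubled CO2.
          geoengineered = assimilation(light=1800 * 0.98, co2=840)

          print(f"baseline:         {baseline:.1f}")
          print(f"shaded, high CO2: {geoengineered:.1f}")
          print(f"relative change:  {geoengineered / baseline - 1:+.1%}")
          ```

          Under those toy assumptions the ~2% light cut is nearly invisible for a light-saturated top-of-canopy leaf, while the CO2 gain dominates – consistent with the intuition above. Real crop models add temperature, water, nutrients and canopy structure, which is exactly why they are so much more complex.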

          1. “since it seems increasingly clear that it will be a long while before America, at least, starts getting serious about carbon reductions.”

            Given that the US dropped its carbon emissions by 21% between 2000 and 2023, while experiencing a population increase of about 17%, I’m kind of curious about what you would consider “getting serious about carbon reductions.”
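            In per-capita terms, those two figures combine as follows (a minimal calculation, using only the percentages above and nothing else):

            ```python
            # Per-capita implication of the figures above: total emissions
            # -21% and population +17%, between 2000 and 2023.
            total_factor = 1 - 0.21
            population_factor = 1 + 0.17

            per_capita_factor = total_factor / population_factor
            print(f"per-capita change: {per_capita_factor - 1:+.1%}")  # about -32%
            ```

            That is, emissions per American fell by nearly a third over the period – which is the generous reading; the replies below argue it is the totals that the climate actually responds to.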

          2. Given that the US dropped its carbon emissions by 21% between 2000 and 2023, while experiencing a population increase of about 17%, I’m kind of curious about what you would consider “getting serious about carbon reductions.”

            Considering how high the US is on the list of global warming contributors, I'd say that a much bigger reduction than 21% over 23 years would be needed to qualify as 'getting serious'.

            To be fair, the US isn't the worst by a long shot, nor is it even the worst among industrialized 'western' countries (Canada looks to be a bit worse and Australia a lot worse, and many petrostates and small island countries are *much* worse), but it's still very bad.

            https://en.wikipedia.org/wiki/List_of_countries_by_greenhouse_gas_emissions_per_capita

          3. Given that the US dropped its carbon emissions by 21% between 2000 and 2023, while experiencing a population increase of about 17%

            It might help to think about the issue in fiscal terms. That is, we all know that reducing the rate at which the deficit grows annually is better than letting that rate increase, but it still leaves one with a larger deficit than the year before – not to mention a larger total debt! Likewise, though it’s better to reduce emissions than to increase them, any ongoing emissions still add to the total, leaving the world with greater overall warming than before!

            In this metaphor, the warming which has already happened in the past ~200 years is effectively our mountain of debt – one which is largely impossible to pay off within a realistic timeframe (see below), and which is instead costing the world in other ways for as long as it’s around. (I.e. the warming which has already happened is already sufficient to gradually melt a notable fraction of the polar ice sheets over the next several thousand years if it simply stays where it is right now.) Annual emissions are the deficit which is adding to that total debt. A country stops adding to that deficit once it reaches a state of net zero – when any emissions produced within its borders are small enough to be fully absorbed by carbon sinks within its borders (mostly correctly managed forests, peatlands, etc. + some still-unproven industrial “carbon capture” methods.)

            Thus, “getting serious” means creating a realistic plan to get to that state, and reasonably quickly. The ideal would have been for the world to reach that state around 2050, which ought to have been enough to stop the warming from going far beyond 1.5 degrees. (To give just a few examples of the difference it makes – according to the Carbon Brief database, it is believed that extreme rainfall would increase by 15-25% in different parts of the U.S. under 1.5C, but 30-50% under 2C. One practical example is that the Mississippi River would flood an extra 18% of the time at 1.5C, but an extra 56% at 2C. “Warm extremes” in North America would also be more than twice as frequent at 2C relative to 1.5C – that is, ~3.5 times rather than 1.5 times more frequent relative to baseline.) The problem, of course, is that the world is currently probably heading for nearly double that – 2.7-3 degrees.

            Under the previous Administration, the U.S. did agree it should get to that state by 2050, and drew up a plan for doing so. It had also committed itself to reducing emissions (relative to 2005) by ~65% – but for all the discussion of the IRA, it was falling short of the target it had set for itself, unless a range of other policies got enacted in this term. (Moreover, even that target was still very unlikely to have met the 2050 target for 1.5C – not unless other countries collectively didn’t just do their expected share but went to extra lengths to pick up the slack.) Nowadays, of course, all of those commitments (excluding those made at state and city level) have been torn up.

            https://climateactiontracker.org/countries/usa/2035-ndc/

            It wouldn’t be hard to come up with back-of-the-envelope estimates for how a wheat plant might do under solar geo-engineering scenarios, with the help of a few citations plus an hour or so in Excel.

            I suppose you can, but I suspect crop modelling is more complex than that for a reason!

            I like the idea of solar geoengineering measures, at least as a stopgap until we can actually get carbon dioxide levels down.

            The problem is that in practical terms, one or several of the world’s most powerful countries would first have to step up to overcome opposition to the idea, often completely unscientific (just count the number of morons who’ll cry “But the Matrix!” or “But Snowpiercer!” if you mention that on a typical digital square.) – and then everyone else would have to consent to the leverage it gives over them – and then this new status quo would have to hold for a long time. (Even the less ambitious scenario where it simply prevents more warming and does not attempt to undo it will almost certainly need to stay in place for several decades. More ambitious ones would go into centuries.) If the PRC becomes the one to start this program, for instance, how long before it starts being used in negotiations over, say, trade or Taiwan? Maybe it becomes too dangerous to wield, similar to Iran’s leverage over the Strait of Hormuz, but these are uncharted waters.

            Of course, the U.S. could theoretically attempt the same, but given how many programs are a shutdown away from being interrupted and a DOGE away from being stopped, I don’t really think it has the capability to do it with its present-day political system. Especially not since the “chemtrails” crowd on one side of the aisle have already politicized it; some small places in the U.S. have already passed laws trying to ban the practice. One nightmare scenario could be the PRC or the EU initiating the program with the consent of the U.S. administration then in power, but sooner or later, an “MTG” Administration gets elected in part by credibly threatening to shoot down the planes dispersing the sulfates. What then? Even a brief “termination shock” would represent such a rapid perturbation of the climate that the damages could well exceed those in the base case of “unshaded” warming over the same timeline. There is a reason only relatively few scientists have been willing to endorse these proposals – and they are generally those whose impression of the severity of “the base case” is much worse than the consensus.

          4. I suppose you can, but I suspect crop modelling is more complex than that for a reason!

            Oh, no question about that. I’ve worked with people who do crop modelling before and seen how much work it involves. A few hours with some citations, datasets and spreadsheets, though, can still get us some way to figuring out what we should expect, at least under certain conditions. Probably enough for a non-specialist blog comment section.

            There is a reason only relatively few scientists have been willing to endorse these proposals – and they are generally those whose impression of the severity of “the base case” is much worse than the consensus.

            Well, I think part of the reluctance of people to publicly endorse solar geoengineering is because of ‘moral hazard’ – we don’t want people to start thinking that the current unsustainable level of greenhouse gas production is OK. Also partly because CO2 has other effects (on ocean acidity, for example) that reducing sunlight would do nothing to affect, and then partly for the reasons you mention. I tend to be on the side of those who think a warmer world would be *extremely* terrible, for a variety of reasons, and therefore I’m pretty comfortable with using the ‘nuclear option’ (i.e. solar geoengineering) to avoid getting there. I think as the situation gets worse, though, you already are seeing these solutions become a lot more acceptable to people than they were a decade or two ago.

            The rise of the “chemtrails” sentiment (and more generally, science denialism of all types, e.g. the people who are skeptical of evolution, vaccines, genetically modified crops, the health consensus surrounding ‘seed oils’ vs. things like butter and lard, overpopulation and its subsequent environmental impacts, global warming in general), which you correctly point to, is also terrible and I don’t know what to say about it.

  35. Unfortunately, the antiabortion states in the US are perfectly fine with forcing children to have their rapists’ babies, which may kill the girl or destroy her capacity to have any future babies (*see* Margaret Beaufort, for instance). The increase in the maternal death rate in those states doesn’t seem to bother them (nor does the increased death rate for women and girls in Arizona, who are denied many kinds of lifesaving medication (including some forms of chemo for cancer) for a FETUS WHICH DOES NOT EXIST). Ahem. Many doctors and nurses are leaving such states, which lowers the staffing levels in many hospitals and affects the care for valuable men and boys as well.

    Also, the care for the ‘rape babies’ in antiabortion states is often not the best, especially with cuts in federal funds. I suggest that some tactics to increase birth rates and population currently in use in those states will decrease their population instead.

    End rant.

    1. “Unfortunately, the antiabortion states in the US are perfectly fine with forcing children to have their rapists’ babies,”

      And pro-abortion states are perfectly fine with killing children because their fathers were criminals.

      1. And pro-abortion states are perfectly fine with killing children because their fathers were criminals.

        Yes, that’s right. If it comes to balancing the interests of an 18-year-old woman who’s been a rape victim against the interests of an early-term unborn child, guess what, I’m going to side with the rape victim. She is more relatable to me, more ‘like’ me, can talk to me and explain her sentiments and experience to me, which the first-trimester unborn child can’t, so she is closer to the centre of my circle of concern, or whatever the term is that the philosophers use.

        Not to mention that, of course, except in the case of late-term abortions, what we have isn’t really a conflict between the interests of mother and unborn child. It’s a conflict between the interests of the mother and the interests of (Christian) ideologues who think they are speaking in the name of the unborn child. I don’t see any reason why I should trust their insight or judgment, do you?

        1. NOT an unborn child; an embryo or fetus. Though usually, if things are handled properly, not a fetus.

        2. “Not to mention that, of course, except in the case of late-term abortions, what we have isn’t really a conflict between the interests of mother and unborn child.”

          I don’t really care whether you call it a child, fetus, or subhuman parasite. I feel fairly convinced, however, that it has an interest in not being aborted. Declaring otherwise seems unlikely to convince anyone who doesn’t already agree with you.

          1. I feel fairly convinced that a brainless clump of cells has no interest whatsoever. Declaring otherwise seems unlikely to convince anyone who doesn’t already agree with you.

          2. I’m not ‘declaring’ anything, I’m staying agnostic, for the time being. For all we know, the aborted fetus gets reincarnated into a better life, or goes straight to heaven and enjoys an eternity of bliss. We can’t *know*, and again, I’m certainly not going to trust the opinions of the Christian Republican Right on the topic any more than I should have trusted them about what was best for the benighted people of Iraq (and before that, plenty of other countries), yearning to be “free”.

            I might have my opinions, but any such opinion is necessarily going to involve metaphysical beliefs and is probably best left outside the realm of US public policy.

          3. A chicken has an interest in not being eaten, but we still slaughter it. And we do this because our right to eat and survive trumps a chicken’s.

            Pregnancy is dangerous. I have a right to defend myself from people or things that might harm me.

      2. Pregnancy is inherently dangerous. The USA has one of the highest maternal mortality rates in the developed world. I have a natural right to protect myself from anybody or anything that would harm me – isn’t that the reasoning behind “Stand Your Ground” laws?

        1. “I have a natural right to protect myself from anybody or anything that would harm me”

          So, like all the other rights, this one also has limits.

          1. Even under stand your ground, you can’t willfully and knowingly create the situation where you might be harmed, e.g., you can’t claim self-defense if you sucker-punch someone bigger and stronger than you and then shoot them after they come at you with blood in their eye. Or if you, y’know, consent to having sex and accidentally make a baby.

          2. Even if you didn’t willfully and knowingly create the situation, the probability of you coming to serious bodily harm is also a factor. E.g., if a six-foot four, two hundred pound man shoots an unarmed five-foot, hundred pound woman coming at him, at best he’s going to have a very rough time in court, and the fact is that while the US’s maternal mortality rate is pretty bad for a developed country, it’s also about thirty deaths for every 100,000 births, or around one in every 3,333.

          3. Furthermore, self-defense also requires the threat to be imminent. For example, if my extremely violent neighbor says that one of these days he’s going to kill me and the next day I burn down his house with him inside it, I’m still on the hook for arson and murder, though the jury and judge will probably be sympathetic. This is where the “life of the mother” exception comes in, something that is included in all anti-abortion legislation, because it’s better that one person die than two.

          IOW, no, actually, you can’t use self-defense as an argument for abortion.

          1. Harm does not have to be lethal to be significant. Pregnancy can have a transformative and permanent effect on the mother’s body.

            Not to mention the whole aspect of forced metabolic support for 9 months. Weighed against the “interest” of a being that has never been conscious, and in the early stages is utterly incapable of being as conscious or aware as a chicken.

          2. “Already answered by part two.”

            Not at all, because your part two only talked about mortality, ignoring all the other effects of pregnancy.

    2. False. Exceptions are made for rape and for underage and high-risk pregnancies everywhere. Abortion is, BTW, a great way to conceal the evidence of rape and abuse.

      1. “Exceptions are made for rape and for underage and high-risk pregnancies everywhere”

        False. https://en.wikipedia.org/wiki/Abortion_law_by_country

        A good handful of countries, particularly ones strongly influenced by the Catholic Church, prohibit abortion in all circumstances, even to save the mother’s life. Many more (and some US states) exclude one or more of rape, risk to health, or fetal impairment.

        The modern Catholic Church opposes abortion absolutely. Pregnant 10 year old girl who’s a victim of incestuous rape? Too bad, time for her to give birth.

          1. (if 3 copies of this show up, blame wordpress)

            ‘Abortion is illegal in Alabama,[12] with exceptions to preserve the woman’s life or physical health, or in the case of fatal fetal abnormalities. There are no exceptions for rape or incest.[13][14] ‘

            ‘Abortion is illegal in Louisiana,[107] except in cases of fetal abnormalities or when performed to save the woman’s life. There are no exceptions for rape or incest.[35][13][108] ‘

            ‘Abortion is illegal in Oklahoma,[184][185] unless necessary to save the life of the pregnant person. There are no exceptions for rape, incest, or fatal fetal abnormalities.[13] ‘

            ‘Abortion is illegal in South Dakota, with exceptions to “preserve the life of the pregnant female”, given “appropriate and reasonable medical judgement”.[208] There are no exceptions for rape, incest, or fatal fetal abnormalities.[13][96][209] ‘

            ‘Abortion is illegal in Tennessee,[213] with exceptions to terminate molar or ectopic pregnancies, to remove a miscarriage, to save the life of a woman who is pregnant, or to “prevent serious risk of substantial and irreversible impairment of a major bodily function of the pregnant woman”… There are no exceptions for rape, incest, or fatal fetal abnormalities.’

            ‘Abortion is illegal in Texas,[215] except when necessary to save the pregnant woman’s life. There are no exceptions for rape, incest, or fatal fetal abnormalities.[13][133][216] ‘

            That’s six US states that ban abortion even for rape or incest. Four of them ban it even when the fetus is going to die of a fatal deformity. I have seen no law that bans abortion for adults but allows it for underage pregnancies.

            So why are you so insistent that the opposite is true?

  36. Thank you for this post, Bret. I have to say (as an assiduous reader) that the paragraph about child mortality moved me a bit. The more I learn about the past the more I realize how alien it is.

  37. “we often do see upper-classes in pre-modern societies fall below population replacement”

    Ok, but why? And did they really? There were, after all, lesser sons of the nobility who needed to figure out something for themselves.

    And I do wonder, how either flow (upper class overflowing downwards, lower class overflowing upwards) works in societies with heavy caste barriers – including nobility.

    1. Upper classes do war. It’s kinda their defining feature in the time period we’re talking about. While in absolute numbers they are dwarfed by the lower classes, they are *relatively* speaking overrepresented, and because they’re smaller groups those casualties hit harder. (Especially in societies with monogamy!)

      The social barriers to some extent add to this: not necessarily immediately, but by adding “friction” (i.e. you only have a limited number of acceptable spouses, which might mean you have to wait to marry one, or they die inconveniently, or…)

      And that’s in addition to half the kids dying as usual. And nobility might actually have enough food to get into rich men’s diseases even! (While still suffering a ton of accidents and deaths by violence, of course.)

    2. There’s always the handy fictional ancestor. Even in India, lower-caste people who made it big re-defined their sub-caste (if member of x caste is raja, then x caste is now kshatriya …). Did not happen often, but it did happen.

      1. There’s always the handy fictional ancestor. Even in India, lower-caste people who made it big re-defined their sub-caste (if member of x caste is raja, then x caste is now kshatriya …). Did not happen often, but it did happen.

        Yes, there are definitely cases where whole castes got themselves redefined in the hierarchy, and there are also cases where individuals did (famously, Shivaji found some “genealogy expert” to cook up a concocted noble genealogy for him, before he was crowned Emperor).

  38. While in modern society these causes of death tend to be the sort of things – heart failure, cancer – which will kill you at any age but mostly only happen to the very old, in pre-modern societies, the diseases that tear at the elderly population tend to be the endemic infections, fevers, pneumonias and such that the younger adult population might generally survive – age and nutrition diminish the immune response until these sicknesses become lethal at one age or another.

    To be pedantic, with modern medicine, things like heart failure, stroke and cancer (especially its treatment) very often don’t kill outright but weaken the organism enough for those same “pre-modern” factors to finish it off. Even today, the main reason people die in intensive care is still the pneumonia they got after arriving in the hospital, rather than the thing which landed them in the ICU in the first place.

    https://en.wikipedia.org/wiki/Hospital-acquired_pneumonia

  39. Casualties from pitched battles might average around 10% of all combatants,6 so a male peasant population that served as a militia in these sort of battles, like Greek hoplites (who are probably just the upper third or so of peasants, wealth-wise) might lose something like 4.5% of a male birth cohort (remember that child mortality) on average in a pitched battle.

    This is a very minor point but could you please be clear whether you mean casualties – killed, wounded or missing – or fatalities – killed only.

    Also I can’t make this maths work either way. Half of a birth cohort survives to adulthood. The richest 33% of the peasant population serves as militia. 10% of combatants in a pitched battle become casualties.

    Either the richest 33% of the surviving 50% of the birth cohort serve of whom 10% are killed, which means the birth cohort loses 1.7%, not 4.5%;

    or the richest 33% of the surviving 50% serve, of whom 10% become casualties, which (assuming 3:1 wounded to killed) means the birth cohort loses 0.4%.
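    Laying that arithmetic out explicitly (a minimal sketch, using only the figures quoted from the post plus the 3:1 wounded-to-killed assumption above):

    ```python
    # Two readings of the post's figures, as laid out above.
    survive_to_adulthood = 0.5   # half of a birth cohort reaches adulthood
    serve_fraction = 1 / 3       # richest third of peasants serve as militia
    casualty_rate = 0.10         # ~10% of combatants per pitched battle

    # Reading 1: all 10% of casualties are killed.
    killed_1 = survive_to_adulthood * serve_fraction * casualty_rate
    # Reading 2: casualties split 3:1 wounded-to-killed, so 2.5% killed.
    killed_2 = survive_to_adulthood * serve_fraction * casualty_rate * 0.25

    print(f"reading 1: {killed_1:.1%} of the male birth cohort")  # ~1.7%
    print(f"reading 2: {killed_2:.1%} of the male birth cohort")  # ~0.4%
    ```

    Neither reading reproduces the post’s 4.5%, which is the point of the question.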

    I’m sorry if another commenter already brought this up – I haven’t read all 295 comments. But you mention mortality from military fatalities for young men and from childbirth for young women – but what about civilian war fatalities for people of all ages and genders? I mean, in the old days, wars were fairly common, and during wars, armies moved around a lot, and when they did that, they would both forage and try to deny other armies opportunities to forage, so I’d expect that kind of mortality to be quite a factor.

    This is somewhat off topic, but since we’re talking pre-industrial mortality: How much of pre-industrial *elite* mortality was, in your estimate, caused by ending up on the wrong end of court intrigues, and similar things?

    1. We need to distinguish between different styles of warfare, I think. The semi-industrialised way of early modern war, with armies looting (“foraging”) areas systematically and in a pre-planned manner, even consciously destroying large areas, is one rather rare extreme. And such warfare had clear demographic effects that were often visible for centuries.

      On the other hand, lower-level endemic fighting was usually structured so that the damage to the population was more limited and localised, often with some unwritten rules on the types of acceptable atrocities. (E.g. “no cutting olive trees”.) The sack of Magdeburg and its follow-up give a good example of what happens when these unwritten rules are broken.

    2. “How much of pre-industrial *elite* mortality was, in your estimate, caused by ending up on the wrong end of court intrigues, and similar things?”

      Interesting question – probably less than one might think, because the elite consists of noble heads of families and their families, and the heads of family were generally the decision-makers and therefore the ones who got involved in, and tended to die of, intrigue. I would guess that very few noblewomen died as a result of backing the wrong contender to the throne, for example.

      1. “the heads of family were generally the decision-makers and therefore the ones who got involved in, and tended to die of, intrigue. I would guess that very few noblewomen died as a result of backing the wrong contender to the throne, for example.”

        How many noblewomen died as a result of their husband, son, brother or father having backed the wrong contender to the throne? How many little children of both sexes were killed for having been born with some claim to a throne?

      2. Depends on where and when. Entire families being killed is not at all unusual for the losers. Not to mention that, even if not outright executed, the women were expected to kill themselves from shame and/or solidarity.

        Yet, of course, far more women died from being pregnant than men died from that.

  41. “For all societies, everywhere at every time before about 1750 (and in most places for a long time after that) it was simply a fact of life that half, HALF of all children died”

    This hit me really hard. I was born premature (I don’t know offhand by how much), enough that I was purple and had to be put in a NICU for a while. In an earlier age I know, for certain, I would have been among the dead.

    For this reason I get pretty viscerally offended when people suggest the modern world is not worth it.

    1. I had scarlet fever as a kid. Even with modern medicine I was extremely ill. And my family read “The Velveteen Rabbit” to us, which didn’t go well – I thought for sure they were going to burn my stuffed animals, so I hid my favorites in hopes that they’d survive. I think I was four at the time, maybe five.

      Even a few generations ago I’d have been dead, pure and simple. I know it wasn’t a sure death sentence, but the doctors didn’t really beat around the bush; my case, and my sibling’s, were bad enough that without the antibiotics we were on, there would have been no hope. (This was a combination of rough bedside manner and them not realizing I could hear the conversation.)

      Really brings that “half of all children died before age 10” statistic to life when you would have been one of those statistics. And yeah, like you I’m fairly hostile to people who think the past was inherently better. I love learning about the past, and think there are parts that were better in some ways, but on the whole, to paraphrase “Timeline”, the only thing worse than dying in the past would have been living in it.

      1. Although scarlet fever is a weird case – when my mother had it as a child, she was bedridden for over a month, and when they finally let her out she was only able to go three steps to her chair before collapsing.

        The first (!) time my niblings had it they went through the entire thing and were almost well again by the time I took them to the doctor. I was that shocked, let me tell you – how could it be scarlet fever when they were barely sick?

        Well, apparently, for reasons we just don’t understand, scarlet fever got a lot less serious sometime in the mid-20th century – if it even gets that far, which it mostly doesn’t because most people use antibiotics.

        (It turns out that the kids reacted very atypically to strep in general, and so I got used to going to the doctor and saying “They have no symptoms whatsoever, but they’re cranky and I think they have strep” and every time, damn if I wasn’t right. But they still got full-blown scarlet fever a couple of times before I caught on, with the rash and the sore throat and all.)

    2. Every now and again I like to play the game of ‘how many of the people I know would be dead without modern medicine?’.

      Just from my immediate family: I’d have been born, but my mum and sister would have died about 4 years later of pre-eclampsia. My dad would have died of a twisted intestine when I was about 15. I’d likely have died aged 26 of sepsis (a teeny-tiny little cut on my hand got infected and started tracking up my arm). My other half would have been born, but both she and our daughter (had we met sooner, or the sepsis not proved fatal) would have died during childbirth, as our daughter tried to come out in an orientation that wouldn’t have worked.

      It’s truly sobering.

      1. Assuming that scarlet fever and complications didn’t get me as a child, appendicitis would have gotten me, and later my wife if she otherwise made it that far. Older sister probably dies in infancy, younger sister dies of breast cancer if nothing else gets her first.

        1. Yep! It really is genuinely sobering.

          Also worth noting that all of these people would have been dead *at modern levels of disease prevalence*.

          No smallpox to worry about. Practically no cholera. Tons of other infectious diseases present but suppressed by widespread vaccination (even in Texas), public health initiatives (e.g. stuff as basic as functioning sewers), or effective medical treatment partway through a disease’s lifecycle (limiting its R value).

    3. My older sister and I were both born premature, and I’m pretty sure that if my mother had had to give birth to my sister the “natural” way it probably would’ve killed her too. Instead, we’re all here and my sister has a family of her own. Collectively, that represents ~130 years of human life saved by child mortality no longer being a coinflip, thanks to routine medical interventions.

  42. A few people in the comments here seem to be defending Malthus and his predictions, and I think it’s an interesting example of how easy it is to give historical prophets too much credit. Malthus, after all, did not argue that at some point in the future, for some reason, the world in aggregate would produce less food than the humans living on it need. What he argued for was a specific mechanism by which this would happen — namely that increases in productivity and health would inevitably lead to larger and larger population growths until a carrying capacity was reached.

    And he was *wrong* in that argument. Birth rates (and hence growth rates) have consistently fallen across societies that have reached a certain level of human development, from Norway to Iran to South Korea. So it doesn’t matter what the future holds: we can say now that the mechanism Malthus proposed was wrong, because he didn’t anticipate that development would ultimately lead to people choosing to have fewer children.

    If in fifty years climate change has reduced the agricultural output of the earth and caused widespread famine, Malthus will still have been wrong, because he did not predict that climate change would cause global famine in 200 years, he predicted that people’s tendency to have as many kids as they could afford would cause global famine.

    All in all, this feels like the same sort of thinking that leads to people saying that Sallust was correct in his complaints about Roman decline because Rome did eventually fall. Or, for another more modern example, the people who seem to feel that any crisis of capitalism proves Marx correct, whether or not the details match his proposed mechanism for how capitalism fails. Point is, “this system will fail some day for some reason” is a trivial prediction to make, and not very insightful. The details are what matter.

    1. “increases in productivity and health would inevitably lead to larger”

      I don’t have time to read (re-read) the whole essay, but looking at the Wikipedia page on it, that’s not what he said. He described a dynamic process (and is credited with helping found demography), with privation _or_ voluntary reduction of birth rates (“preventive checks”) as the inevitable outcomes, and describes how such preventive checks operated in various classes of English society.

      Memory says he was skeptical of such checks holding in the long term, or everywhere, but I dunno. Anyway, I don’t think the value of Malthus is in whether he made some prediction that came true, but in his elucidation of the underlying processes.

  43. “A few people in the comments here seem to be defending Malthus and his predictions, and I think it’s an interesting example of how easy it is to give historical prophets too much credit.”

    To be picky, I described Malthus as almost the exact opposite of a prophet. I said (or wrote) that he was right about the past, in that increases in agricultural productivity of the previous several millennia of human history had produced more numerous people, rather than better fed ones.

  44. Of all the epitaphs I’ve read, the one that always stands out to me most is by the Roman poet Martial, for a slave girl who wasn’t even his own child:

    “To you, father Fronto and mother Flacilla, this girl
    I commend: she was my sweet and my delight.
    Little Erotion must not be frightened by the dark shades
    and the monstrous mouths of Tartarus’ hound.
    She was due to complete the chills of a sixth midwinter, no more,
    Had she not lived that many days too few.
    Now let her frisk and play among old friends
    Now let her chatter, and so lisp my name.
    And let the soft turf cover her brittle bones:
    Earth, lie lightly on her; she lay lightly on you.”

    I don’t know how well this particular translation holds up to the Latin, but I always found this particular rendering of that last line devastating.

  45. This post says,

    “there are a lot of ways for agricultural production to rise modestly enough (increased trade and specialization, bringing more land under crops, better farming techniques, selectively bred crops, etc. etc.) to sustain population growth for a really, really long time and even some very long-settled parts of the world didn’t reach agricultural saturation where basically all useful farmland was in production, until quite late.”

    and it uses this to dismiss the impact of “malthusianism” on peasant life. But the previous post (part I) mentioned,

    “families that are often awkwardly large as units of labor for the farms they have (that is, they’re long on people and short on land)”

    I’m trying to understand how peasants would have actually experienced land/food scarcity, and I can’t quite reconcile these two things in my head. Long term perhaps marginal land was gradually brought into cultivation (although how this happened is presumably unknowable: did peasants claim a sort of “allodial title” to land they developed, did they have to “buy” it from the lord of the manor etc), and so long-term the population did grow.

    But then short-term you always reached a state of having too many people for the land? So surely you _were_ restricted, in some sense, by the amount of food available (or by the amount of land available, which is very closely linked)? We don’t expect a “collapse” like Malthus predicted (but that’s because Malthus expected exponential and not logistic growth – see the sketch below), but it seems like a situation where you are “short on land” will end up with at least intermittent partial starvation.

    If not, then how did this play out? What was the limiting factor? Have I just misunderstood what was being denied?
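    To spell out the exponential-vs-logistic parenthetical above – a minimal sketch contrasting the two growth curves, with entirely arbitrary parameter values:

    ```python
    import math

    # Exponential vs. logistic growth, as contrasted in the parenthesis above.
    # r is a growth rate, k a carrying capacity; all values are arbitrary.
    def exponential(n0, r, t):
        return n0 * math.exp(r * t)

    def logistic(n0, r, k, t):
        return k / (1 + ((k - n0) / n0) * math.exp(-r * t))

    for t in (0, 50, 100, 200):
        print(f"t={t:3d}: exponential={exponential(100, 0.03, t):8.0f}"
              f"  logistic={logistic(100, 0.03, 1000, t):5.0f}")
    # Exponential growth runs away without bound (the Malthusian 'collapse'
    # setup); logistic growth flattens out as it approaches the capacity k.
    ```

    On the logistic picture, a population “short on land” hovers near the ceiling with intermittent privation rather than crashing – which seems to be exactly the scenario the question describes.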

    1. It’s not total food supply (some always goes to elites and urban dwellers) so much as rural population vs. disposable surplus. The various outlets include emigration (a constant from poorer areas in particular, but also from places like Flanders), in later times ‘putting-out’ (shifting some craft production to the countryside to tap into the cheap labour), war (those who went for a soldier mostly never came back), and periodic famine and plague.

  46. In the meantime, in the NYT today, one may read an opinion explaining to us that the current birth rate drop may be largely, if not entirely, blamed upon … prosperity. It takes so much time, so many resources and plain old money to have children and rear them to $ucce$$ful maturity, which, in times in which almost everyone was poor, wasn’t necessary. So, of course, women spent their lives pregnant, which led to early death. Also no independence, etc.

    What wasn’t said out loud is that poverty is very good for wealthy white (mostly male) supremacists, so maybe we should go back to that, and get those birthrates up again.

  47. How is it that the figure for “likelihood of dying in a particular battle” and the figure for “proportion of military-class men surviving to adulthood who are likely to die in battle” are so similar? (~10%; ~10-12%)

    Would the average soldier not take part in more than one battle? Was most of their risk of dying in their *first* battle and they had a much lower risk of death in subsequent ones?

    It feels as though it would have been clearer if, having clarified that a lifetime of births, not a single birth, is a nearly-equivalent risk of death, we were reminded that apparently the risk from a lifetime of battles is not much higher than a single battle – but I don’t understand why it’s reasonable to add up the risk per event in one case but not the other?
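    To make the compounding question concrete – a minimal sketch, assuming independent battles and a constant per-battle rate (neither of which the post states):

    ```python
    # Lifetime risk of death over n battles, assuming each battle is an
    # independent event with a constant per-battle death rate p.
    def lifetime_risk(p, n):
        return 1 - (1 - p) ** n

    for n in (1, 3, 5, 10):
        print(f"{n:2d} battles at 10% killed: {lifetime_risk(0.10, n):.0%}")
    # 1 -> 10%, 3 -> 27%, 5 -> 41%, 10 -> 65%: far above the quoted 10-12%.

    # But if the ~10% figure is *casualties* and only a quarter of those are
    # killed (the 3:1 wounded-to-killed split mentioned upthread), then:
    for n in (4, 5):
        print(f"{n} battles at 2.5% killed: {lifetime_risk(0.025, n):.1%}")
    # 4 -> 9.6%, 5 -> 11.9%: four or five battles per man lands right in
    # the quoted ~10-12% lifetime range.
    ```

    So one possible reconciliation (a guess – the post doesn’t spell this out) is that the per-battle figure counts casualties while the lifetime figure counts deaths, accumulated over a career of a handful of pitched battles.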
