Collections: What Do Historians Do?

For this week, I want to take a step back (we’ll be back to our series on Rings of Power next week!) and talk about the craft of history: we’ve talked about “How Your History Gets Made” from the perspective of the different people who do it – research historians, public historians, educators and so on – but this week I want to talk, in very broad terms, about how historical research works, about the process of discovering things about the past.

Naturally this is going to be a very broad overview. “Historical Methods” or some equivalent is typically a full semester-long advanced undergraduate course in most history programs, while “Historical Theory” is typically an early graduate course and we’re covering the broad sweep of both of them here. But I want to outline some of the basics because there is a lot of misunderstanding, sometimes willful but frequently unknowing, about how historians go about uncovering the past.

In particular, I want to take a crack at the mental model many folks have of historians that imagines history as a basically static and known set of data, which does not change or improve over time, such that the main job of historians is to read history textbooks really hard and memorize the names and dates and then regurgitate them in a particular form. I sometimes term this the “history as scripture” understanding. And it comes with a corollary assuming that historians do not require any particular specialized skills or training.

Now, of course, few people if asked will offer this as their mental model of history straight. But the model emerges in many of the assumptions they make about what historians do. History as Scripture is how you get the folks offended at the idea that their favorite 18th or 19th century historian’s work might not be particularly useful anymore; the only way Edward Gibbon (d. 1794) is going to help you understand the Fall of the Roman Empire is if our understanding of it all is unchanged over the last two centuries. History as Scripture also shows up in demands that historians demonstrate their expertise through recall, reciting lots of facts; it assumes the main skill of the historian is remembering details in our heads.1 History as Scripture sits behind the assumption that our historical knowledge can never (or ought never to) change and that all such change is thus just politically motivated ‘revisionism’ – because it assumes all of the evidence is already known, the ‘correct’ conclusions long ago drawn.

In short, ‘History as Scripture’ is actually a really common view of how the field works. And that is, perhaps, not surprising: if your only experience with history was a high school classroom or an introductory college lecture, your learning mostly focused on what we know, not how we know it. And it’s a seductive vision of history too, because it is comforting to imagine that the past is fully known and that its fully known form conforms perfectly to what you were taught as a child…but this is the same false comfort of imagining that all of physics is the simple version of Newtonian mechanics you learned in a high school physics class: the closer you look, the clearer it will be that this model does not describe the world as we actually experience it.

So: what do research historians actually do? How do we discover the past? To demonstrate this, we can effectively walk through the lifecycle of a single research project, a discrete ‘unit’ of research, as it were, from concept to publication and review.2

From Wikimedia Commons, the famous “Sappho” from Pompeii, a Roman fresco now in the Museo archeologico nazionale di Napoli. And on the one hand it is neat because it’s an image related to writing (but not history) and on the other hand it’s a fantastic example of how our understanding of the past changes over time. When originally discovered, this image of a woman writer was assumed to represent the famous Greek poetess Sappho, and it is often still described as an image of her.
Except that as more scholarship has been done on both Roman artwork and recovered objects of everyday life, it’s become clear this isn’t Sappho. The tablets the woman is holding are wax tablets (in wooden frames), which wouldn’t be used for literature (which would go on papyrus scrolls) but which are probably meant to signal financial documents. The woman in question – probably the person who commissioned the artwork – is signalling not that she is a literature person, but rather that she is a capable financial manager of a wealthy household and its likely quite vast assets.

As always, if you like what you are reading here, please share it; if you really like it, you can support me on Patreon. And if you want updates whenever a new post appears, you can click below for email updates or follow me on Twitter (@BretDevereaux), Bluesky (@bretdevereaux.bsky.social) and (less frequently) Mastodon (@bretdevereaux@historians.social) for updates when posts go live and my general musings; I have largely shifted over to Bluesky (I maintain some de minimis presence on Twitter), given that it has become a much better place for historical discussion than Twitter.

Training and Theory

We actually have to start before our sample research project begins, because of course to start you generally need a historian.

And here we run into one of the frequent misconceptions, which is that because history is a field carried out in plain English (or the language of your choice), it doesn’t require specialist training or knowledge. But history research generally does require specialist training. Of course every so often you will get self-taught historians making significant contributions to the field, but I’ll say both that this tends to be quite rare and that often one finds those scholars did, in fact, start by working with a trained, credentialed historian. So, for instance, in my field (ancient warfare) the contributions of the late Peter Connolly (1935-2012) were considerable, despite his background being initially in art. But then if one knows how Peter Connolly got started, they’d know he initially worked quite heavily with H. Russell Robinson (1920-1978, Keeper of Armour in the Tower Armouries and a Fellow of the Society of Antiquaries),3 beginning as Robinson’s illustrator and establishing a firm base of knowledge that way before moving into his own scholarship. So this happens, but it’s uncommon.4

And before we go further I want to lay out some terms. I am mostly focused here on historians doing original research – discovering the past.5 Mostly these are going to be academic historians (that is, historians working in history departments in universities or similar institutions), but some folks doing this work are independent researchers, historians employed by other organizations, public historians who also do research and so on. So to capture that whole group, I am going to use the phrase ‘research historians.’ Again, most research historians are academic historians, but by no means all of them are. But this is distinct from most (though not all) history teachers and many (though not all) public historians whose work focuses on transmitting historical knowledge, rather than discovering it; I consider all three groups to be within the broad profession of the historian, just in different parts of that profession.

Back to the question of training.

We can break down training in the historian’s craft into three groups, two of which are of general use to all historians and the last of which is a field-specific package of skills. First, we have what often gets termed “historical methods” (or indeed, the Historical Method), which is focused on source criticism and historical reasoning; often, paired with writing, this is offered as an advanced undergraduate course in history departments (usually in the form of a supervised research project). Second, we have “historical theory,” which we’ll come back to in a moment, but which relates to how we frame and understand the questions we are asking, as well as avoiding common pitfalls in historical research; this is invariably taught early in graduate study.

Finally, after that, historians will invariably need a package of field-specific training. An ancient Mediterranean historian needs to read both Latin and Greek, to be able to parse a site report, to understand archaeological methods, decipher inscriptions (and possibly ancient handwriting), and so on. By contrast, a historian of, say, 19th century Europe may not need Greek or Latin, but will certainly need French and be able to read 19th century cursive writing, along with knowing how to navigate European documentary archives and records. A historian whose work touches on law may need legal training in the laws and legal terminology of their period – to, for instance, avoid accidentally inventing hundreds of executions by assuming that the phrase “Death Recorded” in 19th century British legal records indicated an execution, when in fact that notation almost always meant the person was not executed.6 Because historians engage with historical documents, records and artifacts ‘in the raw,’ there’s often special training required to know what one is looking at and understand it fully, beyond the more general historian’s training.7 At the same time, you’re also learning where your sources are in their raw form, which might be important archives, key reference works, edited texts, important manuscripts and so on. All of that ends up as field-specific specialist training.

But let’s loop back for a moment to the two generalist types of training and discuss what you are learning to do.

Via Wikipedia, another Roman fresco from Pompeii, this one depicting Clio, the Greek Muse of History.

First, there is the historian’s method, which we can basically subdivide into source criticism and historical analysis or historical reasoning. When we say ‘source criticism,’ this isn’t some newfangled thing; Thucydides is doing it in c. 400 BC, for instance, in the introduction to his history of the Peloponnesian War (we call this section “The Archaeology”) comparing oral histories with physical evidence and plausible deduction and finding the oral histories lacking (Thuc. 1.20, 6.54-55). Source criticism is just the process of trying to evaluate a source of historical evidence for reliability. If this seems such a basic thing that no training is required for it, consider how many people continue to treat Plutarch’s Life of Lycurgus as a perfectly reliable source on the Spartans despite the obvious ways that it is clearly unreliable, including but not limited to the fact that the author says it is unreliable in the very first paragraph (Plut. Lyc. 1.1-4).

So we might ask when the source was written or produced (is it, for instance, a much later accounting of an event, when memory might have dimmed, or was it produced perhaps before key information about an event was widely known?). We ought also to ask where, by whom and why it was produced (is the author reliable in other instances or not? does he have an ax to grind we should know about? what sort of work is this and how does that influence its purpose and presentation?). We might also ask what the sources of our source were (eyewitnesses? or other intermediate sources, perhaps now unavailable – witnesses that have died, works or archives now lost?). And we might ask how reliably it was transmitted to the present (modifications in storage, lost passages, alteration in copying, etc.). All of that contributes to assessments of the credibility of the source in question.

This is, I should note, a bit more than a yes/no “bias detector,” because in practice all sources are biased. So the historian is not asking “is this biased or not?” but “how is this biased, and how does that impact its credibility and usefulness?” Even a source filled with absolute falsehoods can be revealing.8 But at the same time, your goal is getting to what actually happened, or what people actually thought or experienced, so judging pure reliability (is it likely that the things my sources say happened actually happened – especially if they disagree?) is important here.

On the other side is historical reasoning, which is about how we draw conclusions from our sources, once assessed. Because we treat all sources as suspect, historians are rarely operating with full certainty, so the approach – the ‘argument to the best explanation’ – is generally to find the most ‘parsimonious’ explanation (the one that uses the fewest assumptions, especially inherently improbable assumptions) which best fits the most observed evidence (as compared to other, rival explanations). Now the observed evidence here of course begins with the evidence of our sources, but it can be broader than that: we might, for instance, use comparative evidence (the example of other similar societies or situations) to plausibly fill in gaps. To take an example from my own work, our ancient sources report ‘elders’ as leaders of pre-Roman Celtiberian settlements in Spain, but don’t give a detailed rundown of what these elders do. So I suggest a set of roles consistent with the specific things the sources say they do (that is, which matches the observed evidence) and which also matches quite closely with what Gallic councils of elders – a similar institution in a related, nearby and contemporary set of societies – do. The idea being that it is more plausible that two similar societies with similar institutions that are related to each other function similarly than that they are radically different.9 This all sounds very simple and conceptually, of course, it is, but simple things in explanation become complex in application, which neatly leads us into the other category of general training.
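
To put that comparative inference in slightly more formal terms (note 9 below walks through the same logic with abstract sets), here is a minimal illustrative sketch in Python, with hypothetical letters standing in for observed institutional roles:

```python
# A minimal sketch of the comparative inference in note 9: two similar
# societies' institutions, each observed only partially. The letters are
# hypothetical stand-ins for observed roles.
celtiberian_elders = {"B", "C", "D", "E", "F", "H"}  # two roles unobserved
gallic_councils = {"D", "E", "F", "G"}               # four roles unobserved

# Because the two institutions behave similarly, the most parsimonious
# guess for each set's missing values is the other's observed surplus:
print(gallic_councils - celtiberian_elders)  # {'G'}: plausibly missing from the Celtiberian set
print(celtiberian_elders - gallic_councils)  # {'B', 'C', 'H'}: plausibly missing from the Gallic set

# Roles observed in neither set (say, 'A' or 'I') cannot be inferred at all;
# the honest move is to admit the unknown, not to guess.
```

As note 9 stresses, this yields ‘more likely,’ not certain – and the whole move only works if you have independent reason to think the two sets are similar in the first place.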

Historical Theory. We’ve actually touched a bit on one sort of historical theory, the Annales school, but it is hardly the only one. This is a topic of sufficient complexity (and jargon, sometimes necessary, sometimes…less so) that I am hardly going to explain it fully here. In practice, learning historical theory is the process of learning the ways other historians have broadly conceptualized history: how they understood societies to work, what they thought was most important to study and how they went about studying and understanding it.

The purpose of that learning is twofold: first, historical theory provides a framework to understand your own research topic. It can inform what questions you ask of your sources, as you look for answers which will explain larger historical trends or events. Say, for instance, your source base is a body of diaries and letters from a regiment in the Civil War. A typical Annales framework might approach the documents from the perspective of mentalités (the set of cultural assumptions and worldviews we absorb, unthinkingly, from our society, which change only very slowly): how do these fellows view the world (including in ways that might be alien to me) and how does that influence their actions and experience? Alternately, a critical theory approach is going to want to ask questions about power in this campaign community: who is in charge, really (it might not be the fellow with the notionally highest rank!) and how is their power created and expressed in the group? Meanwhile, a ‘Face of Battle‘ theoretical approach is instead going to look for answers in the experience of campaigning and (especially) battle, which is going to mean probing more at the physical realities of the experience: how did they fight, what did they eat, how did they wash, where did they sleep and so on.

Each of these theoretical models (and there are many, many more) comes with questions it likes to ask, a vision of what motivates historical change and usually its own set of useful terminology to use to categorize and understand the evidence you are observing. I used one of those terms up above – mentalités – and you can see how (so long as everyone involved knows what it means) it compresses down a very big idea into a nice, compact technical term. That’s quite handy when explaining our historical work to other scholars, who will share that vocabulary.

The other reason we study historical theory is because a lot of these theories have quite well known flaws, gaps in their understanding: no single framework or question captures the full complexity of the past. So learning a bunch of them serves both to illuminate the things we didn’t know that we didn’t know and to illustrate the pitfalls and potholes in the historical path, by watching how ‘pure’ versions of these theories fall into one or another of them. For instance, leaders are often important historical focal points, making decisions with big impacts, but if you assume that historical change is always and everywhere the product of super-capable leaders (‘Great Man Theory,’ advanced by Thomas Carlyle), you are going to completely miss the impact of all sorts of other things and be entirely unable to explain some historical events that just lack a single, central figure motivating them at all. On the flipside, an ‘all structural factors, no agency’ framework (such as an extreme version of Marxist historical materialism) is going to fall into the trap of ignoring the very real agency of people making decisions (be they big important leaders or just regular folk). Historians learn a lot of different frameworks because each one exposes the gaps in the others: there is no perfect framework (and no way to proceed without a framework – to attempt this is merely to proceed with an assumed framework you are blind to), so one must be aware of the limitations of each approach in turn.

What I want to stress here is that historical theory is not a straitjacket. (Almost) no historian is out there as a doctrinaire devotee of a single school of historical theory – often not even the historians who pioneered a school. Fernand Braudel was one of the most important figures of the Annales school – which de-emphasizes the role of ‘great men’ in favor of structural factors – and yet his great work is The Mediterranean and the Mediterranean World in the Age of Philip II (1949); while he isn’t the central element, evidently Philip II matters to this story.10 You are not going to find – or at least ought not find – historians who only use one school or type of historical theory.

Instead historical theory is a toolbox with many different tools; the mature historian pulls out the tool that best fits the job. Now, naturally we do all tend to end up with some favorite tools, which influence the questions we ask and the way we approach our sources. As you may gather, I tend to reach for my structural Annales wrench first, but sometimes individuals matter a whole lot and you have to discuss their individual decisions, sometimes you need to think about power and sometimes what we’re trying to get at is the experience of a thing. Part of the reason historians read other historians’ work (and also work from other fields) is to try to broaden our toolbox by seeing the tools other scholars are using and thinking about how they can apply to our own topic.

So, we have our basic set of skills (historical methods), a bunch of more advanced specific skills fit to our period and topic and a toolbox of theoretical approaches (our historical theory), all a product of formal training in history: we now set upon our sources.

Sources and Research

And here it is important to begin by correcting a frequent misconception that historians are mostly engaging in reading and transmitting the work of other historians. Instead, most large history research projects have a similar sort of structure: there is a framework built out of the work of other historians which provides context for the core original research of the historian and this original core is the product of direct engagement with primary sources.

The term ‘primary source’ often gets reduced in introductory classes to something like ‘eyewitness accounts,’ but in a research context, a ‘primary source’ is really a source for an event for which there are no closer available sources. For a modern event, this is almost always a contemporary, often eyewitness source, but for the Middle Ages or Antiquity, the nearest source is often still quite distant from the events. You could thus, for instance, call the sources for the life of Alexander the Great primary sources in the sense that there are no closer sources available to us (anymore), while at the same time they are technically also secondary sources, reporting the testimony of other closer sources (now lost).

In any case, historians seek to engage with this evidence ‘in the raw,’ which is to say with the minimum number of possible filters. In almost every field, for instance, it is an essential, non-negotiable element of historical training to be able to read one’s main body of source evidence in the original language. If that means mastering archaic syntax and vocabulary or entire dead languages, then that is the ‘price of admission.’ Working in translation may be enough for hobbyists and some ‘pop history,’ but a research historian must work with the original text. For modern historians, this often means directly, personally combing state records or archives or collecting eyewitness testimony (‘oral history’). The complications of ancient evidence sometimes mean additional specialists are required: for instance, a given ancient work might exist in a number of manuscript copies, each with its own small differences and errors; a philologist is going to have collated those traditions and produced a single ‘edited text,’ with which the historian will work.11 Or it may be an inscription carved on stone, which requires a trained epigrapher to transcribe accurately (although ancient historians are then trained to read the technical notation systems epigraphers use to publish their inscriptions).

Now the historian’s training comes in at this point in a few ways. The first, of course, is knowing where to look. Whereas secondary historical sources are sold in bookstores and stocked in libraries, primary source material, especially ‘in the raw,’ is often scattered or relatively inaccessible. For most historians who work on modern (and to be clear, when I say ‘modern’ I mean ‘post-1500,’ not ‘right now’) societies, their evidence is often found in archives, in the form of official documents, records, letters, diaries, memoirs and so on. In many cases, the historian looking at those documents is the first person to do so in decades; in some cases, the first person since their production. Ancient historians don’t generally do archive work (because our archives don’t survive!), but the closest I’ve come to this is working with museum collections, requesting the museum’s records on file for various objects. My own project involved dozens and dozens of such requests to quite a few museums.12 I’ve also had secondary sources come in as microfiche or microfilm, which require special machines to read, an experience that may be more common for historians working in other eras. In a lot of cases, the funding and use of these archives and sources is too low to support large digitization projects, meaning that while these sources are being digitized, that’s happening only slowly, and as a result it is often necessary to engage with non-digital sources (which of course also means you need to go there to look at them).

Of course I should note that museums and archives have their own staff, many of whom also have quite a bit of historical training. Without those folks keeping, preserving and organizing the material they steward, the research historian would have far fewer sources (in many cases, effectively none). These folks aren’t our focus today, but they are very important to our understanding of the past.

In addition to being potentially hard to find or access, a lot of this material is very ‘raw.’ Whereas other fields often focus on working with ‘data‘ – evidence that has been processed, homogenized and turned into standard numbers – historians work with evidence in the form it comes in, often because that evidence either has not or cannot be converted into ‘data.’ A historian sorting through an evidence base that is, say, twenty-five thousand handwritten letters from the American Civil War (split between a great many archives) is sorting through twenty-five thousand handwritten letters: having to make out the handwriting, determine the context and dates of the letters and so on. While the body of ancient Greek and Roman literature is mostly edited and all collected now (mostly – this is not true for the epigraphic and papyrological evidence!), my medievalist colleagues often find themselves working with medieval sources that may exist only in a handful of manuscripts (or just one!) which may not yet even have an edited form. In some cases, they are still working with the original manuscripts (with their centuries old handwriting, which yes, we are trained to read).

Finally – and this is going to bring us back to theory – there is a tremendous amount of this material. Because this stuff is not yet sorted into ‘data,’ the historian is working through a large and laborious set of evidence. Of course sometimes we can make that task easier by sorting material into data – a lot of digital humanities approaches do things like this – but in a lot of cases that isn’t possible without a ton of initial work, or isn’t possible at all. Again, if your source-base is twenty-five thousand handwritten letters and you want to understand the values and worldview of the people who wrote those letters (and the even larger body of their non-letter-writing fellows whose views the letters might help you reach more clearly), there really isn’t a substitute for sitting down and reading all of those letters – especially as most of the archives and collections historians work with are not yet even digitized, much less sorted with, say, OCR to produce machine-readable texts. I do wonder how advancing technology (like large language models) may give us powerful new tools to engage with large source-bases, but I suspect we will still always need to engage with our sources as raw evidence to a considerable degree.
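
For a concrete sense of what ‘sorting material into data’ can look like at its very simplest, here is a minimal illustrative sketch in Python. It assumes the genuinely hard labor – transcribing thousands of handwritten letters into text files – has already been done, and merely tallies a few question-driven theme words per year; the folder name, the year-first file-naming scheme and the theme words are all invented for the example:

```python
# Minimal digital-humanities sketch: tally theme words per year across a
# folder of (hypothetical) transcribed letters named like '1863_smith.txt'.
# The transcription itself -- the hard part -- is assumed already done.
import re
from collections import Counter
from pathlib import Path

THEMES = {"duty", "home", "rations", "battle"}  # hypothetical question-driven terms
counts: dict[int, Counter] = {}                 # year -> theme-word counts

for letter in Path("transcribed_letters").glob("*.txt"):
    year = int(letter.stem.split("_")[0])       # hypothetical naming convention
    words = re.findall(r"[a-z']+", letter.read_text().lower())
    counts.setdefault(year, Counter()).update(w for w in words if w in THEMES)

for year in sorted(counts):
    print(year, dict(counts[year]))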

What the foundation in theory gives the historian is a framework to know what questions to ask all of those letters (or whatever the source base may be) and how to interpret the results. Note that it doesn’t mandate an answer (this is often misunderstood by lay audiences, who assume that a given historical theory dictates the answers one will find in the sources; it is about the questions you ask); instead, the theory helps suggest the sorts of questions that might produce interesting answers, generally because they’ve produced really interesting answers in other source bases. So for instance, you might go to those letters with an institutional/strategic-culture framework, looking for how the letter writers are shaped by the military as an institution and how that shapes their decision-making. Or you might take a critical theory approach and look at how the background of those writers shapes them. And so on.

In terms of scope, most historians are going to organize their research into roughly ‘book-length’ projects – history is, as we’ll get to in a moment, a ‘book field’ not a ‘paper field.’ That tends to mean even a project that is very narrow – say, the history of the development of one town over a few decades – is going to embrace an enormous amount of source material (probably including a huge chunk of that town’s record archive). The size of these projects varies, but a five-year cycle for a monograph is broadly typical.

What happens during that process? In an ideal, platonic form of research, the historian would:

  1. Read the existing literature on the period and topic in question, familiarizing themselves with the contours of the problems, then
  2. Using their foundation in historical theory, formulate a question, then
  3. Identify a source base that can provide an answer – or at least partial answer – to that question.
  4. Initial ‘raids’ into archives or other sources may be necessary as a first step to try to figure out where the most useful evidence is; these hotspots may then be ‘sieged’ (probed more thoroughly, often over several days in the case of archive work).13
  5. While both raiding and sieging the places your sources are at, the historian is taking lots of notes. Everyone has their own note-taking system (mine is terrible and I plan to overhaul it from the ground up for book project 2); by way of example, the core of my book project’s archaeological work is a OneNote file with information on about 500 archaeologically recovered weapons, organized so that each object has a digital ‘card’ with a unique reference ID and all of the relevant notes, bibliography, measurements, current location and so on (a sketch of what one such ‘card’ might look like as a data structure follows just after this list).14
  6. Those notes in turn form the writing foundation of the publication process, which comes next.
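
Since step 5 mentions those digital ‘cards,’ here is a minimal sketch of one such record as a data structure. To be clear, the actual file lives in OneNote; this Python rendering, its field names and its sample values are invented purely for illustration (one record per object, keyed to a unique reference ID):

```python
# Hypothetical sketch of a per-object research 'card': a unique reference ID,
# notes, bibliography, measurements and current location, as described above.
from dataclasses import dataclass, field

@dataclass
class ArtifactCard:
    ref_id: str                                  # unique reference ID (invented scheme)
    object_type: str                             # e.g. 'sword', 'spearhead'
    current_location: str                        # museum or collection holding the object
    measurements: dict[str, float] = field(default_factory=dict)
    bibliography: list[str] = field(default_factory=list)
    notes: str = ""

catalog: dict[str, ArtifactCard] = {}            # the whole 'file': cards keyed by ID

card = ArtifactCard(
    ref_id="SW-0412",                            # invented example values throughout
    object_type="sword",
    current_location="Example Museum of Antiquities",
    measurements={"length_cm": 91.5, "mass_g": 820.0},
    bibliography=["Hypothetical site report (1954)"],
    notes="Provenance card requested from museum; dating uncertain.",
)
catalog[card.ref_id] = card
```

However the records are actually kept, the design goal is the same: every object can be cited by a stable ID, with its measurements, location and bibliography gathered in one place.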

This is why, as an aside, your research historians seem to publish so glacially – usually a few years between books – while ‘pop’ historians can seem so prolific. If you aren’t doing that primary source research and are merely summarizing and reformulating the primary source work other historians have already done, it’s possible to write much faster! But of course you aren’t writing with any new evidence (even if you may have new conclusions), so there’s a real limit to how far our understanding of the past can get by merely remixing what we already know.

Nevertheless, no one pays historians to just learn things for themselves. Instead, we expect historians to teach and write, which brings us to:

Publication

Most research historians are employed as faculty at universities, which means their job duties involve a mix of teaching, research and publication. The exact mix of those varies from institution to institution – roughly equal time at research-oriented R1 universities, whereas teaching-oriented colleges expect a lot less research than teaching. We’re not going to treat teaching in depth here, but I just want to note it because of course one way all of this research ‘comes out’ is through the teaching process as well as the publication process.

That said, historians are expected, of course, to publish their findings. As we’ve discussed before, that publication takes two basic forms: field-to-field (writing by academic historians for academic historians) and field-to-public. Those two forms are not always done by the same people: you might have research historians whose work is published in field-to-field form (which we’ll get to in a moment), which then ends up in field-to-public form (textbooks, popular books, blogs, etc.) through the work of other historians. Often that second group are also research historians, and the way that historian A’s work actually reaches the public is that their very narrow, particular conclusions become part of the framework for historian B’s work, which is in turn broad enough to excite public interest.

You all have seen that a fair bit here, of course, where very niche publications often appear in my footnotes or bibliography notes – but of course this is a field-to-public venue. A lot of what I do here is translating that field-to-field scholarship for a public audience (whereas, by contrast, my book project is very much a work of original research).

I used this chart in a previous post to trace the ways that history goes from the work of research historians (generally, though not always, academic historians) and filters through to the public in various forms. In that post, we covered all of these connections a little bit, whereas in this post, we’re really focused only on the left-most side of the chart, particularly those boxes in purple.

Publication for historical research comes in three major forms: the conference paper, the journal article and the book.15 Generally speaking, the way this works is that the overall, large-scale project is the book project, while smaller chunks become articles, chapters and conference papers. That is, again, because history is a ‘book field’ – what matters the most for impact in the field and professional advancement is the big argument expressed in a monograph (a single-author scholarly volume).

The conference paper is the least impactful form of publication, in that it isn’t really published: these papers aren’t usually distributed in a written form. Instead, the conference paper serves two main purposes: it alerts other historians in the field to the state and nature of your current project (good for collaboration!) and it walks out the argument in an abbreviated form for feedback. It can be hard to take conference criticism – and you will get harsh conference criticism – but that lets you know where the weak points in your argument or your evidence are, so that you can strengthen them (or, if necessary, trim your argument’s sails a bit).

Far more substantial is the journal article or its close cousin, the chapter in an edited collection. While we generally plan research projects as book projects, in sifting through all of that material above, historians inevitably run into situations where they hit a research dead end or an interesting point that simply isn’t going to fit structurally into the book. Those components get broken off and become articles. And of course sometimes what was intended as a major point turns out to just not have much more to it than the c. 10-15,000 words of an article, so in that case, one publishes it as an article (might as well get it out there!). Finally, you will also see ‘trial’ versions of a book’s argument walked out as an article as well: a smaller, more restricted version of the argument (say, over a smaller chronological period), often a few years before the book proper is ready.

Journal articles, edited chapters and scholarly monographs all go through a process called peer review before publication, though this works a bit differently for each type. In the case of a journal article, the historian submits a finished, polished draft which the journal editor reads; the editor can then reject the article or send it out for review by two reviewers who are also experts in the subfield in question. Those experts provide detailed – often famously harsh – comments and a recommendation to either accept the article, reject it or require substantial revisions. The final decision rests with the journal editor or editorial board, but in theory the idea here is that no article goes out without at least being run past two other experts to make sure – even if they don’t agree with the argument – that it is basically sound.

For an edited collection – a book of chapters, each written by a different historian on related topics – generally the book editor assembles the authors, who write their chapters, and then the entire volume is submitted for peer review by the publisher. Those reviewers basically then review each chapter, with each chapter author getting responses back, but in theory the whole volume doesn’t move forward unless all of the chapters pass muster.

For a scholarly monograph, peer review is handled by the publisher and their editor. That step can happen earlier for monographs, because peer review generally happens before the press signs a book contract with the author. That may mean the book is not entirely finished: the package for review is generally several chapters (but can be the whole book) and a proposal outlining the entire project. In all of these cases, the process is double-blind: the reviewer is not told who they are reviewing and the author is not told who their reviewers are, in order to avoid any sense of fear or favor (though with scholarly monographs in small fields, it is possible to run into a situation where practically any qualified reviewer is going to know simply from the content of the text whose project it is).16

Now it may seem strange that books thus seem to get a bit less scrutiny, but they have an additional layer of review, the book review. Many academic journals in both history and classics have a book review section. Publishers will give out copies to journals so those journals can send a copy out to an academic reviewer (that is, another scholar in the field) who will write a book review, published in the journal. For the ancient-history-curious, the ‘review of record’17 for all things Greek and Roman in English is the Bryn Mawr Classical Review (BMCR), gloriously online and open access. Unlike pre-publication peer review, book reviews are not at all anonymous, but they reflect an early stage of scholars in the field taking stock of a new work and gauging its likely importance and impact.

I should note that there’s often an assumption that this process is deeply political – to which I’d encourage folks to actually read a bunch of academic book reviews, for instance those at the BMCR (link above). These tend to be fairly technical reviews, covering the structure of the work, its main arguments and evidence, and the audiences likely to benefit from reading it. The core question being answered is not ‘did you like it’ (although ‘does the argument actually work’ matters) but ‘who should read this and why?’ It is very common to see book reviews where it is clear the reviewer disagrees quite strongly with the argument being made but nevertheless concludes by noting that the argument is likely to stir the pot in a given subfield and so everyone in the subfield probably ought to read it, if only to register objections!

Now the thing I’ve left out in all of this, of course, is the actual writing process. This isn’t the place to get into a detailed discussion of how historians write our books – that would be its own huge topic and not one I necessarily feel qualified to hold forth on. What I do want to note is that the demand of scholarly historical writing is that it be precise and meticulously sourced. For a sense of how meticulously, my current book draft is about 190,000 words (c. 600 pages) and cites about 750 works over about 2,000 footnotes; that is longer than most history monographs but the ‘citation density’ is not unusual.18 Meanwhile, the demand for very precise writing means that sentences have to be labored over for clarity and there’s often a fair bit of technical language.

All of that means that scholarly writing is unavoidably slow. My sense is that for most historians, putting down around 500-1,000 words in a day is a very good ‘writing day’ – and of course we’re teaching and doing other professional tasks, so not every workday is a writing day! Many writing days are much slower. So assembling a typical-length 75,000-word monograph might involve at least 150 writing days. In practice the figure will be much higher and, given the scarcity of writing days, that’s probably anywhere from one to two years of writing, which is in turn almost invariably sitting on top of at least as much time – often far more time – spent in the research process.

A Possession For All Time

Then, as we’ve discussed before, those scholarly monographs (as well as articles and such) get used by other scholars as the building materials for public-facing history products: they are the bricks and mortar of your (good) pop-history books, podcasts, YouTube channels, and, yes, blog posts. Those final products thus sit on top of an enormous amount of research labor, conducted slowly and painstakingly and requiring a lot of specialized skills.

When folks imagine that ‘historian’ is a job that doesn’t require special training, they are often imagining the job of the historian as reading history books. But the task of writing history books and, more importantly, doing the research to discover the past does demand quite a lot of specialized skills and training. And of course now is where I note that the institutions we created in our society to train those historians and support their research are in distress. This is not some new political thing (although current political trends are negative for the field); those institutions have been in distress for going on two decades now. We do not know anything like ‘everything’ about the past: vast troves of evidence remain largely untouched, be they archaeological evidence from the ancient world or the seemingly endless archives of modern administrative states (especially outside of Europe and North America – historical scholarship on the rest of the world is very thin). And of course linear time has a habit of minting one new minute of history every minute.

The process of discovering that past is a painstaking, careful, meticulous job, largely carried out by specialists, but of course we keep at it for the same reason Thucydides laid out at the very dawn of the discipline in c. 400 BCE:

If it be judged useful by those inquirers who desire an exact knowledge of the past as an aid to the understanding of the future, which in the course of human affairs must resemble, if it does not reflect it, I shall be content. For I have written my work, not as an essay to win the applause of the moment, but as a possession for all time.19

  1. It isn’t; we have reference works for this sort of thing. Modern historical study, like most pathbreaking intellectual work, demands more ‘data’ than the human mind can hold in attention all at once, which is why we have reference works, databases, spreadsheets, and notes…like every other field, humanities or STEM.
  2. This is naturally something of a simplified model: most research projects will be more complicated than this, as the core effort generates blind alleys and dead ends which themselves often become smaller projects (often published in articles or chapters) and so on. But we have to start somewhere and a simplified model is that somewhere.
  3. Who in turn worked under Sir James Mann (1897-1962), who among other things was at one point a Reader of the History of Art at the University of London.
  4. True autodidacts also exist and can be valuable iconoclasts within the field, but their work often exhibits what I’ve heard one colleague phrase as the “scars of autodidactism,” an often quite visible pattern of idiosyncratic errors where the autodidact scholar does not know what they do not know and so stumbles into precisely the sort of pitfalls historical training is meant to avoid, some of which are discussed below. That doesn’t mean autodidacts are bad, mind you: their tendency to come at problems from unexpected angles can help keep everyone else honest. But, to be frank, they tend to make more grievous and more frequent errors as well.
  5. The normal academic phrasing here is ‘knowledge creation,’ but I’ve found that’s often pretty off-putting to the public – it sounds like ‘manufactured truth.’ I find ‘discovering the past’ communicates much the same idea – new knowledge, things we didn’t know before – but stresses the important notion that this new knowledge is tethered to historical evidence.
  6. This error, famously made by Naomi Wolf back in 2019, is actually a really good example of those “scars of autodidactism” I mentioned in the footnote above. Wolf’s training was in English literature, not history, and it showed, as this is the sort of error a historian of British law is unlikely to make: among specialists the meaning of ‘death recorded’ is, as I understand it, very well known, as it was quite common.
  7. For instance, on the ancient side I took coursework in epigraphy, paleography, quite a lot of archaeology and of course a bunch of Greek and Latin. I also did more ‘literary’ coursework (reading tragedies, comedies, oratory, philosophy) because, given the literary nature of many of the ancient sources, being able to engage in that kind of literary textual criticism – to understand our historical texts as literature – is actually quite important. On the military history side, I also took a couple of intensive upper-level graduate seminars working through military theory: major historical thinkers, trends and schools within military history and so on.
  8. I lead off my ancient history survey with the Kadesh Inscription for this purpose. It is almost entirely a lie, fabricating a glorious victory in a battle that was actually a debacle, but we can learn things about ancient Egypt by the way the text lies.
  9. Obviously, if I had evidence that they were radically different – for instance, Celtiberian elders doing something Gallic senates don’t do or can’t do – then that would massively reduce the value of the comparison and I’d probably have to drop it. To put this approach another way: I have two sets, both of which I can observe only imperfectly, but which behave similarly when fed into a given problem (the output of which I can observe; this ‘problem’ is ‘Roman armies’). If one set is [B C D E F H] with two missing values and the other set is [ D E F G ] with four missing values, it makes sense to suppose, because they produce the same outputs (this doesn’t work if you have no reason to suspect the sets are similar!), that the first set is more likely to have ‘G’ as a missing value and the latter to have ‘B’ ‘C’ and ‘H’ than any other possible value, though note that this is ‘more likely,’ not certain. And note that I cannot assume ‘A’ or ‘I’ in either set, just because of the progression – even if I am right that the sets have overlapping values, each still has one value I cannot fill in. The historian’s solution here is not to guess but to admit the unknown.
  10. The Philip II here is of Spain, not Macedon. Originally published in French, so the original title is La Méditerranée et le Monde méditerranéen a l’époque de Philippe II.
  11. Although we are expected to be able to understand and engage with the way the philologist records their process (an apparatus criticus) and engage in our own philological arguments – including potentially weighing in on the correct ‘reading’ (the best effort to reconstruct the original text) of a passage. Thus, while not specialist philologists, ancient historians require some philological training, even beyond simply learning Latin and Greek.
  12. “How much does this weapon weigh?” “What is on the provenance card for that object?” “Who was responsible for this dating?” and so on. And on. And on. In a few cases, it was a matter of finding an object reported in a museum publication or even an archaeological site report and then having to go through the chain to figure out where it is now in order to get it weighed or measured to get information that wasn’t included in initial publication.
  13. I owe this raid-vs-siege metaphor to Wayne E. Lee, who used it when teaching us research methods in grad school. He used it in the sense of raiding and sieging archives, but it can apply more broadly. I might wonder, for instance, if a given ancient source is going to be valuable for my question (because I have a general sense of what sources discuss what things); the ‘raid’ is often a quick read in translation to get a sense of what gets mentioned – if the raid rewards my efforts, the ‘siege’ is a closer read in the original, often with resort to historical commentaries and secondary scholarship to provide added cross-references.
  14. For the very curious, you can see the smaller (c. 400 artifacts (in c. 300 entries as some entries are ‘groups’), as I recall) catalog-form of this pile of information in the catalog appendix to my dissertation.
  15. The journal article has a major parallel subtype, the chapter in an edited collection.
  16. Though I should note a fair bit of care is taken to preserve anonymity. Among other things, authors are expected to strip out anything identifying in the text (e.g. a reference to their previous work as ‘my work’ or an acknowledgements section).
  17. The review forum in a given field or subfield which carries the most prestige and weight.
  18. Please note that is not the final size of the book, which may well get edited down substantially for publication.
  19. Thuc. 1.22.4

161 thoughts on “Collections: What Do Historians Do?”

  1. Would a perhaps more successful (as in valuable as an outside voice for the development of the field) example of an ‘autodidact’ than Wolf be Edward Luttwak, and his work on Roman strategy and the frontier? I haven’t read his work in depth other than a brief brush with it in undergrad, but my understanding is he is a fairly archetypal example of someone who came in without historical training and asked valuable new questions that reframed work done by others, even if his conclusions sometimes reached further than his evidence and approaches supported.

  2. The role of archaeology and sciences also comes in. For instance, if a source text says “we carried their bodies into the sea and denied them a decent burial” but you find graves, it’s likely the bias is being uncovered.

    If the source text says “we slaughtered them man, woman and child, not a slave was taken nor a family spared” and DNA extraction finds bloodline evidence to the contrary, likewise. I doubt this is ever black and white, but as historical triage of knowledge improves, historians get to be revisionist.

    1. DNA has re-written the Anglo-Saxon invasion of Britain (turns out the Celts were largely absorbed rather than driven out).

      1. Sort of? I recall a lot of older, pre-DNA works treating that as a pretty dubious myth already: a lot of “the story is that the Celts vanished, but it sure seems like the island is full of them today.”

        I’m not sure to what degree that may have been a minority view that I just happened to be overexposed to, or whether it was widely accepted among specialists but had not yet displaced Victorian myths among lay readers. Either way, it is certainly not the case that DNA evidence produced some kind of completely unexpected result.

        1. Certainly there are parts of the island that are obviously full of Celts, because the island includes Scotland and Wales. But the DNA showed even England is only 30% Anglo-Saxon and 70% Celtic. This isn’t what you’d expect from the archeology, which shows Celtic artifacts suddenly and completely replaced by Anglo-Saxon artifacts.

          1. Also not what you would expect from the culture, which shows Celtic religion (i.e., Christianity) thoroughly displaced by Saxon paganism. Or from such primary sources as we have (basically Gildas, to my knowledge). I find the whole situation puzzling.

          2. ey81: Yes, Gildas is the only narrative insular source we have who actually lived during the period in question. There’s a whole subfield of Gildas exegesis which appears to revolve around having a subtle command of 5th-and-6th century Latin* to try and lock down exactly what Gildas is saying; there are multiple plausible ways to interpret the typical English translations of his Latin, with significant implications for when he lived and what exactly he is describing.

            *I used to have “did well on the AP exams” high school Latin, and it did not remotely help clarify ambiguities in Gildas’ syntax.

            The other problem with Gildas is that trying to understand British history in this era through the lens of his writing is like trying to understand American history, 1950-2000, through the lens of an essay collection written by Jerry Falwell; his recounting of “history” is entirely subsidiary to his rhetorical/theological motivations.

          3. There are no Celtic artefacts in lowland Britain in the 5th century AD. There are indubitably Roman ones, and others that have been classified with various degrees of plausibility as Saxon.

          4. ey81: It is very much debatable (AFAIK, practically unknown) how many Christians there were in lowland Britain on the eve of the departure of the legions (380-409 AD). Christianity was strongest in the East, in cities and near the emperor, and Britain was as far away from the East, as weakly urbanized and as far from the emperor (after 380 AD) as a Roman province could be. You are right, though, that the religion of lowland Britain in 600 AD may have had a stronger streak of Germanic than of Celto-Roman paganism. One last point: Christianity may not have disappeared completely from lowland Britain by 600 AD. One hint is the continued veneration of St. Alban at the place of his martyrium.

          5. I wonder how distinct Anglo-Saxon DNA and Romano-Briton DNA would be. I think I first saw the 70% British figure thirty or so years ago – has it remained unaltered?

        2. One challenge for the resource-deprived would-be amateur historian is that if you’re restricted to what’s available in your local bookstore and nonacademic library, the scholarship available to you is likely to be several decades out of date. So in the early 90s, as a young enthusiast of the Late Antique/Early Medieval, I read a lot of books which were written in, or were based on work done in, the ’50s and ’60s. The idea that the Angles, Saxons, and Jutes by one means or another functionally replaced the incumbent Romano-British population of south/east/central England was often taken as a given, even if the attendant mythology around the subject (Hengest, Horsa, three ships, etc.) was not to be taken seriously.

          But at the time young medrawt was reading those books, the pendulum of scholarship had already swung very far in the direction of saying:

          “NO, actually, we doubt that happened, the whole ‘migration era’ theory of the period which models it as movement and displacement of entire peoples is discredited, it’s essentially a Naziphilic conception of history [Germans on the March!] which was then recast as a Naziphobic conception [Germans on the March! {derogatory}], but we need to throw out the whole framework, nobody had a conception of ethnicity which would be recognizable to us anyway, there’s no contemporary evidence of violence driving the cultural changes in England at this time, the whole thing is bunk.” The actual model being, basically, the ethnic identity of the overlords changed, and with it linguistic/cultural practices changed as well, but the underlying demographics of the peasant and artisan classes was not meaningfully altered.

          DNA evidence has since been used to knock down that backlash to the older model, sometimes a little bit and sometimes a lot, as holdthebreach’s comment indicates. For the nothing it’s worth, my mental model, which is STILL based on out-of-date (but more recently so) scholarship, is that the A/S/J [or whomever, really] arrived in sufficient numbers and with sufficient violence to meaningfully alter the demographic composition of south/east/central England, through death, displacement, and subjugation, but not anywhere near close to the profound extent indicated in old scholarship and/or the histories of these events actually written in the Middle Ages.

          1. There are some other options. Once you find one or two contemporary works which survey the field (note that many popular works will have a large bibliography), you have some idea of what contemporary writings you should check out. From there, you might be able to obtain access to jstor through your alma mater. You can obtain access to the catalogues of major public libraries (like the New York Public Library, which I think is bigger than Columbia’s), and turn to interlibrary loan.

          2. “NO, actually, we doubt that happened, the whole ‘migration era’ theory of the period which models it as movement and displacement of entire peoples is discredited […]”

            I think that is based on the 19th century myth of ethnic groups as biological units. When an archaeological culture replaced another, it was believed its bearers had biologically replaced the previous inhabitants of the area. It was taken for granted they had the capacity to kill or force away all members of the pre-existing population. I don’t think they had that. Also, 19th century people might not have thought about the possibility of transporting large numbers of people; the increasing motorisation of transportation could have made people lose track of how hard it had been.
            A fair chunk of colonialist pipedream may have gotten into such ideas too. European colonialists expected loads of their countrymen to emigrate to their colonies in Africa and Asia. Turns out they mostly did not want to. New Zealand and parts of Australia did get enough people for their descendants to form the majority there. Some also settled in South Africa, where they form a considerable minority. But most emigrated to North America (Canada, US) and the Southern Cone of South America (Argentina, southernmost Brazil, Chile, Uruguay). Attempts to found white settlements in the tropics usually failed. Too many colonists died from tropical diseases too quickly. So much for the claim of hereditary superiority.

      2. Paleogenetics is still an emerging field, and it’s only really in the last 10 years that the available data and methodology have become robust enough for its conclusions to hold up. Before that, every new result seemed to completely overturn the previous ones.

        It deals with older migrations than the Anglo-Saxon one, but David Reich’s “Who We Are and How We Got Here” is an absolute must-read for anyone interested in what we can learn from ancient DNA, how we can learn it, and the current state of the field (or, rather, the state 5 years ago).

      3. That’s not what happened (to the best of my knowledge). Older historians favored “the Celts were driven off and/or slaughtered” narratives, but there was a huge pushback against that in later decades, with historians claiming that there was very little population replacement due to the Anglo-Saxon conquest.

        If anything, DNA evidence forced historians to move away from the more extreme claims of minimal population replacement, with many “pots not people” historians/archeologists being dragged kicking and screaming towards the middle of the road by DNA results showing pretty unquestionable evidence of (some) population replacement.

        1. Also note–I’m afraid our ancestors weren’t much into “me too”–that one often finds the male DNA replaced, while a good bit of female DNA persists. There was an article in today’s WSJ about Reich’s recent genetic study of the Yamnaya (Indo-European?) expansion which makes this point.

          1. True. But then consider: if the actual evidence goes something like:

            “We know the Whozits conquered the Boronians and established the kingdom of Placeland, and evidence from DNA extracted among Placelanders 100-200 years later indicates that of the pool of their ancestors who lived through the conquest, 30% of the men and 80% of the women were Boronians, with the balance being Whozits.”

            Well, it invites the question: if we know that something like this is what actually happened, what does it say about our historical framework and our conception of what “being a Boronian” or “being a Whozit” means, if we say “the Whozits destroyed the Boronians” or alternatively “the Whozits merged with the Boronians?”
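
            To make the arithmetic concrete, here is a back-of-envelope sketch in Python using only the made-up Whozit/Boronian numbers above (real ancestry inference is far more involved, but the marker-by-marker logic really is this simple):

            frac_boronian_men = 0.30    # hypothetical share of male ancestors who were Boronian
            frac_boronian_women = 0.80  # hypothetical share of female ancestors who were Boronian

            # Autosomes come half from each parent, so over the long run the
            # expected autosomal ancestry is the average of the two lines.
            autosomal = (frac_boronian_men + frac_boronian_women) / 2

            # Y-DNA follows only the male line; mtDNA only the female line.
            print(f"autosomal: {autosomal:.0%}, Y-DNA: {frac_boronian_men:.0%}, mtDNA: {frac_boronian_women:.0%}")
            # autosomal: 55%, Y-DNA: 30%, mtDNA: 80%

            So the same population can be majority-Boronian by overall ancestry while its Y-chromosomes read “mostly Whozit” – exactly the kind of male-biased signature discussed below.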

          2. Well, to the extent that we are talking about rape, concubinage and forced marriage of Boronian women, “merged” seems awfully weak from any perspective.

          3. what does it say about our historical framework and our conception of what “being a Boronian” or “being a Whozit” means, if we say “the Whozits destroyed the Boronians” or alternatively “the Whozits merged with the Boronians?”

            I wouldn’t really say either one – I’d say the indigenous group was replaced, not by the Whozits but by a new, mixed-origin group. It’s still not the original people though.

            That said, I don’t think male-biased genetic signatures necessarily always correspond to conquest, though in some cases they do.

          4. I’m not sure what other plausible interpretation there would be for male-biased genetic signatures other than conquest. A group of men with game?

            (If you don’t know that expression, your life is better for it, but you can google yourself into the PUA cesspit if you are curious.)

          5. I’m not sure what other plausible interpretation there would be for male-biased genetic signatures other than conquest

            A group of foreign men arriving with access to resources / wealth seems like an obvious alternative to me. Native women are going to be more likely to consensually choose them as partners.

          6. “I’m not sure what other plausible interpretation there would be for male-biased genetic signatures other than conquest”
            “A group of foreign men arriving with access to resources / wealth seems like an obvious alternative to me. Native women are going to be more likely to consensually choose them as partners.”

            For which we have an actual pretty obvious case.
            The Ashkenazim of Poland have a noticeable admixture of female line local Gentile ancestry which, back in the early 14th century, was not present in Ashkenazim of Germany.
            There is no historic record of Jews forcibly conquering Poland in the 13th–14th centuries (a period otherwise decently covered by sources), nor contemporary allegations of them forcibly kidnapping Gentiles there. (Little “St.” Hugh of Lincoln was in England, not Poland.)

          7. There is no historic record of Jews forcibly conquering Poland in the 13th–14th centuries (a period otherwise decently covered by sources), nor contemporary allegations of them forcibly kidnapping Gentiles there. (Little “St.” Hugh of Lincoln was in England, not Poland.)

            That’s a good example!

            I was also thinking of Madagascar where the overall genetic ancestry is roughly 59% from southeastern Africa, 37% from Indonesia, and 4% other, but the southeastern African ancestry is weighted towards the male side. There’s certainly no evidence of Bantu settlers along the coast *conquering* the more Indonesian-descended people in the interior (in historical time it was more the reverse), but we do know the people arriving from Africa probably were the ones who brought cattle to the island (or at least, in larger numbers than were there already) and in that context cattle are an important economic resource and source of wealth and status.

          8. “the overall genetic ancestry is roughly 59% from southeastern Africa, 37% from Indonesia, and 4% other, but the southeastern African ancestry is weighted towards the male side”
            “we do know the people arriving from Africa probably were the ones who brought cattle to the island (or at least, in larger numbers than were there already)”
            Do you mean the quoted “overall genetic ancestry roughly 59% from southeastern Africa” is weighted towards the CATTLE side?

          9. @chornedsnorcack,

            sorry, what was your query?

            Malagasy are more Southeastern African Bantu than Indonesian genetically (though their language is thoroughly Malayo-Polynesian). African settlers seem to have been the ones to introduce cattle, at least most cattle there today traces its origins to East Africa.

        2. I am under the impression it was Malays who took agriculture to Madagascar. My idea is it would have been easier for them because they had better boats. However, if Africans later took cattle to this island it would explain its native population being more than half African.

          1. It might have been better boats, or it might have been the fact that the ocean currents are more favorable (the Mozambique Channel between Africa and Madagascar is notoriously hard to navigate). But, yea, the people from present day Indonesia were the first to bring agriculture. Africans were probably the ones to introduce cows though (on the basis of the cattle genetic study) and also probably introduced chickens (since that’s one of relatively few Bantu loanwords in the language).

          2. You may have mixed them up with guineafowl (also known as pearl hen) which belong to a different family of birds. The chicken is descended from the red junglefowl found in Southeast and South Asia. Most Polynesian peoples had chickens at the time of European contact. Unlike the people taking agriculture to Madagascar, the Polynesians did not have metal tools. They must have splintered from their common Austronesian-speaking ancestors before acquiring this technology.

          3. You may have mixed them up with guineafowl (also known as pearl hen) which belong to a different family of birds

            I assure you I didn’t, haha. The words for chicken and for guinea fowl, in Malagasy, are *both* Bantu loanwords, among the relatively few Bantu words in the language, indicating that both species arrived from Africa, probably with people. A 2017 study on the genetics of Madagascar chickens also confirms that they arrived from Africa, not from island Southeast Asia:

            https://royalsocietypublishing.org/doi/full/10.1098/rsos.160787

            Madagascar wouldn’t be the first place where Austronesians were able to introduce some, but not all, aspects of the Southeast Asian agricultural package.

            Interestingly enough, while guineafowl were probably domesticated at the time when Africans introduced them, they’re only semi-domesticated in Madagascar today, at least the part of it I’m familiar with. People catch them in the wild and raise them in captivity (as was done with Asian elephants historically), so they’re clearly OK with a domestic existence, but they tend not to breed in captivity anymore.

          4. I actually checked out where the helmeted guineafowl originated. Being the wild form of the domesticated guineafowl it is indigenous to the grasslands of Sub-Saharan Africa. Similarly, I checked out where the red junglefowl lives too. It is possible Southeast Asian chickens did not survive in Madagascar. If so they could have been introduced from Africa later.

          5. @Lena Synnerholm,

            Yes that does seem to be the case, for both cows and chickens, on both genetic and linguistic grounds. It’s a long trip, I’m not surprised some of the introductions may not have been successful.

            Interestingly enough, nearly all the names for domestic animals in Malagasy derive from East Coast Bantu loanwords (chickens, guinea fowl, cows, sheep, maybe dogs), from novel native coinages (turkeys, guinea pigs, both of which were of course fairly recent New World introductions), or from Western European loanwords (pigs, probably rabbits), and not from Austronesian roots. Goats might be an exception. Linguistics of course can’t prove anything (and is not a hard science itself) but it’s suggestive.

          6. If the similarities are systematic and not arbitrary, the etymologies are probably correct. I consider such correct etymologies pretty strong evidence of where the speakers of a language got something from. One could compare to this map (with text in French) showing the way sweet potatoes spread through etymologies:
            https://commons.wikimedia.org/wiki/File:Dispersion_de_la_patate_douce01.svg
            It is possible no domesticated animals brought by the original Southeast Asian colonists survived on Madagascar. The possible exceptions are dogs and goats since their etymologies are uncertain.

          7. There are a lot of false etymologies out there. We can know they are false because they are arbitrary. Oftentimes those are used as unscientific arguments for relationships between languages. It does not occur to the people making such arguments that grammar and phonotactics have to be considered too. What I mean by grammar and phonotactics I have explained here:
            https://blog.ifraagasaetterskan.se/What-is-linguistics#wbb1
            While I know almost nothing about Austronesian grammar Malagasy very much sounds Austronesian to me. Once I was shown a sample text in Malagasy without being told on the spot which language it was. I was then able to identify the language as an Austronesian one. Definitely not Atlantic-Congo like the languages on the other side of the Mozambique Channel.

        3. It now seems most likely that people from Borneo were the first to successfully settle Madagascar. (Malagasy is most closely related to some languages spoken on Borneo.) To found a viable human population one needs at least 500 people to start with, or continuous contact with the rest of humanity. Without sails I think people would have been limited to island-hopping through the Comoros, which might have been too difficult to do on a regular basis.
          Moreover, Madagascar being settled no earlier than Antiquity would explain how its native megafauna could survive for so long. This island has been isolated from all larger landmasses since the Cretaceous. The only non-cetacean artiodactyls naturally found there were hippos. All other niches of large herbivores and omnivores were taken up by very different animals. These were flightless birds and the offshoots of early primates (lemurs). The top predator of this ecosystem was a relative of the mongoose the size of a leopard. I think this megafauna was gradually squeezed out by habitat destruction. Their eventual extinction was late enough for them to be remembered in oral traditions. At least Madagascar’s 17th century inhabitants were aware of the elephant birds.

    2. There are also a number of scientific techniques people don’t really think of that can produce a lot of new evidence for historians, e.g. my field (neutron imaging) has been used quite extensively to look at the composition and metallurgy of ancient artifacts – even uncovering things like armorer’s marks that have eroded away completely, invisible to the naked eye.

      1. I know a couple of fellow chemistry undergrads who are doing something similar – working with a history prof to scrape off some residue from an ancient pot, run it through an LC/MS, and such

  3. > When folks imagine that ‘historian’ is a job that doesn’t require special training, they are often imagining the job of the historian as reading history books.

    The root of this misunderstanding might partly lie in the double meaning of the word “research” in English. One meaning is meticulous study pushing the boundaries of current knowledge/understanding, resulting in new discoveries. The other is looking for articles / books that answer a question you have. Because those two very different activities are called the same thing it’s easy to get them confused. That leads, in extremis, to people on the ancient-aliens-to-anti-vaxxers spectrum equating their 10 minutes of googling with the lifelong efforts of whole academic fields or medical research labs.

    1. To be fair, I don’t think your anti-vaxxer really thinks their googling is the same as a clinical trial.

      It’s more that they think their googling is the same as a Cochrane review or the like. Which it very much isn’t, but they are broadly doing a similar activity, in that they are both reading about clinical trials and trying to decide what they mean.

      1. While I’m sure that there are some anti-vaxxers who think what I said, you’re probably right on the whole. I was being somewhat poetic / hyperbolic.

        I think, however, that a Cochrane review *is* still research type 1 (going by my definition above). It’s just that rather than turning data about individuals’ health after taking medication into medical papers, it uses the data in medical papers to produce statistical meta-medical papers. But that’s still discovery – statistically aggregating different experiments while taking into account differences in methodology and the corresponding biases is an act that produces something more than the sum of its parts. That’s very different to research type 2, which is just learning from others without creating any novel output. A Facebook post summarising what you read (and probably misunderstood) doesn’t count as novel output.
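
        For the curious, the core arithmetic of that kind of aggregation is surprisingly compact. Here is a minimal Python sketch of fixed-effect inverse-variance pooling, with entirely invented effect sizes and standard errors, and ignoring everything a real Cochrane review adds on top (heterogeneity, bias assessment, random-effects models, and so on):

        import math

        effects = [0.40, 0.25, 0.55]   # invented per-study effect estimates
        std_errs = [0.20, 0.15, 0.30]  # invented standard errors

        # Weight each study by the inverse of its variance, so precise
        # studies count for more and noisy ones for less.
        weights = [1 / se**2 for se in std_errs]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))

        print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")

        The point is that the pooled estimate is a genuinely new result – it exists in none of the individual studies – which is why it still counts as research type 1.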

    2. Maybe that is where the myth of scientific research as some sort of highly paid schoolwork comes from? Plus the mix-up of “highly skilled” and “highly paid”.

    3. There’s also the not insignificant factor of anti-vaccine advocates lying, intentionally, because they hate knowledge, humanity, and truth as concepts. Most of them know they’re saying crazy things when they equate their activities to research, and that’s the point. They are ideologically and financially opposed to research. Plus a lot of people who are antivax otherwise are just following an authority or conspiracy that appeals to them, and aren’t the people we interact with who use the rhetorical techniques you’re referring to.

      1. They might not hate humanity so much as be wrong about what is best for it. Much of what can be mistaken for hatred is a failure to grasp other points of view.

        1. No, in this case they’re really just deeply evil. A lot of the anti-vax movement’s intellectuals are demonstrably dishonest about their views, evidenced both through obviously hypocritical actions and exposé pieces that have shown a disconnect between their private beliefs and public ones. They have motivations – typically money or political power – other than helping people, and are either completely indifferent or actively gleeful about killing rubes.

          It is hard to accept, but the anti vax movement is helmed by actual monsters.

          You’re not wrong that it’s possible to mistake hate for differing views, but I am fairly certain that they’re legitimately evil in this case.

          1. So yes, indifference is more common, as is simply wanting different things more than you want good things for others (selfishness and indifference), but there truly are people who enjoy, explicitly and directly, violating social norms and standards of behavior. People who enjoy being transgressive because it is transgressive. They are still acting out of real incentives and trained behavior – they weren’t born that way, any inherent tendencies have to be trained – but the end result is that if you describe evil, they want to do it because it is evil.

            Transgressive asocial sadists are real.

            Those types are a not insignificant number of the people who end up on, say, Epstein Island or in the current administration, a Venn diagram with not insignificant overlap. They’re as close to just being evil as an actual thing can, in fact, be. That’s the only way to explain some of the excesses of the powerful, and a small part of why they’re letting RFK flirt with another pandemic, despite the risk it poses to them: they just like the idea of people dying.

          2. You should consider Hanlon’s razor:
            “Never attribute to malice that which is adequately explained by stupidity.”
            I have written two blog posts about why I don’t think ill will exists and where the idea comes from:
            https://blog.ifraagasaetterskan.se/Why-ill-will-is-a-myth#wbb1
            https://blog.ifraagasaetterskan.se/Hierarchism-and-the-myth-of-ill-will#wbb1
            About the current American administration I think it genuinely is horribly incompetent. Its members are so obsessed with scapegoating people different from themselves they don’t care to figure things out.

        2. I see anti-vaxxers just as the embodiment of the critical difference between prevention and cure. It’s super hard to determine whether a prevention actually works. At best the result is the status quo, which is hard to distinguish from what happens when the prevention is not used. At worst it’s tragedy. The opposite holds for cures. At worst a cure leaves the status quo (mostly – there are cures that can make things worse). At best it’s manna from heaven.

          As such, prevention (which a vaccine is) will always be under a harsher critical eye and meet more resistance than good old cure.

          1. I think it is rather a matter of not being able to grade risk. This is due to a combination of being prone to fear and having poor reasoning ability. “Any risk at all” is then misunderstood as “overhanging risk”. Moreover, people might believe in risks which ‒ at closer inspection ‒ turn out not to be there. I have heard of people believing certain vaccines to be potentially deadly. Something modern vaccines are not. Maybe some experimental vaccines were, a century or more ago. But such vaccines would be illegal today.

          2. People may have the wrong idea of what the risks are. Nowadays vaccination for various potentially deadly diseases is widespread. Deadly diseases not commonly vaccinated for are either rare (like plague) or can be better prevented in other ways (like cholera). As a result people have started to take it for granted that they will survive infections. This is why Covid-19 frightened so many people when it started spreading around the world. Here was a new disease being very evidently deadly. At the time I calculated its untreated mortality at 3%. “Not particularly deadly” as Jared Diamond put it.
            At the same time people might fear that vaccines cause autism. The connection was originally based on a fraudulent study: Andrew Wakefield turned out to have forged his data, in order to promote his own vaccine against measles alone instead of the MMR vaccine. Unfortunately, people started believing that all vaccines given to small children cause this. Large cohort studies have found autism to be no more common among vaccinated than unvaccinated children. I think autism is inborn, really.
            I don’t think there is any epidemic of autism spectrum conditions. Instead there are several types of problems of varying severity within the spectrum. The adding of less severe types to the group of autism spectrum conditions has resulted in extended definitions. I am one of those ending up within this extended definition of autism spectrum conditions. Moreover, society has developed so as to increase demands on the very capacities which are impaired in such conditions. This means smaller degrees of impairment now cause more problems. All this has contributed to an increase in autism spectrum diagnoses, which in turn is misunderstood as a real increase in profound autism:
            https://www.bbc.com/future/article/20250415-the-genetic-mystery-of-why-some-people-develop-autism

  4. The opening section on history-as-scripture has crystallised why I find engaging with much amateur history so frustrating. I have no formal training in the field myself, but I’ve picked up enough to have an alarm go off when someone treats history as unquestionable scripture whose main purpose is to provide anecdotes / examples for why one side in a modern political debate is right and the other wrong. Not only is such disdain for the intrinsic value of history rather sad in itself, but cherry-picking examples is always intellectually dishonest anyway. It also betrays a far deeper conceptual shortcoming.

    It confuses knowledge with understanding. Even if there were no new evidence (which is a glaringly false assumption in ancient history) to change what we know happened where and when in history, our understanding of it would still evolve as we ask new questions and apply additional layers of analysis. But history-as-scripture essentially denies that understanding is a thing (or that it has any value) and so strips away all layers of analysis. All that’s left is a fetishization of primary sources that are subject to only the most direct of interpretations.

    As an example, I recently tried to read Mike Duncan’s “The Storm Before the Storm” but couldn’t make it more than a couple of chapters in. It ignores all modern scholarship (almost all citations are from ancient sources), it barely engages with source criticism, and it jumps to allegorical commentary on US politics whenever it can. So I’ve now switched to Christopher J Dart’s The Social War, which is a little narrower in scope and is a less flamboyant read, but it actually does the work of carefully constructing an understanding of what happened and why.

    1. Without defending the part where it’s deployed as ammunition in current politics:
      “Scripture” is an interesting coincidental (or deliberate?) choice of comparison for this approach to history, since there was (is?) a very famous movement sort of centered around the idea that (religious) scripture should be read with the minimum possible interpretation. It also straightforwardly claimed that “…our [religious] knowledge can never (or ought never) to change and that all such change is thus just politically motivated ‘revisionism’ – because it assumes all of the evidence is already known, the ‘correct’ conclusions long ago drawn.”

      For secular knowledge, I think an honest mistake with similar effects can easily be made. The industrial (let’s say, 20th century) experience of knowledge is that one is surrounded overwhelmingly by objects that have been designed. The engineers who designed each item know how/why it works. With relatively little effort, some net of threads connecting its functioning all the way to the most basic and general principles can be grasped. (Incidentally, this is so pervasive that both anti-mainstream quackery (e.g. “magnetic healing”) and practices accepted in polite society but clearly behaving as fashions (e.g. fad diets, turbo-childcare advice) seek the appearance of scientific legitimacy and prominently bring up stipulated connecting threads (“something something hippocampus something”).)

      In an environment where the hard sciences are this prominent, people will reasonably generalize from them to the other sciences. If experiments in them are, in a sense, only open to a limited amount of useful interpretation, not just because new experiments could be thought up instead, but also because the subject matter (and/or energy; pardon) is simple and thus even arbitrarily large amounts of further analysis would be unlikely to find things not already known, it is a natural impulse to assume the same also holds of other fields of study. If e.g. the Rutherford experiment disproving the “plum pudding” model is extremely unlikely to hold revelations, say, 50 years after it was conducted, then surely archaeological artifacts likewise will have ceased being objects of cutting-edge study 50 years after they are found and analyzed. (“…which is exactly why they are on exhibit — performing magnanimity to the public is nice, but everyone would understand that as long as it held value as an object of study, the latter would take precedence; it would belong in a laboratory(?), not a museum.”)

      There is no denial of the concept of analysis, or of its value. Instead, one merely assumes that humans and societies are as simple as electrons and electron gas.

      1. I believe the religious framework you’re looking for is sola scriptura, which is very popular amongst Protestants, especially American Evangelicals. I’ve had people literally tell me that Catholics aren’t true Christians because they take things other than the bible as a source of religious authority. They weren’t happy when I pointed out that would make Christianity only about 500 years old, and mean that none of the apostles or church fathers who wrote the bible were Christian (what religion was Paul then?). The blog “History for Atheists” (recommended by our host in the past) has a fascinating breakdown of the history of sola scriptura.

        The comparison with the physical sciences is an interesting one I’d never considered before. I don’t think the difference in the lifetime of the usefulness of data points comes from the relative complexity of the subject matters per se (a single atom is of bewildering complexity in QFT), but because the two fields have very different goals. In the physical sciences, the aim is to construct a prediction-making-machine (called a theory) that agrees with pre-existing data when fed some inputs (aka matches experiments) and produces useful outputs when fed novel inputs (i.e., we want to predict if a bridge would fall down without having to go to the expense of building the damn thing). This gives science an arrow of progress, pointing towards constructing theories that are more accurate or can be used in new situations. That doesn’t mean science is about facts only; a side product of theories is an abstract, conceptual understanding. This is used both to construct new, more powerful theories, and as a heuristic to bypass computationally heavy theories when only an approximate prediction is needed.

        But history isn’t like that. There’s no way in which you can repeat an experiment changing one of the dials slightly to see how it affects the output. As this blog post says, “theory” in history means something very different to a prediction-making-machine, it’s more of a question-generating-machine. The goal in history isn’t predicting alternative histories (or the future), it’s the understanding of the human condition in different circumstances. Of course, many lay people do think that the only reason to study history is to predict the future. Bret debunked the strong men -> good times -> weak men -> bad times -> strong men meme a while back, and there are other similarly popular facile aphorisms. Of course any attempt to actually use them to predict the future is impossible, because such cyclic theories are so vague, ambiguous, and inconsistent that they can always be interpreted in slightly different ways that result in contradictory predictions. But they remain popular nonetheless.

        To circle back to your point, I can see that those who misunderstand both science and history as being about finding some perfect static Platonic “truth” that contains all the facts and nothing but the facts would treat historical artefacts the same way as the results of scientific experiments. But it takes a lot of misconceptions to get there.

        1. Yes, I was alluding to the Reformation.

          Don’t undersell history. Put cases of success front and center! A subfield of history (game theory) is known for its ability to prevent nuclear war and hostage-takings (e.g. aircraft hijackings). Yes, I’m slightly facetious, but predicting possible futures, however uncertainly, and thereby guiding decisions, is an important purpose. (Dare I say, the telos, to which the “understanding humans” business is an intermediate step.) Understanding the human condition is indeed nice, but it wouldn’t on its own justify a large, institutional structure without (whether alongside or leading to) other, practical benefits.

          Now, that can be (and often is) taken too far in the other direction and bastardized, whereby history is about “dates” (events). “We already know who won which battle of the ACW, and we don’t fight our wars in that manner anymore, so what is the purpose of reading those letters? Of course, if the letters’ writers and/or intended recipients were still alive, that would again suggest useful purposes — just ones that don’t count as history.”

          Turning back to a scientific comparison: many things that astronomers study cannot be experimentally controlled, either. It’s a predictive science anyway. They aren’t Platonists, don’t expect “all the facts” to ever be known or for that matter knowable, or even all the laws of nature to be exactly found, but it’s still a science with an arrow of progress and mathematically convenient approximations (e.g. patched conics in Kerbal Space Program, which is probably the Paradox-equivalent of that field).
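
          (For the curious, the “patch” in patched conics is deciding which body’s gravity currently dominates, and the usual rule of thumb for that is the Laplace sphere of influence. A minimal Python sketch, using the standard formula and the usual rounded Earth–Sun values:)

          def soi_radius_km(semi_major_axis_km, mass_ratio):
              """Sphere-of-influence radius: inside it, treat the smaller
              body's gravity as the only force acting on the spacecraft."""
              return semi_major_axis_km * mass_ratio ** (2 / 5)

          # Earth around the Sun: a ~ 1.496e8 km, mass ratio ~ 3.0e-6
          print(f"{soi_radius_km(1.496e8, 3.0e-6):,.0f} km")  # roughly 0.9 million km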

          1. Game theory is a branch of mathematics, not history. It can be applied to people, including to people in historical context, but it’s very much an application of mathematics to idealised interactions, not a product of historical research per se. While it is indeed very useful to understand the human condition for practical purposes, the point I was making is that we don’t study history in order to have a Met Office-like ability to predict the future. It’s more about grasping the range of possibilities, both in courses of action and in outcome, that the future holds. History gives us both examples of what might be, as well as the perspective to escape presentism and understand what those options might mean.

            As for astronomy, yes, there are very few experiments. I should have said “experiments and/or observations based on a large enough data set to observe trends”, but I feared my message was already far too long and off topic. I would also nit-pick calling KSP a Paradox-equivalent for astrodynamics. Patched conics is far more rigorous (and even used by real space agencies as a first approximation) than any theory of history or statehood Paradox comes up with. KSP is a gamified physics simulator; Paradox games are games with mechanics and an ambience that represent an abstract interpretation of history / economics, but they never try to be a simulator. I think that’s a substantial difference: you can learn a lot of real orbital dynamics from playing KSP (far more so than in getting a physics degree), but I’d be sceptical of saying you can learn anything but the broadest gist of medieval history by playing CK.

            Apologies to everyone else having to scroll this long dialogue to get to the other comments…

        2. Ah, but history can be like that. A frequent frustration I have is actually that it’s not enough like that; I maintain that you should absolutely always be using history as a way to more correctly form predictive methods of analysis, i.e. history as a science, and failing to either do that or address that you *are* doing that is just failure. Understanding the human condition *is* predictive and creating a model of history that helps predict human behavior is both possible and implicitly the goal of most histories, historically and now.

          In this sense history is a technique in the social sciences, one which often has ill-defined rhetorical methodology but which is still clearly designed to create a predictive model…

          Except when it’s just open propagandizing, which is the other major branch and the one you alluded to in your first post. The frustrating part is when people narrativize instead of using history to predict humans, and it’s frustrating because it suggests one of several motives that *conflict or misalign* with yours. Notable ones include profit, political power, or emotional validation: writing history to sell books, manipulate public perception, or to succor your own psychology. When we perceive these as motives in historiography we tend to react poorly to the history, particularly when it says things we don’t accept as true a priori.

          Of course people are bad at distinguishing between valid conclusions they don’t like and conclusions reached from motivations they don’t like. There are absolutely allegories to be made to modern politics from historical sources. Duncan’s book, meanwhile, might also fail as a body of scholarship. It depends on whether the book’s understanding of human behavior leads to allegories or the allegories are leading to narrativizing.

      2. Well there’s some inappropriate cross-pollination of approaches, sure, but science is also structured somewhat differently when dealing with hard science subjects not because of the approach but because of the subject. Each experiment is self-contained because the theses can be separated, not because the topics are simple; understanding the system by which a molecule’s electron fields interact is incomprehensibly complex, but because it is also governed by principles that can be separated cleanly, experiments follow a predictable progression; we don’t really need to revisit them.

        Not all such sciences are structured in such a way, hence why sciences including the biological and social sciences but also fields in the hard sciences frequently revisit old experiments and systems with new techniques or variables. They aren’t less “hard”, but they are less easily or *validly* separated from context to design a self-contained variable, so revisiting them leads to new insights.

        For a generic and pertinent example, in the past genetics was considered as an indicator of development in an isolated sense to explain disease, based on experiments with test animals. However, recent advances have made clear that the epigenetic activation and deactivation of genes is just as important as their presence. This has led to old studies which involve gestation being revised to account for how the gestational animal (“mother” is imprecise because of implanted embryos) influences pathology.

        My point being that this is a cultural “mistake” or misunderstanding specifically limited to specific fields of the hard sciences. It’s not as accurate to read into it as a consequence of industrial science or the simplicity of science as a field compared to history; the flow of historiography is quite understandable to some fields.

  5. This is a really great summary and insight about the field! While I’ve been vaguely aware of most of the process now (thanks in great part to the insights in this blog), I remember my general understanding of 10-15 years ago, and I can safely say that the general public impression of what a historian does is very much…”not this”. Especially the “history as a scripture” approach is REALLY common, due to how it’s taught in schools I’d expect. Would be a good place to start thinking about how to break that misconception.

    Oh, and, as someone who works in a somewhat similar field (lawyer), 500-1000 words per day is a really good pace in my experience, for a book in any case. I can usually put down some 2000 words in a day when writing legal texts, but as a practitioner, most of my work is “quick ‘n’ dirty” and not academic level. Not to mention that having one or two days of writing a distinct piece every now and then is *significantly* easier than writing day-in-day-out for a book. Just to give some additional perspective about how *slow* non-fiction writing really is.

    1. Reflecting on my own school experience, I wonder if students shouldn’t be given more experience in the methods of history (and the humanities and the social sciences in general).

      In my secondary school natural science classes, at some point in the final two years before graduation, we had to write an Assessment, a piece of research producing a c. 2,000~4,000 word school paper. Of course, by no means was this meaningful or original scholarship – I still have my physics assessment on my computer drive and I cringe every time I accidentally glance at it – but we still went through the motions of formulating a question that is interesting (we were discouraged from simply replicating known studies, which led to some amazingly specific/local research questions by students trying to be unique and clever), designing an experiment or simulation to answer it, and then analysing the results. For those of us who went into the natural sciences, it was probably better practice for the sort of thinking involved in real research than our undergraduate lab courses. For my classmates who didn’t, I imagine that they learned something about how knowledge is made in our fields.

      It could be very useful, in dispelling the myth of history as scripture, to require something similar for history. Have 16- or 17-year-old students identify a research question, ideally a topic that is too local for there to be many tertiary pop history sources, and have them hunt down sources in the wild. Or, alternatively, the other way around – throw them at an archive of the local newspaper and have them formulate and answer questions by it.

      1. Yes, I think that would be a good approach. Focus less on “learning the facts” and more on the methods. The first time I was confronted with something like that was in fact for the graduation, which is definitely too late. And I absolutely love the idea of focusing on local topics for this to introduce primary historical work.

  6. “The conference paper is the least impactful form of publication, in that it isn’t really published: these papers aren’t usually published in a written form.”

    Huh, that’s surprising! In physics, when we call something a “conference paper” it’s because it has been published in written form in some sort of “conference proceedings”, otherwise it’s just a talk. Do you guys call it that because the talk tends to consist of reading from a prepared paper-esque script? (I’ve seen a few older physicists do that but it’s very uncommon these days, most people work from slides and memory.)

    On a separate note, I’m curious about what critical theory looks like as a historical school. I get the impression that in other areas of the humanities it’s heavily influenced by Derrida’s “il n’y a pas de hors-texte” (“there is no outside-text”), which seems like a counterproductive attitude to take in history, where so much of what a professional historian brings to the table is their understanding of the context “hors-texte”.

    1. Physics is also where my personal experience is, but I think it’s the exception in academia. Talking to engineer friends, for example, conference proceedings are a major thing and almost a parallel track to publishing in a journal. Very different to physics, where a conference talk is almost always about presenting an existing journal (or arXiv) paper.

      1. Yeah, the impression I had is that engineers frequently publish original work as conference talks, but that those conference talks almost always are printed in conference proceedings. It seems a bit weird to present original work but leave it inaccessible to anyone who happened not to be there at the time.

        1. I’ve sometimes found notes for conference talks uploaded by classicists on their academia.edu pages. These can be quite interesting but often are less properly cited or include things like [presentation slide goes here] which makes them less useful afterwards

    2. So
      1) Conference proceedings sometimes happen, but they’re the exception, rather than the rule, for most history conferences.
      2) There is something of an irony that despite the fact that in history our authors are the most dead, they are also the least dead – undead, perhaps. On the one hand, you will see a lot of historians who interact with literary texts talk about them using the standard battery of dead-author critical theory (where the text creates, reifies, establishes, etc), but on the other hand of course the entire reason they’re looking at that text is to tell them something about a real society that actually existed and which either created it, read it, or both. I’ve encountered the most ‘nothing outside the text’ sort of scholars in medieval history spaces (as a subfield, it is perhaps the most influenced by literary theory, because medieval literature and medieval history are essentially conjoined fields), but they often understand themselves really as historical literature or art history scholars rather than historians, narrowly. For most historians, there’s quite a lot outside the text – in a way, *everything* is outside the text.

      1. I feel an important matter there is that historians have to approach texts with a different vector than critics of fiction – the literary critic is looking for meaning, the historian for truth. To illustrate, when analysing a statement in fiction, say “Childe Roland to the Dark Tower came”, we already know it’s not a real event, and it’s only significant in the broader context of the message of the work in general. In comparison, for a statement in a historical source like “The Battle of ‘Ura’ir was a healing”, it is useful to ask questions about the reality of the event. Was that battle real? Can it actually be “healing” as our author claims? The broader work and its meaning matter to those questions only insofar as we have to be wary of the truth being distorted for the authorial intent.

    3. My memory of Derrida is that nothing is outside the text in the sense that nothing is irrelevant to it – but it was a long time ago. And Derrida is notorious for writing in such a way that allows different sometimes flatly contradictory interpretations of what he’s saying to proliferate.

      1. Maybe just confirming what you wrote, but my understanding of Derrida’s aphorism is that everything is a text, or putting it another way, there is no more fundamental reality than verbal (or semiotic, if you prefer) expression.

        1. A perniciously anti-intellectual view, to the extent that the purpose of intellectual activity is to understand objectively verifiable phenomena like, say, infectious disease.

          1. that’s the purpose of *some* intellectual activity. even in the sciences, how we understand something can be vastly more important than what actually happens. for example, the idea of a human sperm “penetrating” a human egg has reinforced traditional ideas of male/female activity/passivity, but more recent research has found that it’s the egg that actively draws in the sperm. check out the field of science and technology studies (sts).

          2. CJ Sheu, some people might think most systems of male supremacy are based on the idea that in general men can hit people harder than can women, and I doubt that observation is going to be invalidated any time soon.

            All else is some daft justification-after-the-fact.

            (TBH, Alice Evans would talk about men having more wide ranging social networks in many societies, and the effects of that: https://www.kcl.ac.uk/people/alice-evans)

          3. ad9, note the difference between “reinforced” and “based on.” Many systems of belief about society and the social order predate scientific study of the relevant physical things, but then selectively draw on any bits or pieces of scientific findings that are seen as supporting the ‘proper’ rules and beliefs, and then use those bits of science to reinforce the ‘proper’ system in the face of challengers.

            So for instance, we can clearly see that patriarchal gender relations must have arisen in antiquity long before our understanding of reproductive biology evolved past the level of what any human couple can see and experience for themselves. Presumably “man punch harder” played a big role.

            But we also see that the proto-‘scientific’ beliefs that the scholarly and cultured parts of ancient and medieval societies had about reproductive biology were, shall we say, looped back to their notions about patriarchal gender relations. For instance, we see recurring beliefs that heredity or life-creation was entirely the product of the male’s semen, and that the female was simply a passive vessel or receptacle that existed to permit this created life to develop.

            Now clearly, nobody was actually going “aha, I believe this thing, therefore let us have a patriarchal order in society.” But there’s a self-reinforcing loop of confirmation at work here. The patriarchy is that little bit stronger because of all the people going around convinced that God made women to be insignificant helper-things in the background. And that belief of theirs is in turn reaffirmed because all the authoritative voices are telling each other things that reinforce this belief (“women are biologically literally just failed males.”) And since the scholars live immersed in a society that rewards and confirms some beliefs while ignoring or punishing others, decisions about what theories to promote and present evidence for are in turn heavily influenced by the prevailing belief system.

            Going back and noticing that a lot of this ‘received wisdom’ about why men are greater and women are lesser was not, in fact, founded in science as we know it or even in time-tested observational experience, but just in some random guy literally making things up, does not cause patriarchy to evaporate. Nor does it mean that these ‘received wisdom’ pseudo-facts are the ‘real cause’ of patriarchal systems and not something like “man punch harder.”

            But it is still very relevant to our understanding of history, of gender, and of how our own society today may be using perceived scientific facts about gender to simply reinforce its existing beliefs.

            We may see that the evidence doesn’t necessarily match the paradigm that is being claimed as ‘founded in scientific fact.’ And digging into the truth of what the science says therefore becomes important because claims to be validated by scientific fact are a valuable asset that ideological systems carefully husband and do not relinquish lightly.

          4. Of course theories about male/female difference are cited as justification and reflection and explanation (all three) of the established order, but I’m too materialist to be persuaded that theories cause the creation or maintenance of the established order. Historically, we encounter people claiming that women are weak, timid, and passive creatures who need to be guided and directed by men, and that they are willful, licentious, and lustful creatures who need to be controlled by men. Either way, the men are in charge.

          5. Simon, that may all be true, but there are six thousand million people in the world, and how many of them have views on patriarchy that have been significantly altered by the discovery that “it’s the egg that actively draws in the sperm”?

            Some discoveries are just too esoteric to have any detectable effect.

          6. Lena, touché.

            But I still doubt that many of those eight thousand million people in the world have views on patriarchy that have been significantly altered by the discovery that “it’s the egg that actively draws in the sperm”.

          7. The idea that the egg draws in the sperm is not exactly incompatible with traditional gender ideas. As Herodotus said, it is common knowledge that women are not abducted unless they wish to be.

        2. I think, if I remember correctly, Derrida’s basic hypothesis is that no part of reality is more fundamental than another. So according to Derrida, “The(*) Western Metaphysical Tradition” says that things are more fundamental than texts, because texts have features x, y, z. But Derrida argues that “The” Metaphysical Tradition understands things to have features x, y, z as well – and since (according to Derrida) “The” Western Metaphysical Tradition defines anything with features x, y, z as a text, therefore everything is a text by that definition.
          The suspicion that Derrida phrases things the way he does more for the sake of provocation than understanding is hard to avoid. (Although he’d then argue that all understanding has the features of provocation…)

          (*) The definite article here is doing a lot of questionable work for Derrida. For example, Derrida uses Hegel as a representative thinker who denigrates writing, whereas in my opinion Hegel actually uses writing to make a point that anticipates Derrida.

  7. “I do wonder how advancing technology (like large language models) may give us powerful new tools to engage with large source-bases”

    As a data scientist, I can definitely imagine amazing possibilities but I can’t help but be cynical about how I expect it would actually turn out at least in the short to medium-long term. Thankfully I expect historians to defend this “turf” from the techbros and avert most of the harm.

    1. If this is basically “the computers are finally smart enough to reliably read old handwriting”, that’s just going to be straightforwardly good. Look at the Herculaneum papyrus thing, for example.

      But yeah, lots of other things could also be done that are much less beneficial.

  8. “And of course linear time has a habit of minting one new minute of history every minute.”

    It is well known that a watched pot never boils. (Source: my mother) This can be generalized to the unceasing supervision of any task, as is well known to someone who has had to work towards a deadline while being micromanaged. (Source: Office Space, 1999) With this in mind, I propose that all of us who are not research historians be assigned to watch events without pause, thus preventing them from happening, for as long as is required to catch up.

    1. I’m reminded of some philosophy guy I encountered in my high school philosophy class who had some “nothing exists that is not observed” brand of radical scepticism, but rescued an actually existing reality because *God observes everything even when no one else does*.

      1. Not a new observation:

        There was a young man who said “God
        Must think it exceedingly odd
        When he finds that this tree
        Continues to be
        When there’s no one about in the quad.”

        REPLY:

        “Dear Sir, Your astonishment’s odd;
        I am always about in the quad,
        And that’s why this tree
        Will continue to be
        As observed by yours faithfully, God.”

      2. Was it an Irish bishop named George Berkeley perhaps?

        I had read about him a few times, including in Bertrand Russell’s History of Western Philosophy. IIRC he claimed that matter did not exist and objects only exist as a result of being perceived. According to Russell his argument involved the claim that the matter objects consist of is not itself perceived, only its properties, like colour and taste; then Berkeley went on to claim that those properties also did not exist, because their existence would lead to contradictions.

        I suppose @ajay has also read about Berkeley, as ajay quoted a poem by Ronald Knox satirising Berkeley’s position.

      3. This is actually a really interesting question in simulation theory. Does anything get simulated when no one observes it? Maybe it actually isn’t, and is only simulated retroactively?

    2. Time might be going by at one minute per minute but history is being made a little faster than that these days. The great powers have populations that are like 98% literate (and lesser countries are at least around 50%) and lots of those people are on the internet giving takes about historical events they are experiencing. We might be getting a few days of history every minute now. This is only going to get worse.

  9. In this post you talk about how some views of history remove the agency of people in deciding the course of history, but a few days ago you were in a twitter argument with Nate Silver, calling him out for insisting that certain individuals can have significant agency over the course of history. So how do you square all this?

      1. So why would historians “abandon this model of reasoning” rather than use it as one useful framework among several others, like mentalités, face of battle, critical theory etc.? To me it sounds like every theory of history has its strengths and limitations – what is specifically bad about this one?

        1. I think he means abandoning the naïve version of this idea, which takes too much for granted. In reality such individuals are limited by the means provided by the societies they are part of. In addition there are always factors outside human control.

      2. I can’t read the entire thread. But I am under the impression Nate Silver was taught a dated, person-centred version of history. The idea of “great men who are born different” just sounds like replacing privilege with genius. IQ is not entirely hereditary or the Flynn effect would not exist. Moreover, being a bad person on a societal scale is more about not being held responsible by other people than about any internal characteristic.

      3. Holy shit get me two rounds of willy pete your replies are full of goddamn krauts; how can you stand twitter

  10. As someone who works in tech… I can’t help feeling that the *raw research data* you are producing is almost more valuable to the field than whatever your book conclusions are from it. Will your database of 500 relevant archaeological finds ever be published in a form that prevents other researchers from having to recreate it from scratch? That seems relevant for *all sorts* of other research “questions.”

    At the end of the day, historical analysis based on those sources is essentially a very well-informed opinion — it is the answer that makes sense to you and that you can support from all the data you gathered through your research, and it may spark other ideas in other historians. But man… how much faster and better could all that data analysis happen if someone had just handed you that database on the first day…

    The more I read your research updates, the more I feel that *discoverability* and *accessibility* are slowing down progress on this kind of research so much. There’s all this information, but you have to go search dozens of books and articles and museums to get it… I get not having the cash to digitize every civil war letter on microfilm in the archives, but a public database of archaeological findings (with, a man can dream, creative commons licensed photos) seems achievable… well, at least for “everything but pot sherds.”
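
    To be concrete about the dream – a purely illustrative sketch, with every field name invented rather than taken from any real project’s schema – one record in such a database might look like:

    # One invented record in a hypothetical public finds database.
    find_record = {
        "find_id": "EX-2024-0001",                # made-up identifier
        "object_type": "spearhead",
        "material": "iron",
        "site": "Example Farm (invented name)",
        "context": "grave 12, inhumation",
        "date_range_ce": {"earliest": 450, "latest": 550},
        "dating_method": "typology + associated pottery",
        "publication": "Author 1987, pp. 45-47",  # invented citation
        "photo_url": None,                        # ideally a CC-licensed image
        "license": "CC-BY-4.0",
    }

    Even a flat, queryable table of records like this, with stable IDs other researchers could cite, would beat re-compiling the same catalogue from dozens of site reports.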

    1. There’s a big problem, not just in history, but across academic research: we don’t have good ways of giving credit to the people who fill the repositories that other people can then plug into their models and algorithms. And since careers and funding in academia are largely based on getting formal, public credit for the work done… like I said, that’s a big problem.

      So for example, last year the Nobel Prize in Chemistry went to the founders of AlphaFold, a protein folding algorithm trained on ~200,000 structures from the Protein Data Bank. Each of those structures took hundreds of hours of specialist time to create, and hundreds of hours more were spent organizing the community, hammering out data standards for a free, public repository, lining up funding sources, etc. The people who put all that work in were rewarded by being collectively thanked in someone else’s Nobel Prize speech. (The researchers depositing protein structures did, in fact, get credit and career advancement through their work, but only because in addition to putting their data in the Protein Data Bank they *also* published it in journal articles which could be cited and referenced in grant proposals. The repository itself is limited as a tool for providing researchers with credit.)

      It’s not so bad in biochemistry because, as academic fields go, biochem gets a shit ton of money. The less money, the harder it is to rally the community around projects that do not have robust means of giving credit. I talked to a researcher doing plant sample collection in the Congolian rainforests. He was openly bitter about the fact that, in his mind, western researchers are constantly pressuring him to make his results more open and re-usable so that they can write analysis papers from labs at Harvard or Oxford, while he, based in Central Africa, struggles to ensure his students have enough to eat.

      So I guess this is a long way of saying that the issue isn’t a technological one — the issue is that at the moment we lack the structures to incentivize the work needed to organize, fill, and maintain data repositories, and to reward that work when it does happen.

        1. Yes, lots of people are working on this! I work in the publishing arm of a large scientific society, so I’m even one of those people.

          The number of proposed new structures for recording, communicating, and crediting research results is enormous, and every year we get more. And yet we are still mostly stuck with traditional publishing structures.

          The biggest issue from where I’m sitting is that academic publishing has evolved over the past four centuries to do a lot of different things, and in the process has become entwined with many other systems — indices, libraries, funders, conferences, societies, &c.

          Most of the proposals I’ve seen for overhauls take a tiny slice of that system (usually the contact point of depositing original research into a public forum) and seek to improve it. Which I think is a fine thing to do, IF you keep some awareness of the larger system this is all a part of. Unfortunately that awareness is something I see lacking in a lot of proposals for changes to the publication structure.

    2. I had a similar gut reaction. I’ve read machine learning papers where creating/assembling data that was appropriate for training their model was explicitly listed among the primary research contributions.

      As for “Whereas other fields often focus on working with ‘data‘ – evidence that has been processed, homogenized and turned into standard numbers,” well…
      The processing and homogenization to create such ‘data’ is typically the most time-consuming and least interesting part of a new project. And since the data never exists until someone does this, it’s a service to make the data publicly available if anyone else is going to be following up. The STEM version seems to involve less manual note-taking and more stressing over the parsing of (many, many) digital file formats, but I don’t see much structural difference.

    3. There are sometimes attempts, but they are often fairly low on the priority lists and not particularly well rewarded. It’s one of those problems everyone knows about but no one has come up with a good enough solution to.

      Of course, another problem tends to be that turning stuff into data is often… lossy, which can be less of a problem when you’re looking for something specific, but more of one when you’re just trying to put data together and don’t know what specifically it will be used for.

    4. One issue with publication and reuse is that the assembly of a database like this is enough work that it will usually be optimised for the question being asked.

      As a toy example, imagine I’m working on a paper that discusses trends in the blade lengths of swords, so I assemble an inventory of swords with their lengths. Some of these swords come from archaeological reports that include weights, while others probably do not – do I put in the work to write to the current holders of every object and get the weights as well? No; that’s a huge amount of extra work with minimal or no value for my project. In turn, this means that the “next” person doing this can’t use the data set to ask questions about trends in weight.

      The other and more subtle version of this problem is editorial selection. What am I defining as a ‘sword’? Is the minimum blade length 50cm, or 40cm, or 30cm? I need to make some sort of choices about this – but those choices then constrain other people’s use of the data. When I pick the minimum length of a sword blade as 50cm, someone else who is trying to look at trends to shorter sword design is going to find my data at best unhelpful and potentially actively misleading.

      Neither of these are inherently insurmountable problems, and sharing the data set is still useful in general, but it has to be worked with quite carefully – if you’re asking new questions of it, there is likely to be a substantial amount of additional work required which the first compiler very reasonably didn’t do.
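
      To put the same toy example in code (everything here – the record type, field names, and numbers – is hypothetical, purely to illustrate): the first compiler records only what their own question needs, and their selection criteria get baked in, so later questions run into missing fields and missing records.

      ```python
      # Hypothetical sketch of the "optimised for the original question" problem.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class SwordRecord:
          find_id: str
          blade_length_cm: float
          weight_g: Optional[float] = None  # only present when the report gave it

      MIN_BLADE_CM = 50.0  # the first compiler's definition of a 'sword'

      inventory = [
          SwordRecord("A-001", 74.0, 1100.0),
          SwordRecord("A-002", 62.5),  # the report gave no weight
          # finds under 50 cm were never entered at all, so a later study of
          # short blades cannot recover them from this data set
      ]

      # The original question works fine:
      lengths = [s.blade_length_cm for s in inventory]

      # The "next" question silently loses records:
      weights = [s.weight_g for s in inventory if s.weight_g is not None]
      print(f"{len(lengths)} lengths, but only {len(weights)} usable weights")
      ```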

  11. Monographs aren’t just used as building blocks for public facing history – they’re also essential building blocks for primary research. Someone looking at hundreds of civil war letters is not going to have the time to also become an expert on the details of each battle that the letter writers talked about, so they turn to earlier monographs to provide context and elucidate otherwise obscure points.

    Once someone has written about those civil war letters, later historians can build on that foundation to delve into other letters that the first historian missed, or to study other aspects of the lives of the letter writers, their families, etc.

    1. That is a very good point. And collected letters published by literary editors, biographers, or heirs can of course also be of good use to historians later.

  12. As a semi-insider to professional history (being an undergraduate, but also contributing to r/AskHistorians, mostly via independent reading, and corresponding with some actual scholars), I knew most of this. What did surprise me was journal articles being by-products of book research rather than “research trails” in their own right.

  13. I wish to ask the assembled experts here: what is your opinion of historian Richard Miles? (I am not a fan. Maybe I am missing something?) I do think the question is important because Mr. Miles’ books can frequently be found at public libraries. He has gotten onto the public library acquisitions lists – IDK how that works; favorable reviews in certain venues, perhaps? I doubt our host will be similarly honored, alas.

    1. It’s been a while since I read his book on Carthage, but it seemed fine to me? It’s neither true pop-history aimed at a very wide audience, nor aimed at a narrow sub-field. I’d view it as being for an undergraduate reading list, or a keen amateur already familiar with ancient Mediterranean history. That target audience possibly explains why it’s in public libraries. No obvious mistakes struck me, nor did the line of argument appear exceedingly tenuous. I think I’ve seen him cited by Goldsworthy, Hoyos, and others, which is a decent seal of approval IMO.

      Can’t speak about his other books though.

  14. One area of the training of historians that often comes up short is the part of the “package of field-specific training” that relates to the technical details of the things that the historians are studying.

    For example, in a comment a while ago Bret mentioned a historian who had focused a lot on the ideas of chivalry…who had no clue what kind of equipment a knight would take with him into battle. Similarly, I’ve read comments by a Roman historian complaining about historians of ancient Mediterranean trade from the Midwest who had never set foot on a sailing ship in their life and had not a clue about the nuts and bolts of how sailing worked.

    Personally, the only technical area I know well enough to fact check historians is brewing, since I’ve brewed for years. I’ve listened to a lot of podcast interviews with historians and three of them were about historical brewing. One of these three historians was awesome and really knew their shit, the second made really elementary errors, and the third laughed about how they had not a clue about how to brew beer. The second two historians had WRITTEN BOOKS about historical brewing. I just can’t wrap my head around not learning how the thing you wrote an entire book about works and then laughing at your own ignorance as if it’s no big deal.

    Bret isn’t an example of this, as can be seen from his “how they made it” series. But this problem seems to be really common among historians, to the extent that I often can’t trust historians when they talk about technical details – which is a real problem, as they’re trying to analyze texts about subjects they don’t know well, and that must massively handicap them when it comes to parsing those texts.

    1. “The second two historians had WRITTEN BOOKS about historical brewing.”

      Had the first done so?

      (The funniest answer would be “no” but it is best to check.)

      1. No, the historian who knew what they were talking about (and who was incredibly fascinating) had just written some articles about historical Chinese brewing and had analyzed the residue on ancient pottery and was able to tell that some of the grain used was malted and some was unmalted.

        The two who didn’t know WTF they were talking about WRT Brewing 101 had both written books about the history of brewing.

  15. I’ve often found that one of the most telling “scars of autodidacticism” is the tendency to go to raw source materials (often ancient literary texts) *first*, without considering the overall literature and “state of the field”, and without a general sense of what the thing you’re reading actually *is*.

    This doesn’t mean you shouldn’t go to primary sources and raw data! As Bret mentions, that’s extremely important, but if you want to know about the Persian Wars I’d never suggest going to Herodotos first. Rather, try to find a good overview of the subject (which can be hard – like raw data collection, public-facing overview work of good quality tends to have lower priority than original research). *Then*, once you’ve got a basic idea of where the text “sits”, go read it.

    1. I’ve noticed a similar thing in other fields: people trying to learn chemistry by reading Lavoisier, or trying to learn about economics by reading Adam Smith, when really they would be better served by first reading a calculus textbook and then a textbook or journal article in the relevant field.

  16. “For an edited collection – a book of chapters, each written by a different historian on related topics – generally the book editor assembles the authors, who write their chapters, and then the entire volume is submitted for peer review by the publisher. Those reviewers basically then review each chapter, with each chapter author getting responses back, but in theory the whole volume doesn’t move forward unless all of the chapters pass muster.”

    How is the balance of control between the editor and individual chapter author, generally? Is it acceptable for an editor to simply excise a chapter that gets too harsh a review (and thus may not be sufficiently rewritten in time for the scheduled publication)? Or are chapter authors owed a chapter in the final collection once they are brought on board? Does the editor provide strict or lenient guidelines (with regards to content, not style) to chapter authors, keeping some semblance of cohesion between different chapters in the collection?

    I am most used to a journal articles-only academic environment and find it interesting how this “edited collection” appears to straddle the line between articles in a journal volume and collaborators in a single article. Or am I misreading the dynamics between the people involved here?

    1. Chapter authors aren’t owed anything, generally, in such a collection so if a chapter is somehow deeply insufficient, it could be dropped from the final published version. I don’t think that happens often, though – more often, extensive edits are demanded of chapters that fail to pass muster.

      The level of editorial control varies a LOT between different volumes. Some volumes are tightly edited, with each chapter having a clear and defined remit and a generally consistent style; this is better. Many edited collections are…less well edited…and chapters vary a lot in approach, style and quality.

      You can see the disadvantages of the format, compared to journals, pretty clearly. The advantage is that you can get a bunch of article-length chapters on a single topic in a single volume. The best value for these volumes is often as ‘companions’ – a volume meant to provide a ‘state of the question’ treatment of a topic, rather than advance the debate, or to present examples of a range of methods. For that sort of thing, a dozen different chapters written by different authors covering different aspects of one topic can be really valuable.

  17. Another sometime “scar of autodidactism” is a lack of general knowledge not of the historical literature as such, but of the overall culture and mindset of the period under discussion. At dinner tonight, I was telling my wife the sad story of Naomi Wolf and “death recorded.” When I got to the part where Wolf claimed that large numbers of gays were executed in 19th century England, she became animated and burst out, “No, that’s not true.” It’s not because she was familiar with 19th century English legal terminology (neither am I, though we are both lawyers), it’s because she has read Dickens and Trollope, plus numerous 19th century biographies, and she knows that just wasn’t the way it was.

  18. Even though history is never ‘set in stone’ and there are always new things to discover about the past, does it ever happen that a field or subfield gets “plowed over”, especially in ancient history/literature/studies? I assume that must be an especial issue in the classics, because the study of the classics has been going on for a very long time, so it seems at some point there has to be a question where the consensus is “after two millennia of talking about it, we’ve pretty much examined this thing from all the angles imaginable, and while there probably still is *some* new discovery out there, it would likely be such a small crumb that it’s more valuable to spend time on other problems”.

    Having walked through a classics department at an R1 university, this is definitely the (surface-level) impression I got – a lot of the scholarship on display seemed like reception studies to me (i.e. how the classics influenced the thought of so-and-so, or how classical art inspired such-and-such contemporary art movement). That is undoubtedly valuable work, but when your classics department seems to spend all of its time researching how other periods and cultures have understood the classics, it’s hard to escape the impression that “we’ve run out of research in our field to do, so now we’re slowly but surely encroaching on other people’s fields”. And from what I’ve seen of scholars such as yourself, the research being done nowadays looks like it’s either reception studies or super highly specialized – again, not a bad thing, but it makes it seem like there are big sectors of pre-modern scholarship that have been gone over so many times nobody thinks it’s worth looking at again. Is this a fair characterization?

    1. I’m not a professional classicist myself, but it seems to me that there are still lots of big debates and changes of consensus in the field, caused both by new discoveries (in archaeology and similar) and by scholars looking at things from a new angle.

      For examples (where the debate really has shifted in the last half-century or so) our host has previously discussed: the question of the nature of the Spartan state; how influential popular will was in Roman politics; whether the Roman economy was “primitive” or close to “modern”; and whether the western fall of Rome was a catastrophe or a gradual transition. Then there are also current classicists looking into questions or bringing in viewpoints that previous generations of scholars did not seriously consider: for instance, the study of ancient conceptions of gender and sexuality has obviously changed enormously, and now there is also increasing interest in disability in Antiquity.

      As for reception studies, I have noticed it becoming more prominent, but I wonder if that may be more due to interest from scholars than necessity due to Antiquity itself being over-studied. I’m personally quite sceptical towards the recent focus on reception, but I’d be interested in what professional academics think of it.

  19. A thing to note is that, even if one thinks all necessary sources have been edited, looking at the archive is still worthwhile if one can do it.

    For my Masters thesis on the dissolution of the state of the Teutonic Knights, most of my (quite extensive) sources had in fact been edited beforehand. But while skimming the files at the archive (noting that the last person to have them in hand was the editor, about 30–40 years beforehand; every user was required to note his name and date of use in the file), I came across a note that, quite frankly, was insignificant for most people (it was a list of recipients of a certain letter) and thus not chosen by the editor for his edition, but was enormously helpful for my argument.

    Note here that for the medieval and early modern periods we have so many sources that editors often must make a choice about which of them to edit. To wit: the Monumenta Germaniae Historica is a project to transcribe and edit all the documents of the German Middle Ages. They started in 1819… and are still going.

    Also, besides all that, holding in hand a letter written by someone like Martin Luther in his own hand some 500 years prior is quite the experience (and in some ways quite horrible; the man, for all his intellectual heft, had terrible handwriting…). Shout out to the Secret Prussian State Archive for being so unbelievably helpful and accessible.

    1. As a mediaevalist myself, I cannot but concur with all your points. Although I must say that the shine from holding an autograph from someone famous can fade away rather easily: when you’re reading the umpteenth letter from Machiavelli, the Medici, etc. etc. it becomes quickly very mundane.

  20. Does the US really not cover historical method at all until advanced undergraduate courses? In the UK we started studying it at 11 and it’s the main focus of the GCSE (age 16) exams.

    1. Between 50 and 60 territorial units in the United States make independent determinations as to what parts of educational practice are handled at a local level and what ‘local level’ means within their bounds. Roughly one percent of all people in the United States are enrolled children in New York City public schools, but that school board has less national influence than the statewide education department of Texas, which does not delegate textbook choices to its local school boards. In some parts of the country, school board boundaries are coterminous with local government boundaries, but in most of the country school boards are drawn without reference to other kinds of district boundaries, and it has become common for majority-White communities that neighbor majority-minority communities to create their own school districts or adjust boundaries in other ways to partially re-create public school segregation.

      It would be a serious research project to determine the number of curricula used in US public schooling, but it is easily thousands. And if you did determine the number, it would still be wrong, because many jurisdictions charter one or more schools in their areas for the express purpose of having a local publicly funded school that is not bound by the school board curriculum.

      It is quite common to have a large, chronically underfunded school board making curriculum decisions based primarily on avoiding textbook and other equipment upgrades; and to have that next to a tiny, lavishly resourced school board making curriculum decisions that swing wildly between boosting college admissions and boosting bigoted culture-war nonsense.

      There are things that can be discussed as similar to the GCSE and the Office of Qualifications etc, but there aren’t American things that are actually analogous.

      1. Thanks. I know a bit about the chaotic nature of the US system, but it’s always interesting to see more.

        I guess I just wonder what they’re teaching instead, to some extent. Is it all fact-learning, and much less about theory and method, even to age 19?

        (I loved history at school because it had a “scientific” process, much more so than the other humanities, and I suspect I’d have given it up at 14 and been much the poorer for that had it just been learn-all-about-british-kings nonsense).

      2. I’m curious how this plays out in real life. Surveys tend to show that large numbers of Americans, maybe even a majority, don’t know, e.g., in what century the Civil War took place, or from what country we won independence. So teaching historical methods seems rather beside the point.

        But how are things, really, in Britain? Can the man in the street say what king was overthrown in the English Revolution, or in what century the Napoleonic wars took place?

          1. The historical method was the primary focus of the Advanced Placement test, at least when I took it 50 years ago. And college undergraduate courses today, at least at Columbia, presume that students are capable of source criticism and historical reasoning. So it appears that the UK and the US are actually pretty similar. In both jurisdictions, elite high school students learn the historical method, while the mass learn a handful of names and dates which most of them forget by the time they are adults.

        1. It’s possible they knew this stuff at one point (when they learned it in high school) and then forgot it, voluntarily or involuntarily, because they didn’t use that information in their daily lives.

          If so, then that’s not really the fault of the teachers or the educational system.

  21. Bret, no one else has mentioned proofreading points, so I’ll offer these:

    theories fail in to one or another (fall into)
    read the technical notion systems (notation)
    that history go from the (goes)
    reject the article or sent it out (send)
    the way that philologist records their (philologists record)

  22. I’m curious, when historians (and other academics) teach on a subject for which there is active debate in their field, and for which they are firmly in one camp, how much are they expected to teach “the other side” of the controversy?

    1. They’re generally supposed to cover both sides, although obviously they can mention they favor one side or another, but it’s bad form to emphasize one contested viewpoint too heavily. My ethics class started with the professor announcing “My goal is for none of you to figure out what ethical theory I support.”

      If there’s something that’s contested but only by a few people, it’s more acceptable – most psychologists (in the US, at least) will readily admit they don’t really buy into psychoanalysis, even though there are still some diehards out there.

      1. I’ve encountered a couple of instances of “Some people think X, I however think Y, which is the right viewpoint as can be seen if you read my paper…” or very similar.

      2. Paleontology tries to do this. The example that springs immediately to mind is the idea that the Deccan Trap volcanism in India caused the K/Pg mass extinction. It’s pushed by a relatively small group of people, and folks outside that group tend to think they’re bonkers. It’s mentioned, briefly, before returning to more widely-accepted ideas (the Alvarez Hypothesis, that the asteroid is what did it).

        On the flip side, I learned taxonomy as well as cladistics and phylogeny, because there’s active debate about how to classify living things, and the element of time in paleontology makes it even more complicated. I knew which method my professor preferred, but she taught it as “taxonomy led to cladistics, which led to phylogeny, which is currently being revised” – not that any are invalid; they built upon one another. (Another professor, who was a taxonomist, took a very different view!)

        At the end of the day, your teachers are going to influence you. That can’t be helped. It’s just a question of what you do with that influence.

      3. It’s good they say they don’t believe in psychoanalysis. I consider psychodynamic therapy somewhat questionable. It might be useful for some types of problems. However, psychodynamic therapists have a tendency to get stuck in premature conclusions, which I consider antithetical to science. More importantly, psychodynamic therapists tend to blame all mental problems on parents or other caretakers – as if everyone were born with the potential to develop a mind with characteristics within the normal range of variation. In reality individual differences are huge, but the larger the divergence from the average, the rarer it is. Presupposing capacities which are not there leads to mental problems being misunderstood.

  23. Regarding the discussion of analytical models, one phrase I find myself quoting surprisingly often is “All models are wrong, some are useful”. Neatly sums up a lot of what you’re saying, I think.

    1. “All models are wrong, some are useful”

      I have also read that phrase previously, mostly being used by economists. I have even once encountered someone claiming something in the vein of “The model to be used depends on the exact environment and circumstances. For some cases this model works better; for other cases, that model.”

      Which is an understandable and helpful attitude considering that economics is very much a soft/social science.

      1. It’s very true in the hard / physical sciences too. Planets don’t move in exact Keplerian orbits (because the universe is not a 2-body system), but it’s often very useful to model them that way because it’s usually an excellent approximation. On the other end of the scale: atoms in a crystal don’t exactly obey the Ising model, but studying that model is still an excellent way to understand magnetism.

        The aim of science is very rarely to arrive at some objective truth, but to make empirically verifiable predictions and learn how to improve prediction-making. A useful model does both by being as simple as possible while still catching the essence of what’s happening, and no more.

        1. I think that the ‘All models are wrong’ part does fit economics at least better than physics.

          For example, when it comes to gravity there may be things we don’t know, like quantum gravity, but General Relativity is a good enough model that when you extrapolate the solar system into the far future, it is such things as measurement errors and computational power that you have to worry about, not an incomplete understanding of gravity.

          However, in economics you are dealing with systems influenced by human actors who have their own models on which they base their actions; which leads to problems that as far as I am aware are absent from hard sciences, like meteorology.

          To provide a general/abstract/theoretical example, this also poses a problem with reliably forecasting financial crises. Suppose you find a way to predict, from public data and known methods, that the stock market will crash next year; however, everybody else has the same information and models; then the stock market will crash this year instead as everybody wants to sell before it crashes.

          Or for a more practical example, Long Term Capital Management. They had developed mathematical models of the prices which bonds or stocks ‘should’ have relative to each other; they went looking for two otherwise similar bonds or stocks whose price difference was larger or smaller than their models predicted, and then ‘bet’, with lots of leverage, on the relative prices correcting. At first their models seemed to work and they made profits of around 40% a year.
          Then their models no longer worked. Russia defaulted; LTCM’s direct losses were small, but the default spooked many other traders on Wall Street, who decided to sell off riskier assets. Many of LTCM’s trades turned against them because prices no longer behaved as they ‘should’. Finally, LTCM’s rivals noticed their trouble and began either to bet against the firm or to move out of shared positions for fear of getting into trouble themselves – another effect not predicted by LTCM’s models. Then all of LTCM’s trades went against the firm, and it went bankrupt.
          Or at least that is how it was explained here: https://www.npr.org/transcripts/1232862544 – I suppose that is likely a great simplification.

            > For example, when it comes to gravity there may be things we don’t know, like quantum gravity, but General Relativity is a good enough model that when you extrapolate the solar system into the far future, it is such things as measurement errors and computational power that you have to worry about, not an incomplete understanding of gravity.

            But measurement errors and computational power matter! And GR requires so much more computational power that it’s usually a very poor tool to model the solar system with. If you want to actually make predictions with your model (which is the point of them), you have to actually be able to do stuff with it. Using SR or Newtonian Mechanics means that, for the same computational buck, you get far more bang in terms of accuracy of long term predictions (because you have to make fewer approximations elsewhere). The skill of the scientist and engineer – both in research and in applications – is to a large extent knowing which model is best when. You could try to use quantum field theory and your favourite choice of quantum gravity to model every quark in the asteroid 2024YR4 to see whether it’s going to hit the Earth in 2032 or not, but it won’t work. In fact the vast majority of physics, like thermodynamics or solid state or condensed matter or optics or etc…, is about constructing models that abstract away the unimportant details of other models in order to get something that works on a larger scale. To quote another aphorism: “More is different”.

            Yes, economics is very different to physics both in terms of making predictions and epistemologically. But it remains that, in both cases, “all models are wrong, but some are useful”.
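
            To make the “computational buck” point concrete, here is a minimal sketch (my own toy illustration, not anything from the thread): plain Newtonian gravity plus a leapfrog integrator gets Earth around the Sun to high accuracy in a few dozen lines, at a fraction of the cost of any full GR treatment.

            ```python
            # Minimal Newtonian two-body sketch: Earth orbiting the Sun with a
            # leapfrog (kick-drift-kick) integrator, which is symplectic, so
            # the energy error stays bounded over long runs.
            import math

            GM = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
            AU = 1.495978707e11     # astronomical unit, m
            YEAR = 365.25 * 86400   # seconds

            # Start Earth on a circular orbit at 1 AU.
            x, y = AU, 0.0
            vx, vy = 0.0, math.sqrt(GM / AU)
            dt = 3600.0  # one-hour steps

            def accel(x, y):
                r3 = (x * x + y * y) ** 1.5
                return -GM * x / r3, -GM * y / r3

            ax, ay = accel(x, y)
            t = 0.0
            while t < YEAR:
                vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick
                x += dt * vx; y += dt * vy                 # drift
                ax, ay = accel(x, y)
                vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # kick
                t += dt

            # After one year Earth should be back near (1 AU, 0).
            print(f"position error: {math.hypot(x - AU, y) / AU:.2e} AU")
            ```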

  24. Dear Pedant,

    Braudel’s first name was Fernand, not Ferdinand.

    Thank you for your work!

    Guillaume

  25. In the “new tools” category, there is also archaeomagnetism, or the use of paleomagnetism to analyze archaeological finds where radiocarbon dating is either not possible or not suitable. As forged metal and metal-bearing mud and clay cool past a certain temperature, their structure reacts to the local geomagnetic field and essentially records its strength and direction. The magnetic properties of these items can then be determined with modern methods and compared to a database of paleomagnetic intensities to narrow down the date and location of their creation, as has been done with some bricks from the Ishtar Gate: https://doi.org/10.1371/journal.pone.0293014.t003 to gauge how long it took to construct the gate and how it related to the reign of Nebuchadnezzar II.
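
    In code, the final matching step looks conceptually something like this sketch (the reference values below are invented placeholders, not real geomagnetic data):

    ```python
    # Match a measured paleointensity against a reference curve and report
    # which dates are consistent within the measurement uncertainty.
    reference_curve = [  # (year BCE, intensity in microtesla) - fake numbers
        (700, 80.1), (650, 88.4), (600, 95.2), (550, 91.7), (500, 84.3),
    ]

    def consistent_dates(measured_uT, uncertainty_uT, curve):
        """Return reference dates whose intensity falls within
        the measured value +/- its uncertainty."""
        return [year for year, intensity in curve
                if abs(intensity - measured_uT) <= uncertainty_uT]

    # e.g. a brick measured at 90 +/- 3 microtesla:
    print(consistent_dates(90.0, 3.0, reference_curve))  # -> [650, 550]
    ```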

  26. Well, this was an interesting post to read. My experience with history professors was very mixed: some really pushed me to look for sources, consider alternatives, be knowledgeable across the domain. Others seemed to be interested only in having their views parroted back to them. After starting with a double major, History & Math, I ended up dropping the History part. I still took some history courses from professors I respected. With Math, it was a matter of ‘correctness’, rather than ‘judgement’.

    I still read a lot of history, but haven’t read a math book in decades. (I do have one in the to-be-read stack, though.) Right now I’m working through (and at times suffering through) Syme’s “Roman Revolution.” His blatant antipathy to Augustus is sometimes hard to read, and that makes me re-think a lot of his conclusions.

    1. Keep in mind that Syme wrote “The Roman Revolution” in the ’30s, a time when Augustus was a centrepiece of fascist propaganda. The fascist regime was very keen on celebrating Augustus, and in 1937–38 it held a huge “Mostra Augustea della Romanità” to celebrate the 2,000 years since his birth; the obvious implication was that fascist Italy was the true and only heir of Rome, and Mussolini a new Augustus.
      So yes, Syme was reacting to all this; it’s easy to see why he wasn’t exactly enthusiastic about the man (both men, actually).

  27. The comment about memorizing dates and figures reminds me a bit of how calculus is treated in undergrad-level physics. It used to be very important to be good at calculus, but nowadays you don’t really need the skill of doing calculus by hand. What you DO still need is the conceptual foundation of what calculus is, and the related intuition to look at a calculus equation and understand what it’s saying, what a plausible solution would look like, and, when given a solution, what it means. If you just need to do an integral you huck it into Wolfram Alpha and it spits out the right answer.
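
    For instance (my own illustration, using SymPy here in place of Wolfram Alpha): the machine does the mechanical step, while the conceptual foundation is what tells you a Gaussian integral had better come out finite and positive, so you can sanity-check the answer.

    ```python
    # Offload the mechanics, keep the intuition: a Gaussian integral via SymPy.
    import sympy as sp

    x = sp.symbols('x')
    result = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))
    print(result)  # sqrt(pi) - finite and positive, as intuition demands
    ```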

    1. Usually, calculus and physics are taught in parallel. But I was a year ahead in calc, so Newtonian Mechanics was obvious to me. 🙂

  28. I am gladdened to see praise for Peter Connolly, since I have wondered what our Host thought of his work. This makes me happy because I have a number of his popular illustrated history works (very helpful for wargamers and miniature soldier modellers and painters), and seeing Connolly given an imprimatur of approval gives me more confidence in relying on his reconstructions.

  29. A side quibble

    “When we say ‘source criticism,’ this isn’t some newfangled thing; Thucydides is doing it in c. 400 BC, for instance, in the introduction to his history of the Peloponnesian War (we call this section “The Archaeology”) comparing oral histories with physical evidence and plausible deduction and finding the oral histories lacking (Thuc. 1.20, 6.54-55).”

    It seems a shame not to also mention that Thucydides was doing some criticism of his contemporaries (or maybe predecessors) as well – thus his comment about Hellanikos at 1.97. It’s always useful to recall that Thucydides and Herodotus had other contemporaries doing the same or similar work. It’s even more amusing to know that this comment by Thucydides has been taken as such a firm, even absolute, commitment to chronological order that it’s been used to emend out the man’s own statements about, say, how long a siege took, even without any evidence of textual corruption.

  30. I’m just finishing up Syme’s “The Roman Revolution.” I found it hard to read. Sometimes the language is confusing. But what really bothers me is that Syme clearly despises Augustus, and everything Augustus does/says/is involved with is taken in the worst possible interpretation. So often I’ll stop and think, “Is there another interpretation? Should Syme have considered that?” Reading a bit about Syme, I learned he was strongly influenced by the dictators of the 1930s as he was writing that book.

    I’m curious how you reacted to “The Roman Revolution” (I’m presuming you’ve read it) and in particular to how you view an historian’s obligation to separate his feelings and to consider alternatives that don’t agree with the historian’s primary perspective.

    1. The difference between an absolute monarch and a modern dictator is the ability to think outside the box. Most Romans probably did not even know there was a box to think outside.

  31. I’d like to see more clarity on how the field is in distress. This blog has commented that enrollment in history courses has been level for a while, which pretty much requires that the number of teaching professors has been level (as measured by FTEs anyway). And the production of monographs does not seem to have collapsed, which suggests that the number of research professors has been reasonably level.

    The data I’ve gathered on that last point is the number of books named in the BMCR “Books received” articles over the last 20 years. Specifically, since 2009 the articles with URLs https://bmcr.brynmawr.edu/20YY/20YY.MM.01/ seem to all be lists of books available for review, and the total counts of mentions of “ISBN” in those articles for each year are as follows (a rough sketch of the counting method follows the list):
    2009 – 866
    2010 – 897
    2011 – 1005
    2012 – 1694
    2013 – 2376
    2014 – 2308
    2015 – 1200
    2016 – 1022
    2017 – 1093
    2018 – 1087
    2019 – 1005
    2020 – 942
    2021 – 1170
    2022 – 1052
    2023 – 1075
    2024 – 892
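
    A count like this can be reproduced with a few lines of Python, assuming the URL pattern above holds and counting raw occurrences of “ISBN” in each monthly page – a rough sketch, not a robust scraper:

    ```python
    # Count "ISBN" mentions in BMCR "Books received" pages for a given year.
    import re
    import urllib.request

    def isbn_count(year: int, month: int) -> int:
        url = f"https://bmcr.brynmawr.edu/{year}/{year}.{month:02d}.01/"
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        return len(re.findall(r"ISBN", html))

    # e.g. the total for one year:
    # total_2024 = sum(isbn_count(2024, m) for m in range(1, 13))
    ```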

    What *does* seem to be happening is that teaching professors are being degraded from tenure/tenure-track jobs to adjunct jobs, reducing them to the dystopian hellscape that most non-professors labor in: no job security, and being paid only as much as it would cost to hire someone else to do the work.

    Of course, one solution would be for history professors to drastically throttle back the production of history Ph.D.s, so that each professor mentors on average one graduate student over their career. Once the supply of Ph.D.s matched the number of tenure-track job openings, the pay in the field would rise.

  32. Can we perhaps tempt the AI barons to fund digitization of archival material by telling them they can use it as training data? Everything pre-1930 is public domain! There is a LOT of money in AI right now, carpe diem.
