This week, we continue our four (and a half) part (I, II, III, IVa, IVb, addendum) look at pre-modern iron and steel production. Last week, we looked at how a blacksmith reshapes our iron from a spongy mass called a bloom first into a more workable shape and then finally into some final useful object like a tool. But as we noted last week, the blacksmith doesn’t just need to manage the shape of the iron, but also its hardness and ductility.
As we’ll see this week, those factors – hardness and ductility (and a bunch of other more complex characteristics of metals which we’re going to leave out for simplicity’s sake) – can be manipulated by changing the chemical composition of the metal itself by alloying the iron with another element, carbon. And because writing this post has run long and time has run short, next week, we’ll finish up by looking at how those same factors also respond to mechanical effects (work hardening) and heat treatment.
As always, if you like what you are reading here, please share it; if you really like it, you can support me on Patreon. And if you want updates whenever a new post appears, you can click below for email updates or follow me on twitter (@BretDevereaux) for updates as to new posts as well as my occasional ancient history, foreign policy or military history musings.
What Is Steel?
Let’s start with the absolute basics: what is steel? Fundamentally, steel is an alloy of iron and carbon. We can, for the most part, dispense with many modern varieties of steel that involve more complex alloys; things like stainless steel (which add chromium to the mix) were unknown to pre-modern smiths and produced only by accident. Natural alloys of this sort (particularly with manganese) might have been produced by accident where local ores had trace amounts of other metals. This may have led to the common belief among ancient and medieval writers that iron from certain areas was superior to others (steel from Noricum in the Roman period, for instance, had this reputation, note Buchwald, op. cit. for the evidence of this), though I have not seen this proved with chemical studies.
So we are going to limit ourselves here to just carbon and iron. Now in video-game logic, that means you take one ‘unit’ of carbon and one ‘unit’ of iron and bash them together in a fire to make steel. As we’ll see, the process is at least moderately more complicated than that. But more to the point: those proportions are totally wrong. Steel is a combination of iron and carbon, but not equal parts or anything close to it. Instead, the general division goes this way (there are several classification systems but they all have the same general grades):
Below 0.05% carbon or so, we just refer to that as iron. There is going to be some small amount of carbon in most iron objects, picked up in the smelting or forging process.
From 0.05% carbon to 0.25% carbon is mild or low carbon steel.
From about 0.3% to about 0.6% carbon is what we might call medium carbon steel, although I see this classification only infrequently.
From 0.6% to around 1.25% carbon is high-carbon steel, also known as spring steel. For most armor, weapons and tools, this is the ‘good stuff’ (but see below on pattern welding).
From 1.25% to 2% are ‘ultra-high-carbon steels’ which, as far as I can tell didn’t see much use in the ancient or medieval world.
Above 2%, you have cast iron or pig iron; excessive carbon makes the steel much too hard and brittle, making it unsuitable for most purposes.
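The grades above can be captured in a small lookup. This is just a sketch using the approximate boundaries from the list (the function name and exact cutoffs are my own; real classification systems differ slightly and the boundaries are fuzzy in practice):

```python
def classify_by_carbon(pct_carbon: float) -> str:
    """Rough classification of iron/steel by carbon content (weight %).

    Boundaries follow the approximate grades listed above; the cutoffs
    between grades are fuzzy and vary between classification systems.
    """
    if pct_carbon < 0.05:
        return "iron"                         # trace carbon only
    elif pct_carbon <= 0.25:
        return "mild (low-carbon) steel"
    elif pct_carbon < 0.6:
        return "medium-carbon steel"
    elif pct_carbon <= 1.25:
        return "high-carbon (spring) steel"   # the 'good stuff' for blades
    elif pct_carbon <= 2.0:
        return "ultra-high-carbon steel"
    else:
        return "cast iron / pig iron"         # too hard and brittle for most uses

print(classify_by_carbon(0.8))   # high-carbon (spring) steel
print(classify_by_carbon(3.5))   # cast iron / pig iron
```

The striking thing the sketch makes visible is the scale: the difference between useless pig iron and the best sword steel is a couple of percentage points of carbon, which a pre-modern smith had to control without any way of measuring it.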
I don’t want to get too bogged down in the exact chemistry of how the introduction of carbon changes the metallic matrix of the iron; you are welcome to read about it. As the carbon content of the iron increases, the iron’s basic characteristics – its ductility and hardness (among others) – change. Pure iron, when it takes a heavy impact, tends to deform (bend) to absorb that impact (it is ductile and soft). Increasing the carbon content makes the iron harder, causing it to both resist bending more and also to hold an edge better (hardness is the key characteristic for holding an edge through use). In the right amount, the steel is springy, bending to absorb impacts but rapidly returning to its original shape. But too much carbon and the steel becomes too hard and not ductile enough, causing it to become brittle.
Compared to the other materials available for tools and weapons, high carbon ‘spring steel’ was essentially the super-material of the pre-modern world. High carbon steel is dramatically harder than iron, such that a good steel blade will bite – often surprisingly deeply – into an iron blade without much damage to itself. Moreover, good steel can take fairly high energy impacts and simply bend to absorb the energy before springing back into its original shape (rather than, as with iron, having plastic deformation, where it bends, but doesn’t bend back – which is still better than breaking, but not much). And for armor, you may recall from our previous look at arrow penetration, a steel plate’s ability to resist puncture is much higher than the same plate made of iron (bronze, by the by, performs about as well as iron, assuming both are work hardened). Of course, different applications still prefer different carbon contents; armor, for instance, tended to benefit from somewhat lower carbon content than a sword blade.
It is sometimes contended that the ancients did not know the difference between iron and steel. This is mostly a philological argument based on the infrequency of a technical distinction between the two in ancient languages. Latin authors will frequently use ferrum (iron) to mean both iron and steel; Greek will use σίδηρος (sideros, “iron”) much the same way. The problem here is that high literature in the ancient world – which is almost all of the literature we have – has a strong aversion to technical terms in general; it would do no good for an elite writer to display knowledge more becoming to a tradesman than a senator. That said, in a handful of spots, Latin authors use chalybs (from the Greek χάλυψ) to mean steel, as distinct from iron.
More to the point, while our elite authors – who are, at most, dilettantish observers of metallurgy, never active participants – may or may not know the difference, ancient artisans clearly did. As Tylecote (op. cit.) notes, we see surface carburization on tools as early as 1000 B.C. in the Levant and Egypt, although the extent of its use and intentionality is hard to gauge due to rust and damage. There is no such problem with Gallic metallurgy from at least the La Tène period (c. 450–50 B.C.) or Roman metallurgy from c. 200 B.C., because we see evidence of smiths quite deliberately varying carbon content over the different parts of sword-blades (more carbon in the edges, less in the core) through pattern welding, which itself can leave a tell-tale ‘streaky’ appearance to the blade (these streaks can be faked, but there’s little point in faking them if they are not already understood to signify a better weapon). There can be little doubt that the smith who welds a steel edge to an iron core to make a sword blade understands that there is something different about that edge (especially since he cannot, as we can, precisely test the hardness of the two every time – he must know a method that generally produces harder metal and be working from that assumption; high carbon steel, properly produced, can be much harder than iron, as we’ll see).
That said, our ancient – or even medieval – smiths do not understand the chemistry of all of this, of course. Understanding the effects of carburization and how to harness that to make better tools must have been something learned through experience and experimentation, not from theoretical knowledge – a thing passed from master to apprentice, with only slight modification in each generation (though it is equally clear that techniques could move quite quickly over cultural boundaries, since smiths with an inferior technique need only imitate a superior one).
Now, in modern steel-making, the main problem is an excess of carbon. Steel, when smelted in a blast furnace, tends to have far too much carbon. Consequently, a lot of modern iron-working is about walking the steel down to a usefully low carbon content by removing the excess. But ancient iron-working approaches the steeling problem from exactly the opposite direction, likely beginning with something close to a pure mass of iron and having to find ways to get more carbon into that iron to produce steel.
So how do we take our carbon and get it into our iron? Well, the good news is that the basic principle is actually very simple: when hot, iron will absorb carbon from the environment around it, although the process is quite slow if the iron is not molten (which it never is in these processes). There are a few stages where that can happen and thus a few different ways of making steel out of our iron.
The popular assumption – in part because it was the working scholarly assumption for quite some time – is that iron can be at least partially carburized by repeatedly being reforged. Experimental efforts to replicate this suggest that this is not true (note Craddock, op. cit., 252 on the arguments). The first problem is time: carbon absorption for hot-but-solid iron (like an iron bar in the forge) is relatively slow, often taking hours (one experiment suggests about three hours to completely steel a 3mm thick piece of iron, with thickness increasing the time required non-linearly). But iron is generally left in the forge fire only for minutes, which would mean that even if any carburization did take place, it would have penetrated only an extremely thin layer of the iron. Meanwhile, simply leaving iron in the forge for a prolonged time is also a bad idea, as it will cause the iron to burn unless the forge is kept at a lower temperature (which would in turn mean not using it for regular forge work in the meantime) or all oxygen is excluded (more on that in a second). So at best, the forge fire is going to provide only an extremely thin coating of steel over a bar of iron – something like 0.03mm.
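The ‘non-linear’ scaling mentioned above follows from the physics: carburization is a diffusion process, so the carburized depth grows only with the square root of time, which means the time to steel a bar all the way through grows with the *square* of its thickness. A toy model of that square-law, calibrated to the single experiment cited (the function name, the simplification, and the calibration are my own, not measured material properties):

```python
# Calibration point from the experiment quoted above: a 3 mm bar,
# fully steeled through in roughly 3 hours.
THICKNESS_REF_MM = 3.0
TIME_REF_H = 3.0

def carburization_time_hours(bar_thickness_mm: float) -> float:
    """Rough time to carburize a bar all the way through.

    Diffusion depth ~ sqrt(time), so full-thickness carburization
    time scales with thickness squared.  Illustrative only.
    """
    return TIME_REF_H * (bar_thickness_mm / THICKNESS_REF_MM) ** 2

print(carburization_time_hours(3.0))   # 3.0  (the calibration point)
print(carburization_time_hours(6.0))   # 12.0 (twice as thick, four times as long)
print(carburization_time_hours(1.5))   # 0.75 (why thin bars carburize quickly)
```

The square-law is why, as we’ll see below, smiths doing cementation deliberately worked with thin bars: halving the thickness cuts the (expensive, fuel-hungry) furnace time to a quarter.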
The problem with trying to make up for the slowness of this process by just going through the forging process over and over again is that you also have two different sources of decarburization. The first is the air. As we saw in our discussion of the roasting process, if you heat up iron – either metal or ore – in an environment with lots of oxygen (O2), that oxygen molecule will tend to grab spare carbon to make carbon dioxide (CO2). That’s still true with our carburized iron that has been heated up for forging. But since our smithy has to have an oxygen-rich atmosphere, on account of our smith’s need to breathe, some of that carbon will get pulled out of the outermost layer of the iron. Worse yet, that oxygen is also going to oxidize (that is, rust) that outer layer of iron, which leads to – as we discussed last time – that rust getting dislodged during hammering as hammer scale. As a result, careless forging can actually decarburize the edges of a piece of iron and metallurgical tests on some ancient weapons have seen some evidence that this did happen, where carbon content in the edge was lower than in the core (which is, to be clear, not a desirable situation)!
Fundamentally, our problem here is oxygen. Oxygen makes the iron burn in the forge, it causes oxidation in the iron and it steals away our free carbon to form carbon dioxide. So in order to get our carbon into our iron in quantity, we need to look for ways to get the iron hot, in a carbon-rich environment, with little to no oxygen present. That leaves two ideal phases for steeling:
First, steeling in the bloom. After all, we already have a stage of iron production where creating an oxygen starved environment was crucial. Can we get our carbon into our bloom during the smelting process? The answer is yes; if the ratio of charcoal to iron ore is tilted heavily enough in charcoal’s favor, the end result, once the charcoal has burned down, will be a steel bloom. This seems to have been the case in some traditional African bloomery traditions (Craddock, op. cit. 236) and the Japanese Tatara-buki process (Sim & Kaminski, op. cit. 59). Some Iron Age European finds have also been interpreted this way, but my understanding is that there are still many questions here; the documentary evidence provides, as I understand it, no support for widespread production of steel directly in the bloom in Europe.
Alternately, the carbon can be introduced after the iron has been formed into a bar in a process known as cementation (also called case hardening or forge hardening, although the phrase ‘case hardening’ can also mean effectively ‘surface hardening,’ making it an imprecise term). Once iron is heated above roughly 900°C (or, in visual terms, a ‘red heat’), it will begin to absorb carbon if kept in contact with a source of carbon in an oxygen-starved environment. And we actually have a fair amount of attestation as to how this would be done from the medieval period (see Craddock, op. cit. 252).
First, the iron bars (having been smelted into a bloom, then forged into bars) were wrapped or surrounded in carbon-rich materials, which might be charcoal itself, or else plants, hooves, horn or leather, and then sealed inside a ceramic casing. That casing was then heated to the correct temperatures (because the interior of the case is oxygen deprived, there is minimal risk of ‘burning’ the iron, so going ‘high’ on the temperature is less of a threat) and held at that temperature for several hours while the iron absorbed the carbon. The iron bars used were often intentionally quite thin (1-2cm thickness) to allow for more rapid carburization. The result, sometimes called blister steel, might have a carbon content up to 2%, depending on how thorough the cementation process was; doubtless long practice led smiths to get a sense for exactly how long and at what heat a given amount of iron should be treated to produce the desired levels of carbon.
What is clear is that in both cases, using bloomery processes or cementation, the fuel and time required made the resulting steel expensive; Tylecote (op. cit. 278) notes that steel in the medieval period often commanded around four times the price of iron. Consequently, we tend to see steel and iron objects in use, side by side, from the beginning of the European Iron Age onward (Craddock, in particular, has examples). Just as iron was generally only used over cheaper materials like wood, stone and leather when the job demanded a lot of material toughness at low weight, so steel (especially steel of higher quality) was generally only used in place of iron when the job demanded extreme performance. But of course, not all parts of even a single object demand exactly the same properties, which brings us to:
As noted above, it was most efficient to carburize fairly thin rods of iron, since the carbon was absorbed through the outermost layer of the iron. Moreover, the process of making steel through carbon absorption, either in the bloom or through cementation often leaves the carbon levels throughout the iron somewhat uneven, with more carbon in the outer layers and less in the core.
One way to manage this, particularly in the production of practical tools was ‘steeling.’ We actually saw an axe-head produced through a method designed to permit steeling last week. In a steeled blade or tool, the core of the tool is forged in iron (perhaps lightly carburized) and then, near the end of forging, the business end (blade, hammer-surface, pick-point, etc. – whatever needs the most hardness, generally) is forge-welded with a piece of steel, making a single piece of metal bonded strongly together but with different carbon-counts in different areas. This can be done a number of ways; the steel might be used as a core and the iron body welded around it and then filed away leaving the steel exposed (more common, I believe, with axes – this was the method we saw last week). In other cases, a steel edge might be wrapped or layered over an iron core.
If the goal instead was to create a more homogeneous steel, the solution was ‘piling‘ (sometimes inaccurately referred to as ‘damascening’). The steel bar is drawn out into a fairly thin rod, then folded back and fire-welded into itself, often repeatedly, to create a more homogeneous steel. Though it is now mostly a thing of the past, for quite some time there was a pervasive popular belief that this particular method was unique to Japan; in any event, it was not. The downside, of course, was the time and labor demanded, compounded by the fact that repeated fire-welding meant repeated material loss to oxidation and ejection, though after several pilings the amount of slag left to be ejected was likely to be quite low.
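The geometric payoff of piling is easy to see: each fold-and-weld doubles the number of layers in the bar and halves each layer’s thickness, so a handful of pilings yields hundreds of very thin layers across which the carbon can even out. A toy illustration (function names are my own; this ignores the material lost to oxidation at each welding):

```python
def layers_after_folds(folds: int, initial_layers: int = 1) -> int:
    """Each fold-and-weld doubles the number of layers in the bar."""
    return initial_layers * 2 ** folds

def layer_thickness_mm(bar_thickness_mm: float, folds: int) -> float:
    """Thickness of one layer, assuming the bar is drawn back out to
    its original thickness between folds."""
    return bar_thickness_mm / layers_after_folds(folds)

# A 5 mm bar after a few rounds of piling:
for n in (3, 6, 10):
    print(n, layers_after_folds(n), round(layer_thickness_mm(5.0, n), 4))
```

Ten folds already produce over a thousand layers, each thinner than a sheet of paper, which is why piling homogenizes carbon content so effectively despite carbon’s slow diffusion through solid iron.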
More complex is pattern welding, which marked some of the highest quality blades in much of the world until the early modern period (with exceptions for things like Wootz steel, which is not pattern welded but is sometimes equated with pattern welded steel under the confusing term ‘Damascus Steel,’ which you will note I endeavor to avoid entirely). In the basic pattern welding method, we begin with a thin rod or bar of carburized iron. This is then piled and drawn repeatedly to create a laminated rod of iron with relatively more homogeneous carbon-content. Then two or more such rods are twisted and then welded together to produce a strong steel core. Generally then a blade – often more fully carburized to maximize its hardness (since harder metal holds a sharp edge better) – is welded on to the core to make the final object.
Pattern welding was intensive in both time and fuel and consequently was reserved generally for valuable prestige items. For iron, this almost always meant the blades of weapons, particularly (though not exclusively) swords; pattern welded knives, hammers and spear-heads exist, but are less common. Part of the prestige value must have been the high performance of weapons made this way, but it cannot have hurt that such weapons, if polished and etched, clearly displayed the patterns of the welds and there is evidence that they were kept in this state. Pattern welding is an ancient technique – some Middle and Late La Tène blades are exquisitely pattern welded – which in Europe continues through the Roman period and into the Middle Ages, although it is somewhat less common (as I understand it) in the Early Middle Ages as compared to either the Roman period or the High Middle Ages. The art never seems to have been ‘lost,’ though the greater availability of either imported Wootz or larger and more homogeneously carburized locally made steel blooms (using the bloomery process rather than cementation) seems to have caused European sword manufacture to shift away from pattern welding later in the Middle Ages, essentially because it became no longer necessary to ensure a blade of sufficient quality.
Of course, pattern welding could be ‘faked’ by going through the final steps (twisting, welding and attaching the blade) without the former steps or even properly carburizing the iron. Sometimes these blades – pattern welded using low-carbon or even no-carbon iron – are taken to mean that the role of the carbon or the quality of the metal was not understood. I do not think this is the case, given that the carbon content of high quality blades, even as early as the Roman period, often seems very deliberately distributed. Bishop and Coulston (Roman Military Equipment (2006), 242) feature a chart (not in the public domain, so I won’t reproduce here) which shows the carbon-content of a number of Roman gladii as a cross-section; several have high carbon (hard) edges and lower carbon (soft) cores, which is exactly what you would want in a sword (and coincidentally also how the highest quality Japanese katana were made, though I should note that these gladii are some 1300 years older than the oldest katana). Instead, I think we should understand low-carbon or iron blades with pattern welding patterns to have essentially been ‘fakes’ or ‘knock offs’ – meant to look like a superior, high quality pattern welded steel blade, but made using inferior materials or processes.
This was intended to be one long post, but the demands of time have led me to split it here. Next time, we’ll look at the other tools that a blacksmith has to control the characteristics of his iron: work hardening and heat treatment (which is to say hardening, tempering and quenching).
67 thoughts on “Collections: Iron, How Did They Make It, Part IVa: Steel Yourself”
Why is it valuable to put a thin sheathe of hardened steel around the non-cutting edges of a honsanmai and call it a soshu kitae? Preventing damage to the blade if it receives stress from the side or rear?
In Katana fighting one doesn’t block with the edge as one does in European styles, one blocks with the corners, in the upper right and upper left of the diagrams. Hardening in those locations will help with strength of the blade when blocking. Generally, it looks like the more-involved layering methods would have the effect of propagating the blocking and striking forces into the ductile core steel and allowing them to disperse harmlessly. Stress concentrations are how one gets cracks, so energy dispersion is important in tools.
No-one blocks with the edge; if you did, it would get chipped. With European styles they block with the flat side of the blade.
Fiercely contested point, actually. Quite a lot of argument in HEMA circles. The representational evidence is difficult to interpret, but contains both edge and flat parries.
I would guess that you try to block with the flat, but blocking it in some manner is more important than failing to block it. Is that unreasonable?
Blocking with the edge offers different tactical opportunities than a block with the flat. Two sharp edges will bite on each other, keeping the blades from sliding; it’s clear that quite a number of plays from the period manuals require that ‘biting’ to be effective. Roland Warzecha has some examples where it matters on his youtube channel: https://youtu.be/_0HulsThp9U
Fuller discussion of the topic also here: https://www.youtube.com/playlist?list=PLMUtS78ZxryNzppeniLfdHb_ftc7huhEY
In practice, both kinds of parries exist in the sources as intentional, purposeful uses of the weapon.
When I was doing Iaijutsu, we blocked both with the sides and edges, depending on the situation. Ideally for your edge, you’d like to block with the side, but the advantage of blocking with your edge usually trumped that.
The big reason — and the Japanese don’t emphasize this the way Western rapier masters do — is that using your true edge (or with katana, the ONLY edge) is stronger in a bind or block than the side of your sword. Biomechanically, you’re pushing into the notch between your thumb and your palm, and you’re pushing with your arms in the direction of the force. Trying to do that with the side of the sword rather than the edge puts you at a disadvantage because you’re not leveraging your grip.
That said, if you’re not going strength-to-strength (for example, with a hanging parry that my sensei called a “crescent moon block”), you can get away with using the side of the sword.
It’s my understanding that harder materials are scratch resistant, and swords clash against each other, so you’d get scratches on the flat of the blade. I notice from that chart that having hard flats is more common than having a hard back, and I wouldn’t expect the back of the blade to touch the other sword.
In general (I can’t speak with absolute authority on the specific techniques of fighting with a Katana) one parries with the side of one’s blade, so as to avoid damaging your own edge. It also runs less risk of your sword simply getting sliced in half if your opponent has the superior blade. So it would make sense to have “hard” steel on the sides of a sword as well as the cutting edge, especially in situations where your sword is your primary method of blocking blows. It would be interesting to see if weapons designed to be used concurrently with shields had fewer instances of steel sides.
Disclaimer: I am not a specialist swordsmith so might be extrapolating more out of my rectal database of heat treating than warranted.
Besides the use of the sword there may also be manufacturing reasons. Low and high carbon steel contract at different rates. Also, the edge of the blade cools (hardens) faster in the water quench tank. This causes stress in the blade when hardening it, enough stress that in a see-through tank you can watch the blade pitch forward and then return to its final curve. The stress is enough that blades did and can crack at this last “hot” step after a lot of work, even in the hands of a skilled smith.
Japanese swordsmiths use clay on the blade to try to control how quickly the edge cools, which produces the wavy “hamon” line. It could be that some of the steeling on the sides and back was intended, at least in part, to control these stresses a bit and reduce how many blades they broke in the hardening.
I think I read somewhere that the curve of a katana isn’t intentional; it’s a side effect of some aspect of the forging process. Would the quenching be it?
It may be more accurate to say the curve is not forged in versus unintended. It indeed is mostly a product of the hardening process, however the smith knows it is going to happen and is counting on it. If they actually wanted a straight sword there are things they could do to achieve that. Like when forging a single-edge knife, you pre-curve the blade forward a bit before forging the bevel because you know that is going to curve it back.
Yes, intentionally kept. Part of that is the philosophy that a sheathed katana is just as dangerous as a drawn one — the slight curve makes it easier to draw into a cut. (It’s also the reason why katana are shorter than longswords. Given how they were worn, they need to be a little shorter to effectively draw-cut.)
I’ve also heard it argued that the curve is better for cutting overall, but the katana’s curve only gives something like a 3% advantage over a straight blade. There are enough thrusting techniques that make me think this is a choice to help the draw without giving up the ability to thrust.
The “soshu kitae” method in this chart is probably an invention of the historical replica industry. It’s clearly intended to refer to the Sōshū school of the famous blacksmith Masamune (“kitae” just means “forging”), but the term is entirely unknown in Japanese, and serious discussion of the Sōshū workmanship seems to focus on blade shapes and edge patterns rather than any special lamination method.
Sort of off topic, but the bit about ideas spreading easily by imitation but only slowly improving by innovation reminds me of Alon Levy’s claim that the reason the US (or English-speaking countries more generally) can’t effectively build good rail/transit compared with the rest of the world is that they don’t (for various cultural reasons) have the ability to learn to imitate international best practices, which is much more effective than trying to innovate your own. (see e.g. https://pedestrianobservations.com/2020/09/19/learning-worst-industry-practices/ )
Levy’s argument predicts far more than he wants it to, because there’s no reason (except that he is only interested in transport) not to extend it to non-transport industries too. And there you get the interesting conclusion that the US couldn’t possibly learn lean manufacturing techniques from Japan, and UK retail couldn’t possibly learn better logistics from Aldi, because innovation doesn’t go that way. And there is a huge amount of epicyclic argument there too – what about former colonies? (In other words, pretty much everywhere) Why aren’t they more efficient? Oh, because they’re learning the wrong stuff from China…
That doesn’t really hold – areas with free-market competition have more ability or incentive to learn and compete (although worth noting that US car companies, which were a pseudo-monopoly, did almost go bankrupt at one point because they had a hard time learning manufacturing techniques from Japan). Transit specifically is much harder because it’s a sort of natural monopoly run by political appointees.
The argument for former colonies is that they generally learned from their former colonizers, (or have occasionally learned from China since cultural influences have shifted), and that part of the theory seems to hold up extremely well (It’s the reason why Korea, which was colonized by Japan, does so much better than British-colonized Singapore despite Singapore having equally effective government in general).
I think Japan was too busy murdering/suppressing the Koreans to teach them manufacturing techniques. Just a thought.
Brad DeLong attributed some of the success of South Korea to the fact that you have to get rid of the big landowning families to get to a modern economy, and Korea had that happen twice, once when the Japanese invaded, and again when they left.
Korean learning from Japan when it comes to transportation is postcolonial… I don’t know the precise history in the 1950s and 60s, but consider that Park Chung-hee had been a Japanese collaborator, and at the time Japan-South Korea relationships were friendly out of shared anti-communism. I believe the Korean (and Taiwanese) model of industrial policy was also directly lifted from Japan, but professional economic historians have studied this and I haven’t. I also think there was a sentiment in Korea (and Taiwan) that they wanted to be like Japan, already a fairly prosperous society in the 1960s.
Whatever the prior history, starting in the 1970s we see pretty glaring similarities in how South Korea is building its transportation network with what Japan had done a generation before. We see suppression of cars rather than a US-influenced view that car ownership is the symbol of prosperity, and we see technical characteristics that are very Japanese like subway-commuter rail through-service.
Anyway, I don’t want to derail this post too much, and I’m eventually going to blog more about the Japanese way of building rapid transit and how it influenced Korea and Taiwan.
You don’t need teaching for learning to occur.
The Japanese occupation of Korea was horrible in many ways but lots of institutions and techniques based on Japanese models were established during the occupation and many still persist to this day.
Some of them are useful (some economic systems), some of them fairly arbitrary (school uniforms and schedules) and some of them are annoying as all hell (the Japanese address system which was transplanted into Korea and which was an absolute nightmare before it was officially replaced with normal street addresses which still mucks things up since a lot of people don’t think in terms of street addresses and still use landmarks etc. for even pretty basic navigation).
The “some cultures are bad at learning from other cultures” effect is nonuniform. Some industries or specific projects are more susceptible than others.
For example, as I understand it, nearly every subway system is built, at least in the US, as a bespoke project managed by an entirely different city, likely one that has not built any significant amount of subway tunnels in living memory.
If so, then there is little room for an institutional capacity to conserve best practices from project to project. Furthermore, the local subway-builders often lack the resources and connections to efficiently reach out to people with better practices on an entirely different continent, and so fail to do so.
By contrast, if General Motors is getting carved like a roast by superior Japanese manufacturing processes, they are in principle *centralized* enough that a concerted effort to learn from what Japan is doing has some plausible hope of success.
Reforms often struggle to succeed in the absence of centralized systems for designing the reforms and propagating them out to the general public.
I don’t know, but I think the US could build effective rail, if we wanted to. I think we just don’t care enough because we go by car much of the time. If driving is impractical, we usually upgrade straight to airplane travel, bypassing rail altogether.
There’s a lot this explanation doesn’t cover though – for one thing it’s not just that the US doesn’t build much rail, it’s that the rail it does build is extremely expensive (about 10x the global average for costs – and that’s not explained by higher incomes, since e.g. Scandinavian countries have very low costs). Local transit systems (especially the NYC subway) that are heavily depended on are even more disproportionately expensive.
It also doesn’t explain why other English-speaking countries have expensive rail – is the UK really inherently car-oriented in ways that France or Germany aren’t?
The UK has less rail nowadays because Dr Beeching shut a lot of lines down in the 60s. This was because the motorways were rising in competition. Tarmac is incredibly recyclable and a lot of the stuff was available as RAF runways shut down (along with a labour force who knew how to lay tarmac). The runways were there because, until D-Day, strategic bombing was the only way to strike directly at the Third Reich.
So there’s not as many railways because of Dunkirk. Or so they say.
The US can’t build a good transit rail system because its railway right-of-way got used up building an excellent freight rail system. Neither one should operate at the speed of the other, so you can only have one.
Urban transit and intercity rail (where you find freight) are separate issues. Freight rail doesn’t explain why building a new suburb is ridiculously expensive.
Also AIUI a lot of rail ROW is wide enough to host four tracks. Two for freight, two for passenger! (Really high speed passenger rail may need different lines, but the US used to have 90-100 MPH ‘streamliners’, a fair bit faster than people should be driving, and much safer.)
You forgot to categorize this part of the series. Previous parts used Collections and HDTMI.
Whoopsies. This is what I get for finishing things late at night…
So the patterned swords were considered to be superior for some time. This is interesting, because the popular image is a shiny, pure, mirror-like sword without any blemish. Was this image true in the later Middle Ages, perhaps?
Also, did people back then rationalize the cementation process somehow? Or in general, did they have any explanation for why they do the steps in the way they do them, or were they simply following their ancestors?
“First, the iron bars (having been smelted into a bloom, then forged into bars) were wrapped or surrounded in carbon-rich materials, which might be charcoal itself, or else plants, hoofs, horn or leather, and then sealed inside of a ceramic casing.”
I think that Paul Brickhill, “The Great Escape”, describes POWs making wirecutters from iron straps, filing them into shape, and then hardening the cutting edges by coating them with sugar paste and reheating them. (He is, if I remember, sceptical about whether this did any good. From your description I think he’s right; the sugar would burn off and expose the iron to the air pretty quickly, I should think.)
If they covered the sugar with dirt first, it should work. The idea is reasonable, and writers (and storytellers) tend towards “high literature” and probably will forget important steps. I would expect prisoners to come in either not knowing they need more carbon, or knowing enough of the case-hardening process to know that sugar can work if they exclude air. Note that the prisoners who know this can teach the others, who for their part might not realize how important the dirt wrapper is.
I haven’t read the sources you reference though, I’m just guessing based on what I know of humans.
First paragraph, boom should be bloom.
is hard to gauge to due rust and damage. -> is hard to gauge due to rust and damage.
which you will note I effort to avoid entirely). -> which you will note I make an effort to avoid entirely).
One thing I really like about this blog is that it gives cross checks with real world data.
I’ve spent my life as an SF fan and vague pop-history consumer, but gradually realized that an awful lot of it was “just-so” stories or superficially plausible simple explanations for complex things.
For your convenience, Bret, I’ll add 3 more typos here:
tools as clearly as 1000 B.C. –> as early as
effects of carbuzation –> carburization
smith’s need to breath –> need to breathe
For people interested in the history of metallurgy from a more technical side, I would suggest this website: https://www.tf.uni-kiel.de/matwis/amat/iss/ – Helmut Föll: Iron, Steel and Swords – A detailed history of iron and steel from a Materials Science point of view.
Note that the author (of Iron, Steel and Swords) is rather skeptical about the use of cementation / case hardening for steel production in premodern times, and suggests direct production of steel in the bloomery (i.e. that a bloomery may produce iron with varying carbon concentration) – https://www.tf.uni-kiel.de/matwis/amat/iss/kap_a/backbone/ra_2_3.html .
Er…case hardening/cementation is the method actually described in our literary sources for the medieval period. It is the one method we can be confident was used, because they tell us so. They even describe the materials used. Note Craddock, 252.
Lee Sauder and Darrell Markewitz also have good websites.
My understanding is that the serious archaeometallurgists and primitive smelters now think that they borrowed too many ideas from people who had access to cheap mass-produced wrought iron in colonial and post-colonial contexts in the 19th and early 20th century. If you smelt the iron yourself, making nothing but good soft iron is hard; usually the product has a reasonable level of carbon in it, and you have to quickly assess what you got and sell it for the purposes that batch will be good for. Case-hardening seems to have been used for specific purposes (files, hardening mail) and by people whose material was too soft for the purpose they wanted to use it for.
It’s very likely that blacksmiths didn’t know what steel was on a technical level but knew on an intuitive level how to make ‘good iron’ with the properties they desired. This probably explains why some blades lacked proper carburization in the desired places: lesser blacksmiths would copy the process without the understanding or experience that made it work.
on tools as clearly as – “as early as”
We actually last week an axe-head – “actually discussed last week”? had? Some verb, probably.
where quite some time – “for quite some time” I think
I effort to avoid – “I’m taking effort to avoid”
I’m surprised you didn’t label this one as “the Riddle of Steel”
I gotta say I was disappointed in a previous post that showed the bad metallurgy from the more recent Conan movie instead of the bad metallurgy from the original.
it’s ductility and hardness -> its ductility and hardness
hard to gauge to due rust and damage -> hard to gauge due to rust and damage
the process of making steel through carbon absorption, either in the bloom or through cementation often leaves the carbon levels throughout the iron somewhat uneven, with more carbon in the outer layers and less in the core. *squints* isn’t that exactly what you want?
If you attempted to carburize an entire sword-blade like this, it would take quite a long time (the blade will be thicker than the rods generally used in the cementation process, so it has a higher volume-to-surface-area ratio) and you would end up with quite a lot of carbon in the edges – possibly too much – before there was much of any carbon at all in the core. You could do it (there are artifacts where it looks like that was what was done), but it would produce an inferior product and still involve quite a long time in the cementation process (which means high fuel use and cost).
The different amounts of carbon in different parts of an object are desirable in a finished product, but not really in the raw material.
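The volume-to-surface-area point can be put into rough numbers with a crude diffusion estimate. All figures below are assumed order-of-magnitude values for illustration (the diffusivity, thicknesses, and the simple t ~ x²/D model are my assumptions, not anything from the post or this thread):

```python
# Crude sketch of why carburizing thicker stock takes disproportionately
# longer: carbon penetration depth grows roughly as sqrt(D * t), so the
# time for carbon to reach the core scales with the SQUARE of the
# half-thickness. D is an assumed order-of-magnitude diffusivity for
# carbon in hot iron, not a measured figure.
D_CM2_PER_S = 1e-6  # assumed diffusivity near forging/soaking heat

def hours_to_reach_core(half_thickness_cm, diffusivity=D_CM2_PER_S):
    """Crude t ~ x^2 / D estimate, returned in hours."""
    return half_thickness_cm ** 2 / diffusivity / 3600

thin_rod = hours_to_reach_core(0.25)     # bar about 0.5 cm thick
thick_blade = hours_to_reach_core(0.75)  # blade blank about 1.5 cm thick

# Tripling the thickness multiplies the soak time roughly ninefold:
print(f"thin rod:    ~{thin_rod:.0f} h")   # ~17 h
print(f"thick blade: ~{thick_blade:.0f} h")  # ~156 h
```

The exact numbers don’t matter; the square-law scaling is the point, and it is why thin bars rather than finished blades went into the cementation chest.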
The pain of working steel where the carbon content varies within a bar is something that can sometimes be experienced today when forging structural steel that is spec’d for strength (e.g. it won’t shear until 36,000 pounds per square inch) rather than chemistry (e.g. 0.04% carbon). You can get a bar that is nice and soft for a bit, then hard, then red-short, with no visible indication of which section is which. It makes it more difficult to do good work.
hard to gauge to due rust and damage.
where quite some time — for quite some time?
which you will note I effort to avoid entirely
You say steel was 4x the price of iron. Do you have any idea how those related to copper and bronze prices at various times (or copper and bronze with respect to each other)? I feel gold was 10–20x the value of silver, and silver 100-ish times the value of copper or bronze, and that’s all I’ve got.
For the eternal Tolkien tangent: if you took an ancient/medieval blacksmith, but made him a Noldor elf who knows about elements due to Valar instruction (“I am adding coalstuff to my iron to make it harder”) can he do much better than historical techniques derived via trial and error?
“But since our smithy has to be an oxygen rich atmosphere, on account of our smith’s need to breathe, some of that carbon will get pulled out of the outermost layer of the iron.”
Would it be feasible for the smith to work in a low-oxygen environment and breathe through a tube connected to the outside?
The forge also needs lots of oxygen to burn the fuel that heats the iron. The piece of iron being worked on has to be heated up directly in/over the burning fuel AND still be visible and close enough for the blacksmith to watch the color changes and be able to pull it out and immediately start working. I don’t see how you could keep the forge and a low oxygen work area separate with ancient / medieval tech.
Plus the “error” part of trial and error development of a suitable breathing apparatus kills the blacksmith, which I imagine would be discouraging.
No. That would massively increase the “dead space” between your lungs and the circulating air.
For the people who associate “dead space” with the decade-old video game franchise and not the technical term: Dead space refers to the volume of air not exchanged when exhaling. Ordinarily, that’s your trachea, larynx, mouth, sinuses, etc. If you are SCUBA diving, the regulator is also (at least partly) dead space.
Dead space is important because the “dead space” air isn’t refreshed. When you breathe in, the “dead space” air enters your lungs first; only once the “dead space” is emptied does fresh outside air come in. (Obviously there’s some air mixing and diffusion and stuff going on, but this model is accurate enough for amateur use.)
If you breathed through a long tube—and it would have to be long, to give the smith enough room to move around the smithy—the dead space is going to be several times more voluminous than the smith’s lungs. They’re going to mostly just keep breathing the same air as it grows more and more clogged with CO2, more and more toxic, until finally the smith either falls unconscious or storms out of the smithy and demands to know what the big idea is.
That’s not even discussing the complexities of how to make a low-oxygen smithy (which is a lot trickier than just burning the oxygen out of a mostly-solid lump of wood and clay), which would probably be impossible in and of itself at a tech level where manual smithing is still important.
That said, a science fantasy setting could probably rig together a set of explanations and justifications for that sort of rig. (I’d suggest using a tank of compressed air instead of a big tube, but a hookah-type setup might work.)
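For what it’s worth, the dead-space arithmetic above is easy to sketch. The tube dimensions here are purely illustrative assumptions; only the cylinder-volume formula is doing any work:

```python
import math

# Dead-space sketch: volume of a breathing tube vs. a single resting
# breath. All dimensions are illustrative assumptions.
TIDAL_VOLUME_L = 0.5     # typical resting breath, about half a liter
tube_length_m = 5.0      # assumed: long enough to move around the smithy
tube_radius_m = 0.01     # assumed: 2 cm inner diameter

# Cylinder volume pi * r^2 * L, converted to liters (1 m^3 = 1000 L)
tube_volume_l = math.pi * tube_radius_m ** 2 * tube_length_m * 1000

print(f"tube dead space: {tube_volume_l:.2f} L")  # ~1.57 L
print(f"resting breaths needed just to flush the tube: "
      f"{tube_volume_l / TIDAL_VOLUME_L:.1f}")    # ~3.1
```

Even this modest tube holds about three resting breaths’ worth of stale air, so the smith mostly re-breathes their own exhalations.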
Are the percentages of carbon in steel by mass or by volume?
Can’t find it in the previous posts in this series.
By percent mass
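To make “by mass” concrete (illustrative arithmetic only; the bar mass and carbon fraction are made-up example numbers):

```python
# Weight-percent carbon to absolute mass: a 1 kg bar of steel at
# 0.8 wt% carbon contains 8 g of carbon.
bar_mass_g = 1000.0
carbon_wt_pct = 0.8
carbon_mass_g = bar_mass_g * carbon_wt_pct / 100
print(carbon_mass_g)  # 8.0
```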
Question: why don’t we see some more half-assed steel? Meaning just a bit of carbon around the edges for extra hardness and call it a day? It seems like it’d be better than plain iron, and after all of the involved process to get iron, adding one more simple step to improve it doesn’t seem like much, especially compared to the involved process for top-shelf steel…
Or do we? How much of the iron kit of, say, a Roman legion would have any kind of steel in it?
It might have happened, or even been common, but would be hard to know. One of the problems with looking at most metal artifacts is recycling. Items that were broken, worn out, or otherwise unwanted in their current form were often re-forged into something else. The best pieces were often more valuable, well looked after and had a better chance to survive until now. A lot of crappy swords were likely re-forged into a new life as plowshares or something else.
I also suspect that, when added up, the fuel and effort costs of making blister or shear steel and properly steeling an edge in a way that will survive a sharpening or two might have been only negligibly more than case hardening a finished blade/tool. I couldn’t point to any literature on that, though.
If you had to choose, softer metal was preferred to harder. A brittle sword would break, but a softer one would bend (one saga has a fight where a character has a cheap sword and has to keep straightening it out underfoot).
The issue is rather more complex than this. Blades need to have quite high hardness on the edges; soft metal will not hold an edge. Pick-heads and hammer-heads also need to have very high hardness on the impact points to do their job. On the flip side, the core of such objects needs to be able to engage in elastic deformation to absorb the energy of impact. So there are competing demands between softness and hardness, brittleness and ductility, and, while we’re here, also ‘strength’ and ‘toughness’ in their technical sense in metallurgy.
I realize that two series is hardly a good sample size, but I’m starting to wonder whether the author should just plan for their last post in each “How Did They Make It?” series to run long.
I know this is beyond the scope of this series but it is possible to produce pretty much pure iron with modern techniques, either through electrolysis or a process very similar to the modern Mond process which is used to refine nickel.