History

April 22nd 2024

6 Misconceptions About the Vikings

From those famous horned helmets to the vaunted fiery funerals, we’re busting your favorite Viking misconceptions.

Mental Floss

  • Jake Rossen

Vikings are the focus of countless movies, TV shows, video games, sports teams, and comic books today—but that doesn’t mean we always get them right. From the myths surrounding their horned helmets to their not-so-fiery burial customs, here are some common misconceptions about Vikings, adapted from an episode of Misconceptions on YouTube.

1. Misconception: Vikings Wore Horned Helmets.

In 1876, German theatergoers were abuzz about a hot new ticket in town. Titled Der Ring des Nibelungen, or The Ring of the Nibelung, Richard Wagner’s musical drama played out over an astounding 15 hours and portrayed Norse and German legends all vying for a magical ring that could grant them untold power. To make his characters look especially formidable, costume designer Carl Emil Doepler made sure they were wearing horned helmets.

Though the image of Vikings plundering and pillaging while wearing horned helmets has permeated popular fiction ever since, the historical record doesn’t quite line up with it. Viking helmets were typically made of iron or leather, and it’s possible some Vikings went without one altogether, since helmets were an expensive item at the time. In fact, archaeologists have uncovered only one authentic Viking helmet, and it was made of iron and sans horns, which some historians and battle experts believe would have had absolutely no combat benefit whatsoever.

So where did Doepler get the idea for horned helmets from? There were earlier illustrations of Vikings in helmets that were occasionally horned (but more often winged). There were also Norse and Germanic priests who wore horned helmets for ceremonial purposes. This was centuries before Vikings turned up, though. Some historians argue that there is some evidence of ritualistic horned helmets in the Viking Age, but if they existed, they would have been decorative horns that priests wore—not something intended for combat.

Composer Richard Wagner apparently wasn’t pleased with the wardrobe choices; he didn’t want his opera to be mired in cheap tropes or grandiose costumes. Wagner’s wife, Cosima, was also irritated, saying that Doepler’s wardrobe smacked of “provincial tastelessness.”

The look wound up taking hold when Der Ring des Nibelungen went on tour through Europe in the late 19th and early 20th centuries. Other artists were then inspired by the direction of the musical and began using horned Viking helmets in their own depictions, including in children’s books. Pretty soon, it was standard Viking dress code.

2. Misconception: All Vikings Had Scary Nicknames.


Leif Erikson. Not as scary of a nickname. (Hulton Archive/Getty Images)

When tales of Viking action spread throughout Europe, they were sometimes accompanied by ferocious-sounding nicknames like Ásgeirr the Terror of the Norwegians and Hlif the Castrator of Horses. This may have been a handy way to refer to Vikings with reputations for being hardcore at a time when actual surnames were in short supply. If you wanted to separate yourself from others with the same name, you needed a nickname. But plenty of them also had less intimidating labels.

Take, for instance, Ǫlver the Friend of Children. Sweet, right? Actually, Ǫlver got his name because he refused to murder children. Then there was Hálfdan the Generous and the Stingy with Food, who was said to pay his men very generously, but apparently didn’t feed them, leading to this contradictory nickname. Ragnarr Hairy Breeches was said to have donned furry pants when he fought a dragon.

Other unfortunate-but-real Viking names include Ulf the Squint-Eyed, Eirik Ale-Lover, Eystein Foul-Fart, Skagi the Ruler of Shit, and Kolbeinn Butter Penis. While the historical record is vague on how these names came to be, the truth is never going to be as good as whatever it is you’re thinking right now.

3. Misconception: Vikings Had Viking Funerals.

When someone like Kolbeinn Butter Penis died, it would only be fitting that they were laid to rest with dignity. And if you know anything about Vikings from pop culture, you know that meant setting them on fire and pushing them out to sea.

But as cool as that visual may be, it’s not exactly accurate. Vikings had funerals similar to pretty much everyone else. When one of them died, they were often buried in the ground. Archaeologists in Norway uncovered one such burial site in 2019, where at least 20 burial mounds were discovered.

The lead archaeologist on the site, Raymond Sauvage of the Norwegian University of Science and Technology, told Atlas Obscura that:

“We have no evidence for waterborne Viking funeral pyres in Scandinavia. I honestly do not know where this conception derives from, and it should be regarded as a modern myth. Normal burial practice was that people were buried on land, in burial mounds.”

The flaming ship myth may have come from a combination of two real Viking death practices. Vikings did sometimes entomb their dead in their ships, although the vessels remained on land where they were buried. And they did sometimes have funeral pyres. At some point in the historical record, someone may have combined these two scenarios and imagined that Vikings set ships ablaze before sending them out to sea with their dead still on board.

4. Misconception: Vikings Were Experienced and Trained Combat Soldiers.


Spears and arrows were more cost-effective than swords. (Spencer Arnold Collection/Hulton Archive/Getty Images)

While it’s true Vikings were violent, they weren’t necessarily the most experienced or talented warriors of their day. In fact, they were mostly normal people who decided plundering would be a great side hustle in the gig economy of Europe.

Historians believe Vikings were made up mostly of farmers, fishermen, and even peasants, rather than burly Conan the Barbarian types. Considering that the coastal villages they attacked probably didn’t put up much resistance, one could be a Viking and not even have to fight all that much. This leads to another common misconception—that Vikings were always swinging swords around. Like helmets, swords were expensive. A day of fighting was more likely to include spears, axes, long knives, or a bow and arrow.

You can blame this fierce warrior rep on the one squad of Vikings that actually fit the bill. Known as berserkers, these particular Vikings worshipped Odin, the god of war and death, and took Odin’s interests to heart. Some berserkers were said to have fought so fiercely that it was as though they had entered a kind of trance. If they were waiting around too long for a fight to start, it was said they might start killing each other.

5. Misconception: Vikings Were Dirty, Smelly, and Gross.

Most depictions of Vikings would have you believe that they were constantly caked in mud, blood, and other miscellaneous funk. Don’t fall for it. Over the years, archaeologists have unearthed a significant number of personal grooming products that belonged to Vikings, including tweezers, combs, toothpicks, and ear cleaners.

Vikings were also known to have bathed at least once a week, which was a staggeringly hygienic schedule for 11th-century Europe. In fact, Vikings put so much attention on bathing that Saturday was devoted to it. They called it Laugardagur, or bathing day. They even had soap made from animal fat.

Hygiene was only one aspect of their routine. Vikings put time and effort into styling their hair and sometimes even dyed it using lye. Their beards were neatly trimmed, and they were also known to wear eyeliner. All of this preening was said to make Vikings a rather attractive prospect to women in villages they raided, as other men of the era were somewhat reluctant to bathe.

6. Misconception: There Were No Viking Women.


An illustration of Lathgertha, legendary Danish Viking shieldmaiden. (Historica Graphica Collection/Heritage Images/Getty Images)

Considering the times, Vikings actually had a fairly progressive approach to gender roles. Women could own property, challenge any kind of marriage arrangement, and even request a divorce if things weren’t working out at home. To do so, at least as one story tells it, a woman had to ask witnesses to come over, stand near her bed, and watch as she declared a separation.

In addition to having a relatively high degree of independence, Viking women were also known to pick up a weapon and bash some heads on occasion. The historical record of a battle in 971 CE says that women had fought and died alongside the men. A woman who donned armor was known as a “shieldmaiden.” According to legend, over 300 shieldmaidens fought in the Battle of Brávellir in the 8th century and successfully kept their enemies at bay.

According to History, one of the most notable shieldmaidens was a warrior named Lathgertha who so impressed a famous Viking named Ragnar Lothbrok—he of the Hairy Breeches—that he became smitten and asked for her hand in marriage.

January 15th 2024

How Measuring Time Shaped History

From Neolithic constructions to atomic clocks, how humans measure time reveals what we value most.

Scientific American

  • Clara Moskowitz

Humans have tracked time in one way or another in every civilization we have records of, writes physicist Chad Orzel. In his book A Brief History of Timekeeping (BenBella Books, 2022), Orzel chronicles Neolithic efforts to predict solstices and other astronomical events, the latest atomic clocks that keep time to ever more precise decimals, and everything that came in between. He describes the evolution of clocks, from water clocks, which timed intervals by how long it took water to flow out of a container, to hourglasses filled with sand, to the first mechanical and pendulum clocks, to our modern era. Each episode is filled with interesting physics and engineering, as well as a glimpse at how different ways of keeping time affected how people lived their lives at various points and places in history.

Scientific American talked to Orzel about the coolest clocks in history, the most complicated calendar systems and why we still need to improve the best clocks of today.

[An edited transcript of the interview follows.]

How did the advent of clocks change history?

There’s an interesting democratization of time as you go along. The very most ancient monuments are things such as Newgrange in Ireland. It’s this massive artificial hill with a passage through the center. Once a year sunlight reaches that central chamber, and that tells you it’s the winter solstice. I’ve been there, and you can put 10 to 12 people in there, maybe. This is an elite thing where only a few people have access to this information. As you start to get things such as water clocks, that’s something that individual people can use to time things. They’re not superaccurate, but that makes it more accessible. Mechanical clocks make it even better, and then you get public clocks—clocks on church towers with bells that ring out the hours. Everybody starts to have access to time. Mechanical watches start to become reasonably accurate and reasonably cheap by the 1890s. They cost about one day’s wages. Suddenly everybody has access to accurate timekeeping all the time, and that’s a really interesting change.

Is it hard to track the advent of different kinds of clocks?

When people write about clocks in history, they use the same word for a bunch of different things. There’s a famous example: there was a fire in a particular monastery, and the record says some of the brothers ran to the well, and some ran to the clock. That tells you that it was a water clock because they’re going there to fill up buckets to put the fire out. There’s another reference that says a clock was installed above the rood screen of a church. If it’s 50 feet up in the air, that was probably not a water clock but a mechanical clock because nobody would make a device where you have to go up there and fill it with water.

What is your favorite clock from history?

I really like this Chinese tower clock. It was built in about A.D. 1100 by a court official, Su Song. It’s a water clock, based on a constant flow of water, but it’s a mechanical device. It’s this giant wheel that has buckets at the end of arms, and the bucket is positioned under this constant-flow water source. When it fills past a certain point, the bucket tips, and that releases a mechanism that lets the wheel rotate. The wheel turns and brings a new bucket that starts filling. The timing regulation is really provided by the water, which also provides the drive force. The weight of the water is what’s turning the wheel. It’s this weird hybrid between an old-school water clock and the mechanical clocks that would be developed in Europe a century or two later. It’s an amazingly intricate system, a monumental thing that worked incredibly well. But it didn’t last long—it ran for about 20 years. It was located in a capital city of a particular dynasty, which fell, and the successors couldn’t make it work.

It’s a neat episode in history. The origin of this is that Su Song was sent to offer greetings on the winter solstice to a neighboring kingdom. But the calendar was off by a day. He got there and gave his greeting on the wrong day, which would have been a huge embarrassment. When he got back, the calendar makers were punished, and he said, “I’m fixing this.”

How does a society’s way of keeping time reveal what it values?

Every civilization that we have decent records of has its own way of keeping time. It’s very interesting because there are all these different approaches. None of the natural cycles you see are commensurate with one another. A year is not an integer number of days, and it’s not an integer number of cycles of the moon. So you have to decide what you’re prioritizing over what else. You have systems such as the Islamic calendar, which is strictly lunar. They end up with a calendar that is 12 lunar months, which is short [compared with about 365 days in a solar year], so the dates of the holidays move relative to the seasons.

The Jewish calendar is doing complicated things because they want to keep both: they want holidays to be associated with seasons, so they have to fall in the right part of the year, but they also want them in the right phase of the moon.

The Gregorian calendar sort of splits the difference: We have months whose lengths are sort of based on the moon, but we fix the months, so the solstice is always going to be June 20, 21 or 22. We give the position of the year relative to the seasons priority above everything else.

Then you have the Maya doing something completely different. Their calendar involves this 260-day interval, and no one is completely sure why 260 days was so important.
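
[For a rough sense of the lunar–solar mismatch Orzel describes, here is a small illustrative calculation, not part of the interview. The year and month lengths are standard approximate values, and the 19-year figure is the Metonic cycle that luni-solar calendars, such as the Jewish one, exploit.]

```python
# Rough arithmetic behind the calendar trade-offs discussed above.
# Approximate astronomical values; illustrative only.
TROPICAL_YEAR_DAYS = 365.2422   # mean solar (tropical) year
SYNODIC_MONTH_DAYS = 29.5306    # mean lunar (synodic) month

lunar_year = 12 * SYNODIC_MONTH_DAYS
print(f"12 lunar months: {lunar_year:.2f} days")                                 # ~354.4 days
print(f"Annual drift vs. the sun: {TROPICAL_YEAR_DAYS - lunar_year:.2f} days")   # ~10.9 days,
# which is why holidays in a strictly lunar calendar migrate through the seasons.

# One luni-solar fix: 19 solar years almost exactly equal 235 lunar months
# (the Metonic cycle), so inserting 7 leap months every 19 years keeps
# lunar dates roughly aligned with the seasons.
print(f"19 solar years:   {19 * TROPICAL_YEAR_DAYS:.1f} days")
print(f"235 lunar months: {235 * SYNODIC_MONTH_DAYS:.1f} days")
```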

How accurate are clocks today?

The official time now is based on cesium: one second is 9,192,631,770 oscillations of the light emitted as cesium moves between two particular states.

I can’t believe you know that off the top of your head!

I’ve taught this a bunch of times [laughs].

Time is defined in terms of cesium atoms, so the best clocks in the world are cesium clocks. Cesium clocks are good to a part in 10¹⁶. If it says one second, there are 15 zeros after the decimal point before you get to the first uncertain digit. There are experimental clocks that are two, maybe three, orders of magnitude better than that. They are not officially clocks. They are measuring a frequency and measuring it to a better precision than the best cesium clocks.

Those are good enough that an aluminum ion clock did a test of relativity. [Researchers] held the ion fixed in one, and the other, they shook back and forth, and they could see that the one that was moving ticked a little slower. Then they held one in position and moved one about a foot higher, and they could see that the one at higher altitude ticked faster. They agreed perfectly with relativity.
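
[To put those numbers in everyday terms, here is an illustrative back-of-the-envelope calculation, not part of the interview. It uses round values and the standard weak-field approximation gΔh/c² for the gravitational shift; the actual experiments are far more careful.]

```python
# Illustrative arithmetic only; round numbers, not the actual lab analysis.
seconds_per_day = 86_400
seconds_per_year = 365.25 * seconds_per_day

# "A part in 10^16": the fractional uncertainty of the best cesium clocks.
frac = 1e-16
print(f"Possible drift per day: {frac * seconds_per_day:.1e} s")              # ~9 picoseconds
print(f"Years to drift by one second: {1 / (frac * seconds_per_year):.1e}")   # ~3e8 years

# Gravitational shift for a clock raised "about a foot," estimated as g*dh/c^2.
g, c, dh = 9.81, 2.998e8, 0.30   # SI units; a lift of roughly one foot is an assumed figure
print(f"Fractional shift for a 0.3 m lift: {g * dh / c**2:.1e}")              # ~3e-17,
# below cesium precision but within reach of the experimental clocks mentioned above.
```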

Is there a limit to how accurate clocks can ever be?

There’s a limit in the sense that there are a lot of things that affect the precision. Einstein’s general theory of relativity tells you that the closer you are to a large mass, the slower your clock will tick. At some point, you’re sensitive to the gravitational attraction of graduate students coming into and out of the lab. At that point, it becomes impractical.

This is already an issue because the atomic time for the world is a consensus of atomic clocks all over the world. Here in the U.S., there are two big standards labs: one is the Naval Observatory in Washington D.C., around sea level, and the other is in Boulder, Colo., about a mile up. Their cesium clocks tick at different rates because they’re at different distances from the center of the earth. They have to take that into account. So we kind of made an unwise choice locating this in Boulder.
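
[The size of that Boulder-versus-sea-level effect can be roughed out the same way, with gΔh/c² and round numbers; this is an illustrative sketch, not the standards labs’ actual correction procedure.]

```python
# Rough estimate of the altitude effect on clock rates (illustrative only).
g = 9.81          # m/s^2, surface gravity
c = 2.998e8       # m/s, speed of light
dh = 1609.0       # m, roughly "about a mile up" (an assumed round figure)

shift = g * dh / c**2
seconds_per_year = 365.25 * 24 * 3600
print(f"Fractional rate difference: {shift:.1e}")                                    # ~1.8e-13
print(f"Offset after one year: {shift * seconds_per_year * 1e6:.1f} microseconds")   # ~5.5
# A few thousand times larger than a cesium clock's part-in-10^16 uncertainty,
# which is why the altitude difference has to be taken into account.
```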

After all you know about clocks, I have to ask what kind of watch you wear.

I have two: One is a quartz watch—nothing special. I’ve probably spent more replacing the band on it several times than I spent on the watch. The other I have is a mechanical watch from the 1960s. It’s an Omega, purely mechanical watch. It’s an incredibly intricate watch, a marvel of engineering, but I can go to a dollar store and buy a watch that can keep time just as well because quartz is so accurate.

Clara Moskowitz is a senior editor at Scientific American, where she covers astronomy, space, physics and mathematics. She has been at Scientific American for a decade; previously she worked at Space.com. Moskowitz has reported live from rocket launches, space shuttle liftoffs and landings, suborbital spaceflight training, mountaintop observatories and more. She has a bachelor’s degree in astronomy and physics from Wesleyan University and a graduate degree in science communication from the University of California, Santa Cruz.

January 4th 2024

Inside the ‘ghost ships’ of the Baltic Sea

Francesca Street, CNN

“Ghost ships”: In the icy waters surrounding Scandinavia, divers Jonas Dahm and Carl Douglas explore and photograph wrecks long lost to the ocean, the so-called “ghost ships” of the Baltic Sea. This picture is of an unidentified shipwreck dating possibly from the 17th or 18th centuries. (Jonas Dahm)

Underwater photography: On dives, Dahm captures haunting photographs, including this one of an unidentified Russian submarine from the First World War. Dahm and Douglas also extensively research the wrecks they explore.

Unknown stories: The ship Liro was built in 1876 and sank in 1931; why it went down remains unknown. For Dahm and Douglas, such mysteries are always intriguing.

Photos and reflections: This chronometer, a type of clock, was photographed on the German steamer Otto Cords, which sank in the Second World War.

“History coming to life”: This photograph is of Svärdet, a Swedish navy warship that sank in 1676. Bronze cannons can still be seen on and around the wreck. “Diving around Svärdet was one of the greatest underwater experiences of my life,” writes Douglas in the book. “I had an overwhelming sense of history coming to life.”

Capturing atmosphere: Dahm and Douglas and their team of divers use flashlights to illuminate details for the photos. They also try to capture the atmosphere of the murky waters. Pictured here: the SMS Prinz Adalbert, another German ship that sank during the First World War. The ship was broken in two, and this picture depicts what remains of its stern.

Capturing details: The 19th-century steamer Astrid is in “poor condition,” writes Douglas, but its figurehead remains intact.

Plunging into the icy waters surrounding Scandinavia, divers Jonas Dahm and Carl Douglas hunt for vessels long lost to the ocean, what they call the “ghost ships” of the Baltic Sea.

Dahm and Douglas are history lovers and long-time friends who’ve devoted some 25 years of their lives to wreck hunting and research.

While many of the barnacle-clad vessels claimed by the Baltic Sea have lain in wait for centuries, some are in remarkably good condition due to the preservative effects of the water’s chilly temperatures.

On dives, Dahm captures haunting photographs. Intact ship furniture, detailed interior wall carvings and an only-slightly-cracked ship’s clock have all been snapped on the seabed.

Dahm and Douglas also spend hours poring over books, researching the wrecks’ histories.

A selection of Dahm’s eerie photographs, paired with Douglas’ written reflections, feature in the book “Ghost Ships of the Baltic Sea,” published by Swedish publisher Bokförlaget Max Ström.

In the depths of the sea

Dahm took this photograph of a cod swimming past SMS Undine, a German cruiser that served as a naval vessel during the First World War. The ship sank in the southern Baltic in 1915. (Jonas Dahm)

The Baltic Sea has been a center for seafaring activity for centuries – from maritime trade to maritime conflict. Inevitably, that means a long history of ships claimed by the waves.

In the book, Douglas writes there are “tens of thousands of intact, undisturbed shipwrecks from every era” submerged in the Baltic’s watery depths.

“There still are many that haven’t been found yet,” Dahm tells CNN Travel.

The Baltic Sea’s potential wealth of well-preserved wrecks makes it the home of “the best diving in the world,” says Douglas.

Dahm and Douglas first met in the late 1990s through mutual diving friends in Stockholm, Sweden. Dahm’s been diving since he was a teenager, honing his underwater photography skills during his compulsory military service.

In contrast, Douglas admits he avoided the ocean for a long time.

“I was scared of water – but things that scare often also fascinate,” he tells CNN Travel. After a bit of persuasion from friends, he gave diving a shot.

“After that I was hooked,” he says, while admitting he still gets seasick on boats.

Photographing underwater

Dahm took this photograph of the interior of what was once a passenger cabin on board the Aachen, a 19th century steam ship that sank in the First World War after becoming a German navy vessel. (Jonas Dahm)

Each of Dahm’s images is bathed in a viridescent ocean hue, but key details are also illuminated, as Dahm moves between capturing the heft of the lost ships and zooming in on haunting details.

In the book, Douglas writes about how the divers use “darkness to our advantage.”

“Balancing the natural light from the surface with flashlights and covering as much as possible of the wreck site is the goal,” he says.

Photographer Dahm, who uses Nikon D850 and Fujifilm GFX 100s medium format cameras, works with other divers to maximize time spent under the sea.

“To take the big wide-angle pictures we sometimes are two to three divers working together, while I usually can handle close-up images by myself,” he explains.

This is a close-up shot of a ship’s chronometer, a type of clock, on one of the wrecks. (Jonas Dahm)

As well as light, there are other challenges associated with photographing wrecks – the cold, of course, and the sometimes-opaque visibility.

And there are the depths, which mean Dahm and Douglas and their team can’t ever linger. The ships featured in the book are submerged as deep as 110 meters (360 feet) under the sea.

“At that depth you don’t have very much time to take the photos the way you want,” says Dahm.


Unknowable stories

Remnants of the steam ship Rumina. (Jonas Dahm)

Douglas and Dahm plan trips to particular sites to see particular ships. They get tips from local fishermen, and other times follow in the footsteps of other divers.

But sometimes “a wreck will appear almost randomly on our echo sounder,” as Douglas puts it in the book. Dahm and Douglas love spending time researching the history of their discoveries – particularly when they stumble across anonymous ships, story unknown.

For Dahm, it’s one of these more mysterious wrecks that stands out to him. He calls the ship the “porcelain wreck” because it’s still home to treasures including violins, clay pipes and pocket watches – and yes, several pieces made of porcelain.

“We don’t know its name, why it sank or where it was headed. All we know is that the ship had a valuable cargo and that it did not reach its destination,” says Dahm.

Dahm and Douglas are careful not to damage the wrecks during their exploration. They’re passionate about preserving the sea and marine life. They also approach their photography and their book with respect and care.

“In many cases they represent disasters where people lost their lives under terrible circumstances,” writes Douglas in the book. “We visit these sites with enormous respect, and we do it to honor the victims and tell the story of what happened.”

The book is led by Dahm’s photographs, but Douglas’ accompanying text brings many of the stories to life. He says he wanted his writing to offer insight – there’s input from experts like Dr. Fred Hocker, director of research at Stockholm’s Vasa maritime museum – but the writing also leaves room for questioning and reflecting.

“The writing follows the images,” says Douglas. “We want the reader to really feel the wrecks. Sometimes too much information can ruin that.”

And while the divers love to discover answers to their questions in their research, they also accept not every story is knowable, and find something strangely satisfying in that unknown.

“Sometimes we have to resign ourselves that we will never know the full history – but these mystery wrecks are also very attractive,” says Douglas.

“I will probably never know the answers to all these questions, but it’s okay, most shipwrecks will never reveal their secrets anyway,” says Dahm.

December 12th 2023

A Pint for the Alewives

Until the Plague decimated Europe and reconfigured society, brewing beer and selling it was chiefly the domain of the fairer sex.

A woman proffers a jug of ale to a man in the street from her 'house of shame', in an allegorical 19th century woodcut.

Getty

By: Akanksha Singh

December 5, 2023


As the old sexist saw goes, “Beer is a man’s drink.” Yet, until the fourteenth century, women dominated the field of beer brewing. And the alewife, as she was known, was responsible for a high proportion of ale sales in Europe.


“Ale was virtually the sole liquid consumed by medieval peasants,” writes Judith M. Bennett in a chapter in the edited volume, Women and Work in Preindustrial Europe. “Water was considered to be unhealthy, [so] each household required a large and steady supply of this perishable item.”

Sometimes referred to as a “brewster” (female brewer), the alewife made the drink as part of a typical peasant diet, which also included bread, soups, meat, legumes, and seasonal produce. While most villages depended on local bakers to prepare bread, “the skills and equipment required for brewing,” explains Bennett, “were readily available in many households.” This included “large pots, vats, ladles, and straining cloths,” implements found even in the poorest households. In other words: anyone who had the time could potentially make and sell ale.

Ale production was time-consuming, and the drink soured within days. The grain needed to make it, usually barley, “had to be soaked for several days, then drained of excess water and carefully germinated to create malt,” writes Bennett. The malt was dried and ground, and then added to hot water for fermentation, following which the wort—the liquid—was drained off and “herbs or yeast could be added as a final touch.”

Most households alternated between making their own ale and buying from and selling to neighbors. Women—wives, mothers, the unmarried, and the widowed—largely oversaw these transactions, writes Christopher Dyer.

“Ale selling was an extension of a common domestic activity: many women brewed for their own household’s consumption, so producing extra for sale was relatively easy,” Dyer explains.

Ale-making was a revolutionary trade for women. “We have heard much in the recent past about the weak work-identity of women, … [how] women were/are dabblers; they fail to attain high skill levels [and] they abandon work when it conflicts with marital or familial obligations,” writes Bennett. But for women of the Middle Ages, making ale was “both practical and rational.” It allowed married women to contribute to household incomes and offered both single women and widows a means to support themselves. This was true, for example, in the English villages of Redgrave and Rickinghall, about 100 miles northeast of London, where records suggested that ale sellers were both poor and single or widowed.

Further west, records from the manorial court of Brigstock show the domestic industry of ale-making to be entirely female dominated.

“The high proportion of women known to have sold ale suggests that all adult women were skilled at brewing ale, even if only some brewed ale for profit,” writes Bennett.

The records that allow us such a close look at ale-making exist in part due to the Assize of Bread and Ale, English regulations from the thirteenth century that created standards of measurement, quality, and pricing for these goods. Since a large number of people sold ale “unpredictably and intermittently,” writes Bennett, “triweekly presentments by ale-tasters” to regulate quality were necessary. “Dominating” the ale trade in the village of Brigstock, according to Bennett, however, was an “elite group” of thirty-eight brewers—alewives—who were “frequently [supplementing] their household economies.”

What’s more, these alewives, particularly in Brigstock, “faced almost no significant male competition,” notes Bennett. “Only a few dozen ale fines were assessed against Brigstock males, and all such men were married to women already active in the ale market.” In the Midlands manor of Houghton-cum-Wyton, on the other hand, some eleven percent of fines were levied against men, while in the manor of Iver in Buckinghamshire, a whopping 71 percent of fines were charged to male brewers.

Broadly, in England around 1300, “a high proportion of women,” writes Dyer, around “a dozen or two in most villages and 100 in larger towns” brewed ale for sale each year. It’s unclear how much the women of Brigstock women earned, on average, through ale-making, but Bennett notes that “the high proportion of women known to have sold ale suggests that all adult women were skilled at brewing ale, even if only some brewed ale for profit.” These alewives weren’t affluent, writes Bennett; they largely came from households “headed by men [of]… modest influence.” In fact, in Brigstock, 74 percent of women were identified as ale “wives” throughout their brewing careers (meaning they were married and unwidowed). In other words, working in the ale trade here wasn’t linked to social status; wives from all backgrounds contributed significantly to household income.

Alewives remained a key part of the production line until roughly 1350, when the Plague decimated communities throughout Europe. After that, male brewers grew in number to meet demand. That doesn’t mean women altogether abandoned the business; those who were linked to a man—as wives or as widows—endured until constraints curtailed their roles, notes historian Patricia T. Rooke. By the 1370s, beer brewing in England was predominantly male.

“That of ‘huswyffe’ (housewife) became valorized at the expense of applewife, alewife, fishwife, or for that matter, glassblower, miller, auctioneer, bricklayer, nun, and prioress,” Rooke writes.

Diverging historical timelines have dispelled the myth that alewives, with their bubbling cauldrons, were hunted alongside witches at the time. (There was a time when alewives were persecuted for being financially independent women, after the Babylonian Hammurabi Code from 1755 BC decreed the death penalty for alewives who insisted on payment in silver rather than in grain.) Still, the alewife lives on in literature, at least. There is Siduri, who dissuades the title character in the Epic of Gilgamesh from continuing his quest to find eternal life, urging him to find happiness in his current world. And in the induction of Shakespeare’s The Taming of the Shrew, the character Christopher Sly mentions “Marian Hacket, the fat alewife of Wincot,” as he drunkenly calls for more drink.

Today, women brewers continue to carve a space for themselves in the field, hearkening to the unsung role they played through the Victorian era. If beer is indeed a “man’s drink,” it’s really only thanks to women.





November 8th 2023


The Great Fire of London by Josepha Jane Battlehooke (1675). Museum of London

Great Fire of London: how we uncovered the man who first found the flames

Published: October 31, 2023 4.47pm GMT

Author

  • Kate Loveman, Professor of Early Modern Literature and Culture, University of Leicester

Disclosure statement

Kate Loveman received funding from the Arts and Humanities Research Council for this research.



If you had been in London on September 2 1666, the chances are you’d remember exactly where you were and who you were with. This was the day the Great Fire began, sweeping across the city for almost five days.

The Museum of London is due to open a new site in 2026. And in preparation for this, curators of the Great Fire gallery decided to examine the stories of everyday Londoners.

As I’d been working with the museum on a project about teaching the Great Fire in schools, I was asked by Meriel Jeater, curator of the Great Fire displays, if I could help research the lives of these Londoners. Top of our list for investigation were the residents of Thomas Farriner’s bakery in Pudding Lane, where the fire began.

There has been lots of excellent work on the Great Fire but, because of ambiguities in the surviving sources, historians have different conclusions about who was in the bakery. Farriner, his wife, children and anonymous servants were among the people mentioned in modern accounts. But it was quickly clear I needed to go back to the manuscript evidence to find answers.


Two types of official investigation into the fire’s causes were carried out in 1666: a parliamentary enquiry and the trial of Robert Hubert, a Frenchman who had falsely confessed to starting the blaze. While you might think Londoners would have a keen interest in who was there at the start of the fire, the surviving accounts of exactly who was present are fragmentary.

Full reports from the enquiries were not published. Meanwhile, most writers at the time were, understandably, much more concerned with the fire’s destructive power than describing its beginnings.

As a result, the clearest account of events in the bakery is in a letter from an MP, Sir Edward Harley, reporting what he had heard. It’s now in the British Library, and was written in October 1666, when the two investigations into the fire were underway:

The Baker of Pudding Lane in whose hous ye Fire began, makes it evident that no Fire was left in his Oven … that his daughter was in ye Bakehous at 12 of ye clock, that between one and two His man was waked with ye choak of ye Smoke, the fire begun remote from ye chimney and Oven, His mayd was burnt in ye Hous not adventuring to Escape as He, his daughter who was much scorched, and his man did out of ye Windore [window] and Gutter.

Narrowing down the suspects

Other details in Harley’s letter suggested he was reliably reporting what he’d learned. The letter provides a list of bakery residents: Thomas Farriner, his unmarried daughter (Hanna), his “man” (meaning trained workman, aka journeyman) and his maid, who died. Other reports don’t mention the “man” or maid, but put Farriner’s son in the bakery.

A document in the London Metropolitan Archives provided more clues. This records the charges against Robert Hubert and – crucially – the names of seven witnesses against him. At the end were: “Thomas Farriner senior, Hanna Farriner, Thomas Dagger, Thomas Farriner Junior”.

The Great Fire of London, with Ludgate and Old St Paul’s, artist unknown (c.1670). Yale Centre for British Art

Given that Thomas Dagger was sandwiched between the Farriners, Jeater and I suspected that he might be an unrecognised member of the household. Testing this theory, I was able to establish that the indictment’s list of names began with two men who had heard Hubert’s confession and a third who possibly had. The later names, starting with Thomas senior, appeared to be people who could testify to circumstances in the bakery.

This was exciting, because Thomas Dagger looked like a candidate for Farriner’s “man” in Harley’s account, potentially putting a name to the first reported witness of the Great Fire. Searching online archives, I could see there was a baker named Thomas Dagger running a business in Billingsgate after the fire and having many children. But we needed evidence to put Dagger in Pudding Lane.

Sleuthing in the archives

Fortunately, the Bakers’ Company records had not gone up in smoke like so many other guild documents did in September 1666. So I went sleuthing at the Guildhall Library, comparing Bakers’ company information on Farriner’s workforce to names on the indictment.

After much squinting at microfilms, this produced firm evidence for two young men. Thomas Farriner junior had joined the Bakers’ Company in 1669, claiming that right through his father. I was delighted to find Thomas Dagger had indeed worked for Farriner too. He came from Norton in Wiltshire and had been apprenticed to another baker in 1655 before serving out his apprenticeship at Pudding Lane.

A sign commemorating the starting place of the Great Fire. Paula French/Shutterstock

That nine-year apprenticeship (unusually long) had ended in 1664, so by the time of the fire he was staying on, working unofficially as a journeyman. Of all the names on the indictment, Dagger most clearly matched the description of the man who first discovered the fire.

Continuing the 17th-century investigations into the Great Fire was intriguing, but it’s how the bakery residents’ stories are told that matters. One of the great things about having the life stories of people such as Thomas Dagger is that it will help make the history of London more relevant to young visitors.

For example, if you’re a school child from Wiltshire learning about the Great Fire, Thomas Dagger’s presence in the bakery suddenly makes that national history part of your local history.

As a low-status journeyman, Dagger’s name wasn’t memorable to people in 1666 – he’s barely mentioned in the sources. But the hope is that he and others like him might become memorable to visitors to the new London Museum.

The new research will enable the displays to better represent the Farriner household and provide a fuller understanding of this pivotal moment in London’s history.




September 23rd 2023

With Speed and Agility, Germany’s Ar 234 Bomber Was a Success That Ultimately Failed

Only one is known to survive today and it is in the collections of the Smithsonian’s National Air and Space Museum.

Smithsonian Magazine

  • David Kindy


Arado Ar 234 on display at the National Air and Space Museum

Public Domain/Wikimedia Commons

On Christmas Eve, 1944, American forces were dug in around Liege in Belgium and prepared for just about anything. Eight days earlier, four German armies had launched a surprise attack from the Ardennes Forest, using one of the coldest and snowiest winters in European history as cover from Allied air superiority.

The Nazis smashed through stretched-thin defensive positions and were pushing towards the port of Antwerp to cut off Allied supply lines in what would become known as the Battle of the Bulge.

With savage fighting on many fronts, American troops at Liege were on high alert in case the Germans tried something there—though they didn’t expect what happened next. With the weather clearing, aircraft from both sides were flying once again. High above the Belgian city came the sound of approaching planes. The engine rumble from these aircraft was not typical, though.

Rather than the reverberating growl of piston-driven engines, these aircraft emitted a smooth piercing roar. They were jets, but not Messerschmitt Me 262s, history’s first jet fighter. These were Arado Ar 234 B-2s, the first operational jet bomber to see combat. Nine of them were approaching a factory complex at Liege, each laden with a 1,100-pound bomb.

Luftwaffe Captain Diether Lukesch of Kampfgeschwader (Bomber Wing) 76 led the small squadron on the historic bombing run. Powered by two Jumo 004 B4-1 turbojet engines, the sleek planes zoomed in to drop their payloads and then quickly soared away. They were so fast that Allied fighters could not catch them.

Often overshadowed by more famous jets in World War II, the Ar 234 B-2—known as the Blitz, or Lightning—had caught the Allies by surprise when the nine soared through the skies on December 24, 1944.

History’s first operational jet bomber was designed and built by the Arado company. The plane originally began service as a scout aircraft. One had flown reconnaissance over Normandy, snapping photos of supply depots and troop movements, just four months earlier. But reconfigured as a bomber and operated by one pilot, who also served as bombardier, the Blitz was fast and agile. It easily eluded most Allied aircraft with its top speed of 456 miles per hour. The Germans also created two other versions of the aircraft—a night fighter and a four-engine heavy bomber—but neither made it into full production.

The Allies were keen to capture the Ar 234 so it could be studied, but it was not until the end of the war that they finally got their hands on a handful of them.

Only one is known to survive today and it is on display at the Smithsonian National Air and Space Museum’s Udvar-Hazy Center in Chantilly, Virginia.

“The Allies were collecting all this German technology after the war,” says curator Alex Spencer. “The Army Air Force spent a good five to six years really studying what these aircraft were capable of doing—both their strengths and weaknesses. Some of the aerodynamic aspects of the Ar 234 and other jets were undoubtedly taken advantage of for some of our early designs, like the F-86 Sabre and other planes.”

The one-person bomber was innovative with its canister fuselage and tricycle landing gear while an autopilot system guided the aircraft on bombing runs and periscope bombing sights allowed for precision attacks. The Ar 234 was at least 100 miles per hour faster than American fighter planes, which could never catch the jet bomber in pursuit. But Allied pilots eventually realized the Blitz was especially vulnerable to attack at slower speeds during takeoff and landing.

Captain Don Bryan of the American Air Force was unsuccessful in his first three attempts to shoot down the Blitz, but he remained determined to score a kill. He finally did in March 1945 when he spotted one making a bombing run on the Ludendorff Bridge at Remagen in an attempt to prevent American forces from crossing the Rhine into Germany.

When he saw the jet bomber slow in a tight turn after dropping its payload, Bryan jumped at his chance. Blasting away with the .50-caliber machineguns in his P-51 Mustang, he knocked out one of the jet engines and was then able to get behind the bomber and shoot it down. Bryan’s was the first air-to-air kill of a Blitz.

While the Ar 234 is historic, its effectiveness as a jet bomber is questionable. According to Spencer, it arrived too late in the war and in too few numbers to have any significant impact and was rushed off the drawing board too soon with too many flaws. At best, it was an experimental aircraft that needed more conceptual consideration before it was pressed into service. All told, only a few hundred Ar 234 B-2s were produced with several dozen making it into combat.

“As with most of the so-called German ‘Wonder Weapons,’ they make me wonder,” Spencer says. “Everyone is enamored with them, but they really don’t live up to expectations. Same with the Blitz. It had its good points and bad points. As an attack bomber, it was not that effective.”

During a 10-day period in March 1945, the Luftwaffe flew 400 sorties against the bridge at Remagen in an effort to slow the Allied advance. Ar 234 B-2s from KG 76, as well as other German aircraft, made repeated attacks on the river crossing. All bombs missed their target.

“They made several runs at Remagen and they couldn’t hit the thing,” Spencer says. “It was such a squirrelly aircraft to fly and pilots weren’t used to it. They were learning to fly at speeds they were not used to and their timing was off. It’s new and interesting technology but as far as being a game-changer, I don’t buy the argument.”

The Smithsonian’s Ar 234 was captured by the British in Norway with the surrender of Nazi Germany on May 7, 1945. It had been flown there in the waning days of the war for safekeeping. The English turned over this plane, Werk Nummer 140312, to the Americans, who eventually flew it to a U.S. research facility. In 1949, the U.S. Air Force donated it along with other German aircraft to the Smithsonian. This Blitz had gone through extensive changes so American test pilots could fly it, and the museum undertook a major restoration effort in 1984 to get the Ar 234 back to its wartime condition.

“It was a basket case,” Spencer says. “It took two guys working on it almost five years to restore it. The jet engines and some of the navigational systems had been swapped out during testing, but our staff was able to replace most of that. Some 13,200 man/hours went into returning it to original condition. We’re still missing a few parts, but it’s about as close to 1945 as it can be.”

The restored Ar 234 went on view when work was completed in 1989. News media around the globe reported on the historic aircraft going on display, including a German-language aviation magazine.

In 1990, Willi Kriessmann happened to be leafing through that publication when he spotted the article. The German native, then living in California, read the report with interest and stopped short when he saw the Ar 234’s serial number: 140312. It looked very familiar, so he went and checked his papers from when he flew as a World War II Luftwaffe pilot.

“Out of sheer curiosity, I looked up my logbook, which I saved throughout all the turbulence of the war,” he wrote in his unpublished memoir, which eventually he donated to the Smithsonian. “Eureka! The same serial number!”

Just before Germany’s surrender, Kriessmann had transported the jet to several locations around Germany before flying it to Norway, where both he and the plane were captured by the British. He contacted the Smithsonian and sent along copies of his flight book for authentication. He was invited to the museum to see the Ar 234 again, where he was welcomed by Don Lopez, then deputy director of the National Air and Space Museum.

“I finally faced ‘my bird’ on May 11, 1990,” he wrote. “It was a very emotional reunion.”

The Blitz was moved to the museum’s Udvar-Hazy Center when the huge expansion site opened in 2003. Kriessmann visited once again at that time. In his memoir, he states how he was saddened that many of his fellow pilots would not be able to join him at the Smithsonian because they had not survived the war. But he was grateful his plane had.

“The (future of the) Ar 234 is now assured, I hope, at least for a while. Maybe eternity,” wrote Kriessmann, who died in 2012.

David Kindy is a former daily correspondent for Smithsonian. He is also a journalist, freelance writer and book reviewer who lives in Plymouth, Massachusetts. He writes about history, culture and other topics for Air & Space, Military History, World War II, Vietnam, Aviation History, Providence Journal and other publications and websites.

June 19th 2023

How the 18th-Century Gay Bar Survived and Thrived in a Deadly Environment

Welcome to the molly house.

Atlas Obscura

  • Natasha Frost

Like men’s club houses, molly houses were also places people went simply to socialize, gossip, drink and smoke. Photo from the Wellcome Collection/CC BY 4.0.

In 1709, the London journalist Ned Ward published an account of a group he called “the Mollies Club.” Visible through the homophobic bile (he describes the members as a “Gang of Sodomitical Wretches”) is the clear image of a social club that sounds, most of all, like a really good time. Every evening of the week, Ward wrote, at a pub he would not mention by name, a group of men came together to gossip and tell stories, probably laughing like drains as they did so, and occasionally succumbing to “the Delights of the Bottle.”

In 18th and early-19th-century Britain, a “molly” was a commonly used term for men who today might identify as gay, bisexual or queer. Sometimes, this was a slur; sometimes, a more generally used noun, likely coming from mollis, the Latin for soft or effeminate. A whole molly underworld found its home in London, with molly houses, the clubs and bars where these men congregated, scattered across the city like stars in the night sky. Their locale gives some clue to the kind of raucousness and debauchery that went on within them—one was in the shadow of Newgate prison; another in the private rooms of a tavern called the Red Lion. They might be in a brandy shop, or among the theaters of Drury Lane. But wherever they were, in these places, dozens of men would congregate to meet one another for sex or for love, and even stage performances incorporating drag, “marriage” ceremonies, and other kinds of pageantry.


A 19th-century engraving of London as it was in the later days of the molly house. Photo from the Wellcome Collection/Public Domain.

It’s hard to unpick exactly where molly houses came from, or when they became a phenomenon in their own right. In documents from the prior century, there is an abundance of references to, and accounts of, gay men in London’s theaters or at court. Less overtly referenced were gay brothels, which seem harder to place than their heterosexual equivalents. (The historian Rictor Norton suggests that streets once called Cock’s Lane and Lad Lane may lend a few clues.) Before the 18th century, historians Jeffrey Merrick and Bryant Ragan argue, sodomy was like any other sin, and its proponents like any other sinners, “engaged in a particular vice, like gamblers, drunks, adulterers, and the like.”

But in the late 17th century, a certain moral sea change left men who had sex with men under more scrutiny than ever before. Part of this stemmed from a fear of what historian Alan Bray calls the “disorder of sexual relations that, in principle, at least, could break out anywhere.” Being a gay man became more and more dangerous. In 1533, Henry VIII had passed the Buggery Act, sentencing those found guilty of “unnatural sexual act against the will of God and man” to death. In theory, this meant anal sex or bestiality. In practice, this came to mean any kind of sexual activity between two men. At first, the law was barely applied, with only a handful of documented cases in the 150 years after it was first passed—but as attitudes changed, it began to be more vigorously applied.


Men found guilty of buggery would be sentenced to death by hanging, with members of the public congregating to watch their execution. Photo from the Public Domain.

The moral shift ushered in a belief that sodomy was more serious than all other crimes. Indeed, writes Ian McCormick, “in its sinfulness, it also included all of them: from blasphemy, sedition, and witchcraft, to the demonic.” While Oscar Wilde might call homosexuality “the love that dare not speak its name,” others saw it as a crime too shocking to name, with “language … incapable of sufficiently expressing the horror of it.” Other commenters of the time, trying to wrangle with the idea, seem incapable of getting beyond the impossible question of why women would not be sufficient for these men:

“’Tis strange that in a Country where
Our Ladies are so Kind and Fair,
So Gay, and Lovely, to the Sight,
So full of Beauty and Delight;
That Men should on each other doat,
And quit the charming Petticoat.”

In this climate, molly houses were an all-too-necessary place of refuge. Sometimes, these were houses, with a mixture of permanent lodgers and occasional visitors. Others were hosted in taverns. All had two things in common—they were sufficiently accessible that a stranger in the know could enter without too much hassle; and there was always plenty to drink.

Samuel Stevens was an undercover agent for the Society for the Reformation of Manners, a religious organization that vowed to put a stop to everything from sodomy to sex work to breaking the Sabbath. In 1724, he led a police constable to one such house on Beech Lane, where the brutalist Barbican Estate is today. “There I found a company of men fiddling and dancing and singing bawdy songs, kissing and using their hands in a very unseemly manner,” the policeman, Joseph Sellers, is recorded as having said in court. Elsewhere, the men sprawled across one another’s laps, “talked bawdy,” and took part in country dances. Others peeled off to other rooms to have sex in what they believed to be relative safety.

Sex may have been at the root of the matter, but everything around it—the drinking, the flirting, the camaraderie—was every bit as important. In the minds of the general public, molly houses were dens of sin. But, for their regular customers, writes Bray, “they must have seemed like any ghetto, at times claustrophobic and oppressive, at others warm and reassuring. It was a place to take off the mask.” Here, men pushed to the fringes of society could find their community.

It’s probable that a certain amount of sex work took place, though how much money really changed hands is hard to tease out. “Homosexual prostitution was of only marginal significance,” writes Norton. Instead, “men bought their potential partners beer.” More moralizing commentators mistook consensual sexual activity for more mercenary sex work, using “terms such as ‘He-Strumpets’ and ‘He-Whores’ even for quite ordinary gay men who would never think of soliciting payment for their pleasures.”

image2.jpg

An 18th-century engraving, A Morning Frolic, or the Transmutation of the Sexes, shows two people in various stages of drag. Photo from the Public Domain.

Because so many accounts of these molly houses came from people like the agent, Stevens, or the journalist, Ward, who wanted the houses closed and their customers sanctioned, they tend towards the salacious. Many focus on sexual activity or on particular aspects of this nascent gay culture that seemed shocking or foreign to these unwanted observers. One account delivered in the court known as the Old Bailey creates a vivid picture of an early drag ball. “Some were completely rigged in gowns, petticoats, headcloths, fine laced shoes, befurbelowed scarves, and masks; some had riding hoods, some were dressed like milkmaids, others like shepherdesses with green hats, waistcoats, and petticoats; and others had their faces patched and painted and wore very extensive hoop petticoats, which had been very lately introduced.” In a slew of costumes and colors, they danced, drank, and made merry.

Drag seems to have been common in molly houses, sometimes veering into kinds of pantomime that don’t quite have a modern equivalent. In his article about the Mollies Club, Ward describes how the men, who called one another “sister,” would dress up one of their party in a woman’s night-gown. Once they were in costume, this person then pretended to be giving birth, he wrote, and “to mimic the wry Faces of a groaning Woman.” A wooden baby was brought forth, everyone celebrated, and a pageant baptism took place. Others dressed up as nurses and midwives—they would crowd around the baby, “being as intent upon the Business in hand, as if they had been Women, the Occasion real, and their Attendance necessary.” A celebratory meal of cold tongue and chicken was served, and everyone feasted to commemorate their bouncing new arrival, which was said to have “the eyes, Nose, and Mouth of its own credulous Daddy.”

image3.jpg

Men of all social classes mingled in the molly house. Photo from the Wellcome Collection/CC BY 4.0.

At one of the most famous molly houses, owned by Margaret Clap, couples slipped into another room, two-by-two, to be “married.” In “The Marrying Room,” or “The Chapel,” a marriage attendant stood by, guarding the door to give the happy couple some privacy as they made use of the double bed. (“Often,” Norton writes, “the couple did not bother to close the door behind them.”) At the White Swan, on Vere Street, John Church, an ordained minister, performed marriage ceremonies for those who wanted them—though it’s unknown whether these nuptials were expected to last more than a single evening.

The openness that allowed men seeking sex with men to find molly houses exposed them to the risk of raids, and subsequent convictions. Police constables would occasionally swarm into the festivities, rounding up the men they found there and hauling them off to prison, where they would await trial. In the mid-1720s, there was a rash of these raids, leading many men to the gallows. Embittered informants who were intimately familiar with the molly houses would allow police to pose as their “husbands” in order to infiltrate the club. These constables would be welcomed into the fold—then return later to close it up for good and use what they had seen against the members.

image4.jpg

A 1707 broadsheet illustration shows two men embracing. On the left, one cuts his throat when his friend is hanged. On the right, a man is cut down from the gallows. Photo from the Public Domain.

In 1810, one of the most famous raids took place on the White Swan. It had barely been open six months when police stormed the place, arresting about 30 of the people they found inside. A violent mob, of which the women were apparently “most ferocious,” surged around the prisoners’ coaches as they made their way to the watch house. Two were sentenced for buggery, and put to death. A larger number, convicted of “running a disorderly house,” were mostly sentenced to many humiliating hours in the pillory, where members of the public could throw things at them. Three months after the original raid, these men were led through the streets and out onto the pillory. As they went, thousands of people formed a seething crowd, hurling whatever they could at them—dead cats, mud pellets, putrid fish and eggs, dung, potatoes, slurs. The men began to bleed profusely from their wounds. After an hour in the pillory, about 50 women formed a ring around them, pelting them with object after object. One man was beaten until he was unconscious.

These raids forced molly houses, and men who had sex with men, deeper underground. The subculture didn’t die out, but the establishments often did, or became harder to find. And then, in 1861, a sliver of light—the death penalty for sodomy or buggery was abolished, replaced by imprisonment; the nebulous new offense of “gross indecency” followed in 1885. (It was for this charge that Oscar Wilde was imprisoned for two years in 1895.) It would be another 106 years after the 1861 reform before sex between men was decriminalized altogether in England and Wales, by which time the molly house was all but forgotten, and its patrons nearly scrubbed from the history books.

May 10th 2023

What Happens When You Kill Your King

After the English Revolution—and an island’s experiment with republicanism—a genuine restoration was never in the cards.

By Adam Gopnik

April 17, 2023

A man on stage holding a crown in one hand and a decapitated head in the other.

Jonathan Healey’s “The Blazing World” sees both sectarian strife and galvanizing political ideas in the civil wars—and in the fateful conflict between Cromwell and Charles I. Illustration by Wesley Allsbrook

Amid the pageantry (and the horrible family intrigue) of the approaching coronation, much will be said about the endurance of the British monarchy through the centuries, and perhaps less about how the first King Charles ended his reign: by having his head chopped off in public while the people cheered or gasped. The first modern revolution, the English one that began in the sixteen-forties, which replaced a monarchy with a republican commonwealth, is not exactly at the forefront of our minds. Think of the American Revolution and you see pop-gun battles and a diorama of eloquent patriots and outwitted redcoats; think of the French Revolution and you see the guillotine and the tricoteuses, but also the Declaration of the Rights of Man. Think of the English Revolution that preceded both by more than a century and you get a confusion of angry Puritans in round hats and likable Cavaliers in feathered ones. Even a debate about nomenclature haunts it: should the struggles, which really spilled over many decades, be called a revolution at all, or were they, rather, a set of civil wars?

According to the “Whig” interpretation of history—as it is called, in tribute to the Victorian historians who believed in it—ours is a windup world, regularly ticking forward, that was always going to favor the emergence of a constitutional monarchy, becoming ever more limited in power as the people grew in education and capacity. And so the core seventeenth-century conflict was a constitutional one, between monarchical absolutism and parliamentary democracy, with the real advance marked by the Glorious Revolution, and the arrival of limited monarchy, in 1688. For the great Marxist historians of the postwar era, most notably Christopher Hill, the main action had to be parsed in class terms: a feudal class in decline, a bourgeois class in ascent—and, amid the tectonic grindings between the two, the heartening, if evanescent, appearance of genuine social radicals. Then came the more empirically minded revisionists, conservative at least as historians, who minimized ideology and saw the civil wars as arising from the inevitable structural difficulties faced by a ruler with too many kingdoms to subdue and too little money to do it with.

The point of Jonathan Healey’s new book, “The Blazing World” (Knopf), is to acknowledge all the complexities of the episode but still to see it as a real revolution of political thought—to recapture a lost moment when a radically democratic commonwealth seemed possible. Such an account, as Healey recognizes, confronts formidable difficulties. For one thing, any neat sorting of radical revolutionaries and conservative loyalists comes apart on closer examination: many of the leading revolutionaries of Oliver Cromwell’s “New Model” Army were highborn; many of the loyalists were common folk who wanted to be free to have a drink on Sunday, celebrate Christmas, and listen to a fiddler in a pub. (All things eventually restricted by the Puritans in power.)

Something like this is always true. Revolutions are won by coalitions and only then seized by fanatics. There were plenty of blue bloods on the sansculottes side of the French one, at least at the beginning, and the American Revolution joined abolitionists with slaveholders. One of the most modern aspects of the English Revolution was Cromwell’s campaign against the Irish Catholics after his ascent to power; estimates of the body count vary wildly, but it is among the first organized genocides on record, resembling the Young Turks’ war against the Armenians. Irish loyalists, forced to take refuge in churches, were burned alive inside them.

Healey, a history don at Oxford, scants none of these things. A New Model social historian, he writes with pace and fire and an unusually sharp sense of character and humor. At one emotional pole, he introduces us to the visionary yet perpetually choleric radical John Lilburne, about whom it was said, in a formula that would apply to many of his spiritual heirs, that “if there were none living but himself John would be against Lilburne, and Lilburne against John.” At the opposite pole, Healey draws from obscurity the mild-mannered polemicist William Walwyn, who wrote pamphlets with such exquisitely delicate titles as “A Whisper in the Ear of Mr Thomas Edward” and “Some Considerations Tending to the Undeceiving of Those, Whose Judgements Are Misinformed.”

For Hill, the clashes of weird seventeenth-century religious beliefs were mere scrapings of butter on the toast of class conflict. If people argue over religion, it is because religion is an extension of power; the squabbles about pulpits are really squabbles about politics. Against this once pervasive view, Healey declares flatly, “The Civil War wasn’t a class struggle. It was a clash of ideologies, as often as not between members of the same class.” Admiring the insurgents, Healey rejects the notion that they were little elves of economic necessity. Their ideas preceded and shaped the way that they perceived their class interests. Indeed, like the “phlegmatic” and “choleric” humors of medieval medicine, “the bourgeoisie” can seem a uselessly encompassing category, including merchants, bankers, preachers, soldiers, professionals, and scientists. Its members were passionate contestants on both sides of the fight, and on some sides no scholar has yet dreamed of.

Healey insists, in short, that what seventeenth-century people seemed to be arguing about is what they were arguing about. When members of the influential Fifth Monarchist sect announced that Charles’s death was a signal of the Apocalypse, they really meant it: they thought the Lord was coming, not the middle classes. With the eclectic, wide-angle vision of the new social history, Healey shows that ideas and attitudes, rhetoric and revelations, rising from the ground up, can drive social transformation. Ripples on the periphery of our historical vision can be as important as the big waves at the center of it. The mummery of signatures and petitions and pamphlets which laid the ground for conflict is as important as troops and battlefield terrain. In the spirit of E. P. Thompson, Healey allows members of the “lunatic fringe” to speak for themselves; the Levellers, the Ranters, and the Diggers—radicals who cried out in eerily prescient ways for democracy and equality—are in many ways the heroes of the story, though not victorious ones.

But so are people who do not fit neatly into tales of a rising merchant class and revanchist feudalists. Women, shunted to the side in earlier histories of the era, play an important role in this one. We learn of how neatly monarchy recruited misogyny, with the Royalist propaganda issuing, Rush Limbaugh style, derisive lists of the names of imaginary women radicals, more frightening because so feminine: “Agnes Anabaptist, Kate Catabaptist . . . Penelope Punk, Merald Makebate.” The title of Healey’s book is itself taken from a woman writer, Margaret Cavendish, whose astonishing tale “The Description of a New World, Called the Blazing World” was a piece of visionary science fiction that summed up the dreams and disasters of the century. Healey even reports on what might be a same-sex couple among the radicals: the preacher Thomas Webbe took one John Organ for his “man-wife.”

What happened in the English Revolution, or civil wars, took an exhaustingly long time to unfold, and its subplots were as numerous as the bits of the Shakespeare history play the wise director cuts. Where the French Revolution proceeds in neat, systematic French parcels—Revolution, Terror, Directorate, Empire, etc.—the English one is a mess, exhausting to untangle and not always edifying once you have done so. There’s a Short Parliament, a Long Parliament, and a Rump Parliament to distinguish, and, just as one begins to make sense of the English squabbles, the dour Scots intervene to further muddy the story.

In essence, though, what happened was that the Stuart monarchy, which, after the death of Elizabeth, had come to power in the person of the first King James, of Bible-version fame, got caught in a kind of permanent political cul-de-sac. When James died, in 1625, he left his kingdom to his none too bright son Charles. Parliament was then, as now, divided into Houses of Lords and Commons, with the first representing the aristocracy and the other the gentry and the common people. The Commons, though more or less elected, by uneven means, served essentially at the King’s pleasure, being summoned and dismissed at his will.

Parliament did, however, have the critical role of raising taxes, and, since the Stuarts were both war-hungry and wildly incompetent, they needed cash and credit to fight their battles, mainly against rebellions in Scotland and Ireland, with one disastrous expedition into France. Although the Commons as yet knew no neat party divides, it was, in the nature of the times, dominated by Protestants who often had a starkly Puritan and always an anti-papist cast, and who suspected, probably wrongly, that Charles intended to take the country Catholic. All of this was happening in a time of crazy sectarian religious division, when, as the Venetian Ambassador dryly remarked, there were in London “as many religions as there were persons.” Healey tells us that there were “reports of naked Adamites, of Anabaptists and Brownists, even Muslims and ‘Bacchanalian’ pagans.”

In the midst of all that ferment, mistrust and ill will naturally grew between court and Parliament, and between dissident factions within the houses of Parliament. In January, 1642, the King entered Parliament and tried to arrest a handful of its more obnoxious members; tensions escalated, and Parliament passed the Militia Ordinance, awarding itself the right to raise its own fighting force, which—a significant part of the story—it was able to do with what must have seemed to the Royalists frightening ease, drawing as it could on the foundation of the London civic militia. The King, meanwhile, raised a conscript army of his own, which was ill-supplied and, Healey says, “beset with disorder and mutiny.” By August, the King had officially declared war on Parliament, and by October the first battle began. A series of inconclusive wins and losses ensued over the next couple of years.

The situation shifted when, in February, 1645, Parliament consolidated the New Model Army, eventually under the double command of the aristocratic Thomas Fairfax, about whom, one woman friend admitted, “there are various opinions about his intellect,” and the grim country Protestant Oliver Cromwell, about whose firm intellect opinions varied not. Ideologically committed, like Napoleon’s armies a century and a half later, and far better disciplined than its Royalist counterparts, at least during battle (they tended to save their atrocities for the after-victory party), the New Model Army was a formidable and modern force. Healey, emphasizing throughout how fluid and unpredictable class lines were, makes it clear that the caste lines of manners were more marked. Though Cromwell was suspicious of the egalitarian democrats within his coalition—the so-called Levellers—he still declared, “I had rather have a plain russet-coated captain that knows what he fights for, and loves what he knows, than that which you call a gentleman.”

Throughout the blurred action, sharp profiles of personality do emerge. Ronald Hutton’s marvellous “The Making of Oliver Cromwell” (Yale) sees the Revolution in convincingly personal terms, with the King and Cromwell as opposed in character as they were in political belief. Reading lives of both Charles and Cromwell, one can only recall Alice’s sound verdict on the Walrus and the Carpenter: that they were both very unpleasant characters. Charles was, the worst thing for an autocrat, both impulsive and inefficient, and incapable of seeing reality until it was literally at his throat. Cromwell was cruel, self-righteous, and bloodthirsty.

Yet one is immediately struck by the asymmetry between the two. Cromwell was a man of talents who rose to power, first military and then political, through the exercise of those talents; Charles was a king born to a king. It is still astounding to consider, in reading the history of the civil wars, that so much energy had to be invested in analyzing the character of someone whose character had nothing to do with his position. But though dynastic succession has been largely overruled in modern politics, it still holds in the realm of business. And so we spend time thinking about the differences, say, between George Steinbrenner and his son Hal, and what that means for the fate of the Yankees, with the same nervous equanimity that seventeenth-century people had when thinking about the traits and limitations of an obviously dim-witted Royal Family.

Although Cromwell emerges from every biography as a very unlikable man, he was wholly devoted to his idea of God and oddly magnetic in his ability to become the focus of everyone’s attention. In times of war, we seek out the figure who embodies the virtues of the cause and ascribe to him not only his share of the credit but everybody else’s, too. Fairfax tended to be left out of the London reports. He fought the better battles but made the wrong sounds. That sentence of Cromwell’s about the plain captain is a great one, and summed up the spirit of the time. Indeed, the historical figure Cromwell most resembles is Trotsky, who similarly mixed great force of character with instinctive skill at military arrangements against more highly trained but less motivated royal forces. Cromwell clearly had a genius for leadership, and also, at a time when religious convictions were omnipresent and all-important, for assembling a coalition that was open even to the more extreme figures of the dissident side. Without explicitly endorsing any of their positions, Cromwell happily accepted their support, and his ability to create and sustain a broad alliance of Puritan ideologies was as central to his achievement as his cool head with cavalry.

Hutton and Healey, in the spirit of the historians Robert Darnton and Simon Schama—recognizing propaganda as primary, not merely attendant, to the making of a revolution—bring out the role that the London explosion of print played in Cromwell’s triumph. By 1641, Healey explains, “London had emerged as the epicentre of a radically altered landscape of news . . . forged on backstreet presses, sold on street corners and read aloud in smoky alehouses.” This may be surprising; we associate the rise of the pamphlet and the newspaper with a later era, the Enlightenment. But just as, once speed-of-light communication is possible, it doesn’t hugely matter if its vehicle is telegraphy or e-mail, so, too, once movable type was available, the power of the press to report and propagandize didn’t depend on whether it was produced single sheet by single sheet or in a thousand newspapers at once.

At last, at the Battle of Naseby, in June, 1645, the well-ordered Parliamentary forces won a pivotal victory over the royal forces. Accident and happenstance aided the supporters of Parliament, but Cromwell does seem to have been, like Napoleon, notably shrewd and self-disciplined, keeping his reserves in reserve and throwing them into battle only at the decisive moment. By the following year, Charles I had been captured. As with Louis XVI, nearly a century and a half later, Charles was offered a perfectly good deal by his captors—basically, to accept a form of constitutional monarchy that would still give him a predominant role—but left it on the table. Charles tried to escape and reimpose his reign, enlisting Scottish support, and, during the so-called Second Civil War, the bloodletting continued.

In many previous histories of the time, the battles and Cromwell’s subsequent rise to power were the pivotal moments, with the war pushing a newly created “middling class” toward the forefront. For Healey, as for the historians of the left, the key moment of the story occurs instead in Putney, in the fall of 1647, in a battle of words and wills that could easily have gone a very different way. It was there that the General Council of the New Model Army convened what Healey calls “one of the most remarkable meetings in the whole of English history,” in which “soldiers and civilians argued about the future of the constitution, the nature of sovereignty and the right to vote.” The implicit case for universal male suffrage was well received. “Every man that is to live under a government ought first by his own consent to put himself under that government,” Thomas Rainsborough, one of the radical captains, said. By the end of a day of deliberation, it was agreed that the vote should be extended to all men other than servants and paupers on relief. The Agitators, who were in effect the shop stewards of the New Model Army, stuck into their hatbands ribbons that read “England’s freedom and soldier’s rights.” Very much in the manner of the British soldiers of the Second World War who voted in the first Labour government, they equated soldiery and equality.

The democratic spirit was soon put down. Officers, swords drawn, “plucked the papers from the mutineers’ hats,” Healey recounts, and the radicals gave up. Yet the remaining radicalism of the New Model Army had, in the fall of 1648, fateful consequences. The vengeful—or merely egalitarian—energies that had been building since Putney meant that the Army objected to Parliament’s ongoing peace negotiations with Charles. Instead, he was tried for treason, the first time in human memory that this had happened to a monarch, and, in 1649, he was beheaded. In the next few years, Cromwell turned against Parliament, impatient with its slow pace, and eventually staged what was in effect a coup to make himself dictator. “Lord Protector” was the title Cromwell took, and then, in the way of such things, he made himself something very like a king.

Cromwell won; the radicals had lost. The political thought of their time—however passionate—hadn’t yet coalesced around a coherent set of ideas and ideals that could have helped them translate those radical intuitions into a persuasive politics. Philosophies count, and these hadn’t been, so to speak, left to simmer on the Hobbes long enough: “Leviathan” was four years off, and John Locke was only a teen-ager. The time was still recognizably and inherently pre-modern.

Even the word “ideology,” favored by Healey, may be a touch anachronistic. The American and the French Revolutions are both recognizably modern: they are built on assumptions that we still debate today, and left and right, as they were established then, are not so different from left and right today. Whatever obeisance might have been made to the Deity, they were already playing secular politics in a post-religious atmosphere. During the English Revolution, by contrast, the most passionate ideologies at stake were fanatic religious beliefs nurtured through two millennia of Christianity.

Those beliefs, far from being frosting on a cake of competing interests, were the competing interests. The ability of seventeenth-century people to become enraptured, not to say obsessed, with theological differences that seem to us astonishingly minute is the most startling aspect of the story. Despite all attempts to depict these as the mere cosmetic covering of clan loyalties or class interests, those crazy-seeming sectarian disputes were about what they claimed to be about. Men were more likely to face the threat of being ripped open and having their bowels burned in front of their eyes (as happened eventually to the regicides) on behalf of a passionately articulated creed than they were on behalf of an abstract, retrospectively conjured class.

But, then, perhaps every age has minute metaphysical disputes whose profundity only that age can understand. In an inspired study of John Donne, “Super-Infinite,” the scholar Katherine Rundell points out how preoccupied her subject was with the “trans-” prefix—transpose, translate, transubstantiate—because it marked the belief that we are “creatures born transformable.” The arguments over transubstantiation that consumed the period—it would be the cause of the eventual unseating of Charles I’s second son, King James II—echo in our own quarrels about identity and transformation. Weren’t the nonconformist Puritans who exalted a triune godhead simply insisting, in effect, on plural pronouns for the Almighty? The baseline anxiety of human beings so often turns on questions of how transformable we creatures are—on how it is that these meat-and-blood bodies we live within can somehow become the sites of spirit and speculation and grace, by which we include free will. These issues of body and soul, however soluble they may seem in retrospect, are the ones that cause societies to light up and sometimes conflagrate.

History is written by the victors, we’re told. In truth, history is written by the romantics, as stories are won by storytellers. Anyone who can spin lore and chivalry, higher calling and mystic purpose, from the ugliness of warfare can claim the tale, even in defeat. As Ulysses S. Grant knew, no army in history was as badly whipped as Robert E. Lee’s, and yet the Confederates were still, outrageously, winning the history wars as late as the opening night of “Gone with the Wind.” Though the Parliamentarians routed the Cavaliers in the first big war, the Cavaliers wrote the history—and not only because they won the later engagement of the Restoration. It was also because the Cavaliers, for the most part, had the better writers. Aesthetes may lose the local battle; they usually win the historical war. Cromwell ruled as Lord Protector for five years, and then left the country to his hapless son, who was deposed in just one. Healey makes no bones about the truth that, when the Commonwealth failed and Charles II gained the throne, in 1660, for what became a twenty-five-year reign, it opened up a period of an extraordinary English artistic renaissance. “The culture war, that we saw at the start of the century,” he writes, “had been won. Puritanism had been cast out. . . . Merry England was back.”

There was one great poet-propagandist for Cromwell, of course: John Milton, whose “Paradise Lost” can be read as a kind of dreamy explication of Cromwellian dissident themes. But Milton quit on Cromwell early, going silent at his apogee, while Andrew Marvell’s poems in praise of Cromwell are masterpieces of equivocation and irony, with Cromwell praised, the King’s poise in dying admired, and in general a tone of wry hyperbole turning into fatalism before the reader’s eyes. Marvell’s famously conditional apothegm for Cromwell, “If these the times, then this must be the man,” is as backhanded a compliment as any poet has offered a ruler, or any flunky has ever offered a boss.

Healey makes the larger point that, just as the Impressionists rose, in the eighteen-seventies, as a moment of repose after the internecine violence of the Paris Commune, the matchless flowering of English verse and theatre in the wake of the Restoration was as much a sigh of general civic relief as a paroxysm of Royalist pleasure. The destruction of things of beauty by troops under Cromwell’s direction is still shocking to read of. At Peterborough Cathedral, they destroyed ancient stained-glass windows, and in Somerset at least one Parliamentarian ripped apart a Rubens.

Yet, in Cromwell’s time, certain moral intuitions and principles appeared that haven’t disappeared; things got said that could never be entirely unsaid. Government of the people resides in their own consent to be governed; representative bodies should be in some way representative; whatever rights kings have are neither divine nor absolute; and, not least, religious differences should be settled by uneasy truces, if not outright toleration.

And so there is much to be said for a Whig history after all, if not as a story of inevitably incremental improvements then at least as one of incremental inspirations. The Restoration may have had its glories, but a larger glory belongs to those who groped, for a time, toward something freer and better, and who made us, in particular—Americans, whose Founding Fathers, from Roger Williams to the Quakers, leaped intellectually right out of the English crucible—what we spiritually remain. America, on the brink of its own revolution, was, essentially, London in the sixteen-forties, set free then, and today still blazing. ♦

Published in the print edition of the April 24 & May 1, 2023, issue, with the headline “The Great Interruption.”

Adam Gopnik, a staff writer, has been contributing to The New Yorker since 1986. He is the author of, most recently, “The Real Work: On the Mystery of Mastery.”

May 3rd 2023

‘The King and His Husband’: The Gay History of British Royals

Queen Elizabeth’s cousin wed in the first same-sex royal wedding—but he is far from the first gay British royal, according to historians.

The Washington Post

  • Kayla Epstein

Screen Shot 2021-06-24 at 2.33.28 PM.png

King Edward II was known for his intensely close relationships with two men.  Photo by duncan1890/Getty Images

Ordinarily, the wedding of a junior member of the British royal family wouldn’t attract much global attention. But Lord Ivar Mountbatten’s did.

That’s because Mountbatten, a cousin of Queen Elizabeth II, wed James Coyle in the summer of 2018 in what was heralded as the “first-ever” same-sex marriage in Britain’s royal family.

Perhaps what makes it even more unusual is that Mountbatten’s ex-wife, Penny Mountbatten, gave her former husband away.

Who says the royals aren’t a modern family?

Though Mountbatten and Coyle’s ceremony was expected to be small, it’s much larger in significance.

“It’s seen as the extended royal family giving a stamp of approval, in a sense, to same-sex marriage,” said Carolyn Harris, historian and author of Raising Royalty: 1000 Years of Royal Parenting. “This marriage gives this wider perception of the royal family encouraging everyone to be accepted.”

But the union isn’t believed to be the first same-sex relationship in the British monarchy, according to historians. And the royals who came before certainly couldn’t carry out their relationships openly, or without causing intense political drama within their courts.

Edward II, who ruled from 1307-1327, is one of England’s less fondly remembered kings. His reign consisted of feuds with his barons, a failed invasion of Scotland in 1314, a famine, more feuding with his barons, and an invasion by a political rival that led to him being replaced by his son, Edward III. And many of the most controversial aspects of his rule — and fury from his barons — stemmed from his relationships with two men: Piers Gaveston and, later, Hugh Despenser.

Gaveston and Edward met when Edward was about 16 years old, when Gaveston joined the royal household. “It’s very obvious from Edward’s behavior that he was quite obsessed with Gaveston,” said Kathryn Warner, author of Edward II: The Unconventional King. Once king, Edward II made the relatively lowborn Gaveston the Earl of Cornwall, a title usually reserved for members of the royal family, “just piling him with lands and titles and money,” Warner said. He feuded with his barons over Gaveston, who they believed received far too much attention and favor.

Gaveston was exiled numerous times over his relationship with Edward II, though the king always conspired to bring him back. Eventually, Gaveston was assassinated. After his death, Edward “constantly had prayers said for [Gaveston’s] soul; he spent a lot of money on Gaveston’s tomb,” Warner said.

Several years after Gaveston’s death, Edward formed a close relationship with another favorite and aide, Hugh Despenser. How close? Warner pointed to the annalist of Newenham Abbey in Devon in 1326, who called Edward and Despenser “the king and his husband,” while another chronicler noted that Despenser “bewitched Edward’s heart.”

The speculation that Edward II’s relationships with these men went beyond friendship was fueled by Christopher Marlowe’s 16th-century play “Edward II”, which is often noted for its homoerotic portrayal of Edward II and Gaveston.

James VI and I, who reigned over Scotland and later England and Ireland until his death in 1625, attracted similar scrutiny for his male favorites, a term used for companions and advisers who had special preference with monarchs. Though James married Anne of Denmark and had children with her, it has long been believed that James had romantic relationships with three men: Esmé Stewart; Robert Carr; and George Villiers, Duke of Buckingham.

Correspondence between James and his male favorites survives, and as David M. Bergeron theorizes in his book King James and Letters of Homoerotic Desire: “The inscription that moves across the letters spell desire.”

James was merely 13 when he met the 37-year-old Stewart, and their relationship was viewed with concern.

“The King altogether is persuaded and led by him . . . and is in such love with him as in the open sight of the people often he will clasp him about the neck with his arms and kiss him,” wrote one royal informant of their relationship. James promoted Stewart up the ranks, eventually making him Duke of Lennox. James was eventually forced to banish him, causing Stewart great distress. “I desire to die rather than to live, fearing that that has been the occasion of your no longer loving me,” Stewart wrote to James.

But James’s most famous favorite was Villiers. James met him in his late 40s and several years later promoted him to Duke of Buckingham — an astounding rise for someone of his rank. Bergeron records the deeply affectionate letters between the two; in a 1623 letter, James refers bluntly to “marriage” and calls Buckingham his “wife:”

“I cannot content myself without sending you this present, praying God that I may have a joyful and comfortable meeting with you and that we may make at this Christmas a new marriage ever to be kept hereafter . . . I desire to live only in this world for your sake, and that I had rather live banished in any part of the earth with you than live a sorrowful widow’s life without you. And may so God bless you, my sweet child and wife, and grant that ye may ever be a comfort to your dear dad and husband.”

A lost portrait of Buckingham by Flemish artist Peter Paul Rubens was discovered in Scotland, depicting a striking and stylish man. And a 2008 restoration of Apethorpe Hall, where James and Villiers met and later spent time together, discovered a passage that linked their bedchambers.

One queen who has attracted speculation about her sexuality is Queen Anne, who ruled from 1702-1714. Her numerous pregnancies, most of which ended in miscarriage or a stillborn child, indicate a sexual relationship with her husband, George of Denmark.

And yet, “she had these very intense, close friendships with women in her household,” Harris said.

Most notable is her relationship to Sarah Churchill, the Duchess of Marlborough, who held enormous influence in Anne’s court as mistress of the robes and keeper of the privy purse. She was an influential figure in Whig party politics, famous for providing Anne with blunt advice and possessing as skillful a command of politics as her powerful male contemporaries.

Whether Churchill and Queen Anne’s intense friendship became something more is something we may never know. “Lesbianism, by its unverifiable nature, is an awful subject for historical research and, inversely, the best subject for political slander,” writes Ophelia Field in her book Sarah Churchill: Duchess of Marlborough: The Queen’s Favourite.

But Field also notes that when examining the letters between the women, it’s important to understand that their friendship was “something encompassing what we would nowadays class as romantic or erotic feeling.”

Field writes in “The Queen’s Favourite”:

“Without Sarah beside her when she moved with the seasonal migrations of the Court, Anne complained of loneliness and boredom: ‘I must tell you I am not as you left me . . . I long to be with you again and tis impossible for you ever to believe how much I love you except you saw my heart.’ [ . . .] Most commentators have suggested that the hyperbole in Anne’s letters to her friend was merely stylistic. In fact, the overwhelming impression is not of overstatement but that Anne was repressing what she really wanted to say.”

Their relationship deteriorated in part because of Anne’s growing closeness to another woman, Churchill’s cousin, Abigail Masham. Churchill grew so infuriated that she began insinuating Anne’s relationship with Masham was sinister.

The drama surrounding the three women played out in the 2018 film The Favourite, starring Rachel Weisz, Emma Stone and Olivia Colman.

Though there is much evidence that these royals had same-sex relationships with their favorites or other individuals, Harris cautioned that jealousy or frustration with favorites within the courts often led to rumors about the relationships. “If a royal favorite, no matter the degree of personal relationship, was disrupting the social or political hierarchy in some way, then that royal favorite was considered a problem, regardless of what was going on behind closed doors,” she said.

Harris also noted that it was difficult to take 21st-century definitions of sexual orientation and apply them to past monarchs. “When we see historical figures, they might have same-sex relationships but might not talk about their orientation,” she said. “Historical figures often had different ways of viewing themselves than people today.”

But she acknowledged that reexamining the lives, and loves, of these monarchs creates a powerful, humanizing bond between our contemporary society and figures of the past. It shows “that there have been people who dealt with some of the same concerns and the same issues that appear in the modern day,” she said.

April 23rd 2023

The Titanic Wreck Is a Landmark Almost No One Can See

Visiting the remains of the doomed ship causes it damage—but so will just leaving it there.

Atlas Obscura

  • Natasha Frost

image2.jpg

A view of the bathtub in Capt. Smith’s bathroom, photographed in 2003. Rusticles are growing over most of the fixtures in the room. Photo from the Public Domain/Lori Johnston, RMS Titanic Expedition 2003, NOAA-OE.

The bride wore a flame-retardant suit—and so did the groom. In July 2001, an American couple got married in the middle of the Atlantic Ocean, thousands of feet below the surface. In the background was an international landmark every bit as familiar as the Eiffel Tower, the Taj Mahal or any other postcard-perfect wedding photo destination. David Leibowitz and Kimberley Miller wed on the bow of the Titanic shipwreck, in a submarine so small they had to crouch as they said their vows. Above the water, Captain Ron Warwick officiated via hydrophone from the operations room of a Russian research ship.

The couple had agreed to the undersea nuptials only if they could avoid a media circus, but quickly became the faces of a troubling trend: the wreck of the Titanic as landmark tourist attraction, available to gawk at for anyone with $36,000 burning a hole in their pocket. (Leibowitz won a competition run by the diving company Subsea Explorer, which then offered to finance the costs of their wedding and honeymoon.)

As opprobrium mounted, particularly from those whose relatives had died aboard the ship, a Subsea representative told the press: “What’s got to be remembered is that every time a couple gets married in church they have to walk through a graveyard to get to the altar.” Was the Titanic no more than an ordinary cemetery? The event focused attention on a predicament with no single answer: Who did the wreck belong to, what was the “right” thing to do to it, and what was the point of a landmark that almost no one could visit?

People had been wrestling with earlier forms of these questions for decades, long before the nonprofit Woods Hole Oceanographic Institution discovered the Titanic wreck in 1985. The most prominent of these earlier dreamers was Briton Douglas Woolley, who began to appear in the national press in the 1960s with increasingly harebrained schemes to find, and then resurface, the ship. One such scheme involved him going down in a deep-sea submersible, finding the ship, and then lifting it with a shoal of thousands of nylon balloons attached to its hull. The balloons would be filled with air, and then rise to the surface, dragging the craft up with them. As Walter Lord, author of Titanic history bestseller The Night Lives On, ponders, “How the balloons would be inflated 13,000 feet down wasn’t clear.”

Next, Woolley coaxed Hungarian inventors aboard his project. The newly incorporated Titanic Salvage Company would use seawater electrolysis to generate 85,000 cubic yards of hydrogen. They’d fill plastic bags with it, they announced—and presto! But this, too, was a washout. They had budgeted a week to generate the gas; a scholarly paper by an American chemistry professor suggested it might take closer to 10 years. The company foundered and the Hungarians returned home. (In 1980, Woolley allegedly acquired the title to the Titanic from the shipping and insurance companies—his more recent attempts to assert ownership have proven unsuccessful.)

Woolley might not have raised the Titanic from the depths, but he had succeeded in winching up interest in the vessel, and whether it might ever see the light of day again. In the following decade, some eight different groups announced plans to find and explore the ship. Most were literally impossible; some were practically unfeasible. One 1979 solution involving benthos glass floats was nixed when it became clear that it would cost $238,214,265, the present day equivalent of the GDP of a small Caribbean nation.

image.jpg

A crowd gathering outside of the White Star Line offices for news of the shipwreck. Photo from the Library of Congress/LC-DIG-ggbain-10355.

In the early 1980s, various campaigns set out to find the ship and its supposedly diamond-filled safes. But as they came back empty-handed, newspapers grew weary of these fruitless efforts. When the Woods Hole Oceanographic Institution set sail in 1985 with the same objective, it generated barely a media ripple. Their subsequent triumph in early September made front-page news; the New York Times proclaimed, tentatively: “Wreckage of Titanic Reported Discovered 12,000 Feet Down.”

Within days of its discovery, the legal rights to the ship began to be disputed. Entrepreneurs read the headlines and saw dollar signs, and new plans to turn the Titanic into an attraction began to bubble up to the surface. Tony Wakefield, a salvage engineer from Stamford, Connecticut, proposed pumping Vaseline into polyester bags placed in the ship’s hull. The Vaseline would harden underwater, he said, and then become buoyant, lifting the Titanic up to the surface. This was hardly the only fantastical solution on offer—others included injecting thousands of ping pong balls into the hull, or using levers and pulleys to crank the 52,000-ton ship out of the water. “Yet another would encase the liner in ice,” Lord writes. “Then, like an ordinary cube in a drink, the ice would rise to the surface, bringing the Titanic with it.”

Robert Ballard, the young marine geologist who had led the successful expedition, spoke out against these plans. The wreck should not be commercially exploited, he said, but instead declared an international memorial—not least because any clumsy attempt to obtain debris from the site might damage the ship irreparably, making further archeological study impossible. “To deter would-be salvagers,” the Times reported, “he has refused to divulge the ship’s exact whereabouts.”

Somehow, the coordinates got out. Ballard’s wishes were ignored altogether: in the years that followed, team after team visited the wreck, salvaging thousands of objects and leaving a trail of destruction in their wake. Panicked by the potential for devastation, Ballard urged then-chairman of the House Merchant Marine and Fisheries Committee, Congressman Walter B. Jones, Sr., to introduce the RMS Titanic Maritime Memorial Act in the United States House of Representatives. The Act would limit how many people could explore and salvage the wreck, which would remain preserved in the icy depths of the Atlantic.

Despite being signed into law by President Ronald Reagan in October 1986, the Act proved utterly toothless. The Titanic site is outside of American waters, giving the U.S. government little jurisdiction over its rusty grave. In 1998, the Act was abandoned altogether.

In the meantime, visits to the site had continued. In 1987, Connecticut-based Titanic Ventures Inc. partnered with the French oceanographic agency IFREMER to survey and salvage the site. Among their desired booty was the bell from the crow’s nest, which had sounded out doom to so many hundreds of passengers. When the bell was pulled from the wreck, the crow’s nest collapsed altogether, causing immense damage to the site. People began to question whether it was right for people to be there at all, let alone to loot what was effectively a mass grave. Survivor Eva Hart, whose father perished on the ship, decried Titanic visitors as “fortune hunters, vultures, pirates!”—yet the trips continued. A few years later, director James Cameron’s team, who were scoping out the wreck for his 1997 blockbuster, caused further accidental damage.

image1.jpg

The front pages of the New York Herald and the New York Times, the day of and the day after the sinking, respectively. Photo from the Public Domain.

Gradually, researchers realized that nature, too, had refused to cooperate with the statute introduced above the surface. “The deep ocean has been steadily dismantling the once-great cruise liner,” Popular Science reported in July 2004. One forensic archaeologist described the decay as unstoppable: “The Titanic is becoming something that belongs to biology.” The hulking wreck had become a magnet for sea life, with iron-eating bacteria burrowing into its cracks and turning some 400 pounds of iron a day into fine, eggshell-delicate “rusticles,” which hung pendulously from the steel sections of the wreck and dissolved into particles at the slightest touch. Molluscs and other underwater critters chomped away at the ship, while eddies and other underwater currents broke bits off the wreck, dispersing them back into the ocean.

A century after the Titanic sank in 1912, over 140 people had visited the landmark many believe should have been left completely alone. Some have had government or nonprofit backing; others have simply been wealthy tourists of the sort who accompanied Leibowitz and Miller to their underwater wedding. With its centenary, the ship finally became eligible for UNESCO protection, under the 2001 UNESCO Convention on the Protection of Underwater Cultural Heritage. Then-Director General Irina Bokova announced the protection of the site, limiting the destruction, pillage, sale and dispersion of objects found among its vestiges. Human remains would be treated with new dignity, the organization announced, while exploration attempts would be subject to ethical and scientific scrutiny. “We do not tolerate the plundering of cultural sites on land, and the same should be true for our sunken heritage,” Bokova said, calling on divers not to dump equipment or commemorative plaques on the Titanic site.

The legal protections now in place on the Titanic wreck may have been hard won, but they’re bittersweet in their ineffectiveness. The Titanic has been protected from excavation, but it’s defenseless against biology. Scientists now believe that within just a few decades, the ship will be all but gone, raising the question of precisely what the purpose of these statutes is.

In its present location, protections or no, Titanic’s destruction seems assured. It’s likely, but not certain, that moving the ship would damage it, yet keeping it in place makes its erosion a certainty. A few days after the wreck was found in 1985, competing explorer and Texan oilman Jack Grimm announced his own plans to salvage the ship, rather than let it be absorbed by the ocean floor. “What possible harm can that do to this mass of twisted steel?” he wondered. Grimm, and many others, may have been prevented from salvaging the site for its own protection—but simply leaving it alone has doomed it to disappear.

February 12th 2023

When Did Americans Lose Their British Accents?

The absence of audio recording technology makes “when” a tough question to answer. But there are some theories as to “why.”

ezgif.com-webp-to-jpg(113).jpg

Photo from Getty Images.

There are many, many evolving regional British and American accents, so the terms “British accent” and “American accent” are gross oversimplifications. What a lot of Americans think of as the typical “British accent” is what’s called standardized Received Pronunciation (RP), also known as Public School English or BBC English. What most people think of as an “American accent,” or most Americans think of as “no accent,” is the General American (GenAm) accent, sometimes called a “newscaster accent” or “Network English.” Because this is a blog post and not a book, we’ll focus on these two general sounds for now and leave the regional accents for another time.

English colonists established their first permanent settlement in the New World at Jamestown, Virginia, in 1607, sounding very much like their countrymen back home. By the time we had recordings of both Americans and Brits some three centuries later (the first audio recording of a human voice was made in 1860), the sounds of English as spoken in the Old World and New World were very different. We’re looking at a silent gap of some 300 years, so we can’t say exactly when Americans first started to sound noticeably different from the British.

As for the “why,” though, one big factor in the divergence of the accents is rhoticity. The General American accent is rhotic, and speakers pronounce the r in words such as hard. The BBC-type British accent is non-rhotic, and speakers don’t pronounce the r, leaving hard sounding more like hahd. Before and during the American Revolution, the English, both in England and in the colonies, mostly spoke with a rhotic accent. We don’t know much more about said accent, though. Various claims about the accents of the Appalachian Mountains, the Outer Banks, the Tidewater region and Virginia’s Tangier Island sounding like an uncorrupted Elizabethan-era English accent have been busted as myths by linguists.

Talk This Way

Around the turn of the 19th century, not long after the revolution, non-rhotic speech took off in southern England, especially among the upper and upper-middle classes. It was a signifier of class and status. This posh accent was standardized as Received Pronunciation and taught widely by pronunciation tutors to people who wanted to learn to speak fashionably. Because the Received Pronunciation accent was regionally “neutral” and easy to understand, it spread across England and the empire through the armed forces, the civil service and, later, the BBC.

Across the pond, many former colonists also adopted and imitated Received Pronunciation to show off their status. This happened especially in the port cities that still had close trading ties with England — Boston, Richmond, Charleston, and Savannah. From the Southeastern coast, the RP sound spread through much of the South along with plantation culture and wealth.

After industrialization and the Civil War and well into the 20th century, political and economic power largely passed from the port cities and cotton regions to the manufacturing hubs of the Mid Atlantic and Midwest — New York, Philadelphia, Pittsburgh, Cleveland, Chicago, Detroit, etc. The British elite had much less cultural and linguistic influence in these places, which were mostly populated by the Scots-Irish and other settlers from Northern Britain, and rhotic English was still spoken there. As industrialists in these cities became the self-made economic and political elites of the Industrial Era, Received Pronunciation lost its status and fizzled out in the U.S. The prevalent accent in the Rust Belt, though, got dubbed General American and spread across the states just as RP had in Britain. 

Of course, with the speed that language changes, a General American accent is now hard to find in much of this region, with New York, Philadelphia, Pittsburgh, and Chicago developing their own unique accents, and GenAm now considered generally confined to a small section of the Midwest.

As mentioned above, there are regional exceptions to both these general American and British sounds. Some of the accents of southeastern England, plus the accents of Scotland and Ireland, are rhotic. Some areas of the American Southeast, plus Boston, are non-rhotic.

Matt Soniak is a long-time mental_floss regular and writes about science, history, etymology and Bruce Springsteen for both the website and the print magazine. His work has also appeared in print and online for Men’s Health, Scientific American, The Atlantic, Philly.com and others. He tweets as @mattsoniak and blogs about animal behavior at mattsoniak.com.

February 10th 2023

The Rift Valley, Kenya. Photo by Steve Forrest/Panos

Tristan McConnell

is a writer and foreign correspondent, who studied anthropology before becoming a journalist. His essays and reporting have appeared in National Geographic, The New Yorker, Emergence Magazine, GQ and the London Review of Books, among others. After working in different parts of Africa for nearly 20 years, he now lives in Woodbridge, in the UK.

Edited by Cameron Allan McKean


We are restless even in death. Entombed in stone, our most distant ancestors still travel along Earth’s subterranean passageways. One of them, a man in his 20s, began his journey around 230,000 years ago after collapsing into marshland on the lush edge of a river delta feeding a vast lake in East Africa’s Rift Valley. He became the earth in which he lay as nutrients leached from his body and his bone mineralised into fossil. Buried in the sediment of the Rift, he moved as the earth moved: gradually, inexorably.

Millions of years before he died, tectonic processes began pushing the Rift Valley up and apart, like a mighty inhalation inflating the ribcage of the African continent. The force of it peeled apart a 4,000-mile fissure in Earth’s crust. As geological movements continued, and the rift grew, the land became pallbearer, lifting and carrying our ancestor away to Omo-Kibish in southern Ethiopia where, in 1967, a team of Kenyan archaeologists led by Richard Leakey disinterred his shattered remains from an eroding rock bank.

Lifted from the ground, the man became the earliest anatomically modern human, and the start of a new branch – Homo sapiens – on the tangled family tree of humanity that first sprouted 4 million years ago. Unearthed, he emerged into the same air and the same sunlight, the same crested larks greeting the same rising sun, the same swifts darting through the same acacia trees. But it was a different world, too: the nearby lake had retreated hundreds of miles, the delta had long since narrowed to a river, the spreading wetland had become parched scrub. His partial skull, named Omo 1, now resides in a recessed display case at Kenya’s national museum in Nairobi, near the edge of that immense fault line.


I don’t remember exactly when I first learned about the Rift Valley. I recall knowing almost nothing of it when I opened an atlas one day and saw, spread across two colourful pages, a large topographical map of the African continent. Toward the eastern edge of the landmass, a line of mountains, valleys and lakes – the products of the Rift – drew my eye and drove my imagination, more surely than either the yellow expanse of the Sahara or the green immensity of the Congo. Rainforests and deserts appeared uncomplicated, placid swathes of land in comparison with the fragmenting, shattering fissures of the Rift.

On a map, you can trace the valley’s path from the tropical coastal lowlands of Mozambique to the Red Sea shores of the Arabian Peninsula. It heads due north, up the length of Lake Malawi, before splitting. The western branch takes a left turn, carving a scythe-shaped crescent of deep lake-filled valleys – Tanganyika, Kivu, Edward – that form natural borders between the Democratic Republic of Congo and a succession of eastern neighbours: Tanzania, Burundi, Rwanda, Uganda. But the western branch peters out, becoming the broad shallow valley of the White Nile before dissipating in the Sudd, a vast swamp in South Sudan.

The eastern branch is more determined in its northward march. A hanging valley between steep ridges, it runs through the centre of Tanzania, weaving its way across Kenya and into Ethiopia where, in the northern Afar region, it splits again at what geologists call a ‘triple junction’, the point where three tectonic plates meet or, in this case, bid farewell. The Nubian and Somalian plates are pulling apart and both are pulling away from the Arabian plate to their north, deepening and widening the Rift Valley as they unzip the African continent. Here in the Rift, our origins and that of the land are uniquely entwined. Understanding this connection demands more than a bird’s-eye view of the continent.

The Rift Valley is the only place where human history can be seen in its entirety

Looking out across a landscape such as East Africa’s Rift Valley reveals a view of beauty and scale. But this way of seeing, however breath-taking, will only ever be a snapshot of the present, a static moment in time. Another way of looking comes from tipping your perspective 90 degrees, from the horizontal plane to the vertical axis, a shift from space to time, from geography to stratigraphy, which allows us to see the Rift in all its dizzying, vertiginous complexity. Here, among seemingly unending geological strata, we can gaze into what the natural philosopher John Playfair called ‘the abyss of time’, a description he made after he, James Hall and James Hutton in 1788 observed layered geological aeons in the rocky outcrops of Scotland’s Siccar Point – a revelation that would eventually lead Hutton to become the founder of modern geology. In the Rift Valley, this vertical, tilted way of seeing is all the more powerful because the story of the Rift is the story of all of us, our past, our present, and our future. It’s a landscape that offers a diachronous view of humanity that is essential to make sense of the Anthropocene, the putative geological epoch in which humans are understood to be a planetary force with Promethean powers of world-making and transformation.

The Rift Valley humbles us. It punctures the transcendent grandiosity of human exceptionalism by returning us to a specific time and a particular place: to the birth of our species. Here, we are confronted with a kind of homecoming as we discern our origins among rock, bones and dust. The Rift Valley is the only place where human history can be seen in its entirety, the only place we have perpetually inhabited, from our first faltering bipedal steps to the present day, when the planetary impacts of climatic changes and population growth can be keenly felt in the equatorial heat, in drought and floods, and in the chaotic urbanisation of fast-growing nations. The Rift is one of many frontiers in the climate crisis where we can witness a tangling of causes and effects.

But locating ourselves here, within Earth’s processes, and understanding ourselves as part of them, is more than just a way of seeing. It is a way of challenging the kind of short-term, atemporal, election-cycle thinking that is failing to deliver us from the climate and biodiversity crises. It allows us to conceive of our current moment not as an endpoint but as the culmination of millions of years of prior events, the fleeting staging point for what will come next, and echo for millennia to come. We exist on a continuum: a sliver in a sediment core bored out of the earth, a plot point in an unfolding narrative, of which we are both author and character. It brings the impact of what we do now into focus, allowing facts about atmospheric carbon or sea level rises to resolve as our present responsibilities.

The Rift is a place, but ‘rift’ is also a word. It’s a noun for splits in things or relationships, a geological term for the result of a process in which Earth shifts, and it’s a verb apt to describe our current connection to the planet: alienation, separation, breakdown. The Rift offers us another way of thinking.

That we come from the earth and return to it is not a burial metaphor but a fact. Geological processes create particular landforms that generate particular environments and support particular kinds of life. In a literal sense, the earth made us. The hominin fossils scattered through the Rift Valley are anthropological evidence but also confronting artefacts. Made of rock not bone, they are familiar yet unexpected, turning up in strange places, emerging from the dirt weirdly heavy, as if burdened with the physical weight of time. They are caught up in our ‘origin stories and endgames’, writes the geographer Kathryn Yusoff, as simultaneous manifestations of mortality and immortality. They embody both the vanishing brevity of an individual life and the near-eternity of a mineralised ‘geologic life’, once – as the philosopher Manuel DeLanda puts it in A Thousand Years of Nonlinear History (1997) – bodies and bones cross ‘the threshold back into the world of rocks’. There is fear in this, but hope too, because we can neither measure, contend with, nor understand the Anthropocene without embedding ourselves in different timescales and grounding ourselves in the earth. Hominin fossils are a path to both.

The rain, wind and tectonics summon long-buried bones, skulls and teeth from the earth

Those species that cannot adapt, die. Humans, it turns out – fortunately for us, less so for the planet – are expert adapters. We had to be, because the Rift Valley in which we were born is a complex, fragmented, shifting place, so diverse in habitats that it seems to contain the world. It is as varied as it is immense, so broad that on all but the clearest of days its edges are lost in haze. From high on its eastern shoulder, successive hills descend thousands of feet to the plains below, like ridges of shoreward ocean swell. Here, the valley floor is hard-baked dirt, the hot air summoning dust devils to dance among whistling thorns, camphor and silver-leafed myrrh. Dormant volcanoes puncture the land, their ragged, uneven craters stark against the sky. Fissures snake across the earth. Valley basins are filled with vast lakes, or dried out and clogged with sand and sediment. An ice-capped mountain stands sentinel, its razor ridges of black basalt rearing out of cloud forest. Elsewhere, patches of woodland cluster on sky islands, or carpet hills and plateaus. In some of the world’s least hospitable lands, the rain, wind and tectonics summon long-buried bones, skulls and teeth from the earth. This is restless territory, a landscape of tumult and movement, and the birthplace of us all.

My forays into this territory over the past dozen years have only scratched at the surface of its immense variety. I have travelled to blistering basalt hillsides, damp old-growth forests, ancient volcanoes with razor rims, smoking geothermal vents, hardened fields of lava, eroding sandstone landscapes that spill fossils, lakes with water that is salty and warm, desert dunes with dizzying escarpments, gently wooded savannah, and rivers as clear as gin. Here, you can travel through ecosystems and landscapes, but also through time.

I used to live beside the Rift. For many years, my Nairobi home was 30 kilometres from the clenched knuckles of the Valley’s Ngong Hills, which slope downwards to meet a broad, flat ridge. Here, the road out of the city makes a sharp turn to the right, pitching over the escarpment’s edge before weaving its way, thousands of feet downwards over dozens of kilometres, through patchy pasture and whistling thorns. The weather is always unsettled here and, at 6,500 feet, it can be cold even on the clearest and brightest of days.

One particularly chilly bend in the road has been given the name ‘Corner Baridi’, cold corner. Occasionally, I would sit here, on scrubby grass by the crumbling edge of a ribbon of old tarmac, and look westwards across a transect of the Rift Valley as young herders wandered past, bells jangling at their goats’ necks. The view was always spectacular, never tired: a giant’s staircase of descending bluffs, steep, rocky and wooded, volcanic peaks and ridges, the sheen of Lake Magadi, a smudge of smoke above Ol Doinyo Lengai’s active caldera, the mirrored surface of Lake Natron, the undulating expanse of the valley floor.

And the feeling the scene conjured was always the same: awe, and nostalgia, in its original sense of a longing for home, a knowledge rooted in bone not books. This is where Homo sapiens are from. This is fundamental terrane, where all our stories begin. Sitting, I would picture the landscape as a time-lapse film, changing over millions of years with spectral life drifting across its shifting surface like smoke.

Humankind was forged in the tectonic crucible of the Rift Valley. The physical and cognitive advances that led to Homo sapiens were driven by changes of topography and climate right here, as Earth tipped on its axis and its surface roiled with volcanism, creating a complex, fragmented environment that demanded a creative, problem-solving creature.

Much of what we know of human evolution in the Rift Valley builds on the fossil finds and theoretical thinking of Richard Leakey, the renowned Kenyan palaeoanthropologist. Over the years I lived in Nairobi, we met and talked on various occasions and, one day in 2021, I visited him at his home, a few miles from Corner Baridi.

Millennia from now, the Rift Valley will have torn the landmass apart and become the floor of a new sea

It was a damp, chilly morning and, when I arrived, Leakey was finishing some toast with jam. Halved red grapefruit and a pot of stovetop espresso coffee sat on the Lazy Susan, a clutch bag stuffed with pills and tubes of Deep Heat and arthritis gel lay on the table among the breakfast debris, a walking stick hung from the doorknob behind him, and from the cuffs of his safari shorts extended two metal prosthetic legs, ending in a pair of brown leather shoes.

At the time, the 77-year-old had shown a knack for immortality, surviving the plane crash that took his legs in 1993, as well as bouts of skin cancer, transplants of his liver and kidneys, and COVID-19. He died in January 2022, but he was as energetic and enthused as I had ever seen him when we met. We discussed Nairobi weather, Kenyan politics, pandemic lockdowns, and his ongoing work. He described his ambitions for a £50 million museum of humankind, to be called Ngaren (meaning ‘the beginning’, in the Turkana language) and built close to his home on a patch of family land he planned to donate. It was the only place that made sense for the museum, he said, describing how the fossils he had uncovered over the years – among them, Omo 1 and the Homo erectus nicknamed Turkana Boy – were all phrases, sentences, or sometimes whole chapters in the story of where we came from, and who we are. ‘The magic of the Rift Valley is it’s the only place you can read the book,’ he told me.

School children gaze upon the skeleton of Homo erectus, nicknamed Turkana Boy, in the Nairobi National Museum, Kenya. Photo by Tony Karumba/Getty

Afterwards, I drove out to the spot where Leakey envisioned his museum being built: a dramatic basalt outcropping amid knee-high grass and claw-branched acacias, perched at the end of a ridge, the land falling precipitously away on three sides. It felt like an immense pulpit or perhaps, given Leakey’s paternal, didactic style, atheist beliefs, and academic rigour, a lectern.

A little way north of Leakey’s home, beyond Corner Baridi, a new railway tunnel burrows through the Ngong Hills to the foot of the escarpment where there is a town of low-slung concrete, and unfinished roofs punctured by reinforced steel bars. For most hours of most days, lorries rumble by, nose to tail, belching smoke and leaking oil. They ferry goods back and forth across the valley plains. The new railway will do the same, moving more stuff, more quickly. The railway, like the road, is indifferent to its surroundings, its berms, bridges, cuttings and tunnels defy topography, mock geography.

Running perpendicular to these transport arteries, pylons stride across the landscape, bringing electricity in high voltage lines from a wind farm in the far north to a new relay station at the foot of a dormant volcano. The promise of all this infrastructure increases the land’s value and, where once there were open plains, now there are fences, For Sale signs, and quarter-acre plots sold in their hundreds. Occasionally, geology intervenes, as it did early one March morning in 2018 when Eliud Njoroge Mbugua’s home disappeared.

It began with a feathering crack scurrying across his cement floor, which widened as the hours passed. Then the crack became a fissure, and eventually split his cinderblock shack apart, hauling its tin-roofed remnants into the depths. Close by, the highway was also torn in two. The next day, journalists launched drones into the sky capturing footage that revealed a lightning-bolt crack in the earth stretching hundreds of metres across the flat valley floor. Breathless news reports followed, mangling the science and making out that an apocalyptic splitting of the African continent was underway. They were half-right.

Ten thousand millennia from now, the Rift Valley will have torn the landmass apart and become the floor of a new sea. Where the reports were wrong, however, was in failing to recognise that Mbugua’s home had fallen victim to old tectonics, not new ones: heavy rains had washed away the compacted sediment on which his home had been built, revealing a fault line hidden below the surface. Sometimes, the changes here can point us forward in time, toward our endings. But more often, they point backwards.

Just a few years earlier, when I first moved to Nairobi, the railway line and pylons did not exist. Such is the velocity of change that, a generation ago, the nearby hardscrabble truck stop town of Mai Mahiu also did not exist. If we go four generations back, there were neither trucks nor the roads to carry them, neither fence posts nor brick homes. The land may look empty in this imagined past, but is not: pastoralist herders graze their cows, moving in search of grass and water for their cattle, sharing the valley with herds of elephant, giraffe and antelope, and the lions that stalk them.

Thousands of years earlier still, and the herders are gone, too. Their forebears are more than 1,000 miles to the northwest, grazing their herds on pastures that will become the Sahara as temperatures rise in the millennia following the end of the ice age, the great northern glaciers retreat and humidity falls, parching the African land. Instead, the valley is home to hunter-gatherers and fishermen who tread the land with a lighter foot.

Go further. At the dawn of the Holocene – the warm interglacial period that began 12,000 years ago and may be coming to a close – the Rift is different, filled with forests of cedar, yellowwood and olive, sedge in the understory. The temperature is cooler, the climate wetter. Dispersed communities of human hunter-gatherers, semi-nomads, live together, surviving on berries, grasses and meat, cooking with fire, hunting with sharpened stone. Others of us have already left during the preceding 40,000 years, moving north up the Rift to colonise what will come to be called the Middle East, Europe, Asia, the Americas.

As geology remakes the land, climate makes its power felt too, swinging between humidity and aridity

Some 200,000 years ago, the Rift is inhabited by the earliest creature that is undoubtedly us: the first Homo sapiens, like our ancestor found in Ethiopia. Scrubbed and dressed, he would not turn heads on the streets of modern-day Nairobi, London or New York. At this time, our ancestors are here, and only here: in the Rift.

Two million years ago, we are not alone. There are at least two species of our Homo genus sharing the Rift with the more ape-like, thicker-skulled and less dexterous members of the hominin family: Australopithecus and Paranthropus. A million years earlier, a small, ape-like Australopithecus (whom archaeologists will one day name ‘Lucy’) lopes about on two legs through a mid-Pliocene world that is even less recognisable, full of megafauna, forests and vast lakes.

Further still – rewinding into the deep time of geology and tectonics, through the Pliocene and Miocene – there is nothing we could call ‘us’ anymore. The landscape has shifted and changed. As geology remakes the land, climate makes its power felt too, swinging between humidity and aridity. Earth wobbles on its axis and spins through its orbit, bringing millennia-long periods of oscillation between wetness and dryness. The acute climate sensitivity of the equatorial valley means basin lakes become deserts, and salt pans fill with water.

On higher ground, trees and grasses engage in an endless waltz, ceding and gaining ground, as atmospheric carbon levels rise and fall, favouring one family of plant, then the other. Eventually, the Rift Valley itself is gone, closing up as Earth’s crust slumps back towards sea level and the magma beneath calms and subsides. A continent-spanning tropical forest, exuberant in its humidity, covers Africa from coast to coast. High in the branches of an immense tree sits a small ape, the common ancestor of human and chimpanzee before tectonics, celestial mechanics and climate conspire to draw us apart, beginning the long, slow process of splitting, separating, fissuring, that leads to today, tens of millions of years later, but perhaps at the same latitude and longitude of that immense tree: a degree and a half south, 36.5 degrees east, on a patch of scrubby grass at the edge of the Rift.

Comment Why does everything go back to Africa? Ask the white bourgeois liberals. Only they know the truth. R J Cook

February 8th 2023

Inside History

How Are US Government Documents Classified?
Here’s what qualifies documents as “Top Secret,” “Secret” and “Confidential”—and how they’re supposed to be handled.

How Angela Davis Ended Up on the FBI Most Wanted List
The scholar and activist was sought and then arrested by the FBI in 1970—the experience informed her life’s work.

Weird and Wondrous: the Evolution of Super Bowl Halftime Shows
The Big Game is this weekend. From a 3-D glasses experiment to ‘Left Shark,’ the halftime show has always captured the public’s imagination.

8 Black Inventors Who Made Daily Life Easier
Black innovators changed the way we live through their contributions, from the traffic light to the ironing board.

The Greatest Story Never Told
From Pulitzer Prize-winning journalist Nikole Hannah-Jones comes The 1619 Project, a docuseries based on the New York Times multimedia project that examines the legacy of slavery in America and its impact on our society today.

The First Valentine Was Sent From Prison

THE NEW YORK TIMES: 8 Places Across the US That Illuminate Black History

NATIONAL GEOGRAPHIC: A Mecca for Rap has Emerged in the Birthplace of Jazz and Blues

THE WASHINGTON POST: Where Did All the Strange State of the Union Traditions Come From?

February 7th 2023

6 Myths About the History of Black People in America

Six historians weigh in on the biggest misconceptions about Black history, including the Tuskegee experiment and enslaved people’s finances.

Vox

  • Karen Turner
  • Jessica Machado

To study American history is often an exercise in learning partial truths and patriotic fables. Textbooks and curricula throughout the country continue to center the white experience, with Black people often quarantined to a short section about slavery and quotes by Martin Luther King Jr. Many walk away from their high school history class — and through the world — with a severe lack of understanding of the history and perspective of Black people in America.

In the summer of 2019, the New York Times’s 1619 Project burst open a long-overdue conversation about how stories of Black Americans need to be told through the lens of Black Americans themselves. In this tradition, and in celebration of Black History Month, Vox has asked six Black scholars and historians about myths that persist about Black history. Ultimately, understanding Black history is more than learning about the brutality and oppression Black people have endured — it’s about the ways they have fought to survive and thrive in America.


Myth 1: That enslaved people didn’t have money

Enslaved people were money. Their bodies and labor were the capital that fueled the country’s founding and wealth.

But many also had money. Enslaved people actively participated in the informal and formal market economy. They saved money earned from overwork, from hiring themselves out, and through independent economic activities with banks, local merchants, and their enslavers. Elizabeth Keckley, a skilled seamstress whose dresses for Abraham Lincoln’s wife are displayed in Smithsonian museums, supported her enslaver’s entire family and still earned enough to pay for her freedom.

Free and enslaved market women dominated local marketplaces, including in Savannah and Charleston, controlling networks that crisscrossed the countryside. They ensured fresh supplies of fruits, vegetables, and eggs for the markets, as well as a steady flow of cash to enslaved people. Whites described these women as “loose” and “disorderly” to criticize their actions as unacceptable behavior for women, but white people of all classes depended on them for survival.

 Illustrated portrait of Elizabeth Keckley (1818-1907), a formerly enslaved woman who bought her freedom and became dressmaker for first lady Mary Todd Lincoln. Hulton Archive/Getty Images 

In fact, enslaved people also created financial institutions, especially mutual aid societies. Eliza Allen helped form at least three secret societies for women on her own and nearby plantations in Petersburg, Virginia. One of her societies, Sisters of Usefulness, could have had as many as two to three dozen members. Cities like Baltimore even passed laws against these societies — a sure sign of their popularity. Other cities reluctantly tolerated them, requiring that a white person be present at meetings. Enslaved people, however, found creative ways to conduct their societies under white people’s noses. Often, the treasurer’s ledger listed members by numbers so that, in case of discovery, members’ identities remained protected.

During the tumult of the Civil War, hundreds of thousands of Black people sought refuge behind Union lines. Most were impoverished, but a few managed to bring with them wealth they had stashed under beds, in private chests, and in other hiding places. After the war, Black people fought through the Southern Claims Commission for the return of the wealth Union and Confederate soldiers impounded or outright stole.

Given the resurgence of attention on reparations for slavery and the racial wealth gap, it is important to recall the long history of Black people’s engagement with the US economy — not just as property, but as savers, spenders, and small businesspeople.

Shennette Garrett-Scott is an associate professor of history and African American Studies at the University of Mississippi and the author of Banking on Freedom: Black Women in US Finance Before the New Deal.


Myth 2: That Black revolutionary soldiers were patriots

Much is made about how colonial Black Americans — some free, some enslaved — fought during the American Revolution. Black revolutionary soldiers are usually called Black Patriots. But the term Patriot is reserved within revolutionary discourse to refer to the men of the 13 colonies who believed in the ideas expressed in the Declaration of Independence: that America should be an independent country, free from Britain. These persons were willing to fight for this cause, join the Continental Army, and, for their sacrifice, are forever considered Patriots. That’s why the term Black Patriot is a myth — it implies that Black and white revolutionary soldiers fought for the same reasons.

 Painting of the 1770 Boston Massacre showing Crispus Attucks, one of the leaders of the demonstration and one of the five men killed by the gunfire of the British troops. Bettmann Archive/Getty Images 

First off, Black revolutionary soldiers did not fight out of love for a country that enslaved and oppressed them. Black revolutionary soldiers were fighting for freedom — not for America, but for themselves and the race as a whole. In fact, the American Revolution is a case study of interest convergence. Interest convergence denotes that within racial states such as the 13 colonies, any progress made for Black people can only be made if that progress also benefits the dominant culture — in this case the liberation of the white colonists of America. In other words, colonists’ enlistment of Black people was not out of some moral mandate, but based on manpower needs to win the war.

In 1775, Lord Dunmore, the royal governor of Virginia who wanted to quickly end the war, issued a proclamation to free enslaved Black people if they defected from the colonies and fought for the British army. In response, George Washington revised the policy that barred Black persons (free or enslaved) from joining his Continental Army. His reversal was based on a convergence of his interests: competing with a growing British military, securing the slave economy, and increasing labor needs for the Continental Army. When enslaved persons left the plantation, this caused serious social and economic unrest in the colonies. These defections encouraged many white plantation owners to join the Patriot cause even if they had previously held reservations.

Washington also saw other benefits in Black enlistment: White revolutionary soldiers only fought in three- to four-month increments and returned to their farms or plantations, but many Black soldiers could serve longer terms. Black soldiers were essential for the war effort, and the need to win the war became greater than racial or racist ideology.

Interests converged with those of Black revolutionary soldiers as well. Once the American colonies promised freedom, about a quarter of the Continental Army became Black; before that, more Black people defected to the British military for a chance to be free. Black revolutionary soldiers understood the stakes of the war and realized that they could also benefit and leave bondage. As historian Gary Nash has said, the Black revolutionary soldier “can best be understood by realizing that his major loyalty was not to a place, not to a people, but to a principle.”

Black people played a dual role — service with the American forces and fleeing to the British — both for freedom. The notion of the Black Patriot is a misused term. In many ways, while the majority of the whites were fighting in the American Revolution, Black revolutionary soldiers were fighting the “African Americans’ Revolution.”

LaGarrett King is an education professor at the University of Missouri Columbia and the founding director of the Carter Center for K-12 Black History Education.


Myth 3: That Black men were injected with syphilis in the Tuskegee experiment

A dangerous myth that continues to haunt Black Americans is the belief that the government infected 600 Black men in Macon County, Alabama, with syphilis. This myth has created generations of African Americans with a healthy distrust of the American medical profession. While these men weren’t injected with syphilis, their story does illuminate an important truth: America’s medical past is steeped in racialized terror and the exploitation of Black bodies.

The Tuskegee Study of Untreated Syphilis in the Negro Male emerged from a study group formed in 1932 and connected with the venereal disease section of the US Public Health Service. The purpose of the experiment was to observe the effects of untreated syphilis, and it was conducted at what is now Tuskegee University, a historically Black university in Macon County, Alabama.

The 600 Black men in the experiment were not given syphilis. Instead, 399 men already had stages of the disease, and the 201 who did not served as a control group. Both groups were denied treatment of any kind for the 40 years they were observed. The men were subjected to humiliating and often painfully invasive tests and experiments, including spinal taps.

Mostly uneducated and impoverished sharecroppers, these men were lured with free medical examinations, hot meals, free treatment for minor injuries, rides to and from the hospital, and guaranteed burial stipends (up to $50) to be paid to their survivors. The study also did not occur in total secret: several African American health workers and educators associated with the Tuskegee Institute assisted in it.

By the end of the study in the summer of 1972, after a whistleblower exposed the story in national headlines, only 74 of the test subjects were still alive. Of the original 399 infected men, 28 had died of syphilis and 100 others from related complications. Forty of the men’s wives had been infected, and an estimated 19 of their children were born with congenital syphilis.

As a result of the case, Congress passed the National Research Act in 1974, tightening federal oversight of research on human subjects; that oversight role is now carried out by the Office for Human Research Protections (OHRP) within the US Department of Health and Human Services. The case also solidified the idea of African Americans being cast and used as medical guinea pigs.

An unfortunate side effect of both the truth of medical racism and the myth of syphilis injection, however, is that they reinforce some African Americans’ distrust of the medical system, leading them to avoid seeking care and, as a result, to put themselves in danger.

Sowande Mustakeem is an associate professor of History and African & African American Studies at Washington University in St. Louis.


Myth 4: That Black people in early Jim Crow America didn’t fight back

It is well-known that African Americans faced the constant threat of ritualistic public executions by white mobs, unpunished attacks by individuals, and police brutality in Jim Crow America. But the notion that they did not fight back is a myth that persists. In an effort to find lawful ways to address such events, some Black people made legalistic appeals to convince police and civic leaders their rights and lives should be protected. Yet the crushing weight of a hostile criminal justice system and the rigidity of the color line often muted those petitions, leaving Black people vulnerable to more mistreatment and murder.

 An unidentified member of the Detroit chapter of the Black Panther Party stands guard with a shotgun on December 11, 1969. Bettmann Archive/Getty Images 

In the face of this violence, some African Americans prepared themselves physically and psychologically for the abuse they expected — and they fought back. Distressed by public racial violence and unwilling to accept it, many adhered to emerging ideologies of outright rebellion, particularly after the turn of the 20th century and the emergence of the “New Negro.” Urban, more educated than their parents, and often trained militarily, a generation coming of age following World War I sought to secure themselves in the only ways left. Many believed, as Marcus Garvey once told a Harlem audience, that Black folks would never gain freedom “by praying for it.”

For New Negroes, the comparatively tame efforts of groups like the NAACP were not urgent enough. Most notably, they defended themselves fiercely nationwide during the bloodshed of the Red Summer of 1919 when whites attacked African Americans in multiple cities across the country. Whites may have initiated most race riots in the early Jim Crow era, but some also happened as Black people rejected the limitations placed on their life, leisure, and labor, and when they refused to fold under the weight of white supremacy. The magnitude of racial and state violence often came down upon Black people who defended themselves from police and citizens, but that did not stop some from sparking personal and collective insurrections.

Douglas J. Flowe is an assistant professor of history at Washington University in St. Louis.


Myth 5: That crack in the “ghetto” was the largest drug crisis of the 1980s

The bodies of people of color have a pernicious history of total exploitation and criminalization in the US. Like total war, total exploitation enlists and mobilizes the resources of mainstream society to obliterate the resources and infrastructure of the vulnerable. This has been done to Black people through a robust prison industrial complex that feeds on their vilification, incarceration, disenfranchisement, and erasure. And the crack epidemic of the late 1980s and ’90s is a clear example of this cycle.

Even though more white people than Black people reported using crack in a 1991 National Institute on Drug Abuse survey, Black people were sentenced for crack offenses eight times more often than whites. Meanwhile, there was a corresponding cocaine epidemic in white suburbs and on college campuses, which helped push the US to install harsher penalties for crack than for powder cocaine. For example, in 1986, before the enactment of federal mandatory minimum sentencing for crack cocaine offenses, the average federal drug sentence for African Americans was 11 percent higher than for whites. Four years later, the average federal drug sentence for African Americans was 49 percent higher.

Even through the ’90s and beyond, the media and supposed liberal allies, like Hillary Clinton, designated Black children and teens as drug-dealing “superpredators” to mostly white audiences. The criminalization of people of color during the crack epidemic made mainstream white Americans comfortable knowing that this was a contained black-on-black problem.

It also left white America unprepared to deal with the approach of the opioid epidemic, which is often a white-on-white crime whose dealers will evade prison (see: the Sacklers, the billionaire family behind OxyContin who have served no jail time; and Johnson & Johnson, which got a $107 million break in fines when it was found liable for marketing practices that led to thousands of overdose deaths). Unlike Black Americans who are sent to prison, these white dealers retain their right to vote, lobby, and hold on to their wealth.

Jason Allen is a public historian and facilitator at xCHANGEs, a cultural diversity and inclusion training consultancy.


Myth 6: That all Black people were enslaved until emancipation

One of the biggest myths about the history of Black people in America is that all were enslaved until the Emancipation Proclamation, or Juneteenth Day.

In reality, free Black and Black-white biracial communities existed in states such as Louisiana, Maryland, Virginia, and Ohio well before abolition. For example, Anthony Johnson, named Antonio the Negro on the 1625 census, was listed on this document as a servant. By 1640, he and his wife owned and managed a large plot of land in Virginia.

 A group of free African Americans in an unknown city, circa 1860. Bettmann Archive/Getty Images 

Some enslaved Africans were able to sell their labor or craftsmanship to others, thereby earning enough money to purchase their freedom. Such was the case for Richard Allen, who paid for his freedom in 1786 and co-founded the African Methodist Episcopal Church less than a decade later. After the American Revolutionary War, Robert Carter III committed the largest manumission — or freeing of slaves — before Lincoln’s Emancipation Proclamation, freeing the more than 500 people he enslaved.

Not all emancipations were large. Individuals or families were sometimes freed upon the death of their enslaver and his family. And many escaped and lived free in the North or in Canada. Finally, there were generations of children born in free Black and biracial communities, many of whom never knew slavery.

Eventually, slave states established expulsion laws making residency there for free Black people illegal. Some filed petitions to remain near enslaved family members, while others moved West or North. And in the Northeast, many free Blacks formed benevolent organizations such as the Free African Union Society for support and in some cases repatriation.

The Emancipation Proclamation in 1863 — and the announcement of emancipation in Texas two years later — allowed millions of enslaved people to join the ranks of already free Black Americans.

Dale Allender is an associate professor at California State University Sacramento.

February 6th 2023

What Have Strikes Achieved?

Withdrawing labour is an age-old response to workplace grievances. But how old, and to what effect?

History Today | Published in History Today Volume 73 Issue 1, January 2023

‘Una huelga de obreros en Vizcaya (A strike of workers in Biscay)’, Vicente Cutanda, 1892. Museo del Prado/Wiki Commons.

‘In Aristophanes’ Lysistrata, the women of Greece unite together in a sex-strike’

Lynette Mitchell, Professor in Greek History and Politics at the University of Exeter

‘Strike action’ – the withdrawal of labour as a protest – was known in the ancient world. The Greeks, however, did not generally form themselves into professional guilds, at least not before the third century BC when the associations of ‘the musicians of Dionysus’ were formed alongside the growth in the number of festivals.

This did not mean, however, that the Greeks were oblivious to the significance of the withdrawal of labour. The epic poem the Iliad begins with Achilles – the best of the Greek fighters – withdrawing from battle against the Trojans because he has been deprived of his war-prize, the concubine Briseis.

Withdrawing one’s skills as a fighter in warfare was a significant bargaining tool. At the beginning of the fourth century BC, the Greek army of the Ten Thousand, who were employed by Cyrus the Younger in the war against his brother, Artaxerxes II, threatened to abandon the Persian prince unless he raised their pay to a level commensurate with the danger of engaging the ‘King of Kings’ in battle (they had originally been employed on another pretext and a different pay scale). In 326 BC, when the soldiers of Alexander the Great reached the River Hyphasis in the Hindu Kush, they refused to cross it and penetrate further east into northern India, thus forcing Alexander to give up his pursuit of limitless glory. The writer Arrian says that this was his only defeat.

War brought glory, but it also brought misery. In Aristophanes’ comedy Lysistrata, produced in 411 BC, the women of Greece unite together in a sex-strike in order to force their husbands to give up their wars with each other. Although the women struggle to maintain discipline among their own ranks (some of the most comic scenes of the play describe women sneaking away from the Acropolis, which the strikers have occupied), the eponymous Lysistrata, a woman of intelligence and determination, is asked to arbitrate between the Greek cities in order to bring the strike to an end; she presents the warring men with a beautiful girl, Reconciliation, and the play ends with the Spartans and Athenians remembering the wars fought together against the Persians. Peace is restored.

‘During the reign of Ramesses III underpayment had become typical’

Dan Potter, Assistant Curator of the Ancient Mediterranean collections at National Museums Scotland

Early in the 29th year of the reign of Ramesses III (c.1153 BC), the royal tomb builders of Deir el-Medina grew increasingly concerned about the payment of their wages. The workmen were paid in sacks of barley and wheat, which was not just their families’ food, but also currency. Late deliveries and underpayment had become typical, leading one scribe to keep a detailed record of the arrears. Supply issues were linked to the agricultural calendar, but the consistent problems of this period show it was also a failure of state. An initial complaint by the workers was resolved but the causes were not dealt with. With the approval of their ‘captains’ (a three-man leadership group), the workers staged eight days of action; they ‘passed the walls’ of their secluded village and walked down to nearby royal temples chanting ‘We are hungry!’ They held sit-ins at several temples, but officials remained unable, or unwilling, to assist. A torchlit demonstration later in the week forced through one month’s grain payment.

In the following months, they ‘passed the walls’ multiple times. Eventually, the recently promoted vizier, To, wrote to them explaining that the royal granaries were empty. He apologised with a politician’s answer for the ages: ‘It was not because there was nothing to bring you that I did not come.’ In reality, To was probably busy in the delta capital at the King’s Heb-Sed (royal jubilee). To rustled together a half payment to appease the striking workers. After this derisory delivery, the angry Chief Workman Khons proposed a door-to-door campaign against local officials which was only halted by his fellow captain Amunnakht, the scribe who recorded much of the detail we have about the strikes.

Even after a bulk reimbursement was paid early in year 30, inconsistent payments resulted in more industrial action in the ensuing years. The strikes were indicative of increasing regional instability, as Waset (Luxor) experienced food shortages, inflation, incursions from nomadic tribes, tomb robberies and more downing of tools. The workers’ village was partially abandoned around 70 years later.

‘Success depends on the response of the public and the possibility of favourable government intervention’

Alastair Reid, Fellow of Girton College, Cambridge

The word strike usually brings to mind a mass strike which goes on for a long time and completely shuts down an industry, such as the British coal miners’ strikes of the 1920s and the 1970s. These sorts of disputes have rarely achieved anything positive: they are costly for the incomes of the strikers and their families, and if their unions could afford to give the strikers some support, that only drained the organisation’s funds. The stress caused has often led to splits within the union and friction with other organisations.

It is noticeable, therefore, that in recent years trade unions calling large numbers of their members out on strike have tended to focus on limited days of action rather than indefinite closures.

Sometimes the wider public has been sympathetic towards the strikers. This was the case during the London dock strike of 1889. However, when the disruption has affected public services, as in the ‘Winter of Discontent’ in 1978-79, strikers have become very unpopular. Often, when this sort of strike action achieved positive results for trade unionists, it was when the government had reason to intervene in their favour: during the First World War for example, when maintaining military production was essential.

The mass withdrawal of labour is not the only form of strike action that has been seen in the past. Highly skilled unions such as engineers and printers developed a tactic known as the ‘strike in detail’, during which they used their unemployment funds to support members in leaving blacklisted firms and thus effectively targeted employers one at a time. Another possibility is the opposite of a strike – a ‘work in’ – as at the Upper Clyde Shipbuilders in 1971, when a significant part of the workforce refused to accept the closure of the yards and won significant public support for their positive attitude. In general, the mass strike is a dangerous weapon that can easily backfire: success depends on the response of the public and the possibility of favourable government intervention.

‘There was one clear winner: the Chinese Communist Party’

Elisabeth Forster, Lecturer in Chinese History at the University of Southampton

Gu Zhenghong was shot dead by a foreman on 15 May 1925, triggering China’s anti-imperialist May 30th Movement. Gu was a worker on strike at a textile mill in Shanghai. The mill was Japanese-owned, Japan being among the countries that had semi-colonised China. Outraged by Gu’s death – and the imperialism behind it – students and workers demonstrated in Shanghai’s Foreign Settlement on 30 May. At some point British police opened fire, leaving more than ten demonstrators dead. In response, a general strike was called, with workers’, students’ and merchants’ unions, the Shanghai General Chamber of Commerce, as well as the Nationalist Party (GMD) and the Chinese Communist Party among its leaders.

Among the strikers were students, merchants and workers in various sectors, such as seamen, workers at the wharves, at phone companies, power plants, buses and trams. Not all sectors participated and certain individuals broke the strike, some of whom were then kidnapped by their unions. The strikes were accompanied by boycotts of foreign goods and sometimes strikers clashed violently with the authorities.

The demands were broad and were not confined to work-related issues, but also covered anti-imperialist goals, such as an end to extraterritoriality. By August, enthusiasm for the strikes had waned. Merchants were tired of their financial losses. Some of the workers started rioting against their union, since strike pay had dried up. The strikes’ organisers therefore had to settle the industrial (and political) dispute.

Contemporaries were unsure if the strikes had achieved their goal. Strike demands had been reduced and not all were met. Many new unions had been founded, but some were also closed by the authorities, and labour movement organisers had to go underground or face arrest and execution. But there was one clear winner: the Chinese Communist Party. If workers had previously mistrusted communists as hairy, badly dressed ‘extremists’, the Party was now acknowledged as a leader of labour. Imperialism in China would end, but not until after the Second World War and the era of global decolonisation.


February 2nd 2023

It’s been 230 years since British pirates robbed the US of the metric system

How did the world’s largest economy get stuck with retro measurement?

Iain Thomson

Sun 22 Jan 2023 // 08:38 UTC

Feature In 1793, French scientist Joseph Dombey sailed for the newly formed United States at the request of Thomas Jefferson carrying two objects that could have changed America. He never made it, and now the US is stuck with a retro version of measurement that is unique in the modern world.

The first, a metal cylinder, was exactly one kilogram in mass. The second was a copper rod the length of a newly proposed distance measurement, the meter.

Jefferson, an avid Francophile, was keen to bring the rationality of the metric system to the US. But Dombey’s ship was blown off course, captured by English privateers (pirates with government sanction), and the scientist died on the island of Montserrat while waiting to be ransomed.

And so America is one of a handful of countries that maintains its own unique forms of weights and measures.

The reason for this history lesson? Over the last holiday period this hack has been cooking and is sick of this pounds/ounces/odd pints business – and don’t even get me started on using cups as a unit of measurement.

It’s time for America to get out of the Stone Age and get on board with the International System of Units (SI), as the metric system is now formally known.

There’s a certain amount of hypocrisy here – I’m British and we still cling to our pints, miles per hour, and I’m told drug dealers still deal in eighths and ‘teenths in the land of my birth. But the American system is bonkers, has cost the country many millions of dollars, an increasing amount of influence, and needs to be changed.

Brits and Americans…

The cylinder and rod Dombey was carrying, the former now owned by the US National Institute of Standards and Technology, were requested by Jefferson because the British system in place was utterly irrational.

When British colonists settled in the Americas, they brought with them a bastardized version of the mother country’s weights, measures and currencies. A Scottish pint, for example, was almost triple the size of an English equivalent until 1824, which speaks volumes about the drinking culture north of the border.

British measurements were initially standardized in the UK’s colonies, but it was a curious system, taking in Roman, Frankish, and frankly bizarre additions. Until 1971, in the UK a pound consisted of 240 pence, with 12 pence to the shilling and 20 shillings to the pound.
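
To make that mixed-radix money arithmetic concrete, here is a minimal Python sketch (my illustration, not anything from the article) that flattens a pounds/shillings/pence amount into old pence using only the ratios quoted above: 12 pence to the shilling and 20 shillings to the pound.

    # Pre-1971 British currency ratios quoted above.
    PENCE_PER_SHILLING = 12
    SHILLINGS_PER_POUND = 20
    PENCE_PER_POUND = PENCE_PER_SHILLING * SHILLINGS_PER_POUND  # 240

    def to_pence(pounds: int, shillings: int, pence: int) -> int:
        """Flatten a pounds/shillings/pence amount into old pence."""
        return pounds * PENCE_PER_POUND + shillings * PENCE_PER_SHILLING + pence

    # Two pounds, three shillings and sixpence: a mixed-radix sum,
    # not a simple decimal shift.
    print(to_pence(2, 3, 6))  # 522

A decimal currency reduces the same job to moving a decimal point, which is the convenience Jefferson was after.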

To make things even more confusing, individual settlements adopted their own local weights and measures. From 1700, Pennsylvania took control of its own measurements and other areas soon followed. But this mishmash of coins, distances and weights held the country back and Jefferson scored his first success in the foundation of a decimal system for the dollar.

“I question if a common measure of more convenient size than the Dollar could be proposed. The value of 100, 1,000, 10,000 dollars is well estimated by the mind; so is that of a tenth or hundredth of a dollar. Few transactions are above or below these limits,” he said [PDF].

So of course he’s on the least popular note

Jefferson wanted something new, more rational, and he was not alone. In the first ever State of the Union address in 1790, George Washington observed: “Uniformity in the Currency, Weights and Measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to.”

America was a new country, and owed a large part of the success of the Revolutionary War to France, in particular the French navy. The two countries were close, and the metric system appealed to Jefferson’s mindset, and to many in the new nation.

And this desire for change wasn’t just limited to weights and measures. Also in 1793, Alexander Hamilton hired Noah Webster, who as a lexicographer and ardent revolutionary wanted America to cast off the remnants of the old colonial power. Webster wrote a dictionary, current versions of which can be found in almost every classroom in the US.

And then politics and Napoleon happened

Jefferson asked the French for other samples, including a copper meter and a copy of the kilogram, which were sent in 1795, but by then things had changed somewhat: he was no longer running the show. On January 2, 1794, he was replaced as US Secretary of State by fellow Founding Father Edmund Randolph, who was much less keen on the government getting involved in such things.

To make matters worse, relations between America and France were deteriorating sharply. The French government felt that the newly formed nation wasn’t being supportive enough in helping Gallic forces fight the British in the largely European War of the First Coalition. In something of a hissy fit, the French government declined to invite representatives from the US to the international gathering at Paris in 1798-99 that set the initial standards for the metric system.

Jefferson’s plans were kicked into committee and while a form of standardization based on pounds and ounces was approved by the House, the Senate declined to rule on the matter.

Not that it mattered for much longer. In 1812, Napoleon effectively abolished the enforcement of the metric system in France. Napoleon was known as Le Petit Caporal, and multiple reports put him at five foot two – though, as we now know, he was around average height for the time.

After the French dictator was defeated, the case for the metric system in France sank into near-limbo at first, as it did in the US. But it gradually spread across Europe because you can’t keep a good idea down and science and industrialization were demanding it.

Welcome to the rational world

What has kept the metric system going is its inherent rationality. Rather than use a hodgepodge of local systems, why not build one based on measurements everyone could agree on, configured around the number 10 – which neatly matches the number of digits on most people’s hands?

Above all, it’s universal: a gram means a gram in any culture. Meanwhile, buy a pint in the UK and you’ll get 20oz of beer; do the same in America and, depending where you are, you’ll likely get 16oz – a fact that still shocks British drinkers. The differences are also there with tons, and with the odd concept of the stone as a unit of weight.

Metric is by no means perfect. For example, in the initial French system the kilogram, or grave as it was first known, was the mass of one liter of water – making a gram the mass of one cubic centimeter. A meter was one ten-millionth of the distance between the pole and the equator – although the French weren’t exactly sure how far that was at the time.

The original metre carved into the Place Vendôme in Paris, some adjustment required

Since then the system has been revised repeatedly as more natural constants have been measured with ever greater precision. For example, a meter is now 1/299,792,458 of the distance light travels in a second. Since 1967, the second itself has been defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom,” though better measurement by atomic clocks may change this.

The chief adherents of the metric system initially were scientists, who desperately needed universal units of measurement to compare notes and replicate experiments without the errors common when converting from one measuring system to another.

This is down to convoluted systems like 12 inches in a foot, three feet in a yard, and 1,760 yards in a mile, compared to 100 centimeters in a meter and 1,000 meters in a kilometer. A US pound is 0.453592 kilograms, to six figures at least; these are the kinds of numbers that cause mistakes to be made.
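To make the contrast concrete, here is a minimal Python sketch – the conversion factors are the standard published ones, and the function names are purely illustrative – showing how many arbitrary multipliers the customary chain needs compared with metric’s powers of ten:

# Illustrative only: the bookkeeping needed under US customary units
# versus metric. Conversion factors are the standard published values.

INCHES_PER_FOOT = 12
FEET_PER_YARD = 3
YARDS_PER_MILE = 1760
KG_PER_POUND = 0.453592   # the six-figure factor quoted above

def inches_to_miles(inches: float) -> float:
    # Three different multipliers, none of them a power of ten.
    return inches / INCHES_PER_FOOT / FEET_PER_YARD / YARDS_PER_MILE

def millimeters_to_kilometers(mm: float) -> float:
    # One multiplier: shift the decimal point six places.
    return mm / 1_000_000

def pounds_to_kilograms(pounds: float) -> float:
    return pounds * KG_PER_POUND

print(inches_to_miles(63_360))               # 1.0 (63,360 inches = 1 mile)
print(millimeters_to_kilometers(1_000_000))  # 1.0
print(pounds_to_kilograms(1))                # 0.453592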

Most famous in recent memory was the loss of the Mars Climate Orbiter in 1999. The $125 million space probe broke up in the Martian atmosphere after engineers at Lockheed Martin, which built the spacecraft, worked in US customary units rather than the metric units used by others on the project. The probe descended too close to the surface and was lost.

A more down-to-earth example came in 1983 with the Air Canada “Gimli Glider” incident, in which the pilots of a Boeing 767 underestimated the amount of fuel they needed because the fuel load had been calculated in pounds while the aircraft’s systems measured it in kilograms. With roughly 2.2 pounds to the kilogram, the aircraft took on less than half the fuel it needed, and the engines failed at 41,000 feet (12,500m).

The two pilots were forced to glide the aircraft, carrying 69 souls, to an old air force base at Gimli that, luckily, one of the pilots had served at. The base was by then being used as a drag strip, but thankfully there were only a few minor injuries.
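A back-of-the-envelope sketch shows why the shortfall was so severe. The figure below is a hypothetical requirement rather than the flight’s actual number, but the arithmetic is the same: treat a requirement in kilograms as if it were pounds and you load only about 45 percent of what you need.

# Illustrative only: what happens when a fuel requirement in kilograms
# is mistakenly satisfied with the same number of pounds.
# (required_kg is a hypothetical figure, not the actual flight's.)

KG_PER_POUND = 0.453592

required_kg = 20_000
loaded_pounds = required_kg                # the unit mix-up: right number, wrong unit
loaded_kg = loaded_pounds * KG_PER_POUND   # what actually ends up in the tanks

shortfall = loaded_kg / required_kg
print(f"needed {required_kg} kg, loaded {loaded_kg:.0f} kg ({shortfall:.0%})")
# needed 20000 kg, loaded 9072 kg (45%)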

And don’t even get me started on Celsius and Fahrenheit. In Celsius, water freezes at 0 degrees and boils at 100 at sea level, compared to 32 and 212 in Fahrenheit. It’s a nonsensical system, and the US is now virtually the only nation in the world using Fahrenheit for everyday temperatures.
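Because both scales are linear, those two pairs of fixed points pin the conversion down completely – a minimal sketch, with function names chosen purely for illustration:

# Derived from the fixed points above: 0 °C / 32 °F for freezing and
# 100 °C / 212 °F for boiling, so one Celsius degree is 180/100 = 9/5
# Fahrenheit degrees, with an offset of 32.

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9

print(celsius_to_fahrenheit(0))               # 32.0
print(celsius_to_fahrenheit(100))             # 212.0
print(round(fahrenheit_to_celsius(98.6), 1))  # 37.0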

The slow and winding road

Back in 1821, Secretary of State John Quincy Adams reported to Congress on the measurements issue. In his seminal study on the topic he concluded that, while a metric system based on natural constants was preferable, changing from the current regime would involve a hugely disruptive amount of kerfuffle, and he wasn’t sure Congress had the right to overrule the systems used by individual states.

The disruption would have been large. The vast majority of America’s high value trade was with the UK and Canada, neither of which were metric.

In addition, American and British manufacturers rather liked the old ways. With the existing system, companies manufactured parts to their own specifications, meaning if you wanted spares you had to go buy them from the original manufacturer. This proved highly lucrative.

By the middle of the 19th century, things were changing… slightly. US government scientists did start using some metric measurements for things like mapping out territory, even though the domestic system remained more common for day-to-day use. The Civil War also spurred a push towards standardization, with some states, like Utah, briefly mandating the system.

Two big changes came in the 20th century, following the two World Wars. The lack of interchangeability of parts, particularly bolt threading, had seriously hampered the Allied forces. In 1947, America joined the International Organization for Standardization and bolt threads went metric. Today the US Army uses metric to better integrate with NATO allies.

This has continued ever since, as American manufacturers realized they would have to accommodate the new system if they wanted to sell more kit abroad. Parts are still being manufactured to US measurements today, particularly in some industries, but there is at least a standardized system for converting these to metric.

In the 1960s, the metric system was formally renamed Le Système international d’unités (the International System of Units), or SI, and things started moving again in America. After Congressional study, President Gerald Ford signed the Metric Conversion Act in 1975, setting a plan for America to finally go metric as “the preferred system of weights and measures for United States trade and commerce.”

But the plan had some drawbacks. Firstly, the system was voluntary, which massively slowed adoption. Secondly, the incoming president, Jimmy Carter, was a strong proponent of the system, and that alone was enough for the opposition in Congress to largely oppose the plan.

President Reagan shut down most of the moves toward metric in 1982, but his successor, George H. W. Bush, revived some of the plans in 1991, ordering US government departments to move over to metric as far as possible. The issue has been kicked down the road ever since.

Different cultures, different customs

These days the arguments over metric versus American measurements are more fraught, having become a political issue between left and right. Witness Tucker Carlson’s cringe-worthy rant in which he describes metric as “the yoke of tyranny,” hilariously mispronouncing “kilograms” as “kailograms.”

“What in the world is he even talking about?” – George Takei (@GeorgeTakei), July 25, 2019 (pic.twitter.com/KhL8eS7mO1)

Given that trust-fund kid Carlson was educated in a Swiss boarding school, he knows how it’s pronounced, but never let the facts get in the way of invective.

As such, it seems unlikely that we’ll see anything change soon. But that day is coming – America is no longer the manufacturing giant it was, and China is perfectly happy with the metric system, although it maintains other measurements for some domestic uses, much as Britain does with pints and miles.

There’s really no logical reason to not go metric – it’s a simple, universal system used by every nation in the world except for the US, Liberia and Myanmar. That’s hardly august company for the Land of the Free.

It will be a long, slow process. No country has managed a full shift to metric in less than a generation – for most it took two or more – and the UK seems to be going backwards. Now-former Prime Minister Boris Johnson was keen to see a return of the old UK Imperial measurements in Britain, which make the current American system look positively rational.

It may take generations before the issue is resolved in the UK, and longer still for the US. It may, in fact, never happen in America, but the SI system makes sense, is logically sound, and will remain the language of science, medicine and engineering for the vast majority of the world.

If the US doesn’t want to play catch-up with the rest of the world, it will have to take rational measurements seriously. But that day isn’t coming soon, so in the meantime this hack will have to keep using old cookbooks, and we’ll face more measurement mistakes together. ®

January 28th 2023

Science & Technology

The Colonial History of the Telegraph

Gutta-percha, a natural resin, enabled European countries to communicate with their colonial outposts around the world.


An old morse key telegraph

Getty

By: Livia Gershon

January 21, 2023


Long before the internet, the telegraph brought much of the world together in a communications network. And, as historian John Tully writes, the new nineteenth-century technology was deeply entangled with colonialism, both in the uses it was put to and the raw material that made it possible—the now-obscure natural plastic gutta-percha.

Tully writes that the resin product, made from the sap of certain Southeast Asian trees, is similar to rubber, but without the bounce. When warmed in hot water, it becomes pliable before hardening again as it cools. It’s resistant to both water and acid. For centuries, Malay people had used the resin to make various tools. When Europeans learned about its uses in the nineteenth century, they adopted it for everything from shoe soles to water pipes. It even became part of the slang of the day—in the 1860s, New Englanders might refer to someone they disliked as an “old gutta-percha.” Perhaps most importantly, gutta-percha was perfect for coating copper telegraph wire, replacing much less efficient insulators like tarred cotton or hemp. It was especially important in protecting undersea cables, which simply wouldn’t have been practical without it.


And those undersea cables became a key part of colonial governance in the second half of the nineteenth century. Prior to the invention of the electric telegraph, Tully writes, it could take six months for news from a colonial outpost to reach the mother country, making imperial control difficult. For example, when Java’s Prince Diponegoro led an uprising against Dutch colonists in 1825, the Dutch government didn’t find out for months, delaying the arrival of reinforcements.

Then, in 1857, Indians rebelled against the rule of the British East India Company. This led panicked colonists to demand an expanded telegraph system. By 1865, Karachi had a near-instant communications line to London. Just a decade later, more than 100,000 miles of cable laid across seabeds brought Australia, South Africa, Newfoundland, and many places in between, into a global communication network largely run by colonial powers. Tully argues that none of this would have been possible without gutta-percha.

But the demand for gutta-percha was bad news for the rainforests where it was found. Tens of millions of trees were felled to extract the resin. Even a large tree might yield less than a pound of the stuff, and the growing telegraph system used as much as four million pounds a year. By the 1890s, ancient forests were in ruins and the species that produced gutta-percha were so rare that some cable companies had to decline projects because they couldn’t get enough of it.

The trees weren’t driven completely extinct, and, eventually, the wireless telegraph and synthetic plastics made its use in telegraph cables obsolete. Today, the resin is only used in certain specialty areas such as dentistry. Yet sadly, the decimation of the trees prefigured the fate of rainforests around the world under colonial and neocolonial global systems for more than a century to come.


January 17th 2023

The Tudor Roots of Modern Billionaires’ Philanthropy

The debate over how to manage the wealthy’s fortunes after their deaths traces its roots to Henry VIII and Elizabeth I

Nuri Heckler, The Conversation January 13, 2023


L to R: Andrew Carnegie, Elizabeth I, Henry VIII and Henry Ford Illustration by Meilan Solly / Photos via Wikimedia Commons under public domain

More than 230 of the world’s wealthiest people, including Elon Musk, Bill Gates and Warren Buffett, have promised to give at least half of their fortunes to charity within their lifetimes or in their wills by signing the Giving Pledge. Some of the most affluent, including Jeff Bezos (who hadn’t signed the Giving Pledge as of early 2023) and his ex-wife MacKenzie Scott (who did sign the pledge after their divorce in 2019) have declared that they will go further by giving most of their fortunes to charity before they die.

This movement stands in contrast to practices of many of the philanthropists of the late 19th and early 20th centuries. Industrial titans like oil baron John D. Rockefeller, automotive entrepreneur Henry Ford and steel magnate Andrew Carnegie established massive foundations that to this day have big pots of money at their disposal despite decades of charitable grantmaking. This kind of control over funds after death is usually illegal because of a “you can’t take it with you” legal doctrine that originated in England 500 years ago.

Known as the Rule Against Perpetuities, it holds that control over property must cease within 21 years of a death. But there is a loophole in that rule for money given to charities, which theoretically can flow forever. Without it, many of the largest American and British foundations would have closed their doors after disbursing all their funds long ago.

As a lawyer and researcher who studies nonprofit law and history, I wondered why American donors get to give from the grave.

Henry VIII had his eye on property

In a recent working paper that I wrote with my colleague Angela Eikenberry and Kenya Love, a graduate student, we explained that this debate goes back to the court of Tudor monarch Henry VIII.

The Rule Against Perpetuities developed in response to political upheaval in the 1530s. The old feudal law made it almost impossible for most properties to be sold, foreclosed upon or have their ownership changed in any way.

At the time, a small number of people and the Catholic Church controlled most of the wealth in England. Henry wanted to end this practice because it was difficult to tax property that never transferred, and property owners were mostly unaccountable to England’s monarchy. This encouraged fraud and led to a consolidation of wealth that threatened the king’s power.

Hans Holbein the Younger, Henry VIII, circa 1537 Image © Museo Nacional Thyssen-Bornemisza, Madrid

As he sought to sever England’s ties to the Catholic Church, Henry had one eye on changing religious doctrine so he could divorce his first wife, Catherine of Aragon, and the other on all the property that would become available when he booted out the church.

After splitting with the church and securing his divorce, he enacted a new property system giving the British monarchy more power over wealth. Henry then used that power to seize property. Most of the property the king took first belonged to the church, but all property interests were more vulnerable under the new law.

Henry’s power grab angered the wealthy gentry, who launched a violent uprising known as the Pilgrimage of Grace.

After quelling that upheaval, Henry compromised by allowing the transfer of property from one generation to the next. But he didn’t let people tell others how to use their property after they died. The courts later developed the Rule Against Perpetuities to allow people to transfer property to their children when they turned 21 years old.

At the same time, wealthy Englishmen were encouraged to give large sums of money and property to help the poor. Some of these funds had strings attached for longer than the 21 years.

Elizabeth I codified the rule

Elizabeth I in her coronation robes Public domain via Wikimedia Commons

Elizabeth I, Henry’s daughter with his ill-fated wife Anne Boleyn, became queen in 1558, after the deaths of her siblings Edward VI and Mary I. She used her reign to codify that previously informal charitable exception. By then it was the 1590s, a tough time for England, due to two wars, a pandemic, inflation and famine. Elizabeth needed to prevent unrest without raising taxes even further than she already had.

Elizabeth’s solution was a new law decreed in 1601. Known as the Statute of Charitable Uses, it encouraged the wealthy to make big charitable donations and gave courts the power to enforce the terms of the gifts.

The monarchy believed that partnering with charities would ease the burdens of the state to aid the poor.

This concept remains popular today, especially among conservatives in the United States and United Kingdom.

The charitable exception today

When the U.S. broke away from Great Britain and became an independent country, it was unclear whether it would stick with the charitable exception.

Some states initially rejected British law, but by the early 19th century, every state in the U.S. had adopted the Rule Against Perpetuities.

In the late 1800s, scholars started debating the value of the rule, even as large foundations took advantage of Elizabeth’s philanthropy loophole. My co-authors and I found that, as of 2022, 40 U.S. states had ended or limited the rule; every jurisdiction, including the District of Columbia, permits eternal control over donations.

Although this legal precept has endured, many scholars, charities and philanthropists question whether it makes sense to let foundations hang on to massive endowments with the goal of operating in the future in accordance with the wishes of a long-gone donor rather than spend that money to meet society’s needs today.

With such issues as climate change, spending more now could significantly decrease what it will cost later to resolve the problem.

View of the atrium of the Ford Foundation Building in New York Elsie140 via Wikimedia Commons under CC BY-SA 4.0

Still other problems require change that is more likely to come from smaller nonprofits. In one example, many long-running foundations, including the Ford, Carnegie and Kellogg foundations, contributed large sums to help Flint, Michigan, after a shift in water supply brought lead in the tap water to poisonous levels. Some scholars argue this money undermined local community groups that better understood the needs of Flint’s residents.

Another argument is more philosophical: Why should dead billionaires get credit for helping to solve contemporary problems through the foundations bearing their names? This question often leads to a debate over whether history is being rewritten in ways that emphasize their philanthropy over the sometimes questionable ways that they secured their wealth.

Some of those very rich people who started massive foundations were racist and anti-Semitic. Does their use of this rule that’s been around for hundreds of years give them the right to influence how Americans solve 21st-century problems?

Nuri Heckler is an expert on public administration at the University of Nebraska Omaha. His research focuses on power in public organizations, including nonprofits, social enterprise and government.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

January 9th 2023

Rethinking the European Conquest of Native Americans

In a new book by Pekka Hämäläinen, a picture emerges of a four-century-long struggle for primacy among Native power centers in North America. By David Waldstreicher

Print of Comanche procession
Library of Congress

December 31, 2022

When the term Indian appears in the Declaration of Independence, it is used to refer to “savage” outsiders employed by the British as a way of keeping the colonists down. Eleven years later, in the U.S. Constitution, the Indigenous peoples of North America are presented  differently: as separate entities with which the federal government must negotiate. They also appear as insiders who are clearly within the borders of the new country yet not to be counted for purposes of representation. The same people are at once part of the oppression that justifies the need for independence, a rival for control of land, and a subjugated minority whose rights are ignored.

For the Finnish scholar Pekka Hämäläinen, this emphasis on what Native people meant to white Americans misses an important factor: Native power. The lore about Jamestown and Plymouth, Pocahontas and Squanto, leads many Americans to think in terms of tragedy and, eventually, disappearance. But actually, Indigenous people continued to control most of the interior continent long after they were outnumbered by the descendants of Europeans and Africans.

Indigenous Continent – The Epic Contest For North America, by Pekka Hämäläinen, National Geographic Books


Much more accurate is the picture Hämäläinen paints in his new book, Indigenous Continent: a North American history that encompasses 400 years of wars that Natives often, even mostly, won—or did not lose decisively in the exceptional way that the Powhatans and Pequots had by the 1640s. Out of these centuries of broader conflict with newcomers and one another, Native peoples established decentralized hives of power, and even new empires.

In a previous book, The Comanche Empire, Hämäläinen wrote of what he controversially referred to as a “reversed colonialism,” which regarded the aggressive, slaving equestrians of “greater Comanchería”—an area covering most of the Southwest—as imperialists in ways worth comparing to the French, English, Dutch, and Spanish in America. There was continued pushback from some scholars when Hämäläinen extended the argument northward in his 2019 study, Lakota America. (The impact of his work among historians may be measured by his appointment as the Rhodes Professor of American History at Oxford University.)

What was most distinctive about these two previous books was that Hämäläinen so convincingly explained the Indigenous strategies for survival and even conquest. Instead of focusing on the microbes that decimated Native populations, Hämäläinen showed how the Comanche developed what he termed a “politics of grass.” A unique grasslands ecosystem in the plains allowed them to cultivate huge herds of horses and gave the Comanche access to bison, which they parlayed into market dominance over peoples who could supply other goods they wanted, such as guns, preserved foods, and slaves for both trade and service as herders.

Hämäläinen treats Native civilizations as polities making war and alliances. In Indigenous Continent, there is less emphasis than in The Comanche Empire on specific ecosystems and how they informed Indigenous strategies. Instead, he describes so many Native nations and European settlements adapting to one another over such a wide and long time period that readers can appreciate anew how their fates were intertwined—shattering the simple binary of “Indians” and “settlers.” Indigenous peoples adapted strenuously and seasonally to environments that remained under their control but had to contend at the same time with Europeans and other refugees encroaching on their vague borders. These newcomers could become allies, kin, rivals, or victims.

Hämäläinen sees a larger pattern of often-blundering Europeans becoming part of Indigenous systems of reciprocity or exploitation, followed by violent resets. When Dutch or French traders were “generous with their wares” and did not make too many political demands, Natives pulled them into their orbit. Spanish and, later, British colonists, by contrast, more often demanded obeisance and control over land, leading to major conflicts such as the ones that engulfed the continent in the 1670s–80s and during the Seven Years’ War. These wars redirected European imperial projects, leading to the destruction of some nations, and the migration and recombination of others, such as the westward movement of the Lakota that led to their powerful position in the Missouri River Valley and, later, farther west. In this history, Indigenous “nomadic” mobility becomes grand strategy. North America is a continent of migrants battling for position long before the so-called nation of immigrants.


“Properly managed,” settlers and their goods “could be useful,” Hämäläinen writes. The five nations of the Iroquois (Haudenosaunee) confederacy established a pattern by turning tragic depopulation by epidemic into opportunities for what Hämäläinen calls “mourning wars,” attacking weakened tribes and gaining captives. They formed new alliances and capitalized on their geographic centrality between fur-supplying nations to the west and north, and French and Dutch and, later, English tool and gun suppliers to the east and south. Hämäläinen insists that their warfare was “measured, tactical,” that their use of torture was “political spectacle,” that their captives were actually adoptees, that their switching of sides in wartime and the Iroquois’ selling out of distant client tribes such as the Delaware was a “principled plasticity.” This could almost be an expert on European history talking about the Plantagenets, the Hapsburgs, or Rome.

And there’s the rub. Hämäläinen, a northern European, feels comfortable applying the ur-Western genre of the rise and fall of empires to Native America, but imperial history comes with more baggage. Hämäläinen seems certain that Comanche or other Indigenous imperial power was different in nature from the European varieties, but it often seems as if Indigenous peoples did many of the same things that European conquerors did. Whether the Iroquois had “imperial moments,” actually were an empire, or only played one for diplomatic advantage is only part of the issue. Hämäläinen doesn’t like the phrase settler colonialism. He worries that the current term of art for the particularly Anglo land-grabbing, eliminationist version of empire paints with too broad a brush. Perhaps it does. But so does his undefined concept of empire, which seems to play favorites at least as much as traditional European histories do.

If an empire is an expanding, at least somewhat centralized polity that exploits the resources of other entities, then the Iroquois, Comanche, Lakota, and others may well qualify. But what if emphasizing the prowess of warriors and chiefs, even if he refers to them as “soldiers” and “officials,” paradoxically reinforces exoticizing stereotypes? Hämäläinen is so enthralled with the surprising power and adaptability of the tribes that he doesn’t recognize the contradiction between his small-is-beautiful praise of decentralized Indigenous cultures and his condescension toward Europeans huddling in their puny, river-hugging farms and towns.

Hämäläinen notes that small Native nations could be powerful too, and decisive in wars. His savvy Indigenous imperialists wisely prioritized their relationships, peaceful or not, with other Natives, using the British or French as suppliers of goods. Yet he praises them for the same resource exploitation and trade manipulation that appears capitalist and murderous when European imperialists do their version. In other words, he praises Natives when they win for winning. Who expanded over space, who won, is the story; epic battles are the chapters; territory is means and end.

And the wheel turns fast, followed by the rhetoric. When British people muscle out Natives or seek to intimidate them at treaty parleys, they are “haughty.” At the same time, cannibalism and torture are ennobled as strategies—when they empower Natives. Native power as terror may help explain genocidal settler responses, but it makes natives who aren’t just plain brave—including women, who had been producers of essential goods and makers of peace—fade away almost as quickly as they did in the old history. As readers, we gain a continental perspective, but strangely, we miss the forest for the battlefields.

It’s already well known why natives lost their land and, by the 19th century, no longer had regional majorities: germs, technology, greed, genocidal racism, and legal chicanery, not always in that order. Settler-colonial theory zeroes in on the desire to replace the Native population, one way or another, for a reason: Elimination was intended even when it failed in North America for generations.

To Hämäläinen, Natives dominated so much space for hundreds of years because of their “resistance,” which he makes literally the last word of his book. Are power and resistance the same thing? Many scholars associated with the Native American and Indigenous Studies Association find it outrageous to associate any qualities of empire with colonialism’s ultimate, and ongoing, victims. The academic and activist Nick Estes has accused Hämäläinen of “moral relativist” work that is “titillating white settler fantasies” and “winning awards” for doing so. Native American scholars, who labor as activists and community representatives as well as academics in white-dominated institutions, are especially skeptical when Indigenous people are seen as powerful enough to hurt anyone, even if the intent is to make stock figures more human. In America, tales of Native strength and opportunistic mobility contributed to the notion that all Natives were the same, and a threat to peace. The alternative categories of victim and rapacious settler help make better arguments for reparative justice.

In this light, the controversy over Native empires is reminiscent of what still happens when it’s pointed out that Africans participated in the slave trade—an argument used by anti-abolitionists in the 19th century and ever since to evade blame for the new-world slaveries that had turned deadlier and ideologically racial. It isn’t coincidental that Hämäläinen, as a fan of the most powerful Natives, renders slavery among Indigenous people as captivity and absorption, not as the commodified trade it became over time. Careful work by historians has made clear how enslavement of and by Natives became, repeatedly, a diplomatic tool and an economic engine that created precedents for the enslavement of Black Americans.

All genres of history have their limits, often shaped by politics. That should be very apparent in the age of the 1619 and 1776 projects. Like the Declaration and the Constitution, when it comes to Indigenous peoples, historians are still trying to have it both ways. Books like these are essential because American history needs to be seen from all perspectives, but there will be others that break more decisively with a story that’s focused on the imperial winners.


January 8th 2023

Britain’s first black aristocrats


Alamy

By Fedora Abu, 10th May 2021

Whitewashed stories about the British upper classes are being retold. Fedora Abu explores the Bridgerton effect, and talks to Lawrence Scott, author of Dangerous Freedom.


For centuries, the Royal Family, Britain’s wealthiest, most exclusive institution, has been synonymous with whiteness. And yet, for a brief moment, there she was: Her Royal Highness the Duchess of Sussex, a biracial black woman, on the balcony at Buckingham Palace. Her picture-perfect wedding to Prince Harry in 2018 was an extraordinary amalgamation of black culture and centuries-old royal traditions, as an African-American preacher and a gospel choir graced St George’s Chapel in Windsor. Watching on that sunny May afternoon, who would’ve known things would unravel the way they have three years on?


Although heralded as a history-maker, the Duchess of Sussex is not actually the first woman of colour to have been part of the British upper classes. Dangerous Freedom, the latest novel by Trinidadian author Lawrence Scott, tells the story of the real historical figure Elizabeth Dido Belle, the mixed-race daughter of enslaved woman Maria Belle and Captain Sir John Lindsay. Born in 1761, she was taken in by her great-uncle, Lord Chief Justice William Murray, first Earl of Mansfield, and raised amid the lavish setting of Kenwood House in Hampstead, London, alongside her cousin Elizabeth. It was a rare arrangement, most likely unique, and today she is considered to be Britain’s first black aristocrat.

Lawrence Scott’s novel tells the story of Belle from a fresh perspective (Credit: Papillote Press)

Scott’s exploration of Belle’s story began with a portrait. Painted by Scottish artist David Martin, the only known image of Belle shows her in a silk dress, pearls and turban, next to her cousin, in the grounds of Kenwood. It’s one of the few records of Belle’s life, along with a handful of written accounts: a mention in her father’s obituary in the London Chronicle describing her “amiable disposition and accomplishments”; a recollection by Thomas Hutchinson, a guest of Mansfield, of her joining the family after dinner, and her uncle’s fondness for her. These small nuggets – together with years of wider research – allowed Scott to gradually piece together a narrative.

As it happened, while Scott was delving into the life of Dido Belle, so were the makers of Belle, the 2014 film starring Gugu Mbatha-Raw that was many people’s first introduction to the forgotten figure. With those same fragments, director Amma Asante and screenwriter Misan Sagay spun a tale that followed two classic Hollywood plotlines: a love story, as Dido seeks to find a husband, but also a moral one as we await Mansfield’s ruling on a landmark slavery case. As might be expected, Belle is subjected to racist comments by peers and, in line with Hutchinson’s account, does not dine with her family – nor have a “coming out”. However, she is shown to have a warm relationship with her cousin “Bette” and her “Papa” Lord Mansfield, and a romantic interest in John Davinier, an anglicised version of his actual name D’Aviniere, who in the film is depicted as a white abolitionist clergyman and aspiring lawyer.


Two drafts into his novel when Belle came out, Scott was worried that the stories were too similar – but it turned out that wasn’t the case. Dangerous Freedom follows Belle’s life post-Kenwood – now known as Elizabeth D’Aviniere and married and with three sons, as she reflects on a childhood tinged with trauma, and yearns to know more about her mother. Her husband is not an aspiring lawyer but a steward, and cousin “Beth” is more snobbish than sisterly. Even the painting that inspired the novel is reframed: where many see Dido presented as an equal to her cousin, Scott’s Dido is “appalled” and “furious”, unable to recognise the “turbaned, bejewelled… tawny woman”.

In a 1778 painting by David Martin, Dido Belle is depicted with her cousin Lady Elizabeth Murray (Credit: Alamy)

For Scott, the portrait itself is a romantic depiction of Belle that he aims to re-examine with his book – the painting’s motifs have not always been fully explored in whitewashed art history, and he has his own interpretation. “The Dido in the portrait is a very romanticised, exoticised, sexualised sort of image,” he says. “She has a lot of the tell-tale relics of 18th-Century portraiture, such as the bowl of fruit and flowers, which all these enslaved young boys and girls are carrying in other portraits. She’s carrying it differently, it’s a different kind of take, but I really wonder what [the artist] Martin was trying to do.” The film also hints at the likely sexualisation of Belle when in one scene a prospective suitor describes her as a “rare and exotic flower”. “One does not make a wife of the rare and exotic,” retorts his brother. “One samples it on the cotton fields.” 

Post-racial utopia

In fact, to find a black woman who married into the aristocracy, we have to fast-forward another 250 years, to when Emma McQuiston, the daughter of a black Nigerian father and white British mother, wedded Ceawlin Thynn, then Viscount Weymouth, in 2013. In many ways, the experiences of Thynn (now the Marchioness of Bath) echo those of Dido: in interviews, she has addressed the racism and snobbery she first experienced in aristocratic circles, and her husband has shared that his mother expressed worries about “400 years of bloodline”.

Ironically, there has long been speculation that the Royal Family could itself have mixed-race ancestry. For decades, historians have debated whether Queen Charlotte, wife of King George III, had African heritage but was “white-passing” – as is alluded to in Dangerous Freedom. While many academics have cast doubt on the theory, it’s one that the writers of TV drama series Bridgerton run with, casting her as an unambiguously black woman. The show imagines a diverse “ton” (an abbreviation of the French phrase le bon ton, meaning sophisticated society), with other black characters including the fictional Duke of Hastings, who is society’s most eligible bachelor, and his confidante Lady Danbury. Viewed within the context of period dramas, which typically exclude people of colour for the sake of historical accuracy, Bridgerton’s ethnically diverse take on the aristocracy is initially refreshing. However, that feeling is complicated somewhat by the revelation that the Bridgerton universe is not exactly “colourblind”, but rather what is being depicted in the series is an imagined scenario where the marriage of Queen Charlotte to King George has ushered in a sort of post-racial utopia.


Light-hearted, frothy and filled with deliberate anachronisms, Bridgerton is not designed to stand up to rigorous analysis. Even so, the show’s handling of race has drawn criticism for being more revisionist than radical. The series is set in 1813, 20 years before slavery was fully abolished in Britain, and while the frocks, palaces and parties of Regency London all make for sumptuous viewing, a key source of all that wealth has been glossed over. What’s more, just as Harry and Meghan’s union made no material difference to ordinary black Britons, the suggestion that King George’s marriage to a black Queen Charlotte wiped out racial hierarchies altogether feels a touch too fantastical.

In the TV drama series Bridgerton, Queen Charlotte is played by Golda Rosheuvel (Credit: Alamy)

In some ways, Bridgerton could be read as an accidental metaphor for Britain’s real-life rewriting of its own slave-trading past. That the Royal Family in particular had a major hand in transatlantic slavery – King Charles II and James, Duke of York, were primary shareholders in the Royal African Company, which trafficked more Africans to the Americas than any other institution – is hardly acknowledged today. “As [historian] David Olusoga is constantly arguing, there’s this kind of whitewashing of these bits of colonial history – not really owning these details, these conflicts,” says Scott. Instead, as University College London’s Catherine Hall notes, the history of slavery in Britain has been told as “the triumph of abolition”.

Olusoga himself has been among those digging up those details, and in 2015 he fronted the BBC documentary Britain’s Forgotten Slaveowners, which, together with the UCL Centre for the Study of the Legacies of British Slave-ownership, looked into who was granted a share of the £20m ($28m) in compensation for their loss of “property” post-abolition. It’s only in learning that this figure equates to £17bn ($24bn) in real terms (with half going to just 6% of the 46,000 claimants) – and that those payments continued to be made until 2015 – that we can begin to understand how much the slave trade shaped who holds wealth today.

It took the Black Lives Matter protests of last summer to accelerate the re-examination of Britain’s slave-trading history, including its links to stately homes. In September 2020, the National Trust published a report which found that a third of its estates had some connection to the spoils of the colonial era; a month later, Historic Royal Palaces announced it was launching a review into its own properties. Unsurprisingly, the prospect of “decolonising” some of Britain’s most prized country houses has sparked a “culture war” backlash, but a handful of figures among the landed gentry have been open to confronting the past. David Lascelles, Earl of Harewood, for example, has long been upfront about how the profits from slavery paid for Harewood House, even appearing in Olusoga’s documentary and making the house’s slavery archives public.

The British aristocracy is multi-racial in the reimagined historical universe presented by TV series Bridgerton (Credit: Alamy)

“Much more now, great houses are bringing [this history] to the fore and having the documentation in the home,” says Scott. “Kenwood has done that to the extent that it has a copy of the portrait now… [and] the volunteers that take you around tell a much more conflicted story about it.” Still, even as these stories are revealed in more vivid detail, how we reckon with the ways in which they’ve influenced our present – and maybe even remedy some of the injustices – is a conversation yet to be had.

With all those palaces, jewels and paintings, it’s not hard to see why contemporary culture tends to romanticise black figures within the British upper classes. Works such as Dangerous Freedom are now offering an alternative view, stripping the aristocracy of its glamour, giving a voice to the enslaved and narrating the discrimination, isolation and tensions that we’ve seen still endure. The progressive fairytale – or utopian reimagining – will always have greater appeal. But perhaps, as Scott suggests, it’s time for a new story to be written.

Dangerous Freedom by Lawrence Scott (Papillote Press) is out now. 

December 30th 2022

How Diverse Was Medieval Britain?

An archaeologist explains how studies of ancient DNA and objects reveal that expansive migrations led to much greater diversity in medieval Britain than most people imagine today.

By Duncan Sayer

29 Nov 2022


This article was originally published at The Conversation and has been republished with Creative Commons.

WHEN YOU IMAGINE LIFE for ordinary people in ancient Britain, you’d be forgiven for picturing quaint villages where everyone looked and spoke the same way. But a recent study could change the way historians think about early medieval communities.

Most of what we know about English history after the fall of the Roman Empire is limited to archaeological finds. There are only two contemporary accounts of this post-Roman period: Gildas (sixth century) and Bede (eighth century) were both monks who gave narrow descriptions of invasion by people from the continent, and neither provided an objective account.

My team’s study, published in Nature, changes that. We analyzed DNA from the remains of 460 people from sites across Northern Europe and found evidence of mass migration from Europe to England and the movement of people from as far away as West Africa. Our study combined information from artifacts and human remains.

That meant we could dig deeper into the data to explore the human details of migration.

JOURNEY INTO ENGLAND’S PAST

This paper found that about 76 percent of the genetic ancestry in the early medieval English population we studied originated from what is today northern Germany and southern Scandinavia—Continental Northern European (CNE). This number is an average taken from 278 ancient skeletons sampled from the south and east coasts of England. It is strong evidence for mass migration into the British Isles after the end of Roman administration.

An old bone comb photographed in situ, with a red-and-white-striped centimeter ruler placed under it for scale.

One of the most surprising discoveries was the skeleton of a young girl who died at about 10 or 11 years of age, found in Updown near Eastry in Kent. She was buried in typical early seventh-century style, with a finely made pot, knife, spoon, and bone comb. Her DNA, however, tells a more complex story. As well as 67 percent CNE ancestry, she also had 33 percent West African ancestry. Her African ancestor was most closely related to modern-day Esan and Yoruba populations in southern Nigeria.

Evidence of far-reaching commercial connections with Kent at this time is well established. The garnets in many brooches found in this region came from Afghanistan, for example. And the movement of the Updown girl’s ancestors was likely linked to these ancient trading routes.

KEEPING IT IN THE FAMILY

Two women buried close by were sisters and had predominantly CNE ancestry. They were related to Updown girl—perhaps her aunts. The fact that all three were buried in a similar way, with brooches, buckles, and belt hangers, suggests the people who buried them chose to highlight similarities between Updown girl and her older female relatives when they dressed them and located the burials close together. They treated her as kin, as a girl from their village, because that is what she was.

The aunts also shared a close kinship with a young man buried with artifacts that implied some social status, including a spearhead and buckle. The graves of these four people were all close together. They were buried in a prominent position marked by small barrow mounds (ancient burial places covered with a large mound of earth and stones). The visibility of this spot, combined with their dress and DNA, marks these people as part of an important local family.

The site studied in most detail—Buckland, near Dover in Kent—had kinship groups that spanned at least four generations.

One family group with CNE ancestry is remarkable because of how quickly they integrated with western British and Irish (WBI) people. Within a few generations, traditions had merged between people born far away from each other. A 100 percent WBI woman had two daughters with a 100 percent CNE man. WBI ancestry entered this family again a generation later, in near 50/50 mixed-ancestry grandchildren. Objects, including similar brooches and weapons, were found in graves on both sides of this family, indicating shared values between people of different ancestries.
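As a simplified illustration of how such ancestry fractions combine across generations – expected values only, since real autosomal inheritance varies with recombination, and the individuals below are hypothetical rather than people from the study – a child’s expected ancestry is simply the average of its parents’:

# Simplified model: a child's expected ancestry is the mean of its parents'.
# Real inheritance scatters around this value; the labels are illustrative only.

def child_ancestry(parent_a: dict, parent_b: dict) -> dict:
    components = set(parent_a) | set(parent_b)
    return {c: (parent_a.get(c, 0) + parent_b.get(c, 0)) / 2 for c in components}

wbi_woman = {"WBI": 1.0}
cne_man = {"CNE": 1.0}
daughter = child_ancestry(wbi_woman, cne_man)
print(daughter)  # WBI 0.5, CNE 0.5

# For grandchildren to come out near 50/50, WBI ancestry must re-enter via
# the daughters' partners -- for example, a partner who is himself 50/50:
mixed_partner = {"WBI": 0.5, "CNE": 0.5}
grandchild = child_ancestry(daughter, mixed_partner)
print(grandchild)  # WBI 0.5, CNE 0.5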

This family was buried in graves close together for three generations—that is, until a woman from the third generation was buried in a different cluster of graves to the north of the family group. One of her children, a boy, died at about 8 to 10 years of age; she laid him, her youngest child, to rest in the cluster of graves that included his maternal grandparents and their close family—a grave surrounded by her own relatives. But when the mother died, her adult children chose a spot close to their father for her grave. They considered her part of the paternal side of the family.

A plan of the cemetery with selected graves highlighted, shown alongside a family-tree diagram of the individuals.

Another woman from Buckland had a unique mitochondrial haplotype, a set of DNA variants that tend to be inherited together. Both males and females inherit this haplogroup from their mothers. So her DNA suggests she had no maternal family in the community she was buried with.

The chemical isotopes from her teeth and bones indicate she was not born in Kent but moved there when she was 15–25 years old. An ornate gold pendant, called a bracteate, which may have been of Scandinavian origin, was found in her grave.

This suggests she left home from Scandinavia in her youth, and her mother’s family did not travel with her. She very likely had an exogamous marriage (marriage outside of one’s social group). What is striking is the physical distance that this partnership bridged. This woman traveled 700 miles, including a voyage across the North Sea, to start her family.

RETHINKING HISTORY

These people were migrants and the children of migrants who traveled in the fifth, sixth, and seventh centuries. Their stories are of community and intermarriage. The genetic data points to profound mobility within a time of mass migration, and the archaeological details help complete the family histories. Migration did not happen at the same time, nor did migrants come from the same place. Early Anglo-Saxon culture was a mixing pot of ideas, intermarriage, and movement. This genetic coalescing and cultural diversity created something new in the south and east of England after the Roman Empire ended.


Duncan Sayer

Duncan Sayer is a reader in archaeology at the University of Central Lancashire. He directed excavations at Oakington early Anglo-Saxon cemetery and Ribchester Roman Fort, and has worked extensively in field archaeology. Sayer is the author of Ethics and Burial Archaeology.

August 21st 2022

A London newspaper advertisement from 1947 recruiting hefty girls for the Metropolitan Police – Appledene Archives / London Evening News.

What the ‘golden age’ of flying was really like

Jacopo Prisco, CNN • Updated 5th August 2022

Bacchanalian motifs served as a backdrop to cocktail hour on Lufthansa's first-class 'Senator' service in 1958.

(CNN) — Cocktail lounges, five course meals, caviar served from ice sculptures and an endless flow of champagne: life on board airplanes was quite different during the “golden age of travel,” the period from the 1950s to the 1970s that is fondly remembered for its glamor and luxury.

It coincided with the dawn of the jet age, ushered in by aircraft like the de Havilland Comet, the Boeing 707 and the Douglas DC-8, which were used in the 1950s for the first scheduled transatlantic services, before the introduction of the Queen of the Skies, the Boeing 747, in 1970. So what was it actually like to be there?

“Air travel at that time was something special,” says Graham M. Simons, an aviation historian and author. “It was luxurious. It was smooth. It was fast.

“People dressed up because of it. The staff was literally wearing haute couture uniforms. And there was much more space: seat pitch — that’s the distance between the seats on the aircraft — was probably 36 to 40 inches. Now it’s down to 28, as they cram more and more people on board.”

Golden era

Sunday roast is carved for passengers in first class on a BOAC VC10 in 1964.

Airline: Style at 30,000 Feet/Keith Lovegrove

With passenger numbers just a fraction of what they are today and fares too expensive for anyone but the wealthy, airlines weren’t worried about installing more seats, but more amenities.

“The airlines were marketing their flights as luxurious means of transport, because in the early 1950s they were up against the cruise liners,” adds Simons. “So there were lounge areas, and the possibility of four, five, even six course meals. Olympic Airways had gold-plated cutlery in the first class cabins.

“Some of the American airlines had fashion shows down the aisle, to help the passengers pass the time. At one stage, there was talk of putting baby grand pianos on the aircraft to provide entertainment.”

The likes of Christian Dior, Chanel and Pierre Balmain were working with Air France, Olympic Airways and Singapore Airlines respectively to design crew uniforms. Being a flight attendant — or a stewardess, as they were called until the 1970s — was a dream job.

“Flight crews looked like rock stars when they walked through the terminal, carrying their bags, almost in slow motion,” says Keith Lovegrove, designer and author of the book “Airline: Style at 30,000 Feet.” “They were very stylish, and everybody was either handsome or beautiful.”

Most passengers tried to follow suit.

Relaxed attitude

Pan American World Airways is perhaps the airline most closely linked with the 'Golden age'.

Ivan Dmitri/Michael Ochs Archives/Getty Images

“It was like going to a cocktail party. We had a shirt and tie and a jacket, which sounds ridiculous now, but was expected then,” adds Lovegrove, who began flying in the 1960s as a child with his family, often getting first class seats as his father worked in the airline industry. “When we flew on the jumbo jet, the first thing my brother and I would do was go up the spiral staircase to the top deck, and sit in the cocktail lounge.”

“This is the generation where you’d smoke cigarettes on board and you’d have free alcohol.

“I don’t want to put anyone in trouble, but at a young age we were served a schooner of sherry before our supper, then champagne and then maybe a digestive afterwards, all below drinking age.

“There was an incredible sense of freedom, despite the fact that you were stuck in this fuselage for a few hours.”

According to Lovegrove, this relaxed attitude also extended to security. “There was very little of it,” he says. “We once flew out to the Middle East from the UK with a budgerigar, a pet bird, which my mother took on board in a shoebox as hand luggage.

“She punched two holes in the top, so the little bird could breathe. When we were brought our three-course meal, she took the lettuce garnish off the prawn cocktail and laid it over the holes. The bird sucked it in. Security-wise, I don’t think you could get away with that today.”

‘Impeccable service’

A Pan Am flight attendant serves champagne in the first class cabin of a Boeing 747 jet.

Tim Graham/Getty Images

The airline most often associated with the golden age of travel is Pan Am, the first operator of the Boeing 707 and 747 and the industry leader on transoceanic routes at the time.

“My job with Pan Am was an adventure from the very day I started,” says Joan Policastro, a former flight attendant who worked with the airline from 1968 until its dissolution in 1991. “There was no comparison between flying for Pan Am and any other airline. They all looked up to it.

“The food was spectacular and service was impeccable. We had ice swans in first class that we’d serve the caviar from, and Maxim’s of Paris [a renowned French restaurant] catered our food.”

Policastro recalls how passengers would come to a lounge in front of first class “to sit and chat” after the meal service. “A lot of times, that’s where we sat too, chatting with our passengers. Today, passengers don’t even pay attention to who’s on the airplane, but back then, it was a much more social and polite experience,” says Policastro, who worked as a flight attendant with Delta before retiring in 2019.

Suzy Smith, who was also a flight attendant with Pan Am starting in 1967, also remembers sharing moments with passengers in the lounge, including celebrities like actors Vincent Price and Raquel Welch, anchorman Walter Cronkite and Princess Grace of Monaco.