What Happens When You Kill Your King
After the English Revolution—and an island’s experiment with republicanism—a genuine restoration was never in the cards.
By Adam Gopnik
April 17, 2023
Jonathan Healey’s “The Blazing World” sees both sectarian strife and galvanizing political ideas in the civil wars—and in the fateful conflict between Cromwell and Charles I. Illustration by Wesley Allsbrook
Amid the pageantry (and the horrible family intrigue) of the approaching coronation, much will be said about the endurance of the British monarchy through the centuries, and perhaps less about how the first King Charles ended his reign: by having his head chopped off in public while the people cheered or gasped. The first modern revolution, the English one that began in the sixteen-forties, which replaced a monarchy with a republican commonwealth, is not exactly at the forefront of our minds. Think of the American Revolution and you see pop-gun battles and a diorama of eloquent patriots and outwitted redcoats; think of the French Revolution and you see the guillotine and the tricoteuses, but also the Declaration of the Rights of Man. Think of the English Revolution that preceded both by more than a century and you get a confusion of angry Puritans in round hats and likable Cavaliers in feathered ones. Even a debate about nomenclature haunts it: should the struggles, which really spilled over many decades, be called a revolution at all, or were they, rather, a set of civil wars?
According to the “Whig” interpretation of history—as it is called, in tribute to the Victorian historians who believed in it—ours is a windup world, regularly ticking forward, that was always going to favor the emergence of a constitutional monarchy, becoming ever more limited in power as the people grew in education and capacity. And so the core seventeenth-century conflict was a constitutional one, between monarchical absolutism and parliamentary democracy, with the real advance marked by the Glorious Revolution, and the arrival of limited monarchy, in 1688. For the great Marxist historians of the postwar era, most notably Christopher Hill, the main action had to be parsed in class terms: a feudal class in decline, a bourgeois class in ascent—and, amid the tectonic grindings between the two, the heartening, if evanescent, appearance of genuine social radicals. Then came the more empirically minded revisionists, conservative at least as historians, who minimized ideology and saw the civil wars as arising from the inevitable structural difficulties faced by a ruler with too many kingdoms to subdue and too little money to do it with.
The point of Jonathan Healey’s new book, “The Blazing World” (Knopf), is to acknowledge all the complexities of the episode but still to see it as a real revolution of political thought—to recapture a lost moment when a radically democratic commonwealth seemed possible. Such an account, as Healey recognizes, confronts formidable difficulties. For one thing, any neat sorting of radical revolutionaries and conservative loyalists comes apart on closer examination: many of the leading revolutionaries of Oliver Cromwell’s “New Model” Army were highborn; many of the loyalists were common folk who wanted to be free to have a drink on Sunday, celebrate Christmas, and listen to a fiddler in a pub. (All things eventually restricted by the Puritans in power.)
Something like this is always true. Revolutions are won by coalitions and only then seized by fanatics. There were plenty of blue bloods on the sans-culotte side of the French one, at least at the beginning, and the American Revolution joined abolitionists with slaveholders. One of the most modern aspects of the English Revolution was Cromwell’s campaign against the Irish Catholics after his ascent to power; estimates of the body count vary wildly, but it is among the first organized genocides on record, resembling the Young Turks’ war against the Armenians. Irish loyalists, forced to take refuge in churches, were burned alive inside them.
Healey, a history don at Oxford, scants none of these things. A New Model social historian, he writes with pace and fire and an unusually sharp sense of character and humor. At one emotional pole, he introduces us to the visionary yet perpetually choleric radical John Lilburne, about whom it was said, in a formula that would apply to many of his spiritual heirs, that “if there were none living but himself John would be against Lilburne, and Lilburne against John.” At the opposite pole, Healey draws from obscurity the mild-mannered polemicist William Walwyn, who wrote pamphlets with such exquisitely delicate titles as “A Whisper in the Ear of Mr Thomas Edward” and “Some Considerations Tending to the Undeceiving of Those, Whose Judgements Are Misinformed.”
For Hill, the clashes of weird seventeenth-century religious beliefs were mere scrapings of butter on the toast of class conflict. If people argue over religion, it is because religion is an extension of power; the squabbles about pulpits are really squabbles about politics. Against this once pervasive view, Healey declares flatly, “The Civil War wasn’t a class struggle. It was a clash of ideologies, as often as not between members of the same class.” Admiring the insurgents, Healey rejects the notion that they were little elves of economic necessity. Their ideas preceded and shaped the way that they perceived their class interests. Indeed, like the “phlegmatic” and “choleric” humors of medieval medicine, “the bourgeoisie” can seem a uselessly encompassing category, including merchants, bankers, preachers, soldiers, professionals, and scientists. Its members were passionate contestants on both sides of the fight, and on some sides no scholar has yet dreamed of.
Healey insists, in short, that what seventeenth-century people seemed to be arguing about is what they were arguing about. When members of the influential Fifth Monarchist sect announced that Charles’s death was a signal of the Apocalypse, they really meant it: they thought the Lord was coming, not the middle classes. With the eclectic, wide-angle vision of the new social history, Healey shows that ideas and attitudes, rhetoric and revelations, rising from the ground up, can drive social transformation. Ripples on the periphery of our historical vision can be as important as the big waves at the center of it. The mummery of signatures and petitions and pamphlets which laid the ground for conflict is as important as troops and battlefield terrain. In the spirit of E. P. Thompson, Healey allows members of the “lunatic fringe” to speak for themselves; the Levellers, the Ranters, and the Diggers—radicals who cried out in eerily prescient ways for democracy and equality—are in many ways the heroes of the story, though not victorious ones.
But so are people who do not fit neatly into tales of a rising merchant class and revanchist feudalists. Women, shunted to the side in earlier histories of the era, play an important role in this one. We learn of how neatly monarchy recruited misogyny, with the Royalist propaganda issuing, Rush Limbaugh style, derisive lists of the names of imaginary women radicals, more frightening because so feminine: “Agnes Anabaptist, Kate Catabaptist . . . Penelope Punk, Merald Makebate.” The title of Healey’s book is itself taken from a woman writer, Margaret Cavendish, whose astonishing tale “The Description of a New World, Called the Blazing World” was a piece of visionary science fiction that summed up the dreams and disasters of the century. Healey even reports on what might be a same-sex couple among the radicals: the preacher Thomas Webbe took one John Organ for his “man-wife.”
What happened in the English Revolution, or civil wars, took an exhaustingly long time to unfold, and its subplots were as numerous as the bits of the Shakespeare history play the wise director cuts. Where the French Revolution proceeds in neat, systematic French parcels—Revolution, Terror, Directorate, Empire, etc.—the English one is a mess, exhausting to untangle and not always edifying once you have done so. There’s a Short Parliament, a Long Parliament, and a Rump Parliament to distinguish, and, just as one begins to make sense of the English squabbles, the dour Scots intervene to further muddy the story.
In essence, though, what happened was that the Stuart monarchy, which, after the death of Elizabeth, had come to power in the person of the first King James, of Bible-version fame, got caught in a kind of permanent political cul-de-sac. When James died, in 1625, he left his kingdom to his none too bright son Charles. Parliament was then, as now, divided into Houses of Lords and Commons, with the first representing the aristocracy and the other the gentry and the common people. The Commons, though more or less elected, by uneven means, served essentially at the King’s pleasure, being summoned and dismissed at his will.
Parliament did, however, have the critical role of raising taxes, and, since the Stuarts were both war-hungry and wildly incompetent, they needed cash and credit to fight their battles, mainly against rebellions in Scotland and Ireland, with one disastrous expedition into France. Although the Commons as yet knew no neat party divides, it was, in the nature of the times, dominated by Protestants who often had a starkly Puritan and always an anti-papist cast, and who suspected, probably wrongly, that Charles intended to take the country Catholic. All of this was happening in a time of crazy sectarian religious division, when, as the Venetian Ambassador dryly remarked, there were in London “as many religions as there were persons.” Healey tells us that there were “reports of naked Adamites, of Anabaptists and Brownists, even Muslims and ‘Bacchanalian’ pagans.”
In the midst of all that ferment, mistrust and ill will naturally grew between court and Parliament, and between dissident factions within the houses of Parliament. In January, 1642, the King entered Parliament and tried to arrest a handful of its more obnoxious members; tensions escalated, and Parliament passed the Militia Ordinance, awarding itself the right to raise its own fighting force, which—a significant part of the story—it was able to do with what must have seemed to the Royalists frightening ease, drawing as it could on the foundation of the London civic militia. The King, meanwhile, raised a conscript army of his own, which was ill-supplied and, Healey says, “beset with disorder and mutiny.” By August, the King had officially declared war on Parliament, and by October the first battle began. A series of inconclusive wins and losses ensued over the next couple of years.
The situation shifted when, in February, 1645, Parliament consolidated the New Model Army, eventually under the double command of the aristocratic Thomas Fairfax, about whom, one woman friend admitted, “there are various opinions about his intellect,” and the grim country Protestant Oliver Cromwell, about whose firm intellect opinions varied not. Ideologically committed, like Napoleon’s armies a century and a half later, and far better disciplined than its Royalist counterparts, at least during battle (they tended to save their atrocities for the after-victory party), the New Model Army was a formidable and modern force. Healey, emphasizing throughout how fluid and unpredictable class lines were, makes it clear that the caste lines of manners were more marked. Though Cromwell was suspicious of the egalitarian democrats within his coalition—the so-called Levellers—he still declared, “I had rather have a plain russet-coated captain that knows what he fights for, and loves what he knows, than that which you call a gentleman.”
Throughout the blurred action, sharp profiles of personality do emerge. Ronald Hutton’s marvellous “The Making of Oliver Cromwell” (Yale) sees the Revolution in convincingly personal terms, with the King and Cromwell as opposed in character as they were in political belief. Reading lives of both Charles and Cromwell, one can only recall Alice’s sound verdict on the Walrus and the Carpenter: that they were both very unpleasant characters. Charles was, the worst thing for an autocrat, both impulsive and inefficient, and incapable of seeing reality until it was literally at his throat. Cromwell was cruel, self-righteous, and bloodthirsty.
Yet one is immediately struck by the asymmetry between the two. Cromwell was a man of talents who rose to power, first military and then political, through the exercise of those talents; Charles was a king born to a king. It is still astounding to consider, in reading the history of the civil wars, that so much energy had to be invested in analyzing the character of someone whose character had nothing to do with his position. But though dynastic succession has been largely overruled in modern politics, it still holds in the realm of business. And so we spend time thinking about the differences, say, between George Steinbrenner and his son Hal, and what that means for the fate of the Yankees, with the same nervous equanimity that seventeenth-century people had when thinking about the traits and limitations of an obviously dim-witted Royal Family.
Although Cromwell emerges from every biography as a very unlikable man, he was wholly devoted to his idea of God and oddly magnetic in his ability to become the focus of everyone’s attention. In times of war, we seek out the figure who embodies the virtues of the cause and ascribe to him not only his share of the credit but everybody else’s, too. Fairfax tended to be left out of the London reports. He fought the better battles but made the wrong sounds. That sentence of Cromwell’s about the plain captain is a great one, and summed up the spirit of the time. Indeed, the historical figure Cromwell most resembles is Trotsky, who similarly mixed great force of character with instinctive skill at military arrangements against more highly trained but less motivated royal forces. Cromwell clearly had a genius for leadership, and also, at a time when religious convictions were omnipresent and all-important, for assembling a coalition that was open even to the more extreme figures of the dissident side. Without explicitly endorsing any of their positions, Cromwell happily accepted their support, and his ability to create and sustain a broad alliance of Puritan ideologies was as central to his achievement as his cool head with cavalry.
Hutton and Healey, in the spirit of the historians Robert Darnton and Simon Schama—recognizing propaganda as primary, not merely attendant, to the making of a revolution—bring out the role that the London explosion of print played in Cromwell’s triumph. By 1641, Healey explains, “London had emerged as the epicentre of a radically altered landscape of news . . . forged on backstreet presses, sold on street corners and read aloud in smoky alehouses.” This may be surprising; we associate the rise of the pamphlet and the newspaper with a later era, the Enlightenment. But just as, once speed-of-light communication is possible, it doesn’t hugely matter if its vehicle is telegraphy or e-mail, so, too, once movable type was available, the power of the press to report and propagandize didn’t depend on whether it was produced single sheet by single sheet or in a thousand newspapers at once.
At last, at the Battle of Naseby, in June, 1645, the well-ordered Parliamentary forces won a pivotal victory over the royal forces. Accident and happenstance aided the supporters of Parliament, but Cromwell does seem to have been, like Napoleon, notably shrewd and self-disciplined, keeping his reserves in reserve and throwing them into battle only at the decisive moment. By the following year, Charles I had been captured. As with Louis XVI, a century and a half later, Charles was offered a perfectly good deal by his captors—basically, to accept a form of constitutional monarchy that would still give him a predominant role—but left it on the table. Charles tried to escape and reimpose his reign, enlisting Scottish support, and, during the so-called Second Civil War, the bloodletting continued.
In many previous histories of the time, the battles and Cromwell’s subsequent rise to power were the pivotal moments, with the war pushing a newly created “middling class” toward the forefront. For Healey, as for the historians of the left, the key moment of the story occurs instead in Putney, in the fall of 1647, in a battle of words and wills that could easily have gone a very different way. It was there that the General Council of the New Model Army convened what Healey calls “one of the most remarkable meetings in the whole of English history,” in which “soldiers and civilians argued about the future of the constitution, the nature of sovereignty and the right to vote.” The implicit case for universal male suffrage was well received. “Every man that is to live under a government ought first by his own consent to put himself under that government,” Thomas Rainsborough, one of the radical captains, said. By the end of a day of deliberation, it was agreed that the vote should be extended to all men other than servants and paupers on relief. The Agitators, who were in effect the shop stewards of the New Model Army, stuck into their hatbands ribbons that read “England’s freedom and soldier’s rights.” Very much in the manner of the British soldiers of the Second World War who voted in the first Labour government, they equated soldiery and equality.
The democratic spirit was soon put down. Officers, swords drawn, “plucked the papers from the mutineers’ hats,” Healey recounts, and the radicals gave up. Yet the remaining radicalism of the New Model Army had, in the fall of 1648, fateful consequences. The vengeful—or merely egalitarian—energies that had been building since Putney meant that the Army objected to Parliament’s ongoing peace negotiations with Charles. Instead, he was tried for treason, the first time in human memory that this had happened to a monarch, and, in 1649, he was beheaded. In the next few years, Cromwell turned against Parliament, impatient with its slow pace, and eventually staged what was in effect a coup to make himself dictator. “Lord Protector” was the title Cromwell took, and then, in the way of such things, he made himself something very like a king.
Cromwell had won; the radicals had lost. The political thought of their time—however passionate—hadn’t yet coalesced around a coherent set of ideas and ideals that could have helped them translate those radical intuitions into a persuasive politics. Philosophies count, and these hadn’t been, so to speak, left to simmer on the Hobbes long enough: “Leviathan” was four years off, and John Locke was only a teen-ager. The time was still recognizably and inherently pre-modern.
Even the word “ideology,” favored by Healey, may be a touch anachronistic. The American and the French Revolutions are both recognizably modern: they are built on assumptions that we still debate today, and left and right, as they were established then, are not so different from left and right today. Whatever obeisance might have been made to the Deity, they were already playing secular politics in a post-religious atmosphere. During the English Revolution, by contrast, the most passionate ideologies at stake were fanatic religious beliefs nurtured through two millennia of Christianity.
Those beliefs, far from being frosting on a cake of competing interests, were the competing interests. The ability of seventeenth-century people to become enraptured, not to say obsessed, with theological differences that seem to us astonishingly minute is the most startling aspect of the story. Despite all attempts to depict these as the mere cosmetic covering of clan loyalties or class interests, those crazy-seeming sectarian disputes were about what they claimed to be about. Men were more likely to face the threat of being ripped open and having their bowels burned in front of their eyes (as happened eventually to the regicides) on behalf of a passionately articulated creed than they were on behalf of an abstract, retrospectively conjured class.
But, then, perhaps every age has minute metaphysical disputes whose profundity only that age can understand. In an inspired study of John Donne, “Super-Infinite,” the scholar Katherine Rundell points out how preoccupied her subject was with the “trans-” prefix—transpose, translate, transubstantiate—because it marked the belief that we are “creatures born transformable.” The arguments over transubstantiation that consumed the period—it would be the cause of the eventual unseating of Charles I’s second son, King James II—echo in our own quarrels about identity and transformation. Weren’t the nonconformist Puritans who exalted a triune godhead simply insisting, in effect, on plural pronouns for the Almighty? The baseline anxiety of human beings so often turns on questions of how transformable we creatures are—on how it is that these meat-and-blood bodies we live within can somehow become the sites of spirit and speculation and grace, by which we include free will. These issues of body and soul, however soluble they may seem in retrospect, are the ones that cause societies to light up and sometimes conflagrate.
History is written by the victors, we’re told. In truth, history is written by the romantics, as stories are won by storytellers. Anyone who can spin lore and chivalry, higher calling and mystic purpose, from the ugliness of warfare can claim the tale, even in defeat. As Ulysses S. Grant knew, no army in history was as badly whipped as Robert E. Lee’s, and yet the Confederates were still, outrageously, winning the history wars as late as the opening night of “Gone with the Wind.” Though the Parliamentarians routed the Cavaliers in the first big war, the Cavaliers wrote the history—and not only because they won the later engagement of the Restoration. It was also because the Cavaliers, for the most part, had the better writers. Aesthetes may lose the local battle; they usually win the historical war. Cromwell ruled as Lord Protector for five years, and then left the country to his hapless son, who was deposed in just one. Healey makes no bones about the truth that, when the Commonwealth failed and Charles II gained the throne, in 1660, for what became a twenty-five-year reign, it opened up a period of an extraordinary English artistic renaissance. “The culture war, that we saw at the start of the century,” he writes, “had been won. Puritanism had been cast out. . . . Merry England was back.”
There was one great poet-propagandist for Cromwell, of course: John Milton, whose “Paradise Lost” can be read as a kind of dreamy explication of Cromwellian dissident themes. But Milton quit on Cromwell early, going silent at his apogee, while Andrew Marvell’s poems in praise of Cromwell are masterpieces of equivocation and irony, with Cromwell praised, the King’s poise in dying admired, and in general a tone of wry hyperbole turning into fatalism before the reader’s eyes. Marvell’s famously conditional apothegm for Cromwell, “If these the times, then this must be the man,” is as backhanded a compliment as any poet has offered a ruler, or any flunky has ever offered a boss.
Healey makes the larger point that, just as the Impressionists rose, in the eighteen-seventies, as a moment of repose after the internecine violence of the Paris Commune, the matchless flowering of English verse and theatre in the wake of the Restoration was as much a sigh of general civic relief as a paroxysm of Royalist pleasure. The destruction of things of beauty by troops under Cromwell’s direction is still shocking to read of. At Peterborough Cathedral, they destroyed ancient stained-glass windows, and in Somerset at least one Parliamentarian ripped apart a Rubens.
Yet, in Cromwell’s time, certain moral intuitions and principles appeared that haven’t disappeared; things got said that could never be entirely unsaid. Government of the people resides in their own consent to be governed; representative bodies should be in some way representative; whatever rights kings have are neither divine nor absolute; and, not least, religious differences should be settled by uneasy truces, if not outright toleration.
And so there is much to be said for a Whig history after all, if not as a story of inevitably incremental improvements then at least as one of incremental inspirations. The Restoration may have had its glories, but a larger glory belongs to those who groped, for a time, toward something freer and better, and who made us, in particular—Americans, whose Founding Fathers, from Roger Williams to the Quakers, leaped intellectually right out of the English crucible—what we spiritually remain. America, on the brink of its own revolution, was, essentially, London in the sixteen-forties, set free then, and today still blazing. ♦

Published in the print edition of the April 24 & May 1, 2023, issue, with the headline “The Great Interruption.”
Adam Gopnik, a staff writer, has been contributing to The New Yorker since 1986. He is the author of, most recently, “The Real Work: On the Mystery of Mastery.”
‘The King and His Husband’: The Gay History of British Royals
Queen Elizabeth’s cousin wed in the first same-sex royal wedding—but he is far from the first gay British royal, according to historians.
- Kayla Epstein
King Edward II was known for his intensely close relationships with two men. Photo by duncan1890/Getty Images
Ordinarily, the wedding of a junior member of the British royal family wouldn’t attract much global attention. But Lord Ivar Mountbatten’s did.
That’s because Mountbatten, a cousin of Queen Elizabeth II, wed James Coyle in the summer of 2018 in what was heralded as the “first-ever” same-sex marriage in Britain’s royal family.
Perhaps what makes it even more unusual is that Mountbatten’s ex-wife, Penny Mountbatten, gave her former husband away.
Who says the royals aren’t a modern family?
Though Mountbatten and Coyle’s ceremony was expected to be small, it’s much larger in significance.
“It’s seen as the extended royal family giving a stamp of approval, in a sense, to same-sex marriage,” said Carolyn Harris, historian and author of “Raising Royalty: 1000 Years of Royal Parenting.” “This marriage gives this wider perception of the royal family encouraging everyone to be accepted.”
But the union isn’t believed to be the first same-sex relationship in British monarchy, according to historians. And they certainly couldn’t carry out their relationships openly or without causing intense political drama within their courts.
Edward II, who ruled from 1307 to 1327, is one of England’s less fondly remembered kings. His reign consisted of feuds with his barons, a failed invasion of Scotland in 1314, a famine, more feuding with his barons, and an invasion by a political rival that led to him being replaced by his son, Edward III. And many of the most controversial aspects of his rule — and fury from his barons — stemmed from his relationships with two men: Piers Gaveston and, later, Hugh Despenser.
Gaveston and Edward met when Edward was about 16 years old, when Gaveston joined the royal household. “It’s very obvious from Edward’s behavior that he was quite obsessed with Gaveston,” said Kathryn Warner, author of “Edward II: The Unconventional King.” Once king, Edward II made the relatively lowborn Gaveston the Earl of Cornwall, a title usually reserved for members of the royal family, “just piling him with lands and titles and money,” Warner said. He feuded with his barons over Gaveston, who they believed received far too much attention and favor.
Gaveston was exiled numerous times over his relationship with Edward II, though the king always conspired to bring him back. Eventually, Gaveston was assassinated. After his death, Edward “constantly had prayers said for [Gaveston’s] soul; he spent a lot of money on Gaveston’s tomb,” Warner said.
Several years after Gaveston’s death, Edward formed a close relationship with another favorite and aide, Hugh Despenser. How close? Warner pointed to the annalist of Newenham Abbey in Devon in 1326, who called Edward and Despenser “the king and his husband,” while another chronicler noted that Despenser “bewitched Edward’s heart.”
The speculation that Edward II’s relationships with these men went beyond friendship was fueled by Christopher Marlowe’s 16th-century play “Edward II,” which is often noted for its homoerotic portrayal of Edward II and Gaveston.
James VI and I, who reigned over Scotland and later England and Ireland until his death in 1625, attracted similar scrutiny for his male favorites, a term used for companions and advisers who had special preference with monarchs. Though James married Anne of Denmark and had children with her, it has long been believed that James had romantic relationships with three men: Esmé Stewart; Robert Carr; and George Villiers, Duke of Buckingham.
Correspondence between James and his male favorites survives, and as David M. Bergeron theorizes in his book “King James and Letters of Homoerotic Desire”: “The inscription that moves across the letters spell desire.”
James was merely 13 when he met 37-year-old Stewart, and their relationship was met with concern.
“The King altogether is persuaded and led by him . . . and is in such love with him as in the open sight of the people often he will clasp him about the neck with his arms and kiss him,” wrote one royal informant of their relationship. James promoted Stewart up the ranks, eventually making him Duke of Lennox. James was eventually forced to banish him, causing Stewart great distress. “I desire to die rather than to live, fearing that that has been the occasion of your no longer loving me,” Stewart wrote to James.
But James’s most famous favorite was Villiers. James met him when the king was in his late 40s and several years later promoted him to Duke of Buckingham — an astounding rise for someone of Villiers’s rank. Bergeron records the deeply affectionate letters between the two; in a 1623 letter, James refers bluntly to “marriage” and calls Buckingham his “wife”:
“I cannot content myself without sending you this present, praying God that I may have a joyful and comfortable meeting with you and that we may make at this Christmas a new marriage ever to be kept hereafter . . . I desire to live only in this world for your sake, and that I had rather live banished in any part of the earth with you than live a sorrowful widow’s life without you. And may so God bless you, my sweet child and wife, and grant that ye may ever be a comfort to your dear dad and husband.”
A lost portrait of Buckingham by Flemish artist Peter Paul Rubens was discovered in Scotland, depicting a striking and stylish man. And a 2008 restoration of Apethorpe Hall, where James and Villiers met and later spent time together, discovered a passage that linked their bedchambers.
One queen who has attracted speculation about her sexuality is Queen Anne, who ruled from 1702 to 1714. Her numerous pregnancies, most of which ended in miscarriage or stillbirth, indicate a sexual relationship with her husband, George of Denmark.
And yet, “she had these very intense, close friendships with women in her household,” Harris said.
Most notable is her relationship with Sarah Churchill, the Duchess of Marlborough, who held enormous influence in Anne’s court as mistress of the robes and keeper of the privy purse. She was an influential figure in Whig party politics, famous for providing Anne with blunt advice and for possessing as skillful a command of politics as her powerful male contemporaries.
Whether Churchill and Queen Anne’s intense friendship became something more is a question we may never answer. “Lesbianism, by its unverifiable nature, is an awful subject for historical research and, inversely, the best subject for political slander,” writes Ophelia Field in her book “Sarah Churchill: Duchess of Marlborough: The Queen’s Favourite.”
But Field also notes that when examining the letters between the women, it’s important to understand that their friendship was “something encompassing what we would nowadays class as romantic or erotic feeling.”
Field writes in “The Queen’s Favourite”:
“Without Sarah beside her when she moved with the seasonal migrations of the Court, Anne complained of loneliness and boredom: ‘I must tell you I am not as you left me . . . I long to be with you again and tis impossible for you ever to believe how much I love you except you saw my heart.’ [ . . .] Most commentators have suggested that the hyperbole in Anne’s letters to her friend was merely stylistic. In fact, the overwhelming impression is not of overstatement but that Anne was repressing what she really wanted to say.”
Their relationship deteriorated in part because of Anne’s growing closeness to another woman, Churchill’s cousin, Abigail Masham. Churchill grew so infuriated that she began insinuating Anne’s relationship with Masham was sinister.
The drama surrounding the three women played out in the 2018 film “The Favourite,” starring Rachel Weisz, Emma Stone and Olivia Colman.
Though there is much evidence that these royals had same-sex relationships with their favorites or other individuals, Harris cautioned that jealousy or frustration with favorites within the courts often led to rumors about the relationships. “If a royal favorite, no matter the degree of personal relationship, was disrupting the social or political hierarchy in some way, then that royal favorite was considered a problem, regardless of what was going on behind closed doors,” she said.
Harris also noted that it was difficult to take 21st-century definitions of sexual orientation and apply them to past monarchs. “When we see historical figures, they might have same-sex relationships but might not talk about their orientation,” she said. “Historical figures often had different ways of viewing themselves than people today.”
But she acknowledged that reexamining the lives, and loves, of these monarchs creates a powerful, humanizing bond between our contemporary society and figures of the past. It shows “that there have been people who dealt with some of the same concerns and the same issues that appear in the modern day,” she said.
April 23rd 2023
The Titanic Wreck Is a Landmark Almost No One Can See
Visiting the remains of the doomed ship causes it damage—but so will just leaving it there.
- Natasha Frost
A view of the bathtub in Capt. Smith’s bathroom, photographed in 2003. Rusticles are growing over most of the fixtures in the room. Photo from the Public Domain/Lori Johnston, RMS Titanic Expedition 2003, NOAA-OE.
The bride wore a flame-retardant suit—and so did the groom. In July 2001, an American couple got married in the middle of the Atlantic Ocean, thousands of feet below the surface. In the background was an international landmark every bit as familiar as the Eiffel Tower, the Taj Mahal or any other postcard-perfect wedding photo destination. David Leibowitz and Kimberley Miller wed on the bow of the Titanic shipwreck, in a submarine so small they had to crouch as they said their vows. Above the water, Captain Ron Warwick officiated via hydrophone from the operations room of a Russian research ship.
The couple had agreed to the undersea nuptials only if they could avoid a media circus, but quickly became the faces of a troubling trend: the wreck of the Titanic as a landmark tourist attraction, available to gawk at for anyone with $36,000 burning a hole in their pocket. (Leibowitz had won a competition run by the diving company Subsea Explorer, which then offered to finance the costs of their wedding and honeymoon.)
As opprobrium mounted, particularly from those whose relatives had died aboard the ship, a Subsea representative told the press: “What’s got to be remembered is that every time a couple gets married in church they have to walk through a graveyard to get to the altar.” Was the Titanic no more than an ordinary cemetery? The event focused attention on a predicament with no single answer: Who did the wreck belong to, what was the “right” thing to do to it, and what was the point of a landmark that almost no one could visit?
People had been wrestling with earlier forms of these questions for decades, long before the nonprofit Woods Hole Oceanographic Institution discovered the Titanic wreck in 1985. The most prominent of these earlier dreamers was the Briton Douglas Woolley, who began to appear in the national press in the 1960s with increasingly harebrained schemes to find, and then raise, the ship. One such scheme involved him going down in a deep-sea submersible, finding the ship, and then lifting it with a shoal of thousands of nylon balloons attached to its hull. The balloons would be filled with air and rise to the surface, dragging the craft up with them. As Walter Lord, author of the bestselling Titanic history The Night Lives On, pondered: “How the balloons would be inflated 13,000 feet down wasn’t clear.”
Next, Woolley coaxed Hungarian inventors aboard his project. The newly incorporated Titanic Salvage Company would use seawater electrolysis to generate 85,000 cubic yards of hydrogen. They’d fill plastic bags with it, they announced — and presto! But this, too, was a washout. They had budgeted a week to generate the gas; a scholarly paper by an American chemistry professor suggested it might take closer to 10 years. The company foundered and the Hungarians returned home. (In 1980, Woolley allegedly acquired the title to the Titanic from the shipping and insurance companies; his more recent attempts to assert ownership have proven unsuccessful.)
Woolley might not have raised the Titanic from the depths, but he had succeeded in winching up interest in the vessel, and in whether it might ever see the light of day again. In the following decade, some eight different groups announced plans to find and explore the ship. Most were literally impossible; some were practically unfeasible. One 1979 solution involving benthos glass floats was nixed when it became clear that it would cost $238,214,265, the present-day equivalent of the GDP of a small Caribbean nation.
A crowd gathering outside of the White Star Line offices for news of the shipwreck. Photo from the Library of Congress/LC-DIG-ggbain-10355.
In the early 1980s, various campaigns set out to find the ship and its supposedly diamond-filled safes. But as they came back empty-handed, newspapers grew weary of these fruitless efforts. When the Woods Hole Oceanographic Institution set sail in 1985 with the same objective, it generated barely a media ripple. Its subsequent triumph in early September made front-page news; the New York Times proclaimed, tentatively: “Wreckage of Titanic Reported Discovered 12,000 Feet Down.”
Within days of its discovery, the legal rights to the ship began to be disputed. Entrepreneurs read the headlines and saw dollar signs, and new plans to turn the Titanic into an attraction began to bubble up to the surface. Tony Wakefield, a salvage engineer from Stamford, Connecticut, proposed pumping Vaseline into polyester bags placed in the ship’s hull. The Vaseline would harden underwater, he said, and then become buoyant, lifting the Titanic up to the surface. This was scarcely the least fantastical of the solutions—others included injecting thousands of ping pong balls into the hull, or using levers and pulleys to crank the 52,000-ton ship out of the water. “Yet another would encase the liner in ice,” Lord writes. “Then, like an ordinary cube in a drink, the ice would rise to the surface, bringing the Titanic with it.”
Robert Ballard, the young marine geologist who had led the successful expedition, spoke out against these plans. The wreck should not be commercially exploited, he said, but instead declared an international memorial—not least because any clumsy attempt to obtain debris from the site might damage the ship irreparably, making further archeological study impossible. “To deter would-be salvagers,” the Times reported, “he has refused to divulge the ship’s exact whereabouts.”
Somehow, the coordinates got out. Ballard’s wishes were ignored altogether: in the years that followed, team after team visited the wreck, salvaging thousands of objects and leaving a trail of destruction in their wake. Panicked by the potential for devastation, Ballard urged then-chairman of the House Merchant Marine and Fisheries Committee, Congressman Walter B. Jones, Sr., to introduce the RMS Titanic Maritime Memorial Act in the United States House of Representatives. The Act would limit how many people could explore and salvage the wreck, which would remain preserved in the icy depths of the Atlantic.
Despite being signed into law by President Ronald Reagan in October 1986, the Act proved utterly toothless. The Titanic site is outside of American waters, giving the U.S. government little jurisdiction over its rusty grave. In 1998, the Act was abandoned altogether.
In the meantime, visits to the site had continued. In 1987, Connecticut-based Titanic Ventures Inc. partnered with the French oceanographic agency IFREMER to survey and salvage the site. Among their desired booty was the bell from the crow’s nest, which had sounded out doom to so many hundreds of passengers. When the bell was pulled from the wreck, the crow’s nest collapsed altogether, causing immense damage to the site. People began to question whether it was right for anyone to be there at all, let alone to loot what was effectively a mass grave. Survivor Eva Hart, whose father perished on the ship, decried Titanic visitors as “fortune hunters, vultures, pirates!”—yet the trips continued. A few years later, director James Cameron’s team, scoping out the wreck for his 1997 blockbuster, caused further accidental damage.
The front pages of the New York Herald and the New York Times, the day of and the day after the sinking, respectively. Photo from the Public Domain.
Gradually, researchers realized that nature, too, had refused to cooperate with the statute introduced above the surface. “The deep ocean has been steadily dismantling the once-great cruise liner,” Popular Science reported in July 2004. One forensic archaeologist described the decay as unstoppable: “The Titanic is becoming something that belongs to biology.” The hulking wreck had become a magnet for sea life, with iron-eating bacteria burrowing into its cracks and turning some 400 pounds of iron a day into fine, eggshell-delicate “rusticles,” which hung pendulously from the steel sections of the wreck and dissolved into particles at the slightest touch. Molluscs and other underwater critters chomped away at the ship, while eddies and other underwater flows broke bits off the wreck, dispersing them back into the ocean.
A century after the Titanic sank in 1912, over 140 people had visited the landmark many believe should have been left completely alone. Some have had government or nonprofit backing; others have simply been wealthy tourists of the sort who accompanied Leibowitz and Miller to their underwater wedding. With its centenary, the ship finally became eligible for UNESCO protection, under the 2001 UNESCO Convention on the Protection of Underwater Cultural Heritage. Then-Director General Irina Bokova announced the protection of the site, limiting the destruction, pillage, sale and dispersion of objects found among its vestiges. Human remains would be treated with new dignity, the organization announced, while exploration attempts would be subject to ethical and scientific scrutiny. “We do not tolerate the plundering of cultural sites on land, and the same should be true for our sunken heritage,” Bokova said, calling on divers not to dump equipment or commemorative plaques on the Titanic site.
The legal protections now in place on the Titanic wreck may have been hard won, but they’re bittersweet in their ineffectiveness. The Titanic has been protected from excavation, but it’s defenseless against biology. Scientists now believe that within just a few decades, the ship will be all but gone, raising the question of precisely what purpose these statutes serve.
In its present location, protections or no, Titanic’s destruction seems assured. It’s likely, but not certain, that moving the ship would damage it, yet keeping it in place makes its erosion a certainty. A few days after the wreck was found in 1985, competing explorer and Texan oilman Jack Grimm announced his own plans to salvage the ship, rather than let it be absorbed by the ocean floor. “What possible harm can that do to this mass of twisted steel?” he wondered. Grimm, and many others, may have been prevented from salvaging the site for its own protection—but simply leaving it alone has doomed it to disappear.
February 12th 2023
When Did Americans Lose Their British Accents?
The absence of audio recording technology makes “when” a tough question to answer. But there are some theories as to “why.”
Photo from Getty Images.
There are many, many evolving regional British and American accents, so the terms “British accent” and “American accent” are gross oversimplifications. What a lot of Americans think of as the typical “British accent” is what’s called standardized Received Pronunciation (RP), also known as Public School English or BBC English. What most people think of as an “American accent,” or most Americans think of as “no accent,” is the General American (GenAm) accent, sometimes called a “newscaster accent” or “Network English.” Because this is a blog post and not a book, we’ll focus on these two general sounds for now and leave the regional accents for another time.
English colonists established their first permanent settlement in the New World at Jamestown, Virginia, in 1607, sounding very much like their countrymen back home. By the time we had recordings of both Americans and Brits some three centuries later (the first audio recording of a human voice was made in 1860), the sounds of English as spoken in the Old World and New World were very different. We’re looking at a silent gap of some 300 years, so we can’t say exactly when Americans first started to sound noticeably different from the British.
As for the “why,” though, one big factor in the divergence of the accents is rhoticity. The General American accent is rhotic: speakers pronounce the r in words such as hard. The BBC-type British accent is non-rhotic: speakers drop the r, leaving hard sounding more like hahd. Before and during the American Revolution, the English, both in England and in the colonies, mostly spoke with a rhotic accent. We don’t know much more about said accent, though. Various claims that the accents of the Appalachian Mountains, the Outer Banks, the Tidewater region and Virginia’s Tangier Island sound like uncorrupted Elizabethan-era English have been busted as myths by linguists.
Talk This Way
Around the turn of the 19th century, not long after the revolution, non-rhotic speech took off in southern England, especially among the upper and upper-middle classes. It was a signifier of class and status. This posh accent was standardized as Received Pronunciation and taught widely by pronunciation tutors to people who wanted to learn to speak fashionably. Because the Received Pronunciation accent was regionally “neutral” and easy to understand, it spread across England and the empire through the armed forces, the civil service and, later, the BBC.
Across the pond, many former colonists also adopted and imitated Received Pronunciation to show off their status. This happened especially in the port cities that still had close trading ties with England — Boston, Richmond, Charleston, and Savannah. From the Southeastern coast, the RP sound spread through much of the South along with plantation culture and wealth.
After industrialization and the Civil War and well into the 20th century, political and economic power largely passed from the port cities and cotton regions to the manufacturing hubs of the Mid-Atlantic and Midwest — New York, Philadelphia, Pittsburgh, Cleveland, Chicago, Detroit, etc. The British elite had much less cultural and linguistic influence in these places, which were mostly populated by the Scots-Irish and other settlers from Northern Britain, and rhotic English was still spoken there. As industrialists in these cities became the self-made economic and political elites of the Industrial Era, Received Pronunciation lost its status and fizzled out in the U.S. The prevalent accent in the Rust Belt, meanwhile, got dubbed General American and spread across the states just as RP had in Britain.
Of course, with the speed that language changes, a General American accent is now hard to find in much of this region, with New York, Philadelphia, Pittsburgh, and Chicago developing their own unique accents, and GenAm now considered generally confined to a small section of the Midwest.
As mentioned above, there are regional exceptions to both these general American and British sounds. Some of the accents of southwestern England, plus the accents of Scotland and Ireland, are rhotic. Some areas of the American Southeast, plus Boston, are non-rhotic.
Matt Soniak is a long-time mental_floss regular and writes about science, history, etymology and Bruce Springsteen for both the website and the print magazine. His work has also appeared in print and online for Men’s Health, Scientific American, The Atlantic, Philly.com and others. He tweets as @mattsoniak and blogs about animal behavior at mattsoniak.com.
February 10th 2023
The Rift Valley, Kenya. Photo by Steve Forrest/Panos
is a writer and foreign correspondent, who studied anthropology before becoming a journalist. His essays and reporting have appeared in National Geographic, The New Yorker, Emergence Magazine, GQ and the London Review of Books, among others. After working in different parts of Africa for nearly 20 years, he now lives in Woodbridge, in the UK.
Edited by Cameron Allan McKean
We are restless even in death. Entombed in stone, our most distant ancestors still travel along Earth’s subterranean passageways. One of them, a man in his 20s, began his journey around 230,000 years ago after collapsing into marshland on the lush edge of a river delta feeding a vast lake in East Africa’s Rift Valley. He became the earth in which he lay as nutrients leached from his body and his bone mineralised into fossil. Buried in the sediment of the Rift, he moved as the earth moved: gradually, inexorably.
Millions of years before he died, tectonic processes began pushing the Rift Valley up and apart, like a mighty inhalation inflating the ribcage of the African continent. The force of it peeled apart a 4,000-mile fissure in Earth’s crust. As geological movements continued, and the rift grew, the land became pallbearer, lifting and carrying our ancestor away to Omo-Kibish in southern Ethiopia where, in 1967, a team of Kenyan archaeologists led by Richard Leakey disinterred his shattered remains from an eroding rock bank.
Lifted from the ground, the man became the earliest known anatomically modern human, and the start of a new branch – Homo sapiens – on the tangled family tree of humanity that first sprouted 4 million years ago. Unearthed, he emerged into the same air and the same sunlight, the same crested larks greeting the same rising sun, the same swifts darting through the same acacia trees. But it was a different world, too: the nearby lake had retreated hundreds of miles, the delta had long since narrowed to a river, the spreading wetland had become parched scrub. His partial skull, named Omo 1, now resides in a recessed display case at Kenya’s national museum in Nairobi, near the edge of that immense fault line.
I don’t remember exactly when I first learned about the Rift Valley. I recall knowing almost nothing of it when I opened an atlas one day and saw, spread across two colourful pages, a large topographical map of the African continent. Toward the eastern edge of the landmass, a line of mountains, valleys and lakes – the products of the Rift – drew my eye and drove my imagination, more surely than either the yellow expanse of the Sahara or the green immensity of the Congo. Rainforests and deserts appeared uncomplicated, placid swathes of land in comparison with the fragmenting, shattering fissures of the Rift.
On a map, you can trace the valley’s path from the tropical coastal lowlands of Mozambique to the Red Sea shores of the Arabian Peninsula. It heads due north, up the length of Lake Malawi, before splitting. The western branch takes a left turn, carving a scythe-shaped crescent of deep lake-filled valleys – Tanganyika, Kivu, Edward – that form natural borders between the Democratic Republic of Congo and a succession of eastern neighbours: Tanzania, Burundi, Rwanda, Uganda. But the western branch peters out, becoming the broad shallow valley of the White Nile before dissipating in the Sudd, a vast swamp in South Sudan.
The eastern branch is more determined in its northward march. A hanging valley between steep ridges, it runs through the centre of Tanzania, weaving its way across Kenya and into Ethiopia where, in the northern Afar region, it splits again at what geologists call a ‘triple junction’, the point where three tectonic plates meet or, in this case, bid farewell. The Nubian and Somalian plates are pulling apart and both are pulling away from the Arabian plate to their north, deepening and widening the Rift Valley as they unzip the African continent. Here in the Rift, our origins and that of the land are uniquely entwined. Understanding this connection demands more than a bird’s-eye view of the continent.
The Rift Valley is the only place where human history can be seen in its entirety
Looking out across a landscape such as East Africa’s Rift Valley reveals a view of beauty and scale. But this way of seeing, however breathtaking, will only ever be a snapshot of the present, a static moment in time. Another way of looking comes from tipping your perspective 90 degrees, from the horizontal plane to the vertical axis, a shift from space to time, from geography to stratigraphy, which allows us to see the Rift in all its dizzying, vertiginous complexity. Here, among seemingly unending geological strata, we can gaze into what the natural philosopher John Playfair called ‘the abyss of time’ – a phrase he coined after he, James Hall and James Hutton observed layered geological aeons in the rocky outcrops of Scotland’s Siccar Point in 1788, a revelation that would help establish Hutton as the founder of modern geology. In the Rift Valley, this vertical, tilted way of seeing is all the more powerful because the story of the Rift is the story of all of us: our past, our present, and our future. It’s a landscape that offers a diachronous view of humanity that is essential to make sense of the Anthropocene, the putative geological epoch in which humans are understood to be a planetary force with Promethean powers of world-making and transformation.
The Rift Valley humbles us. It punctures the transcendent grandiosity of human exceptionalism by returning us to a specific time and a particular place: to the birth of our species. Here, we are confronted with a kind of homecoming as we discern our origins among rock, bones and dust. The Rift Valley is the only place where human history can be seen in its entirety, the only place we have perpetually inhabited, from our first faltering bipedal steps to the present day, when the planetary impacts of climatic changes and population growth can be keenly felt in the equatorial heat, in drought and floods, and in the chaotic urbanisation of fast-growing nations. The Rift is one of many frontiers in the climate crisis where we can witness a tangling of causes and effects.
But locating ourselves here, within Earth’s processes, and understanding ourselves as part of them, is more than just a way of seeing. It is a way of challenging the kind of short-term, atemporal, election-cycle thinking that is failing to deliver us from the climate and biodiversity crises. It allows us to conceive of our current moment not as an endpoint but as the culmination of millions of years of prior events, the fleeting staging point for what will come next, one that will echo for millennia to come. We exist on a continuum: a sliver in a sediment core bored out of the earth, a plot point in an unfolding narrative, of which we are both author and character. It brings the impact of what we do now into focus, allowing facts about atmospheric carbon or sea level rises to resolve as our present responsibilities.
The Rift is a place, but ‘rift’ is also a word. It’s a noun for splits in things or relationships, a geological term for the result of a process in which Earth shifts, and it’s a verb apt to describe our current connection to the planet: alienation, separation, breakdown. The Rift offers us another way of thinking.
That we come from the earth and return to it is not a burial metaphor but a fact. Geological processes create particular landforms that generate particular environments and support particular kinds of life. In a literal sense, the earth made us. The hominin fossils scattered through the Rift Valley are anthropological evidence but also confronting artefacts. Made of rock not bone, they are familiar yet unexpected, turning up in strange places, emerging from the dirt weirdly heavy, as if burdened with the physical weight of time. They are caught up in our ‘origin stories and endgames’, writes the geographer Kathryn Yusoff, as simultaneous manifestations of mortality and immortality. They embody both the vanishing brevity of an individual life and the near-eternity of a mineralised ‘geologic life’, once – as the philosopher Manuel DeLanda puts it in A Thousand Years of Nonlinear History (1997) – bodies and bones cross ‘the threshold back into the world of rocks’. There is fear in this, but hope too, because we can neither measure, contend with, nor understand the Anthropocene without embedding ourselves in different timescales and grounding ourselves in the earth. Hominin fossils are a path to both.
The rain, wind and tectonics summon long-buried bones, skulls and teeth from the earth
Those species that cannot adapt, die. Humans, it turns out – fortunately for us, less so for the planet – are expert adapters. We had to be, because the Rift Valley in which we were born is a complex, fragmented, shifting place, so diverse in habitats that it seems to contain the world. It is as varied as it is immense, so broad that on all but the clearest of days its edges are lost in haze. From high on its eastern shoulder, successive hills descend thousands of feet to the plains below, like ridges of shoreward ocean swell. Here, the valley floor is hard-baked dirt, the hot air summoning dust devils to dance among whistling thorns, camphor and silver-leafed myrrh. Dormant volcanoes puncture the land, their ragged, uneven craters stark against the sky. Fissures snake across the earth. Valley basins are filled with vast lakes, or dried out and clogged with sand and sediment. An ice-capped mountain stands sentinel, its razor ridges of black basalt rearing out of cloud forest. Elsewhere, patches of woodland cluster on sky islands, or carpet hills and plateaus. In some of the world’s least hospitable lands, the rain, wind and tectonics summon long-buried bones, skulls and teeth from the earth. This is restless territory, a landscape of tumult and movement, and the birthplace of us all.
My forays into this territory over the past dozen years have only scratched at the surface of its immense variety. I have travelled to blistering basalt hillsides, damp old-growth forests, ancient volcanoes with razor rims, smoking geothermal vents, hardened fields of lava, eroding sandstone landscapes that spill fossils, lakes with water that is salty and warm, desert dunes with dizzying escarpments, gently wooded savannah, and rivers as clear as gin. Here, you can travel through ecosystems and landscapes, but also through time.
I used to live beside the Rift. For many years, my Nairobi home was 30 kilometres from the clenched knuckles of the Valley’s Ngong Hills, which slope downwards to meet a broad, flat ridge. Here, the road out of the city makes a sharp turn to the right, pitching over the escarpment’s edge before weaving its way, thousands of feet downwards over dozens of kilometres, through patchy pasture and whistling thorns. The weather is always unsettled here and, at 6,500 feet, can be cold even on the clearest and brightest of days.
One particularly chilly bend in the road has been given the name ‘Corner Baridi’, cold corner. Occasionally, I would sit here, on scrubby grass by the crumbling edge of a ribbon of old tarmac, and look westwards across a transect of the Rift Valley as young herders wandered past, bells jangling at their goats’ necks. The view was always spectacular, never tired: a giant’s staircase of descending bluffs, steep, rocky and wooded, volcanic peaks and ridges, the sheen of Lake Magadi, a smudge of smoke above Ol Doinyo Lengai’s active caldera, the mirrored surface of Lake Natron, the undulating expanse of the valley floor.
And the feeling the scene conjured was always the same: awe, and nostalgia, in its original sense of a longing for home, a knowledge rooted in bone not books. This is where Homo sapiens are from. This is fundamental terrane, where all our stories begin. Sitting, I would picture the landscape as a time-lapse film, changing over millions of years with spectral life drifting across its shifting surface like smoke.
Humankind was forged in the tectonic crucible of the Rift Valley. The physical and cognitive advances that led to Homo sapiens were driven by changes of topography and climate right here, as Earth tipped on its axis and its surface roiled with volcanism, creating a complex, fragmented environment that demanded a creative, problem-solving creature.
Much of what we know of human evolution in the Rift Valley builds on the fossil finds and theoretical thinking of Richard Leakey, the renowned Kenyan palaeoanthropologist. Over the years I lived in Nairobi, we met and talked on various occasions and, one day in 2021, I visited him at his home, a few miles from Corner Baridi.
Millennia from now, the Rift Valley will have torn the landmass apart and become the floor of a new sea
It was a damp, chilly morning and, when I arrived, Leakey was finishing some toast with jam. Halved red grapefruit and a pot of stovetop espresso coffee sat on the Lazy Susan, a clutch bag stuffed with pills and tubes of Deep Heat and arthritis gel lay on the table among the breakfast debris, a walking stick hung from the doorknob behind him, and from the cuffs of his safari shorts extended two metal prosthetic legs, ending in a pair of brown leather shoes.
At the time, the 77-year-old had shown a knack for immortality, surviving the plane crash that took his legs in 1993, as well as bouts of skin cancer, transplants of his liver and kidneys, and COVID-19. He died in January 2022, but he was as energetic and enthused as I had ever seen him when we met. We discussed Nairobi weather, Kenyan politics, pandemic lockdowns, and his ongoing work. He described his ambitions for a £50 million museum of humankind, to be called Ngaren (meaning ‘the beginning’, in the Turkana language) and built close to his home on a patch of family land he planned to donate. It was the only place that made sense for the museum, he said, describing how the fossils he had uncovered over the years – among them, Omo 1 and the Homo erectus nicknamed Turkana Boy – were all phrases, sentences, or sometimes whole chapters in the story of where we came from, and who we are. ‘The magic of the Rift Valley is it’s the only place you can read the book,’ he told me.
Afterwards, I drove out to the spot where Leakey envisioned his museum being built: a dramatic basalt outcropping amid knee-high grass and claw-branched acacias, perched at the end of a ridge, the land falling precipitously away on three sides. It felt like an immense pulpit or perhaps, given Leakey’s paternal, didactic style, atheist beliefs, and academic rigour, a lectern.
A little way north of Leakey’s home, beyond Corner Baridi, a new railway tunnel burrows through the Ngong Hills to the foot of the escarpment, where there is a town of low-slung concrete and unfinished roofs pierced by reinforcing steel bars. For most hours of most days, lorries rumble by, nose to tail, belching smoke and leaking oil, ferrying goods back and forth across the valley plains. The new railway will do the same, moving more stuff, more quickly. The railway, like the road, is indifferent to its surroundings: its berms, bridges, cuttings and tunnels defy topography and mock geography.
Running perpendicular to these transport arteries, pylons stride across the landscape, carrying electricity in high-voltage lines from a wind farm in the far north to a new relay station at the foot of a dormant volcano. The promise of all this infrastructure increases the land’s value and, where once there were open plains, there are now fences, For Sale signs, and quarter-acre plots sold in their hundreds. Occasionally, geology intervenes, as it did early one March morning in 2018 when Eliud Njoroge Mbugua’s home disappeared.
It began with a feathering crack scurrying across his cement floor, which widened as the hours passed. Then the crack became a fissure, and eventually split his cinderblock shack apart, hauling its tin-roofed remnants into the depths. Close by, the highway was also torn in two. The next day, journalists launched drones into the sky, capturing footage that revealed a lightning-bolt crack in the earth stretching hundreds of metres across the flat valley floor. Breathless news reports followed, mangling the science and making out that an apocalyptic splitting of the African continent was underway. They were half-right.
Ten thousand millennia from now, the Rift Valley will have torn the landmass apart and become the floor of a new sea. Where the reports were wrong, however, was in failing to recognise that Mbugua’s home had fallen victim to old tectonics, not new ones: heavy rains had washed away the compacted sediment on which his home had been built, revealing a fault line hidden below the surface. Sometimes, the changes here can point us forward in time, toward our endings. But more often, they point backwards.
Just a few years earlier, when I first moved to Nairobi, the railway line and pylons did not exist. Such is the velocity of change that, a generation ago, the nearby hardscrabble truck-stop town of Mai Mahiu also did not exist. If we go four generations back, there were neither trucks nor the roads to carry them, neither fence posts nor brick homes. The land may look empty in this imagined past, but it is not: pastoralist herders move in search of grass and water for their cattle, sharing the valley with herds of elephant, giraffe and antelope, and the lions that stalk them.
Thousands of years earlier still, and the herders are gone, too. Their forebears are more than 1,000 miles to the northwest, grazing their herds on pastures that will become the Sahara as temperatures rise in the millennia following the end of the ice age, the great northern glaciers retreat and humidity falls, parching the African land. Instead, the valley is home to hunter-gatherers and fishermen who tread the land with a lighter foot.
Go further. At the dawn of the Holocene – the warm interglacial period that began 12,000 years ago and may be coming to a close – the Rift is different, filled with forests of cedar, yellowwood and olive, sedge in the understory. The temperature is cooler, the climate wetter. Dispersed communities of human hunter-gatherers, semi-nomads, live together, surviving on berries, grasses and meat, cooking with fire, hunting with sharpened stone. Others of us have already left during the preceding 40,000 years, moving north up the Rift to colonise what will come to be called the Middle East, Europe, Asia, the Americas.
Some 200,000 years ago, the Rift is inhabited by the earliest creature that is undoubtedly us: the first Homo sapiens, like our ancestor found in Ethiopia. Scrubbed and dressed, he would not turn heads on the streets of modern-day Nairobi, London or New York. At this time, our ancestors are here, and only here: in the Rift.
Two million years ago, we are not alone. There are at least two species of our Homo genus sharing the Rift with the more ape-like, thicker-skulled and less dexterous members of the hominin family: Australopithecus and Paranthropus. A million years earlier, a small, ape-like Australopithecus (whom archaeologists will one day name ‘Lucy’) lopes about on two legs through a mid-Pliocene world that is even less recognisable, full of megafauna, forests and vast lakes.
Further still – rewinding into the deep time of geology and tectonics, through the Pliocene and Miocene – there is nothing we could call ‘us’ anymore. The landscape has shifted and changed. As geology remakes the land, climate makes its power felt too, swinging between humidity and aridity. Earth wobbles on its axis and spins through its orbit, bringing millennia-long periods of oscillation between wetness and dryness. The acute climate sensitivity of the equatorial valley means basin lakes become deserts, and salt pans fill with water.
On higher ground, trees and grasses engage in an endless waltz, ceding and gaining ground, as atmospheric carbon levels rise and fall, favouring one family of plant, then the other. Eventually, the Rift Valley itself is gone, closing up as Earth’s crust slumps back towards sea level and the magma beneath calms and subsides. A continent-spanning tropical forest, exuberant in its humidity, covers Africa from coast to coast. High in the branches of an immense tree sits a small ape, the common ancestor of human and chimpanzee, before tectonics, celestial mechanics and climate conspire to draw us apart, beginning the long, slow process of splitting, separating, fissuring, that leads to today, tens of millions of years later, but perhaps at the same latitude and longitude as that immense tree: a degree and a half south, 36.5 degrees east, on a patch of scrubby grass at the edge of the Rift.
Comment: Why does everything go back to Africa? Ask the white bourgeois liberals. Only they know the truth. R J Cook
February 8th 2023
How Are US Government Documents Classified?
Here’s what qualifies documents as “Top Secret,” “Secret” and “Confidential”—and how they’re supposed to be handled.
How Angela Davis Ended Up on the FBI Most Wanted List
The scholar and activist was sought and then arrested by the FBI in 1970—the experience informed her life’s work.
Weird and Wondrous: the Evolution of Super Bowl Halftime Shows
The Big Game is this weekend. From a 3-D glasses experiment to ‘Left Shark,’ the halftime show has always captured the public’s imagination.
8 Black Inventors Who Made Daily Life Easier
Black innovators changed the way we live through their contributions, from the traffic light to the ironing board.
The Greatest Story Never Told
From Pulitzer Prize-winning journalist Nikole Hannah-Jones comes The 1619 Project, a docuseries based on the New York Times multimedia project that examines the legacy of slavery in America and its impact on our society today.
The First Valentine Was Sent From Prison
THE NEW YORK TIMES
8 Places Across the US That Illuminate Black History
NATIONAL GEOGRAPHIC
A Mecca for Rap has Emerged in the Birthplace of Jazz and Blues
THE WASHINGTON POST
Where Did All the Strange State of the Union Traditions Come From?
February 7th 2023
6 Myths About the History of Black People in America
Six historians weigh in on the biggest misconceptions about Black history, including the Tuskegee experiment and enslaved people’s finances.
- Karen Turner
- Jessica Machado
To study American history is often an exercise in learning partial truths and patriotic fables. Textbooks and curricula throughout the country continue to center the white experience, with Black people often quarantined to a short section about slavery and quotes by Martin Luther King Jr. Many walk away from their high school history class — and through the world — with a severe lack of understanding of the history and perspective of Black people in America.
In the summer of 2019, the New York Times’s 1619 Project burst open a long-overdue conversation about how stories of Black Americans need to be told through the lens of Black Americans themselves. In this tradition, and in celebration of Black History Month, Vox has asked six Black scholars and historians about myths that persist about Black history. Ultimately, understanding Black history is more than learning about the brutality and oppression Black people have endured — it’s about the ways they have fought to survive and thrive in America.
Myth 1: That enslaved people didn’t have money
Enslaved people were money. Their bodies and labor were the capital that fueled the country’s founding and wealth.
But many also had money. Enslaved people actively participated in the informal and formal market economy. They saved money earned from overwork, from hiring themselves out, and through independent economic activities with banks, local merchants, and their enslavers. Elizabeth Keckley, a skilled seamstress whose dresses for Abraham Lincoln’s wife are displayed in Smithsonian museums, supported her enslaver’s entire family and still earned enough to pay for her freedom.
Free and enslaved market women dominated local marketplaces, including in Savannah and Charleston, controlling networks that crisscrossed the countryside. They ensured fresh supplies of fruits, vegetables, and eggs for the markets, as well as a steady flow of cash to enslaved people. Whites described these women as “loose” and “disorderly” to criticize their actions as unacceptable behavior for women, but white people of all classes depended on them for survival.
Illustrated portrait of Elizabeth Keckley (1818-1907), a formerly enslaved woman who bought her freedom and became dressmaker for first lady Mary Todd Lincoln. Hulton Archive/Getty Images
In fact, enslaved people also created financial institutions, especially mutual aid societies. Eliza Allen helped form at least three secret societies for women on her own and nearby plantations in Petersburg, Virginia. One of her societies, Sisters of Usefulness, could have had as many as two to three dozen members. Cities like Baltimore even passed laws against these societies — a sure sign of their popularity. Other cities reluctantly tolerated them, requiring that a white person be present at meetings. Enslaved people, however, found creative ways to conduct their societies under white people’s noses. Often, the treasurer’s ledger listed members by numbers so that, in case of discovery, members’ identities remained protected.
During the tumult of the Civil War, hundreds of thousands of Black people sought refuge behind Union lines. Most were impoverished, but a few managed to bring with them wealth they had stashed under beds, in private chests, and in other hiding places. After the war, Black people fought through the Southern Claims Commission for the return of the wealth Union and Confederate soldiers impounded or outright stole.
Given the resurgence of attention on reparations for slavery and the racial wealth gap, it is important to recall the long history of Black people’s engagement with the US economy — not just as property, but as savers, spenders, and small businesspeople.
Shennette Garrett-Scott is an associate professor of history and African American Studies at the University of Mississippi and the author of Banking on Freedom: Black Women in US Finance Before the New Deal.
Myth 2: That Black revolutionary soldiers were patriots
Much is made about how colonial Black Americans — some free, some enslaved — fought during the American Revolution. Black revolutionary soldiers are usually called Black Patriots. But the term Patriot is reserved within revolutionary discourse for the men of the 13 colonies who believed in the ideas expressed in the Declaration of Independence: that America should be an independent country, free from Britain. These persons were willing to fight for this cause and join the Continental Army, and, for their sacrifice, they are forever considered Patriots. That’s why the term Black Patriot is a myth — it implies that Black and white revolutionary soldiers fought for the same reasons.
Painting of the 1770 Boston Massacre showing Crispus Attucks, one of the leaders of the demonstration and one of the five men killed by the gunfire of the British troops. Bettmann Archive/Getty Images
First off, Black revolutionary soldiers did not fight out of love for a country that enslaved and oppressed them. Black revolutionary soldiers were fighting for freedom — not for America, but for themselves and the race as a whole. In fact, the American Revolution is a case study of interest convergence. Interest convergence denotes that within racial states such as the 13 colonies, any progress made for Black people can only be made if that progress also benefits the dominant culture — in this case the liberation of the white colonists of America. In other words, colonists’ enlistment of Black people was not out of some moral mandate, but based on manpower needs to win the war.
In 1775, Lord Dunmore, the royal governor of Virginia, who wanted to quickly end the war, issued a proclamation freeing enslaved Black people if they defected from the colonies and fought for the British army. In response, George Washington revised the policy that had barred Black persons (free or enslaved) from joining his Continental Army. His reversal was based on a convergence of his interests: competing with a growing British military, securing the slave economy, and meeting the Continental Army’s growing need for labor. When enslaved persons left the plantation, this caused serious social and economic unrest in the colonies. These defections encouraged many white plantation owners to join the Patriot cause even if they had previously held reservations.
Washington also saw other benefits in Black enlistment: White revolutionary soldiers fought only in three- to four-month increments before returning to their farms or plantations, but many Black soldiers could serve longer terms. Black soldiers were essential to the war effort, and the need to win the war became greater than racial or racist ideology.
Interests converged with those of Black revolutionary soldiers as well. Once the American colonies promised freedom, about a quarter of the Continental Army became Black; before that, more Black people defected to the British military for a chance to be free. Black revolutionary soldiers understood the stakes of the war and realized that they could also benefit and leave bondage. As historian Gary Nash has said, the Black revolutionary soldier “can best be understood by realizing that his major loyalty was not to a place, not to a people, but to a principle.”
Black people played a dual role — service with the American forces and fleeing to the British — both for freedom. The notion of the Black Patriot is a misused term. In many ways, while the majority of the whites were fighting in the American Revolution, Black revolutionary soldiers were fighting the “African Americans’ Revolution.”
LaGarrett King is an education professor at the University of Missouri Columbia and the founding director of the Carter Center for K-12 Black History Education.
Myth 3: That Black men were injected with syphilis in the Tuskegee experiment
A dangerous myth that continues to haunt Black Americans is the belief that the government infected 600 Black men in Macon County, Alabama, with syphilis. This myth has created generations of African Americans with a healthy distrust of the American medical profession. While these men weren’t injected with syphilis, their story does illuminate an important truth: America’s medical past is steeped in racialized terror and the exploitation of Black bodies.
The Tuskegee Study of Untreated Syphilis in the Negro Male emerged from a study group formed in 1932, connected with the venereal disease section of the US Public Health Service. The purpose of the experiment was to observe the effects of untreated syphilis, and it was conducted at what is now Tuskegee University, a historically Black university in Macon County, Alabama.
The 600 Black men in the experiment were not given syphilis. Instead, 399 of the men already had the disease in various stages, and the 201 who did not served as a control group. Both groups were denied treatment of any kind for the 40 years they were observed. The men were subjected to humiliating and often painfully invasive tests and experiments, including spinal taps.
Mostly uneducated and impoverished sharecroppers, these men were lured with free medical examinations, hot meals, free treatment for minor injuries, rides to and from the hospital, and guaranteed burial stipends (up to $50) to be paid to their survivors. Nor did the study occur in total secret: several African American health workers and educators associated with the Tuskegee Institute assisted in it.
By the end of the study in the summer of 1972, after a whistleblower exposed the story in national headlines, only 74 of the test subjects were still alive. Of the original 399 infected men, 28 had died of syphilis and 100 others of related complications. Forty of the men’s wives had been infected, and an estimated 19 of their children were born with congenital syphilis.
As a result of the case, the US Department of Health and Human Services established the Office for Human Research Protections (OHRP) in 1974 to oversee clinical trials. The case also solidified the idea of African Americans being cast and used as medical guinea pigs.
An unfortunate side effect of both the truth of medical racism and the myth of syphilis injection, however, is that it reinforces distrust of the medical system among some African Americans, who may avoid seeking care and, as a result, put themselves in danger.
Sowande Mustakeem is an associate professor of History and African & African American Studies at Washington University in St. Louis.
Myth 4: That Black people in early Jim Crow America didn’t fight back
It is well-known that African Americans faced the constant threat of ritualistic public executions by white mobs, unpunished attacks by individuals, and police brutality in Jim Crow America. But how they responded to this is a myth that persists. In an effort to find lawful ways to address such events, some Black people made legalistic appeals to convince police and civic leaders their rights and lives should be protected. Yet the crushing weight of a hostile criminal justice system and the rigidity of the color line often muted those petitions, leaving Black people vulnerable to more mistreatment and murder.
An unidentified member of the Detroit chapter of the Black Panther Party stands guard with a shotgun on December 11, 1969. Bettmann Archive/Getty Images
In the face of this violence, some African Americans prepared themselves physically and psychologically for the abuse they expected — and they fought back. Distressed by public racial violence and unwilling to accept it, many adhered to emerging ideologies of outright rebellion, particularly after the turn of the 20th century and the emergence of the “New Negro.” Urban, more educated than their parents, and often trained militarily, a generation coming of age following World War I sought to secure themselves in the only ways left. Many believed, as Marcus Garvey once told a Harlem audience, that Black folks would never gain freedom “by praying for it.”
For New Negroes, the comparatively tame efforts of groups like the NAACP were not urgent enough. Most notably, they defended themselves fiercely nationwide during the bloodshed of the Red Summer of 1919 when whites attacked African Americans in multiple cities across the country. Whites may have initiated most race riots in the early Jim Crow era, but some also happened as Black people rejected the limitations placed on their life, leisure, and labor, and when they refused to fold under the weight of white supremacy. The magnitude of racial and state violence often came down upon Black people who defended themselves from police and citizens, but that did not stop some from sparking personal and collective insurrections.
Douglas J. Flowe is an assistant professor of history at Washington University in St. Louis.
Myth 5: That crack in the “ghetto” was the largest drug crisis of the 1980s
The bodies of people of color have a pernicious history of total exploitation and criminalization in the US. Like total war, total exploitation enlists and mobilizes the resources of mainstream society to obliterate the resources and infrastructure of the vulnerable. This has been done to Black people through a robust prison industrial complex that feeds on their vilification, incarceration, disenfranchisement, and erasure. And the crack epidemic of the late 1980s and ’90s is a clear example of this cycle.
Even though more white people than Black people reported using crack in a 1991 National Institute on Drug Abuse survey, Black people were sentenced for crack offenses eight times more often than whites. Meanwhile, there was a corresponding powder-cocaine epidemic in white suburbs and on college campuses, yet the US installed harsher penalties for crack than for powder cocaine. For example, in 1986, before the enactment of federal mandatory minimum sentencing for crack cocaine offenses, the average federal drug sentence for African Americans was 11 percent higher than for whites. Four years later, it was 49 percent higher.
Even through the ’90s and beyond, the media and supposed liberal allies, like Hillary Clinton, designated Black children and teens as drug-dealing “superpredators” to mostly white audiences. The criminalization of people of color during the crack epidemic made mainstream white Americans comfortable knowing that this was a contained Black-on-Black problem.
It also left white America unprepared to deal with the approaching opioid epidemic, which is often a white-on-white crime whose dealers evade prison (see: the Sacklers, the billionaire family behind OxyContin, who have served no jail time; and Johnson & Johnson, which got a $107 million break in fines when it was found liable for marketing practices that led to thousands of overdose deaths). Unlike Black Americans who are sent to prison, these white dealers retain their right to vote, lobby, and hold on to their wealth.
Jason Allen is a public historian and facilitator at xCHANGEs, a cultural diversity and inclusion training consultancy.
Myth 6: That all Black people were enslaved until emancipation
One of the biggest myths about the history of Black people in America is that all were enslaved until the Emancipation Proclamation, or Juneteenth Day.
In reality, free Black and Black-white biracial communities existed in states such as Louisiana, Maryland, Virginia, and Ohio well before abolition. For example, Anthony Johnson, named Antonio the Negro on the 1625 census, was listed on this document as a servant. By 1640, he and his wife owned and managed a large plot of land in Virginia.
A group of free African Americans in an unknown city, circa 1860. Bettmann Archive/Getty Images
Some enslaved Africans were able to sell their labor or craftsmanship to others, thereby earning enough money to purchase their freedom. Such was the case for Richard Allen, who paid for his freedom in 1786 and co-founded the African Methodist Episcopal Church less than a decade later. After the American Revolutionary War, Robert Carter III committed the largest manumission — or freeing of slaves — before Lincoln’s Emancipation Proclamation, freeing more than 500 enslaved people.
Not all emancipations were large. Individuals or families were sometimes freed upon the death of their enslaver and his family. And many escaped and lived free in the North or in Canada. Finally, there were generations of children born in free Black and biracial communities, many of whom never knew slavery.
Eventually, slave states established expulsion laws making residency there for free Black people illegal. Some filed petitions to remain near enslaved family members, while others moved West or North. And in the Northeast, many free Blacks formed benevolent organizations such as the Free African Union Society for support and in some cases repatriation.
The Emancipation Proclamation in 1863 — and the announcement of emancipation in Texas two years later — allowed millions of enslaved people to join the ranks of already free Black Americans.
Dale Allender is an associate professor at California State University Sacramento.
February 6th 2023
What Have Strikes Achieved?
Withdrawing labour is an age-old response to workplace grievances. But how old, and to what effect?
History Today | Published in History TodayVolume 73 Issue 1 January 2023
‘In Aristophanes’ Lysistrata, the women of Greece unite together in a sex-strike’
Lynette Mitchell, Professor in Greek History and Politics at the University of Exeter
‘Strike action’ – the withdrawal of labour as a protest – was known in the ancient world. The Greeks, however, did not generally form themselves into professional guilds, at least not before the third century BC when the associations of ‘the musicians of Dionysus’ were formed alongside the growth in the number of festivals.
This did not mean, however, that the Greeks were oblivious to the significance of the withdrawal of labour. The epic poem the Iliad begins with Achilles – the best of the Greek fighters – withdrawing from battle against the Trojans because he has been deprived of his war-prize, the concubine Briseis.
Withdrawing one’s skills as a fighter in warfare was a significant bargaining tool. At the beginning of the fourth century BC, the Greek army of the Ten Thousand, employed by Cyrus the Younger in the war against his brother, Artaxerxes II, threatened to abandon the Persian prince unless he raised their pay to a level commensurate with the danger of engaging the ‘King of Kings’ in battle (they had originally been employed on another pretext and a different pay scale). In 326 BC, when the soldiers of Alexander the Great reached the River Hyphasis (the modern Beas) in the Punjab, they refused to cross it and penetrate further east into northern India, thus forcing Alexander to give up his pursuit of limitless glory. The writer Arrian says that this was his only defeat.
War brought glory, but it also brought misery. In Aristophanes’ comedy Lysistrata, produced in 411 BC, the women of Greece unite in a sex-strike in order to force their husbands to give up their wars with each other. Although the women struggle to maintain discipline among their own ranks (some of the most comic scenes of the play show women sneaking away from the Acropolis, which the strikers have occupied), the eponymous Lysistrata, a woman of intelligence and determination, is asked to arbitrate between the Greek cities in order to bring the strike to an end. She presents the warring men with a beautiful girl, Reconciliation, and the play ends with the Spartans and Athenians remembering the wars they fought together against the Persians. Peace is restored.
‘During the reign of Ramesses III underpayment had become typical’
Dan Potter, Assistant Curator of the Ancient Mediterranean collections at National Museums Scotland
Early in the 29th year of the reign of Ramesses III (c.1153 BC), the royal tomb builders of Deir el-Medina grew increasingly concerned about the payment of their wages. The workmen were paid in sacks of barley and wheat, which was not just their families’ food, but also currency. Late deliveries and underpayment had become typical, leading one scribe to keep a detailed record of the arrears. Supply issues were linked to the agricultural calendar, but the consistent problems of this period show it was also a failure of state. An initial complaint by the workers was resolved but the causes were not dealt with. With the approval of their ‘captains’ (a three-man leadership group), the workers staged eight days of action; they ‘passed the walls’ of their secluded village and walked down to nearby royal temples chanting ‘We are hungry!’ They held sit-ins at several temples, but officials remained unable, or unwilling, to assist. A torchlit demonstration later in the week forced through one month’s grain payment.
In the following months, they ‘passed the walls’ multiple times. Eventually, the recently promoted vizier, To, wrote to them explaining that the royal granaries were empty. He apologised with a politician’s answer for the ages: ‘It was not because there was nothing to bring you that I did not come.’ In reality, To was probably busy in the delta capital at the King’s Heb-Sed (royal jubilee). He scraped together a half payment to appease the striking workers. After this derisory delivery, the angry Chief Workman Khons proposed a door-to-door campaign against local officials, which was halted only by his fellow captain Amunnakht, the scribe who recorded much of the detail we have about the strikes.
Even after a bulk reimbursement was paid early in year 30, inconsistent payments resulted in more industrial action in the ensuing years. The strikes were indicative of increasing regional instability, as Waset (Luxor) experienced food shortages, inflation, incursions from nomadic tribes, tomb robberies and more downing of tools. The workers’ village was partially abandoned around 70 years later.
‘Success depends on the response of the public and the possibility of favourable government intervention’
Alastair Reid, Fellow of Girton College, Cambridge
The word strike usually brings to mind a mass walkout that goes on for a long time and completely shuts down an industry, such as the British coal miners’ strikes of the 1920s and the 1970s. These sorts of disputes have rarely achieved anything positive: they are costly for the strikers and their families, and if their unions could afford to support them, doing so only drained the organisation’s funds. The stress caused has often led to splits within the union and friction with other organisations.
It is noticeable, therefore, that in recent years trade unions calling large numbers of their members out on strike have tended to focus on limited days of action rather than indefinite closures.
Sometimes the wider public has been sympathetic towards the strikers. This was the case during the London dock strike of 1889. However, when the disruption has affected public services, as in the ‘Winter of Discontent’ in 1978-79, strikers have become very unpopular. Often, when this sort of strike action achieved positive results for trade unionists, it was when the government had reason to intervene in their favour: during the First World War for example, when maintaining military production was essential.
The mass withdrawal of labour is not the only form of strike action that has been seen in the past. Highly skilled unions such as engineers and printers developed a tactic known as the ‘strike in detail’, during which they used their unemployment funds to support members in leaving blacklisted firms and thus effectively targeted employers one at a time. Another possibility is the opposite of a strike – a ‘work in’ – as at the Upper Clyde Shipbuilders in 1971, when a significant part of the workforce refused to accept the closure of the yards and won significant public support for their positive attitude. In general, the mass strike is a dangerous weapon that can easily backfire: success depends on the response of the public and the possibility of favourable government intervention.
‘There was one clear winner: the Chinese Communist Party’
Elisabeth Forster, Lecturer in Chinese History at the University of Southampton
Gu Zhenghong was shot dead by a foreman on 15 May 1925, triggering China’s anti-imperialist May 30th Movement. Gu was a worker on strike at a textile mill in Shanghai. The mill was Japanese-owned, Japan being among the countries that had semi-colonised China. Outraged by Gu’s death – and the imperialism behind it – students and workers demonstrated in Shanghai’s Foreign Settlement on 30 May. At some point British police opened fire, leaving more than ten demonstrators dead. In response, a general strike was called, with workers’, students’ and merchants’ unions, the Shanghai General Chamber of Commerce, as well as the Nationalist Party (GMD) and the Chinese Communist Party among its leaders.
Among the strikers were students, merchants and workers in various sectors: seamen, wharf labourers, and employees of phone companies, power plants, buses and trams. Not all sectors participated, and certain individuals broke the strike, some of whom were then kidnapped by their unions. The strikes were accompanied by boycotts of foreign goods, and strikers sometimes clashed violently with the authorities.
The demands were broad and were not confined to work-related issues, but also covered anti-imperialist goals, such as an end to extraterritoriality. By August, enthusiasm for the strikes had waned. Merchants were tired of their financial losses. Some of the workers started rioting against their union, since strike pay had dried up. The strikes’ organisers therefore had to settle the industrial (and political) dispute.
Contemporaries were unsure if the strikes had achieved their goal. Strike demands had been reduced and not all were met. Many new unions had been founded, but some were also closed by the authorities, and labour movement organisers had to go underground or face arrest and execution. But there was one clear winner: the Chinese Communist Party. If workers had previously mistrusted communists as hairy, badly dressed ‘extremists’, the Party was now acknowledged as a leader of labour. Imperialism in China would end, but not until after the Second World War and the era of global decolonisation.
February 2nd 2023
It’s been 230 years since British pirates robbed the US of the metric system
How did the world’s largest economy get stuck with retro measurement?
Sun 22 Jan 2023 // 08:38 UTC
Feature In 1793, French scientist Joseph Dombey sailed for the newly formed United States at the request of Thomas Jefferson carrying two objects that could have changed America. He never made it, and now the US is stuck with a retro version of measurement that is unique in the modern world.
The first, a metal cylinder, was exactly one kilogram in mass. The second was a copper rod the length of a newly proposed distance measurement, the meter.
Jefferson was keen on the rationality of the metric system in the US and an avid Francophile. But Dombey’s ship was blown off course, captured by English privateers (pirates with government sanction), and the scientist died on the island of Montserrat while waiting to be ransomed.
And so America is one of a handful of countries that maintains its own unique forms of weights and measures.
The reason for this history lesson? Over the last holiday period this hack has been cooking and is sick of this pounds/ounces/odd pints business – and don’t even get me started on using cups as a unit of measurement.
It’s time for America to get out of the Stone Age and get on board with the International System of Units (SI), as the metric system is now formally known.
There’s a certain amount of hypocrisy here – I’m British and we still cling to our pints and miles per hour, and I’m told drug dealers still deal in eighths and ‘teenths in the land of my birth. But the American system is bonkers, has cost the country many millions of dollars and an increasing amount of influence, and needs to be changed.
Brits and Americans…
The cylinder and rod Dombey was carrying, the former now owned by the US National Institute of Standards and Technology, were requested by Jefferson because the British system in place was utterly irrational.
When the British settled in the Americas, they brought with them a bastardized version of weights, measures and currencies. A Scottish pint, for example, was almost triple the size of its English equivalent until 1824, which speaks volumes about the drinking culture north of the border.
British measurements were initially standardized in the UK’s colonies, but it was a curious system, taking in Roman, Frankish, and frankly bizarre additions. Until 1971, in the UK a pound consisted of 240 pence, with 12 pence to the shilling and 20 shillings to the pound.
To make things even more confusing, individual settlements adopted their own local weights and measures. From 1700, Pennsylvania took control of its own measurements and other areas soon followed. But this mishmash of coins, distances and weights held the country back and Jefferson scored his first success in the foundation of a decimal system for the dollar.
“I question if a common measure of more convenient size than the Dollar could be proposed. The value of 100, 1,000, 10,000 dollars is well estimated by the mind; so is that of a tenth or hundredth of a dollar. Few transactions are above or below these limits,” he said [PDF].
So of course he’s on the least popular note
Jefferson wanted something new, more rational, and he was not alone. In the first ever State of the Union address in 1790, George Washington observed: “Uniformity in the Currency, Weights and Measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to.”
America was a new country, and owed a large part of the success of the Revolutionary War to France, in particular the French navy. The two countries were close, and the metric system appealed to Jefferson’s mindset, and to many in the new nation.
And this desire for change wasn’t just limited to weights and measures. Also in 1793, Alexander Hamilton hired Noah Webster, who as a lexicographer and ardent revolutionary wanted America to cast off the remnants of the old colonial power. Webster wrote a dictionary, current versions of which can be found in almost every classroom in the US.
And then politics and Napoleon happened
Jefferson asked the French for other samples including a copper meter and a copy of the kilogram, which was sent in 1795, but by then things had changed somewhat since he was no longer running the show. On January 2, 1794, he was replaced as US Secretary of State by fellow Founding Father Edmund Randolph, who was much less keen on the government getting involved in such things.
To make matters worse, relations between America and France were deteriorating sharply. The French government felt that the newly formed nation wasn’t being supportive enough in helping Gallic forces fight the British in the largely European War of the First Coalition. In something of a hissy fit, the French government declined to invite representatives from the US to the international gathering at Paris in 1798-99 that set the initial standards for the metric system.
Jefferson’s plans were kicked into committee and while a form of standardization based on pounds and ounces was approved by the House, the Senate declined to rule on the matter.
Not that it mattered much longer. In 1812, Napoleon effectively abolished the enforcement of the metric system in France. Napoleon was known as Le Petit Caporal, with multiple reports claiming he was five foot two. As we now know, he was around average height for the time.
After the French dictator was defeated, the case for the metric system in France sank into near-limbo at first, as it did in the US. But it gradually spread across Europe because you can’t keep a good idea down and science and industrialization were demanding it.
Welcome to the rational world
What has kept the metric system going is its inherent rationality. Rather than use a hodgepodge of local systems, why not build one based on measurements everyone could agree on configured around the number 10, which neatly matches the number of digits on most people’s hands?
Above all it’s universal, a gram means a gram in any culture. Meanwhile, buy a pint in the UK and you’ll get 20oz of beer, do the same in America and, depending where you are, you’ll likely get 16oz – a fact that still shocks British drinkers. The differences are also there with tons, and the odd concept of stones as a weight measurement.
Metric is by no means perfect. For example, in the initial French system a gram was the mass of one cubic centimeter of water (the kilogram, originally called the grave, was the mass of a litre). A meter was one ten-millionth of the distance between the pole and the equator – although the French weren’t exactly sure how far that was at the time.
The original metre carved into the Place Vendôme in Paris, some adjustment required
Since then the system has been revised a lot with discoveries of more natural constants. For example, a meter is now 1/299,792,458 of the distance light travels during a second. As of 1967, the second itself has been defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom,” but better measurement by atomic clocks may change this.
The chief adherents to the metric system initially were scientists who desperately needed universal sources of measurement to compare notes and replicate experiments without the errors common when converting from one measuring system to another.
This is down to convoluted systems: 12 inches in a foot, three feet in a yard, and 1,760 yards in a mile, compared with 100 centimeters in a meter and 1,000 meters in a kilometer. A US pound is 0.453592 kilograms, to six figures at least – these are the kinds of numbers that cause mistakes.
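As a rough illustration – using only the conversion factors quoted above, with hypothetical function names – a few lines of Python show why the customary chain invites arithmetic slips while metric stays in powers of ten:

```python
# US customary length chain, as quoted in the text.
INCHES_PER_FOOT = 12
FEET_PER_YARD = 3
YARDS_PER_MILE = 1760

def miles_to_inches(miles):
    # Three different multipliers must be remembered and applied in order.
    return miles * YARDS_PER_MILE * FEET_PER_YARD * INCHES_PER_FOOT

def kilometers_to_centimeters(km):
    # Metric: every step is a power of ten.
    return km * 1000 * 100

print(miles_to_inches(1))            # 63360
print(kilometers_to_centimeters(1))  # 100000

# The awkward cross-system factor the article cites:
KG_PER_POUND = 0.453592
print(10 * KG_PER_POUND)  # 10 lb is about 4.53592 kg
```

Nothing here is clever; the point is simply that the metric arithmetic can be done in your head, while the customary chain cannot.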
Most famously in recent memory was the Mars Climate Orbiter in 1999. The $125 million space probe broke up in the Martian atmosphere after engineers at Lockheed Martin, who built the instrument, used the US Customary System of measurement rather than metric measurements used by others on the project. The probe descended too close to the surface and was lost.
A more down-to-earth example came in 1983 with the Air Canada “Gimli Glider” incident, in which the pilots of a Boeing 767 underestimated the amount of fuel they needed because the fuel load had been calculated in pounds rather than kilograms. With roughly 2.2 pounds to the kilogram, the aircraft took on less than half the fuel it needed, and the engines failed at 41,000 feet (12,500m).
The two pilots were forced to glide the aircraft, carrying 69 souls, to an old air force base at Gimli where, luckily, one of the pilots had once served. The runway was by then being used as a drag strip, but thankfully there were only a few minor injuries.
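A minimal sketch of the underlying arithmetic – the required-fuel figure below is hypothetical, not the flight’s actual load – shows how treating a kilogram requirement as a pound figure leaves less than half the fuel on board:

```python
KG_PER_POUND = 0.453592  # the conversion factor cited earlier

def fuel_shortfall(required_kg):
    """If a requirement stated in kilograms is read as pounds,
    return (kg actually loaded, fraction of the requirement met)."""
    loaded_kg = required_kg * KG_PER_POUND  # 'required_kg' pounds were loaded
    return loaded_kg, loaded_kg / required_kg

loaded, fraction = fuel_shortfall(20_000)  # illustrative number only
print(f"{loaded:.0f} kg on board, {fraction:.0%} of requirement")
```

Whatever the requirement, the fraction loaded is always about 45 per cent – the conversion factor itself – which matches the article’s “less than half.”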
And don’t even get me started on Celsius and Fahrenheit. In Celsius, water freezes at 0 degrees and boils at 100 at ground level, compared with 32 and 212 in Fahrenheit. It’s a nonsensical system, and the US is now the only nation in the world to use Fahrenheit to measure everyday temperatures.
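The two fixed points quoted above pin down the standard conversion formula; a quick sketch (function names are just for illustration):

```python
def c_to_f(celsius):
    # 100 Celsius degrees span 180 Fahrenheit degrees, offset by 32.
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit):
    return (fahrenheit - 32) * 5 / 9

print(c_to_f(0))    # 32.0  (freezing point of water)
print(c_to_f(100))  # 212.0 (boiling point at ground level)
print(f_to_c(212))  # 100.0
```

The 9/5 ratio and the 32-degree offset are exactly why mental conversion between the two scales is such a chore.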
The slow and winding road
Back in 1821, Secretary of State John Quincy Adams reported to Congress on the measurements issue. In his seminal study on the topic he concluded that while a metric system based on natural constants was preferable, the amount of kerfuffle needed to change from the current regime would be highly disruptive and he wasn’t sure Congress had the right to overrule the systems used by individual states.
The disruption would have been large. The vast majority of America’s high value trade was with the UK and Canada, neither of which were metric.
In addition, American and British manufacturers rather liked the old ways. With the existing system, companies manufactured parts to their own specifications, meaning if you wanted spares you had to go buy them from the original manufacturer. This proved highly lucrative.
By the middle of the 19th century, things were changing… slightly. US government scientists did start using some metric measurements for tasks like mapping territory, even though the domestic system remained more common for day-to-day use. The Civil War also spurred a push towards standardization, with some states, like Utah, briefly mandating the metric system.
Two big changes came around in the 20th century following two World Wars. A lack of interchangeability of parts, particularly bolt threading, had seriously hampered the Allied forces. In 1947, America joined the International Organization for Standardization and bolt threads went metric. Today the US Army uses metric to better integrate with NATO allies.
This has continued ever since American manufacturers realized they would have to accommodate the new system if they wanted to sell more kit abroad. Today some parts are technically still manufactured to US measurements, particularly in certain industries, but there is at least a standardized system for converting them to metric.
In the 1960s, metric was renamed Le Système international d’unités (the International System of Units), or SI, and things started moving again in America. After Congressional study, President Gerald Ford signed the Metric Conversion Act in 1975, setting a plan for America to finally go metric as “the preferred system of weights and measures for United States trade and commerce.”
But the plan suffered some drawbacks. Firstly, the system was voluntary, which massively slowed adoption. Secondly, the incoming US president, Jimmy Carter, was a strong proponent of the system, which led his opposition in Congress largely to oppose the plan.
President Reagan shut down most of the moves to metric in 1982, but his successor, George H. W. Bush, revived some of the plans in 1991, ordering US government departments to move over to metric as far as possible. The issue has been kicked down the road ever since.
Different cultures, different customs
These days the arguments over metric versus American measurements are more fraught, becoming a political issue between left and right. Witness Tucker Carlson’s cringe-worthy rant in which he describes metric as “the yoke of tyranny,” hilariously mispronouncing “kailograms.”
Given that trust-fund kid Carlson was educated in a Swiss boarding school, he knows how it’s pronounced, but never let the facts get in the way of invective.
As such, it seems unlikely that we’ll see anything change soon. But that day is coming – America is no longer the manufacturing giant it was, and China is perfectly happy with the metric system, although it maintains other measurements for domestic societal use, as Britain does with pints and miles.
There’s really no logical reason to not go metric – it’s a simple, universal system used by every nation in the world except for the US, Liberia and Myanmar. That’s hardly august company for the Land of the Free.
It will be a long, slow process. No country has managed a full shift to metric in less than a generation – for most it took two or more – and the UK seems to be going backwards. Former Prime Minister Boris Johnson was keen to see a return of the old UK Imperial measurements in Britain, which make the current American system look positively rational.
It may take generations before the issue is resolved in the UK, and longer still for the US. It may, in fact, never happen in America, but the SI system makes sense, is logically sound, and will remain the language of science, medicine and engineering for the vast majority of the world.
If the US doesn’t want to play catch-up with the rest of the world, it will have to take rational measurements seriously. But that day isn’t coming soon, so in the meantime this hack will have to keep using old cookbooks, and we’ll face more measurement mistakes together. ®
January 28th 2023
The Colonial History of the Telegraph
Gutta-percha, a natural resin, enabled European countries to communicate with their colonial outposts around the world.
An old morse key telegraph
By: Livia Gershon
January 21, 2023
Long before the internet, the telegraph brought much of the world together in a communications network. And, as historian John Tully writes, the new nineteenth-century technology was deeply entangled with colonialism, both in the uses it was put to and the raw material that made it possible—the now-obscure natural plastic gutta-percha.
Tully writes that the resin product, made from the sap of certain Southeast Asian trees, is similar to rubber, but without the bounce. When warmed in hot water, it becomes pliable before hardening again as it cools. It’s resistant to both water and acid. For centuries, Malay people had used the resin to make various tools. When Europeans learned about its uses in the nineteenth century, they adopted it for everything from shoe soles to water pipes. It even became part of the slang of the day—in the 1860s, New Englanders might refer to someone they disliked as an “old gutta-percha.” Perhaps most importantly, gutta-percha was perfect for coating copper telegraph wire, replacing much less efficient insulators like tarred cotton or hemp. It was especially important in protecting undersea cables, which simply wouldn’t have been practical without it.
Prior to the invention of the electric telegraph, Tully writes, it could take six months for news from a colonial outpost to reach the mother country.
And those undersea cables became a key part of colonial governance in the second half of the nineteenth century. Prior to the invention of the electric telegraph, Tully writes, it could take six months for news from a colonial outpost to reach the mother country, making imperial control difficult. For example, when Java’s Prince Diponegoro led an uprising against Dutch colonists in 1825, the Dutch government didn’t find out for months, delaying the arrival of reinforcements.
Then, in 1857, Indians rebelled against the rule of the British East India Company. This led panicked colonists to demand an expanded telegraph system. By 1865, Karachi had a near-instant communications line to London. Just a decade later, more than 100,000 miles of cable laid across seabeds brought Australia, South Africa, Newfoundland, and many places in between, into a global communication network largely run by colonial powers. Tully argues that none of this would have been possible without gutta-percha.
But the demand for gutta-percha was bad news for the rainforests where it was found. Tens of millions of trees were felled to extract the resin. Even a large tree might yield less than a pound of the stuff, and the growing telegraph system used as much as four million pounds a year. By the 1890s, ancient forests were in ruins and the species that produced gutta-percha were so rare that some cable companies had to decline projects because they couldn’t get enough of it.
The trees weren’t driven completely extinct, and, eventually, the wireless telegraph and synthetic plastics made gutta-percha’s use in telegraph cables obsolete. Today, the resin is used only in certain specialty areas, such as dentistry. Sadly, though, the decimation of the trees prefigured the fate of rainforests around the world under colonial and neocolonial global systems for more than a century to come.
January 17th 2023
The Tudor Roots of Modern Billionaires’ Philanthropy
The debate over how to manage the wealthy’s fortunes after their deaths traces its roots to Henry VIII and Elizabeth I
Nuri Heckler, The Conversation January 13, 2023
More than 230 of the world’s wealthiest people, including Elon Musk, Bill Gates and Warren Buffett, have promised to give at least half of their fortunes to charity within their lifetimes or in their wills by signing the Giving Pledge. Some of the most affluent, including Jeff Bezos (who hadn’t signed the Giving Pledge as of early 2023) and his ex-wife MacKenzie Scott (who did sign the pledge after their divorce in 2019) have declared that they will go further by giving most of their fortunes to charity before they die.
This movement stands in contrast to practices of many of the philanthropists of the late 19th and early 20th centuries. Industrial titans like oil baron John D. Rockefeller, automotive entrepreneur Henry Ford and steel magnate Andrew Carnegie established massive foundations that to this day have big pots of money at their disposal despite decades of charitable grantmaking. This kind of control over funds after death is usually illegal because of a “you can’t take it with you” legal doctrine that originated in England 500 years ago.
Known as the Rule Against Perpetuities, it holds that control over property must cease within 21 years of a death. But there is a loophole in that rule for money given to charities, which theoretically can flow forever. Without it, many of the largest American and British foundations would have closed their doors after disbursing all their funds long ago.
As a lawyer and researcher who studies nonprofit law and history, I wondered why American donors get to give from the grave.
Henry VIII had his eye on property
In a recent working paper that I wrote with my colleague Angela Eikenberry and Kenya Love, a graduate student, we explained that this debate goes back to the court of Tudor monarch Henry VIII.
The Rule Against Perpetuities developed in response to political upheaval in the 1530s. The old feudal law made it almost impossible for most properties to be sold, foreclosed upon or have their ownership changed in any way.
At the time, a small number of people and the Catholic Church controlled most of the wealth in England. Henry wanted to end this practice because it was difficult to tax property that never transferred, and property owners were mostly unaccountable to England’s monarchy. This encouraged fraud and led to a consolidation of wealth that threatened the king’s power.
As he sought to sever England’s ties to the Catholic Church, Henry had one eye on changing religious doctrine so he could divorce his first wife, Catherine of Aragon, and the other on all the property that would become available when he booted out the church.
After splitting with the church and securing his divorce, he enacted a new property system giving the British monarchy more power over wealth. Henry then used that power to seize property. Most of the property the king took first belonged to the church, but all property interests were more vulnerable under the new law.
Henry’s power grab angered the wealthy gentry, who launched a violent uprising known as the Pilgrimage of Grace.
After quelling that upheaval, Henry compromised by allowing the transfer of property from one generation to the next. But he didn’t let people tell others how to use their property after they died. The courts later developed the Rule Against Perpetuities to allow people to transfer property to their children when they turned 21 years old.
At the same time, wealthy Englishmen were encouraged to give large sums of money and property to help the poor. Some of these funds had strings attached for longer than the 21 years.
Elizabeth I codified the rule
Elizabeth I, Henry’s daughter with his ill-fated wife Anne Boleyn, became queen in 1558, after the deaths of her siblings Edward VI and Mary I. She used her reign to codify that previously informal charitable exception. By then it was the 1590s, a tough time for England, due to two wars, a pandemic, inflation and famine. Elizabeth needed to prevent unrest without raising taxes even further than she already had.
Elizabeth’s solution was a new law decreed in 1601. Known as the Statute of Charitable Uses, it encouraged the wealthy to make big charitable donations and gave courts the power to enforce the terms of the gifts.
The monarchy believed that partnering with charities would ease the burdens of the state to aid the poor.
This concept remains popular today, especially among conservatives in the United States and United Kingdom.
The charitable exception today
When the U.S. broke away from Great Britain and became an independent country, it was unclear whether it would stick with the charitable exception.
Some states initially rejected British law, but by the early 19th century, every state in the U.S. had adopted the Rule Against Perpetuities.
In the late 1800s, scholars started debating the value of the rule, even as large foundations took advantage of Elizabeth’s philanthropy loophole. My co-authors and I found that, as of 2022, 40 U.S. states had ended or limited the rule; every jurisdiction, including the District of Columbia, permits eternal control over donations.
Although this legal precept has endured, many scholars, charities and philanthropists question whether it makes sense to let foundations hang on to massive endowments with the goal of operating in the future in accordance with the wishes of a long-gone donor rather than spend that money to meet society’s needs today.
For issues such as climate change, spending more now could significantly decrease what it will cost to resolve the problem later.
Still other problems require change that is more likely to come from smaller nonprofits. In one example, many long-running foundations, including the Ford, Carnegie and Kellogg foundations, contributed large sums to help Flint, Michigan, after a shift in water supply brought lead in the tap water to poisonous levels. Some scholars argue this money undermined local community groups that better understood the needs of Flint’s residents.
Another argument is more philosophical: Why should dead billionaires get credit for helping to solve contemporary problems through the foundations bearing their names? This question often leads to a debate over whether history is being rewritten in ways that emphasize their philanthropy over the sometimes questionable ways that they secured their wealth.
Some of those very rich people who started massive foundations were racist and anti-Semitic. Does their use of this rule that’s been around for hundreds of years give them the right to influence how Americans solve 21st-century problems?
Nuri Heckler is an expert on public administration at the University of Nebraska Omaha. His research focuses on power in public organizations, including nonprofits, social enterprise and government.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
January 9th 2023
Rethinking the European Conquest of Native Americans
In a new book by Pekka Hämäläinen, a picture emerges of a four-century-long struggle for primacy among Native power centers in North America.
By David Waldstreicher
December 31, 2022
When the term Indian appears in the Declaration of Independence, it is used to refer to “savage” outsiders employed by the British as a way of keeping the colonists down. Eleven years later, in the U.S. Constitution, the Indigenous peoples of North America are presented differently: as separate entities with which the federal government must negotiate. They also appear as insiders who are clearly within the borders of the new country yet not to be counted for purposes of representation. The same people are at once part of the oppression that justifies the need for independence, a rival for control of land, and a subjugated minority whose rights are ignored.
For the Finnish scholar Pekka Hämäläinen, this emphasis on what Native people meant to white Americans misses an important factor: Native power. The lore about Jamestown and Plymouth, Pocahontas and Squanto, leads many Americans to think in terms of tragedy and, eventually, disappearance. But actually, Indigenous people continued to control most of the interior continent long after they were outnumbered by the descendants of Europeans and Africans.
Indigenous Continent: The Epic Contest for North America, by Pekka Hämäläinen (National Geographic Books)
Much more accurate is the picture Hämäläinen paints in his new book, Indigenous Continent: a North American history that encompasses 400 years of wars that Natives often, even mostly, won—or did not lose decisively in the exceptional way that the Powhatans and Pequots had by the 1640s. Out of these centuries of broader conflict with newcomers and one another, Native peoples established decentralized hives of power, and even new empires.
In a previous book, The Comanche Empire, Hämäläinen wrote of what he controversially referred to as a “reversed colonialism,” which regarded the aggressive, slaving equestrians of “greater Comanchería”—an area covering most of the Southwest—as imperialists in ways worth comparing to the French, English, Dutch, and Spanish in America. There was continued pushback from some scholars when Hämäläinen extended the argument northward in his 2019 study, Lakota America. (The impact of his work among historians may be measured by his appointment as the Rhodes Professor of American History at Oxford University.)
What was most distinctive about these two previous books was that Hämäläinen so convincingly explained the Indigenous strategies for survival and even conquest. Instead of focusing on the microbes that decimated Native populations, Hämäläinen showed how the Comanche developed what he termed a “politics of grass.” A unique grasslands ecosystem in the plains allowed them to cultivate huge herds of horses and gave the Comanche access to bison, which they parlayed into market dominance over peoples who could supply other goods they wanted, such as guns, preserved foods, and slaves for both trade and service as herders.
Hämäläinen treats Native civilizations as polities making war and alliances. In Indigenous Continent, there is less emphasis than in The Comanche Empire on specific ecosystems and how they informed Indigenous strategies. Instead, he describes so many Native nations and European settlements adapting to one another over such a wide and long time period that readers can appreciate anew how their fates were intertwined—shattering the simple binary of “Indians” and “settlers.” Indigenous peoples adapted strenuously and seasonally to environments that remained under their control but had to contend at the same time with Europeans and other refugees encroaching on their vague borders. These newcomers could become allies, kin, rivals, or victims.
Hämäläinen sees a larger pattern of often-blundering Europeans becoming part of Indigenous systems of reciprocity or exploitation, followed by violent resets. When Dutch or French traders were “generous with their wares” and did not make too many political demands, Natives pulled them into their orbit. Spanish and, later, British colonists, by contrast, more often demanded obeisance and control over land, leading to major conflicts such as the ones that engulfed the continent in the 1670s–80s and during the Seven Years’ War. These wars redirected European imperial projects, leading to the destruction of some nations, and the migration and recombination of others, such as the westward movement of the Lakota that led to their powerful position in the Missouri River Valley and, later, farther west. In this history, Indigenous “nomadic” mobility becomes grand strategy. North America is a continent of migrants battling for position long before the so-called nation of immigrants.
“Properly managed,” settlers and their goods “could be useful,” Hämäläinen writes. The five nations of the Iroquois (Haudenosaunee) confederacy established a pattern by turning tragic depopulation by epidemic into opportunities for what Hämäläinen calls “mourning wars,” attacks on weakened tribes that gained captives. They formed new alliances and capitalized on their geographic centrality between fur-supplying nations to the west and north, and French and Dutch and, later, English tool and gun suppliers to the east and south. Hämäläinen insists that their warfare was “measured, tactical,” that their use of torture was “political spectacle,” that their captives were actually adoptees, and that their switching of sides in wartime and their selling out of distant client tribes such as the Delaware was a “principled plasticity.” This could almost be an expert on European history talking about the Plantagenets, the Hapsburgs, or Rome.
And there’s the rub. Hämäläinen, a northern European, feels comfortable applying the ur-Western genre of the rise and fall of empires to Native America, but imperial history comes with more baggage. Hämäläinen seems certain that Comanche or other Indigenous imperial power was different in nature from the European varieties, but it often seems as if Indigenous peoples did many of the same things that European conquerors did. Whether the Iroquois had “imperial moments,” actually were an empire, or only played one for diplomatic advantage is only part of the issue. Hämäläinen doesn’t like the phrase settler colonialism. He worries that the current term of art for the particularly Anglo land-grabbing, eliminationist version of empire paints with too broad a brush. Perhaps it does. But so does his undefined concept of empire, which seems to play favorites at least as much as traditional European histories do.
If an empire is an expanding, at least somewhat centralized polity that exploits the resources of other entities, then the Iroquois, Comanche, Lakota, and others may well qualify. But what if emphasizing the prowess of warriors and chiefs, even if he refers to them as “soldiers” and “officials,” paradoxically reinforces exoticizing stereotypes? Hämäläinen is so enthralled with the surprising power and adaptability of the tribes that he doesn’t recognize the contradiction between his small-is-beautiful praise of decentralized Indigenous cultures and his condescension toward Europeans huddling in their puny, river-hugging farms and towns.
Hämäläinen notes that small Native nations could be powerful too, and decisive in wars. His savvy Indigenous imperialists wisely prioritized their relationships, peaceful or not, with other Natives, using the British or French as suppliers of goods. Yet he praises them for the same resource exploitation and trade manipulation that appears capitalist and murderous when European imperialists do their version. In other words, he praises Natives when they win for winning. Who expanded over space, who won, is the story; epic battles are the chapters; territory is means and end.
And the wheel turns fast, followed by the rhetoric. When British people muscle out Natives or seek to intimidate them at treaty parleys, they are “haughty.” At the same time, cannibalism and torture are ennobled as strategies—when they empower Natives. Native power as terror may help explain genocidal settler responses, but it makes Natives who aren’t just plain brave—including women, who had been producers of essential goods and makers of peace—fade away almost as quickly as they did in the old history. As readers, we gain a continental perspective, but strangely, we miss the forest for the battlefields.
It’s already well known why Natives lost their land and, by the 19th century, no longer had regional majorities: germs, technology, greed, genocidal racism, and legal chicanery, not always in that order. Settler-colonial theory zeroes in on the desire to replace the Native population, one way or another, for a reason: Elimination was intended even when it failed in North America for generations.
To Hämäläinen, Natives dominated so much space for hundreds of years because of their “resistance,” which he makes literally the last word of his book. Are power and resistance the same thing? Many scholars associated with the Native American and Indigenous Studies Association find it outrageous to associate any qualities of empire with colonialism’s ultimate, and ongoing, victims. The academic and activist Nick Estes has accused Hämäläinen of “moral relativist” work that is “titillating white settler fantasies” and “winning awards” for doing so. Native American scholars, who labor as activists and community representatives as well as academics in white-dominated institutions, are especially skeptical when Indigenous people are seen as powerful enough to hurt anyone, even if the intent is to make stock figures more human. In America, tales of Native strength and opportunistic mobility contributed to the notion that all Natives were the same, and a threat to peace. The alternative categories of victim and rapacious settler help make better arguments for reparative justice.
In this light, the controversy over Native empires is reminiscent of what still happens when it’s pointed out that Africans participated in the slave trade—an argument used by anti-abolitionists in the 19th century and ever since to evade blame for the new-world slaveries that had turned deadlier and ideologically racial. It isn’t coincidental that Hämäläinen, as a fan of the most powerful Natives, renders slavery among Indigenous people as captivity and absorption, not as the commodified trade it became over time. Careful work by historians has made clear how enslavement of and by Natives became, repeatedly, a diplomatic tool and an economic engine that created precedents for the enslavement of Black Americans.
All genres of history have their limits, often shaped by politics. That should be very apparent in the age of the 1619 and 1776 projects. Like the Declaration and the Constitution, historians are still trying to have it both ways when it comes to Indigenous peoples. Books like these are essential because American history needs to be seen from all perspectives, but there will be others that break more decisively with a story that’s focused on the imperial winners.
January 8th 2023
Britain’s first black aristocrats
By Fedora Abu, 10th May 2021
Whitewashed stories about the British upper classes are being retold. Fedora Abu explores the Bridgerton effect, and talks to Lawrence Scott, author of Dangerous Freedom.
For centuries, the Royal Family, Britain’s wealthiest, most exclusive institution, has been synonymous with whiteness. And yet, for a brief moment, there she was: Her Royal Highness the Duchess of Sussex, a biracial black woman, on the balcony at Buckingham Palace. Her picture-perfect wedding to Prince Harry in 2018 was an extraordinary amalgamation of black culture and centuries-old royal traditions, as an African-American preacher and a gospel choir graced St George’s Chapel in Windsor. Watching on that sunny May afternoon, who would’ve known things would unravel the way they have three years on?
Although heralded as a history-maker, the Duchess of Sussex is not actually the first woman of colour to have been part of the British upper classes. Dangerous Freedom, the latest novel by Trinidadian author Lawrence Scott, tells the story of the real historical figure Dido Elizabeth Belle, the mixed-race daughter of enslaved woman Maria Belle and Captain Sir John Lindsay. Born in 1761, she was taken in by her great-uncle, Lord Chief Justice William Murray, first Earl of Mansfield, and raised amid the lavish setting of Kenwood House in Hampstead, London, alongside her cousin Elizabeth. It was a rare arrangement, most likely unique, and today she is considered to be Britain’s first black aristocrat.
Lawrence Scott’s novel tells the story of Belle from a fresh perspective (Credit: Papillote Press)
Scott’s exploration of Belle’s story began with a portrait. Painted by Scottish artist David Martin, the only known image of Belle shows her in a silk dress, pearls and turban, next to her cousin, in the grounds of Kenwood. It’s one of the few records of Belle’s life, along with a handful of written accounts: a mention in her father’s obituary in the London Chronicle describing her “amiable disposition and accomplishments”; a recollection by Thomas Hutchinson, a guest of Mansfield, of her joining the family after dinner, and her uncle’s fondness for her. These small nuggets – together with years of wider research – allowed Scott to gradually piece together a narrative.
As it happened, while Scott was delving into the life of Dido Belle, so were the makers of Belle, the 2014 film starring Gugu Mbatha-Raw that was many people’s first introduction to the forgotten figure. With those same fragments, director Amma Asante and screenwriter Misan Sagay spun a tale that followed two classic Hollywood plotlines: a love story, as Dido seeks to find a husband, but also a moral one as we await Mansfield’s ruling on a landmark slavery case. As might be expected, Belle is subjected to racist comments by peers and, in line with Hutchinson’s account, does not dine with her family – nor have a “coming out”. However, she is shown to have a warm relationship with her cousin “Bette” and her “Papa” Lord Mansfield, and a romantic interest in John Davinier, an anglicised version of his actual name D’Aviniere, who in the film is depicted as a white abolitionist clergyman and aspiring lawyer.
Two drafts into his novel when Belle came out, Scott was worried that the stories were too similar – but it turned out that wasn’t the case. Dangerous Freedom follows Belle’s life post-Kenwood – now known as Elizabeth D’Aviniere and married and with three sons, as she reflects on a childhood tinged with trauma, and yearns to know more about her mother. Her husband is not an aspiring lawyer but a steward, and cousin “Beth” is more snobbish than sisterly. Even the painting that inspired the novel is reframed: where many see Dido presented as an equal to her cousin, Scott’s Dido is “appalled” and “furious”, unable to recognise the “turbaned, bejewelled… tawny woman”.
In a 1778 painting by David Martin, Dido Belle is depicted with her cousin Lady Elizabeth Murray (Credit: Alamy)
For Scott, the portrait itself is a romantic depiction of Belle that he aims to re-examine with his book – the painting’s motifs have not always been fully explored in whitewashed art history, and he has his own interpretation. “The Dido in the portrait is a very romanticised, exoticised, sexualised sort of image,” he says. “She has a lot of the tell-tale relics of 18th-Century portraiture, such as the bowl of fruit and flowers, which all these enslaved young boys and girls are carrying in other portraits. She’s carrying it differently, it’s a different kind of take, but I really wonder what [the artist] Martin was trying to do.” The film also hints at the likely sexualisation of Belle when in one scene a prospective suitor describes her as a “rare and exotic flower”. “One does not make a wife of the rare and exotic,” retorts his brother. “One samples it on the cotton fields.”
In fact, to find a black woman who married into the aristocracy, we have to fast forward another 250 years, to when Emma McQuiston, the daughter of a black Nigerian father and white British mother, wedded Ceawlin Thynn, then Viscount Weymouth, in 2013. In many ways, the experiences of Emma Thynn (now the Marchioness of Bath) echo those of Dido: in interviews, she has addressed the racism and snobbery she first experienced in aristocratic circles, and her husband has shared that his mother expressed worries about “400 years of bloodline“.
Ironically, there has long been speculation that the Royal Family could itself have mixed-race ancestry. For decades, historians have debated whether Queen Charlotte, wife of King George III, had African heritage but was “white-passing” – as is alluded to in Dangerous Freedom. While many academics have cast doubt on the theory, it’s one that the writers of TV drama series Bridgerton run with, casting her as an unambiguously black woman. The show imagines a diverse “ton” (an abbreviation of the French phrase le bon ton, meaning sophisticated society), with other black characters including the fictional Duke of Hastings, who is society’s most eligible bachelor, and his confidante Lady Danbury. Viewed within the context of period dramas, which typically exclude people of colour for the sake of historical accuracy, Bridgerton’s ethnically diverse take on the aristocracy is initially refreshing. However, that feeling is complicated somewhat by the revelation that the Bridgerton universe is not exactly “colourblind”, but rather what is being depicted in the series is an imagined scenario where the marriage of Queen Charlotte to King George has ushered in a sort of post-racial utopia.
Light-hearted, frothy and filled with deliberate anachronisms, Bridgerton is not designed to stand up to rigorous analysis. Even so, the show’s handling of race has drawn criticism for being more revisionist than radical. The series is set in 1813, 20 years before slavery was fully abolished in Britain, and while the frocks, palaces and parties of Regency London all make for sumptuous viewing, a key source of all that wealth has been glossed over. What’s more, just as Harry and Meghan’s union made no material difference to ordinary black Britons, the suggestion that King George’s marriage to a black Queen Charlotte wiped out racial hierarchies altogether feels a touch too fantastical.
In the TV drama series Bridgerton, Queen Charlotte is played by Golda Rosheuvel (Credit: Alamy)
In some ways, Bridgerton could be read as an accidental metaphor for Britain’s real-life rewriting of its own slave-trading past. That the Royal Family in particular had a major hand in transatlantic slavery – King Charles II and James, Duke of York, were primary shareholders in the Royal African Company, which trafficked more Africans to the Americas than any other institution – is hardly acknowledged today. “As [historian] David Olusoga is constantly arguing, there’s this kind of whitewashing of these bits of colonial history – not really owning these details, these conflicts,” says Scott. Instead, as University College London’s Catherine Hall notes, the history of slavery in Britain has been told as “the triumph of abolition”.
Olusoga himself has been among those digging up those details, and in 2015 he fronted the BBC documentary Britain’s Forgotten Slaveowners, which, together with the UCL Centre for the Study of the Legacies of British Slave-ownership, looked into who was granted a share of the £20m ($28m) in compensation for their loss of “property” post-abolition. It’s only in learning that this figure equates to £17bn ($24bn) in real terms (with half going to just 6% of the 46,000 claimants) – and that those payments continued to be made until 2015 – that we can begin to understand how much the slave trade shaped who holds wealth today.
It took the Black Lives Matter protests of last summer to accelerate the re-examination of Britain’s slave-trading history, including its links to stately homes. In September 2020, the National Trust published a report which found that a third of its estates had some connection to the spoils of the colonial era; a month later, Historic Royal Palaces announced it was launching a review into its own properties. Unsurprisingly, the prospect of “decolonising” some of Britain’s most prized country houses has sparked a “culture war” backlash, but a handful of figures among the landed gentry have been open to confronting the past. David Lascelles, Earl of Harewood, for example, has long been upfront about how the profits from slavery paid for Harewood House, even appearing in Olusoga’s documentary and making the house’s slavery archives public.
The British aristocracy is multi-racial in the reimagined historical universe presented by TV series Bridgerton (Credit: Alamy)
“Much more now, great houses are bringing [this history] to the fore and having the documentation in the home,” says Scott. “Kenwood has done that to the extent that it has a copy of the portrait now… [and] the volunteers that take you around tell a much more conflicted story about it.” Still, even as these stories are revealed in more vivid detail, how we reckon with the ways in which they’ve influenced our present – and maybe even remedy some of the injustices – is a conversation yet to be had.
With all those palaces, jewels and paintings, it’s not hard to see why contemporary culture tends to romanticise black figures within the British upper classes. Works such as Dangerous Freedom are now offering an alternative view, stripping the aristocracy of its glamour, giving a voice to the enslaved and narrating the discrimination, isolation and tensions that we’ve seen still endure. The progressive fairytale – or utopian reimagining – will always have greater appeal. But perhaps, as Scott suggests, it’s time for a new story to be written.
Dangerous Freedom by Lawrence Scott (Papillote Press) is out now.
December 30th 2022
How Diverse Was Medieval Britain?
An archaeologist explains how studies of ancient DNA and objects reveal that expansive migrations led to much greater diversity in medieval Britain than most people imagine today.
29 Nov 2022
This article was originally published at The Conversation and has been republished with Creative Commons.
WHEN YOU IMAGINE LIFE for ordinary people in ancient Britain, you’d be forgiven for picturing quaint villages where everyone looked and spoke the same way. But a recent study could change the way historians think about early medieval communities.
Most of what we know about English history after the fall of the Roman Empire is limited to archaeological finds. There are only two contemporary accounts of this post-Roman period: Gildas (sixth century) and Bede (eighth century) were both monks who gave narrow descriptions of invasion by people from the continent, and neither provides an objective account.
My team’s study, published in Nature, changes that. We analyzed DNA from the remains of 460 people from sites across Northern Europe and found evidence of mass migration from Europe to England and the movement of people from as far away as West Africa. Our study combined information from artifacts and human remains.
That meant we could dig deeper into the data to explore the human details of migration.
JOURNEY INTO ENGLAND’S PAST
This paper found that about 76 percent of the genetic ancestry in the early medieval English population we studied originated from what is today northern Germany and southern Scandinavia—Continental Northern European (CNE). This number is an average taken from 278 ancient skeletons sampled from the south and east coasts of England. It is strong evidence for mass migration into the British Isles after the end of Roman administration.
One of the most surprising discoveries was the skeleton of a young girl who died at about 10 or 11 years of age, found in Updown near Eastry in Kent. She was buried in typical early seventh-century style, with a finely made pot, knife, spoon, and bone comb. Her DNA, however, tells a more complex story. As well as 67 percent CNE ancestry, she also had 33 percent West African ancestry. Her African ancestor was most closely related to modern-day Esan and Yoruba populations in southern Nigeria.
Evidence of far-reaching commercial connections with Kent at this time is well documented. The garnets in many brooches found in this region came from Afghanistan, for example. And the movement of the Updown girl’s ancestors was likely linked to these ancient trading routes.
KEEPING IT IN THE FAMILY
Two women buried close by were sisters and had predominantly CNE ancestry. They were related to Updown girl—perhaps her aunts. The fact that all three were buried in a similar way, with brooches, buckles, and belt hangers, suggests the people who buried them chose to highlight similarities between Updown girl and her older female relatives when they dressed them and located the burials close together. They treated her as kin, as a girl from their village, because that is what she was.
The aunts also shared a close kinship with a young man buried with artifacts that implied some social status, including a spearhead and buckle. The graves of these four people were all close together. They were buried in a prominent position marked by small barrow mounds (ancient burial places covered with a large mound of earth and stones). The visibility of this spot, combined with their dress and DNA, marks these people as part of an important local family.
The site studied in most detail—Buckland, near Dover in Kent—had kinship groups that spanned at least four generations.
One family group with CNE ancestry is remarkable because of how quickly they integrated with western British and Irish (WBI) people. Within a few generations, traditions had merged between people born far away from each other. A 100 percent WBI woman had two daughters with a 100 percent CNE man. WBI ancestry entered this family again a generation later, in near 50/50 mixed-ancestry grandchildren. Objects, including similar brooches and weapons, were found in graves on both sides of this family, indicating shared values between people of different ancestries.
This family was buried in graves close together for three generations—that is, until a woman from the third generation was buried in a different cluster of graves to the north of the family group. One of her children, a boy, died at about 8 to 10 years of age and was buried in the cluster of graves that included his maternal grandparents and their close family: she laid her youngest child to rest surrounded by her own kin. But when the mother herself died, her adult children chose a spot close to their father for her grave. They considered her part of the paternal side of the family.
Another woman from Buckland had a unique mitochondrial haplotype, a set of DNA variants that tend to be inherited together. Both males and females inherit this haplotype from their mothers. So her DNA suggests she had no maternal family in the community she was buried with.
The chemical isotopes from her teeth and bones indicate she was not born in Kent but moved there when she was 15–25 years old. An ornate gold pendant, called a bracteate, which may have been of Scandinavian origin, was found in her grave.
This suggests she left her home in Scandinavia in her youth, and her mother’s family did not travel with her. She very likely had an exogamous marriage (marriage outside of one’s social group). What is striking is the physical distance that this partnership bridged. This woman traveled 700 miles, including a voyage across the North Sea, to start her family.
These people were migrants and the children of migrants who traveled in the fifth, sixth, and seventh centuries. Their stories are of community and intermarriage. The genetic data points to profound mobility within a time of mass migration, and the archaeological details help complete the family histories. Migration did not happen at the same time, nor did migrants come from the same place. Early Anglo-Saxon culture was a mixing pot of ideas, intermarriage, and movement. This genetic coalescing and cultural diversity created something new in the south and east of England after the Roman Empire ended.
Duncan Sayer is a reader in archaeology at the University of Central Lancashire. He directed excavations at Oakington early Anglo-Saxon cemetery and Ribchester Roman Fort, and has worked extensively in field archaeology. Sayer is the author of Ethics and Burial Archaeology.
August 21st 2022
What the ‘golden age’ of flying was really like
Jacopo Prisco, CNN • Updated 5th August 2022
(CNN) — Cocktail lounges, five-course meals, caviar served from ice sculptures and an endless flow of champagne: life on board airplanes was quite different during the “golden age of travel,” the period from the 1950s to the 1970s that is fondly remembered for its glamor and luxury.

It coincided with the dawn of the jet age, ushered in by aircraft like the de Havilland Comet, the Boeing 707 and the Douglas DC-8, which were used in the 1950s for the first scheduled transatlantic services, before the introduction of the Queen of the Skies, the Boeing 747, in 1970. So what was it actually like to be there?

“Air travel at that time was something special,” says Graham M. Simons, an aviation historian and author. “It was luxurious. It was smooth. It was fast. People dressed up because of it. The staff was literally wearing haute couture uniforms. And there was much more space: seat pitch — that’s the distance between the seats on the aircraft — was probably 36 to 40 inches. Now it’s down to 28, as they cram more and more people on board.”
Sunday roast is carved for passengers in first class on a BOAC VC10 in 1964. (Airline: Style at 30,000 Feet/Keith Lovegrove)

With passenger numbers just a fraction of what they are today and fares too expensive for anyone but the wealthy, airlines weren’t worried about installing more seats, but more amenities.

“The airlines were marketing their flights as luxurious means of transport, because in the early 1950s they were up against the cruise liners,” adds Simons. “So there were lounge areas, and the possibility of four, five, even six course meals. Olympic Airways had gold-plated cutlery in the first class cabins. Some of the American airlines had fashion shows down the aisle, to help the passengers pass the time. At one stage, there was talk of putting baby grand pianos on the aircraft to provide entertainment.”

The likes of Christian Dior, Chanel and Pierre Balmain were working with Air France, Olympic Airways and Singapore Airlines respectively to design crew uniforms. Being a flight attendant — or a stewardess, as they were called until the 1970s — was a dream job.

“Flight crews looked like rock stars when they walked through the terminal, carrying their bags, almost in slow motion,” says Keith Lovegrove, designer and author of the book “Airline: Style at 30,000 Feet.” “They were very stylish, and everybody was either handsome or beautiful.”

Most passengers tried to follow suit.
Pan American World Airways is perhaps the airline most closely linked with the “golden age.” (Ivan Dmitri/Michael Ochs Archives/Getty Images)

“It was like going to a cocktail party. We had a shirt and tie and a jacket, which sounds ridiculous now, but was expected then,” adds Lovegrove, who began flying in the 1960s as a child with his family, often getting first class seats as his father worked in the airline industry. “When we flew on the jumbo jet, the first thing my brother and I would do was go up the spiral staircase to the top deck, and sit in the cocktail lounge.”

“This is the generation where you’d smoke cigarettes on board and you’d have free alcohol. I don’t want to put anyone in trouble, but at a young age we were served a schooner of sherry before our supper, then champagne and then maybe a digestive afterwards, all below drinking age. There was an incredible sense of freedom, despite the fact that you were stuck in this fuselage for a few hours.”

According to Lovegrove, this relaxed attitude also extended to security. “There was very little of it,” he says. “We once flew out to the Middle East from the UK with a budgerigar, a pet bird, which my mother took on board in a shoebox as hand luggage. She punched two holes in the top, so the little bird could breathe. When we were brought our three-course meal, she took the lettuce garnish off the prawn cocktail and laid it over the holes. The bird sucked it in. Security-wise, I don’t think you could get away with that today.”
A Pan Am flight attendant serves champagne in the first class cabin of a Boeing 747 jet. (Tim Graham/Getty Images)

The airline most often associated with the golden age of travel is Pan Am, the first operator of the Boeing 707 and 747 and the industry leader on transoceanic routes at the time.

“My job with Pan Am was an adventure from the very day I started,” says Joan Policastro, a former flight attendant who worked with the airline from 1968 until its dissolution in 1991. “There was no comparison between flying for Pan Am and any other airline. They all looked up to it. The food was spectacular and service was impeccable. We had ice swans in first class that we’d serve the caviar from, and Maxim’s of Paris [a renowned French restaurant] catered our food.”

Policastro recalls how passengers would come to a lounge in front of first class “to sit and chat” after the meal service. “A lot of times, that’s where we sat too, chatting with our passengers. Today, passengers don’t even pay attention to who’s on the airplane, but back then, it was a much more social and polite experience,” says Policastro, who worked as a flight attendant with Delta before retiring in 2019.

Suzy Smith, who was also a flight attendant with Pan Am starting in 1967, also remembers sharing moments with passengers in the lounge, including celebrities like actors Vincent Price and Raquel Welch, anchorman Walter Cronkite and Princess Grace of Monaco.