History

February 6th 2023

What Have Strikes Achieved?

Withdrawing labour is an age-old response to workplace grievances. But how old, and to what effect?

History Today | Published in History Today, Volume 73, Issue 1, January 2023

‘Una huelga de obreros en Vizcaya (A strike of workers in Biscay)’, Vicente Cutanda, 1892. Museo del Prado/Wiki Commons.

‘In Aristophanes’ Lysistrata, the women of Greece unite together in a sex-strike’

Lynette Mitchell, Professor in Greek History and Politics at the University of Exeter

‘Strike action’ – the withdrawal of labour as a protest – was known in the ancient world. The Greeks, however, did not generally form themselves into professional guilds, at least not before the third century BC when the associations of ‘the musicians of Dionysus’ were formed alongside the growth in the number of festivals.

This did not mean, however, that the Greeks were oblivious to the significance of the withdrawal of labour. The epic poem the Iliad begins with Achilles – the best of the Greek fighters – withdrawing from battle against the Trojans because he has been deprived of his war-prize, the concubine Briseis.

Withdrawing one’s skills as a fighter in warfare was a significant bargaining tool. At the beginning of the fourth century BC, the Greek army of the Ten Thousand, employed by Cyrus the Younger in the war against his brother, Artaxerxes II, threatened to abandon the Persian prince unless he raised their pay to a level commensurate with the danger of engaging the ‘King of Kings’ in battle (they had originally been employed on another pretext and a different pay scale). In 326 BC, when the soldiers of Alexander the Great reached the River Hyphasis (the modern Beas) in the Punjab, they refused to cross it and penetrate further east into northern India, thus forcing Alexander to give up his pursuit of limitless glory. The writer Arrian says that this was his only defeat.

War brought glory, but it also brought misery. In Aristophanes’ comedy Lysistrata, produced in 411 BC, the women of Greece unite together in a sex-strike in order to force their husbands to give up their wars with each other. Although the women struggle to maintain discipline among their own ranks (some of the most comic scenes of the play describe women sneaking away from the Acropolis, which the strikers have occupied), the eponymous Lysistrata, a woman of intelligence and determination, is asked to arbitrate between the Greek cities in order to bring the strike to an end; she presents the warring men with a beautiful girl, Reconciliation, and the play ends with the Spartans and Athenians remembering the wars fought together against the Persians. Peace is restored.

‘During the reign of Ramesses III underpayment had become typical’

Dan Potter, Assistant Curator of the Ancient Mediterranean collections at National Museums Scotland

Early in the 29th year of the reign of Ramesses III (c.1153 BC), the royal tomb builders of Deir el-Medina grew increasingly concerned about the payment of their wages. The workmen were paid in sacks of barley and wheat, which was not just their families’ food, but also currency. Late deliveries and underpayment had become typical, leading one scribe to keep a detailed record of the arrears. Supply issues were linked to the agricultural calendar, but the consistent problems of this period show it was also a failure of state. An initial complaint by the workers was resolved but the causes were not dealt with. With the approval of their ‘captains’ (a three-man leadership group), the workers staged eight days of action; they ‘passed the walls’ of their secluded village and walked down to nearby royal temples chanting ‘We are hungry!’ They held sit-ins at several temples, but officials remained unable, or unwilling, to assist. A torchlit demonstration later in the week forced through one month’s grain payment.

In the following months, they ‘passed the walls’ multiple times. Eventually, the recently promoted vizier, To, wrote to them explaining that the royal granaries were empty. He apologised with a politician’s answer for the ages: ‘It was not because there was nothing to bring you that I did not come.’ In reality, To was probably busy in the delta capital at the King’s Heb-Sed (royal jubilee). To rustled up a half payment to appease the striking workers. After this derisory delivery, the angry Chief Workman Khons proposed a door-to-door campaign against local officials, which was only halted by his fellow captain Amunnakht, the scribe who recorded much of the detail we have about the strikes.

Even after a bulk reimbursement was paid early in year 30, inconsistent payments resulted in more industrial action in the ensuing years. The strikes were indicative of increasing regional instability, as Waset (Luxor) experienced food shortages, inflation, incursions from nomadic tribes, tomb robberies and more downing of tools. The workers’ village was partially abandoned around 70 years later.

‘Success depends on the response of the public and the possibility of favourable government intervention’

Alastair Reid, Fellow of Girton College, Cambridge

The word strike usually brings to mind a mass strike that goes on for a long time and completely shuts down an industry, such as the British coal miners’ strikes of the 1920s and the 1970s. These sorts of disputes have rarely achieved anything positive: they are costly for strikers and their families, and if their unions can afford to give the strikers some support, that only drains the organisation’s funds. The stress caused has often led to splits within the union and friction with other organisations.

It is noticeable, therefore, that in recent years trade unions calling large numbers of their members out on strike have tended to focus on limited days of action rather than indefinite closures.

Sometimes the wider public has been sympathetic towards the strikers. This was the case during the London dock strike of 1889. However, when the disruption has affected public services, as in the ‘Winter of Discontent’ in 1978-79, strikers have become very unpopular. Often, when this sort of strike action achieved positive results for trade unionists, it was when the government had reason to intervene in their favour: during the First World War for example, when maintaining military production was essential.

The mass withdrawal of labour is not the only form of strike action that has been seen in the past. Highly skilled unions such as engineers and printers developed a tactic known as the ‘strike in detail’, during which they used their unemployment funds to support members in leaving blacklisted firms and thus effectively targeted employers one at a time. Another possibility is the opposite of a strike – a ‘work in’ – as at the Upper Clyde Shipbuilders in 1971, when a significant part of the workforce refused to accept the closure of the yards and won significant public support for their positive attitude. In general, the mass strike is a dangerous weapon that can easily backfire: success depends on the response of the public and the possibility of favourable government intervention.

‘There was one clear winner: the Chinese Communist Party’

Elisabeth Forster, Lecturer in Chinese History at the University of Southampton

Gu Zhenghong was shot dead by a foreman on 15 May 1925, triggering China’s anti-imperialist May 30th Movement. Gu was a worker on strike at a textile mill in Shanghai. The mill was Japanese-owned, Japan being among the countries that had semi-colonised China. Outraged by Gu’s death – and the imperialism behind it – students and workers demonstrated in Shanghai’s Foreign Settlement on 30 May. At some point British police opened fire, leaving more than ten demonstrators dead. In response, a general strike was called, with workers’, students’ and merchants’ unions, the Shanghai General Chamber of Commerce, as well as the Nationalist Party (GMD) and the Chinese Communist Party among its leaders.

Among the strikers were students, merchants and workers in various sectors: seamen, wharf workers, and employees of phone companies, power plants, buses and trams. Not all sectors participated and certain individuals broke the strike, some of whom were then kidnapped by their unions. The strikes were accompanied by boycotts of foreign goods and sometimes strikers clashed violently with the authorities.

The demands were broad and were not confined to work-related issues, but also covered anti-imperialist goals, such as an end to extraterritoriality. By August, enthusiasm for the strikes had waned. Merchants were tired of their financial losses. Some of the workers started rioting against their union, since strike pay had dried up. The strikes’ organisers therefore had to settle the industrial (and political) dispute.

Contemporaries were unsure if the strikes had achieved their goal. Strike demands had been reduced and not all were met. Many new unions had been founded, but some were also closed by the authorities, and labour movement organisers had to go underground or face arrest and execution. But there was one clear winner: the Chinese Communist Party. If workers had previously mistrusted communists as hairy, badly dressed ‘extremists’, the Party was now acknowledged as a leader of labour. Imperialism in China would end, but not until after the Second World War and the era of global decolonisation.

February 2nd 2023

It’s been 230 years since British pirates robbed the US of the metric system

How did the world’s largest economy get stuck with retro measurement?

Iain Thomson

Sun 22 Jan 2023 // 08:38 UTC

Feature In 1793, French scientist Joseph Dombey sailed for the newly formed United States at the request of Thomas Jefferson carrying two objects that could have changed America. He never made it, and now the US is stuck with a retro version of measurement that is unique in the modern world.

The first, a metal cylinder, was exactly one kilogram in mass. The second was a copper rod the length of a newly proposed distance measurement, the meter.

Jefferson, an avid Francophile, was keen to bring the rationality of the metric system to the US. But Dombey’s ship was blown off course and captured by English privateers (pirates with government sanction), and the scientist died on the island of Montserrat while waiting to be ransomed.

And so America is one of a handful of countries that maintains its own unique forms of weights and measures.

The reason for this history lesson? Over the last holiday period this hack has been cooking and is sick of this pounds/ounces/odd pints business – and don’t even get me started on using cups as a unit of measurement.

It’s time for America to get out of the Stone Age and get on board with the International System of Units (SI), as the metric system is now formally known.

There’s a certain amount of hypocrisy here – I’m British and we still cling to our pints and miles per hour, and I’m told drug dealers still deal in eighths and ‘teenths in the land of my birth. But the American system is bonkers; it has cost the country many millions of dollars and a growing amount of influence, and it needs to be changed.

Brits and Americans…

The cylinder and rod Dombey was carrying – the former now owned by the US National Institute of Standards and Technology – were requested by Jefferson because the British system then in place was utterly irrational.

When British colonists settled in the Americas, they brought with them a bastardized version of the mother country’s weights, measures and currencies. A Scottish pint, for example, was almost triple the size of an English one until 1824, which speaks volumes about the drinking culture north of the border.

British measurements were initially standardized in the UK’s colonies, but it was a curious system, taking in Roman, Frankish, and frankly bizarre additions. The currency was no better: until 1971, a UK pound consisted of 240 pence, with 12 pence to the shilling and 20 shillings to the pound.

To make things even more confusing, individual settlements adopted their own local weights and measures. From 1700, Pennsylvania took control of its own measurements, and other areas soon followed. But this mishmash of coins, distances and weights held the country back, and Jefferson scored his first success with the foundation of a decimal system for the dollar.

“I question if a common measure of more convenient size than the Dollar could be proposed. The value of 100, 1,000, 10,000 dollars is well estimated by the mind; so is that of a tenth or hundredth of a dollar. Few transactions are above or below these limits,” he said [PDF].

So of course he’s on the least popular note

Jefferson wanted something new, more rational, and he was not alone. In the first ever State of the Union address in 1790, George Washington observed: “Uniformity in the Currency, Weights and Measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to.”

America was a new country, and owed a large part of the success of the Revolutionary War to France, in particular the French navy. The two countries were close, and the metric system appealed to Jefferson’s mindset, and to many in the new nation.

And this desire for change wasn’t just limited to weights and measures. Also in 1793, Alexander Hamilton hired Noah Webster, who as a lexicographer and ardent revolutionary wanted America to cast off the remnants of the old colonial power. Webster wrote a dictionary, current versions of which can be found in almost every classroom in the US.

And then politics and Napoleon happened

Jefferson asked the French for other samples, including a copper meter and a copy of the kilogram, which were sent in 1795 – but by then Jefferson was no longer running the show. On January 2, 1794, he had been replaced as US Secretary of State by fellow Founding Father Edmund Randolph, who was much less keen on the government getting involved in such things.

To make matters worse, relations between America and France were deteriorating sharply. The French government felt that the newly formed nation wasn’t being supportive enough in helping Gallic forces fight the British in the largely European War of the First Coalition. In something of a hissy fit, the French government declined to invite representatives from the US to the international gathering at Paris in 1798-99 that set the initial standards for the metric system.

Jefferson’s plans were kicked into committee and while a form of standardization based on pounds and ounces was approved by the House, the Senate declined to rule on the matter.

Not that it mattered much longer. In 1812, Napoleon effectively abolished the enforcement of the metric system in France. (Napoleon was known as Le Petit Caporal, with multiple reports that he was five foot two; as we now know, he was around average height for the time.)

After the French dictator was defeated, the case for the metric system in France sank into near-limbo at first, as it did in the US. But it gradually spread across Europe because you can’t keep a good idea down and science and industrialization were demanding it.

Welcome to the rational world

What has kept the metric system going is its inherent rationality. Rather than use a hodgepodge of local systems, why not build one based on measurements everyone could agree on configured around the number 10, which neatly matches the number of digits on most people’s hands?

Above all, it’s universal: a gram means a gram in any culture. Meanwhile, buy a pint in the UK and you’ll get 20oz of beer; do the same in America and, depending where you are, you’ll likely get 16oz – a fact that still shocks British drinkers. The differences are also there with tons, and with the odd concept of the stone as a weight measurement.

Metric is by no means perfect. In the initial French system, for example, the gram was the mass of one cubic centimeter of water, while the kilogram – initially known as the grave – was the mass of a liter of it. A meter was one ten-millionth of the distance from the pole to the equator, although the French weren’t exactly sure how far that was at the time.

The original metre carved into the Place Vendôme in Paris, some adjustment required

Since then the system has been revised repeatedly as definitions based on natural constants have improved. For example, a meter is now 1/299,792,458 of the distance light travels in a second. Since 1967, the second itself has been defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom,” though better measurement by atomic clocks may change this.

The chief adherents to the metric system initially were scientists who desperately needed universal sources of measurement to compare notes and replicate experiments without the errors common when converting from one measuring system to another.

This is down to convoluted systems like 12 inches to a foot, three feet to a yard and 1,760 yards to a mile, compared with 100 centimeters to a meter and 1,000 meters to a kilometer. A US pound is 0.453592 kilograms, to six figures at least; these are the kinds of numbers that cause mistakes to be made.

The most famous recent example was the Mars Climate Orbiter in 1999. The $125 million space probe broke up in the Martian atmosphere after engineers at Lockheed Martin, which built the spacecraft, used the US customary system of measurement rather than the metric measurements used by others on the project. The probe descended too close to the surface and was lost.
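
The widely reported failure mode – ground software producing thruster impulse in pound-force seconds while the navigation software read the figures as newton-seconds – is not spelled out in this article, so treat the specifics here as an assumption. A minimal Python sketch of that kind of mismatch:

```python
# Hedged sketch of a Mars Climate Orbiter-style unit mismatch.
# The impulse value below is purely illustrative, not mission data.
NEWTONS_PER_POUND_FORCE = 4.44822  # 1 lbf = 4.44822 N

impulse_reported = 100.0                 # produced in pound-force seconds
impulse_assumed_N_s = impulse_reported   # read as newton-seconds downstream

impulse_actual_N_s = impulse_reported * NEWTONS_PER_POUND_FORCE
print(f"Thruster effect understated by a factor of "
      f"{impulse_actual_N_s / impulse_assumed_N_s:.2f}")  # ~4.45
```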

A more down-to-earth example came in 1983 with the Air Canada “Gimli Glider” incident, where the pilots of a Boeing 767 underestimated the amount of fuel they needed because the fuel calculation confused pounds with the kilograms used by the airline’s brand-new metric aircraft. With roughly 2.2 pounds to the kilogram, the aircraft took on less than half the fuel it needed and the engines failed at 41,000 feet (12,500m).
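
The arithmetic is easy to sketch. The roughly 22,300 kg fuel requirement below is the commonly cited figure for the flight, not a number from this article; the point is simply that a pounds-sized figure loaded as kilograms delivers less than half the fuel:

```python
# Minimal sketch of the Gimli Glider fuel arithmetic. The 22,300 kg
# requirement is the commonly cited figure, assumed here for illustration.
LB_PER_KG = 2.20462  # pounds per kilogram

fuel_required_kg = 22_300
# Treating a pounds quantity as if it were kilograms divides the
# actual load by about 2.2:
fuel_loaded_kg = fuel_required_kg / LB_PER_KG

print(f"Loaded {fuel_loaded_kg:,.0f} kg of {fuel_required_kg:,} kg required")
# Loaded 10,115 kg of 22,300 kg required -- less than half
```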

The two pilots were forced to glide the aircraft, carrying 69 souls, to a disused air force base at Gimli where, luckily, one of the pilots had once served. The runway was by then in use as a drag strip, but thankfully there were only a few minor injuries.

And don’t even get me started on Celsius and Fahrenheit. In Celsius, water freezes at 0 degrees and boils at 100 at sea level, compared with 32 and 212 in Fahrenheit. It’s a nonsensical system, and the US is now virtually the only nation in the world that uses Fahrenheit for everyday temperatures.
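
For reference, the two temperature scales are related by a fixed linear formula, which a couple of lines of Python can verify against the freezing and boiling points quoted above:

```python
def celsius_to_fahrenheit(c: float) -> float:
    """Convert Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

assert celsius_to_fahrenheit(0) == 32     # water freezes
assert celsius_to_fahrenheit(100) == 212  # water boils at sea level
```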

The slow and winding road

Back in 1821, Secretary of State John Quincy Adams reported to Congress on the measurements issue. In his seminal study of the topic, he concluded that while a metric system based on natural constants was preferable, changing from the current regime would be highly disruptive, and he wasn’t sure Congress had the right to overrule the systems used by individual states.

The disruption would have been large. The vast majority of America’s high value trade was with the UK and Canada, neither of which were metric.

In addition, American and British manufacturers rather liked the old ways. With the existing system, companies manufactured parts to their own specifications, meaning if you wanted spares you had to go buy them from the original manufacturer. This proved highly lucrative.

By the middle of the 19th century, things were changing… slightly. US government scientists did start using some metric measurements for tasks like mapping out territory, even though the domestic system remained more common for day-to-day use. The Civil War also spurred a push towards standardization, with some states, such as Utah, briefly mandating the metric system.

Two big changes came in the 20th century, following the two World Wars. The lack of interchangeable parts, particularly bolt threads, had seriously hampered the Allied forces. In 1947, America joined the International Organization for Standardization and bolt threads went metric. Today the US Army uses metric measurements to better integrate with NATO allies.

The shift has continued as American manufacturers realized they would have to accommodate the new system if they wanted to sell more kit abroad. Parts made to US customary measurements are still being manufactured, particularly in some industries, but there is at least a standardized system for converting them to metric.

In 1960, the metric system was formally renamed Le Système international d’unités (the International System of Units), or SI, and things started moving again in America. After congressional study, President Gerald Ford signed the Metric Conversion Act in 1975, setting out a plan for America to finally adopt metric as “the preferred system of weights and measures for United States trade and commerce.”

But the effort had some drawbacks. Firstly, the system was voluntary, which massively slowed adoption. Secondly, the new US president, Jimmy Carter, emerged as a strong proponent of the system, which made it a partisan matter and led the opposition in Congress to largely oppose the plan.

President Reagan shut down most of the moves to metric in 1982, but his successor, George H.W. Bush, revived some of the plans in 1991, ordering US government departments to move over to metric as far as possible. The issue has been kicked down the road ever since.

Different cultures, different customs

These days the arguments over metric versus American measurements are more fraught, having become a political issue between left and right. Witness Tucker Carlson’s cringe-worthy rant in which he describes metric as “the yoke of tyranny,” hilariously mispronouncing kilograms as “kailograms.”

“What in the world is he even talking about?” – George Takei (@GeorgeTakei), July 25, 2019

Given that trust-fund kid Carlson was educated in a Swiss boarding school, he knows how it’s pronounced, but never let the facts get in the way of invective.

As such, it seems unlikely that we’ll see anything change soon. But that day is coming – America is no longer the manufacturing giant it was, and China is perfectly happy with the metric system, although it maintains other measurements for domestic use, much as Britain does with pints and miles.

There’s really no logical reason to not go metric – it’s a simple, universal system used by every nation in the world except for the US, Liberia and Myanmar. That’s hardly august company for the Land of the Free.

It will be a long, slow process. No country has managed a full shift to metric in less than a generation – for most it took two or more – and the UK seems to be going backwards. Former Prime Minister Boris Johnson was keen to see a return of the old UK imperial measurements in Britain, which make the current American system look positively rational.

It may take generations before the issue is resolved in the UK, and longer still for the US. It may, in fact, never happen in America, but the SI system makes sense, is logically sound, and will remain the language of science, medicine and engineering for the vast majority of the world.

If the US doesn’t want to play catch-up with the rest of the world, it will have to take rational measurement seriously. But that day isn’t coming soon, so in the meantime this hack will have to keep using old cookbooks, and we’ll face more measurement mistakes together. ®

January 28th 2023

Science & Technology

The Colonial History of the Telegraph

Gutta-percha, a natural resin, enabled European countries to communicate with their colonial outposts around the world.

An old morse key telegraph (Getty)

By: Livia Gershon

January 21, 2023

Long before the internet, the telegraph brought much of the world together in a communications network. And, as historian John Tully writes, the new nineteenth-century technology was deeply entangled with colonialism, both in the uses it was put to and the raw material that made it possible—the now-obscure natural plastic gutta-percha.

Tully writes that the resin product, made from the sap of certain Southeast Asian trees, is similar to rubber, but without the bounce. When warmed in hot water, it becomes pliable before hardening again as it cools. It’s resistant to both water and acid. For centuries, Malay people had used the resin to make various tools. When Europeans learned about its uses in the nineteenth century, they adopted it for everything from shoe soles to water pipes. It even became part of the slang of the day—in the 1860s, New Englanders might refer to someone they disliked as an “old gutta-percha.” Perhaps most importantly, gutta-percha was perfect for coating copper telegraph wire, replacing much less efficient insulators like tarred cotton or hemp. It was especially important in protecting undersea cables, which simply wouldn’t have been practical without it.

And those undersea cables became a key part of colonial governance in the second half of the nineteenth century. Prior to the invention of the electric telegraph, Tully writes, it could take six months for news from a colonial outpost to reach the mother country, making imperial control difficult. For example, when Java’s Prince Diponegoro led an uprising against Dutch colonists in 1825, the Dutch government didn’t find out for months, delaying the arrival of reinforcements.

Then, in 1857, Indians rebelled against the rule of the British East India Company. This led panicked colonists to demand an expanded telegraph system. By 1865, Karachi had a near-instant communications line to London. Just a decade later, more than 100,000 miles of cable laid across seabeds brought Australia, South Africa, Newfoundland, and many places in between, into a global communication network largely run by colonial powers. Tully argues that none of this would have been possible without gutta-percha.

But the demand for gutta-percha was bad news for the rainforests where it was found. Tens of millions of trees were felled to extract the resin. Even a large tree might yield less than a pound of the stuff, and the growing telegraph system used as much as four million pounds a year. By the 1890s, ancient forests were in ruins and the species that produced gutta-percha were so rare that some cable companies had to decline projects because they couldn’t get enough of it.

The trees weren’t driven completely extinct, and, eventually, the wireless telegraph and synthetic plastics made its use in telegraph cables obsolete. Today, the resin is only used in certain specialty areas such as dentistry. Yet sadly, the decimation of the trees prefigured the fate of rainforests around the world under colonial and neocolonial global systems for more than a century to come.


January 17th 2023

The Tudor Roots of Modern Billionaires’ Philanthropy

The debate over how to manage the wealthy’s fortunes after their deaths traces its roots to Henry VIII and Elizabeth I

Nuri Heckler, The Conversation January 13, 2023


L to R: Andrew Carnegie, Elizabeth I, Henry VIII and Henry Ford. Illustration by Meilan Solly / Photos via Wikimedia Commons under public domain

More than 230 of the world’s wealthiest people, including Elon Musk, Bill Gates and Warren Buffett, have promised to give at least half of their fortunes to charity within their lifetimes or in their wills by signing the Giving Pledge. Some of the most affluent, including Jeff Bezos (who hadn’t signed the Giving Pledge as of early 2023) and his ex-wife MacKenzie Scott (who did sign the pledge after their divorce in 2019) have declared that they will go further by giving most of their fortunes to charity before they die.

This movement stands in contrast to practices of many of the philanthropists of the late 19th and early 20th centuries. Industrial titans like oil baron John D. Rockefeller, automotive entrepreneur Henry Ford and steel magnate Andrew Carnegie established massive foundations that to this day have big pots of money at their disposal despite decades of charitable grantmaking. This kind of control over funds after death is usually illegal because of a “you can’t take it with you” legal doctrine that originated in England 500 years ago.

Known as the Rule Against Perpetuities, it holds that control over property must cease within 21 years of a death. But there is a loophole in that rule for money given to charities, which theoretically can flow forever. Without it, many of the largest American and British foundations would have closed their doors after disbursing all their funds long ago.

As a lawyer and researcher who studies nonprofit law and history, I wondered why American donors get to give from the grave.

Henry VIII had his eye on property

In a recent working paper that I wrote with my colleague Angela Eikenberry and Kenya Love, a graduate student, we explained that this debate goes back to the court of Tudor monarch Henry VIII.

The Rule Against Perpetuities developed in response to political upheaval in the 1530s. The old feudal law made it almost impossible for most properties to be sold, foreclosed upon or have their ownership changed in any way.

At the time, a small number of people and the Catholic Church controlled most of the wealth in England. Henry wanted to end this practice because it was difficult to tax property that never transferred, and property owners were mostly unaccountable to England’s monarchy. This encouraged fraud and led to a consolidation of wealth that threatened the king’s power.

Hans Holbein the Younger, Henry VIII, circa 1537. Image © Museo Nacional Thyssen-Bornemisza, Madrid

As he sought to sever England’s ties to the Catholic Church, Henry had one eye on changing religious doctrine so he could divorce his first wife, Catherine of Aragon, and the other on all the property that would become available when he booted out the church.

After splitting with the church and securing his divorce, he enacted a new property system giving the British monarchy more power over wealth. Henry then used that power to seize property. Most of the property the king took first belonged to the church, but all property interests were more vulnerable under the new law.

Henry’s power grab angered the wealthy gentry, who launched a violent uprising known as the Pilgrimage of Grace.

After quelling that upheaval, Henry compromised by allowing the transfer of property from one generation to the next. But he didn’t let people tell others how to use their property after they died. The courts later developed the Rule Against Perpetuities to allow people to transfer property to their children when they turned 21 years old.

At the same time, wealthy Englishmen were encouraged to give large sums of money and property to help the poor. Some of these funds had strings attached for longer than the 21 years.

Elizabeth I codified the rule

Elizabeth I in her coronation robes. Public domain via Wikimedia Commons

Elizabeth I, Henry’s daughter with his ill-fated wife Anne Boleyn, became queen in 1558, after the deaths of her siblings Edward VI and Mary I. She used her reign to codify that previously informal charitable exception. By then it was the 1590s, a tough time for England, due to two wars, a pandemic, inflation and famine. Elizabeth needed to prevent unrest without raising taxes even further than she already had.

Elizabeth’s solution was a new law decreed in 1601. Known as the Statute of Charitable Uses, it encouraged the wealthy to make big charitable donations and gave courts the power to enforce the terms of the gifts.

The monarchy believed that partnering with charities would ease the burdens of the state to aid the poor.

This concept remains popular today, especially among conservatives in the United States and United Kingdom.

The charitable exception today

When the U.S. broke away from Great Britain and became an independent country, it was unclear whether it would stick with the charitable exception.

Some states initially rejected British law, but by the early 19th century, every state in the U.S. had adopted the Rule Against Perpetuities.

In the late 1800s, scholars started debating the value of the rule, even as large foundations took advantage of Elizabeth’s philanthropy loophole. My co-authors and I found that, as of 2022, 40 U.S. states had ended or limited the rule; every jurisdiction, including the District of Columbia, permits eternal control over donations.

Although this legal precept has endured, many scholars, charities and philanthropists question whether it makes sense to let foundations hang on to massive endowments with the goal of operating in the future in accordance with the wishes of a long-gone donor rather than spend that money to meet society’s needs today.

For issues such as climate change, spending more now could significantly reduce what it will cost to resolve the problem later.

View of the atrium of the Ford Foundation Building in New York. Elsie140 via Wikimedia Commons under CC BY-SA 4.0

Still other problems require change that is more likely to come from smaller nonprofits. In one example, many long-running foundations, including the Ford, Carnegie and Kellogg foundations, contributed large sums to help Flint, Michigan, after a shift in water supply brought lead in the tap water to poisonous levels. Some scholars argue this money undermined local community groups that better understood the needs of Flint’s residents.

Another argument is more philosophical: Why should dead billionaires get credit for helping to solve contemporary problems through the foundations bearing their names? This question often leads to a debate over whether history is being rewritten in ways that emphasize their philanthropy over the sometimes questionable ways that they secured their wealth.

Some of those very rich people who started massive foundations were racist and anti-Semitic. Does their use of this rule that’s been around for hundreds of years give them the right to influence how Americans solve 21st-century problems?

Nuri Heckler is an expert on public administration at the University of Nebraska Omaha. His research focuses on power in public organizations, including nonprofits, social enterprise and government.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

January 9th 2023

Rethinking the European Conquest of Native Americans

In a new book by Pekka Hämäläinen, a picture emerges of a four-century-long struggle for primacy among Native power centers in North America.

By David Waldstreicher

Print of Comanche procession
Library of Congress

December 31, 2022

When the term Indian appears in the Declaration of Independence, it is used to refer to “savage” outsiders employed by the British as a way of keeping the colonists down. Eleven years later, in the U.S. Constitution, the Indigenous peoples of North America are presented differently: as separate entities with which the federal government must negotiate. They also appear as insiders who are clearly within the borders of the new country yet not to be counted for purposes of representation. The same people are at once part of the oppression that justifies the need for independence, a rival for control of land, and a subjugated minority whose rights are ignored.

For the Finnish scholar Pekka Hämäläinen, this emphasis on what Native people meant to white Americans misses an important factor: Native power. The lore about Jamestown and Plymouth, Pocahontas and Squanto, leads many Americans to think in terms of tragedy and, eventually, disappearance. But actually, Indigenous people continued to control most of the interior continent long after they were outnumbered by the descendants of Europeans and Africans.

Indigenous Continent: The Epic Contest for North America, by Pekka Hämäläinen, National Geographic Books

Much more accurate is the picture Hämäläinen paints in his new book, Indigenous Continent: a North American history that encompasses 400 years of wars that Natives often, even mostly, won—or did not lose decisively in the exceptional way that the Powhatans and Pequots had by the 1640s. Out of these centuries of broader conflict with newcomers and one another, Native peoples established decentralized hives of power, and even new empires.

In a previous book, The Comanche Empire, Hämäläinen wrote of what he controversially referred to as a “reversed colonialism,” which regarded the aggressive, slaving equestrians of “greater Comanchería”—an area covering most of the Southwest—as imperialists in ways worth comparing to the French, English, Dutch, and Spanish in America. There was continued pushback from some scholars when Hämäläinen extended the argument northward in his 2019 study, Lakota America. (The impact of his work among historians may be measured by his appointment as the Rhodes Professor of American History at Oxford University.)

What was most distinctive about these two previous books was that Hämäläinen so convincingly explained the Indigenous strategies for survival and even conquest. Instead of focusing on the microbes that decimated Native populations, Hämäläinen showed how the Comanche developed what he termed a “politics of grass.” A unique grasslands ecosystem in the plains allowed them to cultivate huge herds of horses and gave the Comanche access to bison, which they parlayed into market dominance over peoples who could supply other goods they wanted, such as guns, preserved foods, and slaves for both trade and service as herders.

Hämäläinen treats Native civilizations as polities making war and alliances. In Indigenous Continent, there is less emphasis than in The Comanche Empire on specific ecosystems and how they informed Indigenous strategies. Instead, he describes so many Native nations and European settlements adapting to one another over such a wide and long time period that readers can appreciate anew how their fates were intertwined—shattering the simple binary of “Indians” and “settlers.” Indigenous peoples adapted strenuously and seasonally to environments that remained under their control but had to contend at the same time with Europeans and other refugees encroaching on their vague borders. These newcomers could become allies, kin, rivals, or victims.

Hämäläinen sees a larger pattern of often-blundering Europeans becoming part of Indigenous systems of reciprocity or exploitation, followed by violent resets. When Dutch or French traders were “generous with their wares” and did not make too many political demands, Natives pulled them into their orbit. Spanish and, later, British colonists, by contrast, more often demanded obeisance and control over land, leading to major conflicts such as the ones that engulfed the continent in the 1670s–80s and during the Seven Years’ War. These wars redirected European imperial projects, leading to the destruction of some nations, and the migration and recombination of others, such as the westward movement of the Lakota that led to their powerful position in the Missouri River Valley and, later, farther west. In this history, Indigenous “nomadic” mobility becomes grand strategy. North America is a continent of migrants battling for position long before the so-called nation of immigrants.

“Properly managed,” settlers and their goods “could be useful,” Hämäläinen writes. The five nations of the Iroquois (Haudenosaunee) confederacy established a pattern by turning tragic depopulation by epidemic into opportunities for what Hämäläinen calls “mourning wars,” attacking weakened tribes and gaining captives. They formed new alliances and capitalized on their geographic centrality between fur-supplying nations to the west and north, and French and Dutch and, later, English tool and gun suppliers to the east and south. Hämäläinen insists that their warfare was “measured, tactical,” that their use of torture was “political spectacle,” that their captives were actually adoptees, and that the Iroquois’ switching of sides in wartime and selling out of distant client tribes such as the Delaware was a “principled plasticity.” This could almost be an expert on European history talking about the Plantagenets, the Hapsburgs, or Rome.

And there’s the rub. Hämäläinen, a northern European, feels comfortable applying the ur-Western genre of the rise and fall of empires to Native America, but imperial history comes with more baggage. Hämäläinen seems certain that Comanche or other Indigenous imperial power was different in nature from the European varieties, but it often seems as if Indigenous peoples did many of the same things that European conquerors did. Whether the Iroquois had “imperial moments,” actually were an empire, or only played one for diplomatic advantage is only part of the issue. Hämäläinen doesn’t like the phrase settler colonialism. He worries that the current term of art for the particularly Anglo land-grabbing, eliminationist version of empire paints with too broad a brush. Perhaps it does. But so does his undefined concept of empire, which seems to play favorites at least as much as traditional European histories do.

If an empire is an expanding, at least somewhat centralized polity that exploits the resources of other entities, then the Iroquois, Comanche, Lakota, and others may well qualify. But what if emphasizing the prowess of warriors and chiefs, even if he refers to them as “soldiers” and “officials,” paradoxically reinforces exoticizing stereotypes? Hämäläinen is so enthralled with the surprising power and adaptability of the tribes that he doesn’t recognize the contradiction between his small-is-beautiful praise of decentralized Indigenous cultures and his condescension toward Europeans huddling in their puny, river-hugging farms and towns.

Hämäläinen notes that small Native nations could be powerful too, and decisive in wars. His savvy Indigenous imperialists wisely prioritized their relationships, peaceful or not, with other Natives, using the British or French as suppliers of goods. Yet he praises them for the same resource exploitation and trade manipulation that appears capitalist and murderous when European imperialists do their version. In other words, he praises Natives when they win for winning. Who expanded over space, who won, is the story; epic battles are the chapters; territory is means and end.

And the wheel turns fast, followed by the rhetoric. When British people muscle out Natives or seek to intimidate them at treaty parleys, they are “haughty.” At the same time, cannibalism and torture are ennobled as strategies—when they empower Natives. Native power as terror may help explain genocidal settler responses, but it makes natives who aren’t just plain brave—including women, who had been producers of essential goods and makers of peace—fade away almost as quickly as they did in the old history. As readers, we gain a continental perspective, but strangely, we miss the forest for the battlefields.

It’s already well known why natives lost their land and, by the 19th century, no longer had regional majorities: germs, technology, greed, genocidal racism, and legal chicanery, not always in that order. Settler-colonial theory zeroes in on the desire to replace the Native population, one way or another, for a reason: Elimination was intended even when it failed in North America for generations.

To Hämäläinen, Natives dominated so much space for hundreds of years because of their “resistance,” which he makes literally the last word of his book. Are power and resistance the same thing? Many scholars associated with the Native American and Indigenous Studies Association find it outrageous to associate any qualities of empire with colonialism’s ultimate, and ongoing, victims. The academic and activist Nick Estes has accused Hämäläinen of “moral relativist” work that is “titillating white settler fantasies” and “winning awards” for doing so. Native American scholars, who labor as activists and community representatives as well as academics in white-dominated institutions, are especially skeptical when Indigenous people are seen as powerful enough to hurt anyone, even if the intent is to make stock figures more human. In America, tales of Native strength and opportunistic mobility contributed to the notion that all Natives were the same, and a threat to peace. The alternative categories of victim and rapacious settler help make better arguments for reparative justice.

In this light, the controversy over Native empires is reminiscent of what still happens when it’s pointed out that Africans participated in the slave trade—an argument used by anti-abolitionists in the 19th century and ever since to evade blame for the new-world slaveries that had turned deadlier and ideologically racial. It isn’t coincidental that Hämäläinen, as a fan of the most powerful Natives, renders slavery among Indigenous people as captivity and absorption, not as the commodified trade it became over time. Careful work by historians has made clear how enslavement of and by Natives became, repeatedly, a diplomatic tool and an economic engine that created precedents for the enslavement of Black Americans.

All genres of history have their limits, often shaped by politics. That should be very apparent in the age of the 1619 and 1776 projects. Like the Declaration and the Constitution, when it comes to Indigenous peoples, historians are still trying to have it both ways. Books like these are essential because American history needs to be seen from all perspectives, but there will be others that break more decisively with a story that’s focused on the imperial winners.

January 8th 2023

Britain’s first black aristocrats

By Fedora Abu, 10th May 2021

Whitewashed stories about the British upper classes are being retold. Fedora Abu explores the Bridgerton effect, and talks to Lawrence Scott, author of Dangerous Freedom.

For centuries, the Royal Family, Britain’s wealthiest, most exclusive institution, has been synonymous with whiteness. And yet, for a brief moment, there she was: Her Royal Highness the Duchess of Sussex, a biracial black woman, on the balcony at Buckingham Palace. Her picture-perfect wedding to Prince Harry in 2018 was an extraordinary amalgamation of black culture and centuries-old royal traditions, as an African-American preacher and a gospel choir graced St George’s Chapel in Windsor. Watching on that sunny May afternoon, who would’ve known things would unravel the way they have three years on?

Although heralded as a history-maker, the Duchess of Sussex is not actually the first woman of colour to have been part of the British upper classes. Dangerous Freedom, the latest novel by Trinidadian author Lawrence Scott, tells the story of the real historical figure Dido Elizabeth Belle, the mixed-race daughter of the enslaved woman Maria Belle and Captain Sir John Lindsay. Born in 1761, she was taken in by her great-uncle, Lord Chief Justice William Murray, first Earl of Mansfield, and raised amid the lavish setting of Kenwood House in Hampstead, London, alongside her cousin Elizabeth. It was a rare arrangement, most likely unique, and today she is considered to be Britain’s first black aristocrat.

Lawrence Scott’s novel tells the story of Belle from a fresh perspective (Credit: Papillote Press)

Scott’s exploration of Belle’s story began with a portrait. Painted by Scottish artist David Martin, the only known image of Belle shows her in a silk dress, pearls and turban, next to her cousin, in the grounds of Kenwood. It’s one of the few records of Belle’s life, along with a handful of written accounts: a mention in her father’s obituary in the London Chronicle describing her “amiable disposition and accomplishments”; a recollection by Thomas Hutchinson, a guest of Mansfield, of her joining the family after dinner, and her uncle’s fondness for her. These small nuggets – together with years of wider research – allowed Scott to gradually piece together a narrative.

As it happened, while Scott was delving into the life of Dido Belle, so were the makers of Belle, the 2014 film starring Gugu Mbatha-Raw that was many people’s first introduction to the forgotten figure. With those same fragments, director Amma Asante and screenwriter Misan Sagay spun a tale that followed two classic Hollywood plotlines: a love story, as Dido seeks to find a husband, but also a moral one as we await Mansfield’s ruling on a landmark slavery case. As might be expected, Belle is subjected to racist comments by peers and, in line with Hutchinson’s account, does not dine with her family – nor have a “coming out”. However, she is shown to have a warm relationship with her cousin “Bette” and her “Papa” Lord Mansfield, and a romantic interest in John Davinier, an anglicised version of his actual name D’Aviniere, who in the film is depicted as a white abolitionist clergyman and aspiring lawyer.

There’s this kind of whitewashing of these bits of colonial history – not really owning these details, these conflicts – Lawrence Scott

Two drafts into his novel when Belle came out, Scott was worried that the stories were too similar – but it turned out that wasn’t the case. Dangerous Freedom follows Belle’s life post-Kenwood – now known as Elizabeth D’Aviniere and married and with three sons, as she reflects on a childhood tinged with trauma, and yearns to know more about her mother. Her husband is not an aspiring lawyer but a steward, and cousin “Beth” is more snobbish than sisterly. Even the painting that inspired the novel is reframed: where many see Dido presented as an equal to her cousin, Scott’s Dido is “appalled” and “furious”, unable to recognise the “turbaned, bejewelled… tawny woman”.In a 1778 painting by David Martin, Dido Belle is depicted with her cousin Lady Elizabeth Murray (Credit: Alamy)

In a 1778 painting by David Martin, Dido Belle is depicted with her cousin Lady Elizabeth Murray (Credit: Alamy)

For Scott, the portrait itself is a romantic depiction of Belle that he aims to re-examine with his book – the painting’s motifs have not always been fully explored in whitewashed art history, and he has his own interpretation. “The Dido in the portrait is a very romanticised, exoticised, sexualised sort of image,” he says. “She has a lot of the tell-tale relics of 18th-Century portraiture, such as the bowl of fruit and flowers, which all these enslaved young boys and girls are carrying in other portraits. She’s carrying it differently, it’s a different kind of take, but I really wonder what [the artist] Martin was trying to do.” The film also hints at the likely sexualisation of Belle when in one scene a prospective suitor describes her as a “rare and exotic flower”. “One does not make a wife of the rare and exotic,” retorts his brother. “One samples it on the cotton fields.” 

Post-racial utopia

In fact, to find a black woman who married into the aristocracy, we have to fast-forward another 250 years, to when Emma McQuiston, the daughter of a black Nigerian father and a white British mother, wed Ceawlin Thynn, then Viscount Weymouth, in 2013. In many ways, the experiences of Thynn (now the Marchioness of Bath) echo those of Dido: in interviews, she has addressed the racism and snobbery she first experienced in aristocratic circles, and her husband has shared that his mother expressed worries about “400 years of bloodline”.

Ironically, there has long been speculation that the Royal Family could itself have mixed-race ancestry. For decades, historians have debated whether Queen Charlotte, wife of King George III, had African heritage but was “white-passing” – as is alluded to in Dangerous Freedom. While many academics have cast doubt on the theory, it’s one that the writers of TV drama series Bridgerton run with, casting her as an unambiguously black woman. The show imagines a diverse “ton” (an abbreviation of the French phrase le bon ton, meaning sophisticated society), with other black characters including the fictional Duke of Hastings, who is society’s most eligible bachelor, and his confidante Lady Danbury. Viewed within the context of period dramas, which typically exclude people of colour for the sake of historical accuracy, Bridgerton’s ethnically diverse take on the aristocracy is initially refreshing. However, that feeling is complicated somewhat by the revelation that the Bridgerton universe is not exactly “colourblind”, but rather what is being depicted in the series is an imagined scenario where the marriage of Queen Charlotte to King George has ushered in a sort of post-racial utopia.

Light-hearted, frothy and filled with deliberate anachronisms, Bridgerton is not designed to stand up to rigorous analysis. Even so, the show’s handling of race has drawn criticism for being more revisionist than radical. The series is set in 1813, 20 years before slavery was fully abolished in Britain, and while the frocks, palaces and parties of Regency London all make for sumptuous viewing, a key source of all that wealth has been glossed over. What’s more, just as Harry and Meghan’s union made no material difference to ordinary black Britons, the suggestion that King George’s marriage to a black Queen Charlotte wiped out racial hierarchies altogether feels a touch too fantastical.

In the TV drama series Bridgerton, Queen Charlotte is played by Golda Rosheuvel (Credit: Alamy)

In some ways, Bridgerton could be read as an accidental metaphor for Britain’s real-life rewriting of its own slave-trading past. That the Royal Family in particular had a major hand in transatlantic slavery – King Charles II and James, Duke of York, were primary shareholders in the Royal African Company, which trafficked more Africans to the Americas than any other institution – is hardly acknowledged today. “As [historian] David Olusoga is constantly arguing, there’s this kind of whitewashing of these bits of colonial history – not really owning these details, these conflicts,” says Scott. Instead, as University College London’s Catherine Hall notes, the history of slavery in Britain has been told as “the triumph of abolition”.

Olusoga himself has been among those digging up those details, and in 2015 he fronted the BBC documentary Britain’s Forgotten Slaveowners, which, together with the UCL Centre for the Study of the Legacies of British Slave-ownership, looked into who was granted a share of the £20m ($28m) in compensation for their loss of “property” post-abolition. It’s only in learning that this figure equates to £17bn ($24bn) in real terms (with half going to just 6% of the 46,000 claimants) – and that those payments continued to be made until 2015 – that we can begin to understand how much the slave trade shaped who holds wealth today.

It took the Black Lives Matter protests of last summer to accelerate the re-examination of Britain’s slave-trading history, including its links to stately homes. In September 2020, the National Trust published a report which found that a third of its estates had some connection to the spoils of the colonial era; a month later, Historic Royal Palaces announced it was launching a review into its own properties. Unsurprisingly, the prospect of “decolonising” some of Britain’s most prized country houses has sparked a “culture war” backlash, but a handful of figures among the landed gentry have been open to confronting the past. David Lascelles, Earl of Harewood, for example, has long been upfront about how the profits from slavery paid for Harewood House, even appearing in Olusoga’s documentary and making the house’s slavery archives public.

The British aristocracy is multi-racial in the reimagined historical universe presented by TV series Bridgerton (Credit: Alamy)

“Much more now, great houses are bringing [this history] to the fore and having the documentation in the home,” says Scott. “Kenwood has done that to the extent that it has a copy of the portrait now… [and] the volunteers that take you around tell a much more conflicted story about it.” Still, even as these stories are revealed in more vivid detail, how we reckon with the ways in which they’ve influenced our present – and maybe even remedy some of the injustices – is a conversation yet to be had.

With all those palaces, jewels and paintings, it’s not hard to see why contemporary culture tends to romanticise black figures within the British upper classes. Works such as Dangerous Freedom are now offering an alternative view, stripping the aristocracy of its glamour, giving a voice to the enslaved and narrating the discrimination, isolation and tensions that, as we have seen, still endure. The progressive fairytale – or utopian reimagining – will always have greater appeal. But perhaps, as Scott suggests, it’s time for a new story to be written.

Dangerous Freedom by Lawrence Scott (Papillote Press) is out now. 

December 30th 2022

How Diverse Was Medieval Britain?

An archaeologist explains how studies of ancient DNA and objects reveal that expansive migrations led to much greater diversity in medieval Britain than most people imagine today.

By Duncan Sayer

29 Nov 2022

This article was originally published at The Conversation and has been republished under a Creative Commons license.

WHEN YOU IMAGINE LIFE for ordinary people in ancient Britain, you’d be forgiven for picturing quaint villages where everyone looked and spoke the same way. But a recent study could change the way historians think about early medieval communities.

Most of what we know about English history after the fall of the Roman Empire is limited to archaeological finds. There are only two contemporary accounts of this post-Roman period: Gildas (sixth century) and Bede (eighth century), both monks, gave narrow descriptions of invasion by people from the continent, and neither provides an objective account.

My team’s study, published in Nature, changes that. We analyzed DNA from the remains of 460 people from sites across Northern Europe and found evidence of mass migration from Europe to England and the movement of people from as far away as West Africa. Our study combined information from artifacts and human remains.

That meant we could dig deeper into the data to explore the human details of migration.

JOURNEY INTO ENGLAND’S PAST

This paper found that about 76 percent of the genetic ancestry in the early medieval English population we studied originated from what is today northern Germany and southern Scandinavia—Continental Northern European (CNE). This number is an average taken from 278 ancient skeletons sampled from the south and east coasts of England. It is strong evidence for mass migration into the British Isles after the end of Roman administration.

An early medieval bone comb, photographed during excavation with a centimeter scale.

One of the most surprising discoveries was the skeleton of a young girl who died at about 10 or 11 years of age, found in Updown near Eastry in Kent. She was buried in typical early seventh-century style, with a finely made pot, knife, spoon, and bone comb. Her DNA, however, tells a more complex story. As well as 67 percent CNE ancestry, she also had 33 percent West African ancestry. Her African ancestor was most closely related to modern-day Esan and Yoruba populations in southern Nigeria.

Kent is known to have had far-reaching commercial connections at this time. The garnets in many brooches found in the region came from Afghanistan, for example, and the movement of the Updown girl’s ancestors was likely linked to these ancient trading routes.

KEEPING IT IN THE FAMILY

Two women buried close by were sisters and had predominantly CNE ancestry. They were related to Updown girl—perhaps her aunts. The fact that all three were buried in a similar way, with brooches, buckles, and belt hangers, suggests the people who buried them chose to highlight similarities between Updown girl and her older female relatives when they dressed them and located the burials close together. They treated her as kin, as a girl from their village, because that is what she was.

The aunts also shared a close kinship with a young man buried with artifacts that implied some social status, including a spearhead and buckle. The graves of these four people were all close together. They were buried in a prominent position marked by small barrow mounds (ancient burial places covered with a large mound of earth and stones). The visibility of this spot, combined with their dress and DNA, marks these people as part of an important local family.

The site studied in most detail—Buckland, near Dover in Kent—had kinship groups that spanned at least four generations.

One family group with CNE ancestry is remarkable because of how quickly they integrated with western British and Irish (WBI) people. Within a few generations, traditions had merged between people born far away from each other. A 100 percent WBI woman had two daughters with a 100 percent CNE man. WBI ancestry entered this family again a generation later, in near 50/50 mixed-ancestry grandchildren. Objects, including similar brooches and weapons, were found in graves on both sides of this family, indicating shared values between people of different ancestries.

This family was buried in graves close together for three generations—that is, until a woman from the third generation was buried in a different cluster of graves to the north of the family group. One of her children, a boy, died at about 8 to 10 years of age and was buried in the cluster that included his maternal grandparents and their close family: she laid her youngest child to rest surrounded by her own kin. But when the mother herself died, her adult children chose a spot close to their father for her grave. They considered her part of the paternal side of the family.

A cemetery plan with selected graves highlighted, alongside a reconstructed family tree of the buried individuals.

Another woman from Buckland had a unique mitochondrial haplotype, a set of DNA variants that tend to be inherited together. Both males and females inherit mitochondrial DNA from their mothers, so her DNA suggests she had no maternal family in the community she was buried with.

The chemical isotopes from her teeth and bones indicate she was not born in Kent but moved there when she was 15–25 years old. An ornate gold pendant, called a bracteate, which may have been of Scandinavian origin, was found in her grave.

This suggests she left her home in Scandinavia in her youth, and her mother’s family did not travel with her. She very likely had an exogamous marriage (marriage outside of one’s social group). What is striking is the physical distance that this partnership bridged. This woman traveled 700 miles, including a voyage across the North Sea, to start her family.

RETHINKING HISTORY

These people were migrants and the children of migrants who traveled in the fifth, sixth, and seventh centuries. Their stories are of community and intermarriage. The genetic data points to profound mobility within a time of mass migration, and the archaeological details help complete the family histories. Migration did not happen at the same time, nor did migrants come from the same place. Early Anglo-Saxon culture was a mixing pot of ideas, intermarriage, and movement. This genetic coalescing and cultural diversity created something new in the south and east of England after the Roman Empire ended.

Duncan Sayer

Duncan Sayer is a reader in archaeology at the University of Central Lancashire. He directed excavations at the Oakington early Anglo-Saxon cemetery and Ribchester Roman Fort, and has worked extensively in field archaeology. Sayer is the author of Ethics and Burial Archaeology.

August 21st 2022

A London newspaper advertisement from 1947 recruiting hefty girls for the Metropolitan Police – Appledene Archives / London Evening News.

What the ‘golden age’ of flying was really like

Jacopo Prisco, CNN • Updated 5th August 2022

Bacchanalian motifs served as a backdrop to cocktail hour on Lufthansa's first-class 'Senator' service in 1958.

(CNN) — Cocktail lounges, five course meals, caviar served from ice sculptures and an endless flow of champagne: life on board airplanes was quite different during the “golden age of travel,” the period from the 1950s to the 1970s that is fondly remembered for its glamor and luxury.

It coincided with the dawn of the jet age, ushered in by aircraft like the de Havilland Comet, the Boeing 707 and the Douglas DC-8, which were used in the 1950s for the first scheduled transatlantic services, before the introduction of the Queen of the Skies, the Boeing 747, in 1970. So what was it actually like to be there?

“Air travel at that time was something special,” says Graham M. Simons, an aviation historian and author. “It was luxurious. It was smooth. It was fast. People dressed up because of it. The staff was literally wearing haute couture uniforms. And there was much more space: seat pitch — that’s the distance between the seats on the aircraft — was probably 36 to 40 inches. Now it’s down to 28, as they cram more and more people on board.”

Golden era

Sunday roast is carved for passengers in first class on a BOAC VC10 in 1964 – Airline: Style at 30,000 Feet/Keith Lovegrove.

With passenger numbers just a fraction of what they are today and fares too expensive for anyone but the wealthy, airlines weren’t worried about installing more seats, but more amenities.

“The airlines were marketing their flights as luxurious means of transport, because in the early 1950s they were up against the cruise liners,” adds Simons. “So there were lounge areas, and the possibility of four, five, even six course meals. Olympic Airways had gold-plated cutlery in the first class cabins. Some of the American airlines had fashion shows down the aisle, to help the passengers pass the time. At one stage, there was talk of putting baby grand pianos on the aircraft to provide entertainment.”

The likes of Christian Dior, Chanel and Pierre Balmain were working with Air France, Olympic Airways and Singapore Airlines respectively to design crew uniforms. Being a flight attendant — or a stewardess, as they were called until the 1970s — was a dream job.

“Flight crews looked like rock stars when they walked through the terminal, carrying their bags, almost in slow motion,” says Keith Lovegrove, designer and author of the book “Airline: Style at 30,000 Feet”. “They were very stylish, and everybody was either handsome or beautiful.” Most passengers tried to follow suit.

Relaxed attitude

Pan American World Airways is perhaps the airline most closely linked with the ‘golden age’ – Ivan Dmitri/Michael Ochs Archives/Getty Images.

“It was like going to a cocktail party. We had a shirt and tie and a jacket, which sounds ridiculous now, but was expected then,” adds Lovegrove, who began flying in the 1960s as a child with his family, often getting first class seats as his father worked in the airline industry. “When we flew on the jumbo jet, the first thing my brother and I would do was go up the spiral staircase to the top deck, and sit in the cocktail lounge.

“This is the generation where you’d smoke cigarettes on board and you’d have free alcohol. I don’t want to put anyone in trouble, but at a young age we were served a schooner of sherry before our supper, then champagne and then maybe a digestive afterwards, all below drinking age. There was an incredible sense of freedom, despite the fact that you were stuck in this fuselage for a few hours.”

According to Lovegrove, this relaxed attitude also extended to security. “There was very little of it,” he says. “We once flew out to the Middle East from the UK with a budgerigar, a pet bird, which my mother took on board in a shoebox as hand luggage. She punched two holes in the top, so the little bird could breathe. When we were brought our three-course meal, she took the lettuce garnish off the prawn cocktail and laid it over the holes. The bird sucked it in. Security-wise, I don’t think you could get away with that today.”

‘Impeccable service’

A Pan Am flight attendant serves champagne in the first class cabin of a Boeing 747 jet – Tim Graham/Getty Images.

The airline most often associated with the golden age of travel is Pan Am, the first operator of the Boeing 707 and 747 and the industry leader on transoceanic routes at the time.

“My job with Pan Am was an adventure from the very day I started,” says Joan Policastro, a former flight attendant who worked with the airline from 1968 until its dissolution in 1991. “There was no comparison between flying for Pan Am and any other airline. They all looked up to it. The food was spectacular and service was impeccable. We had ice swans in first class that we’d serve the caviar from, and Maxim’s of Paris [a renowned French restaurant] catered our food.”

Policastro recalls how passengers would come to a lounge in front of first class “to sit and chat” after the meal service. “A lot of times, that’s where we sat too, chatting with our passengers. Today, passengers don’t even pay attention to who’s on the airplane, but back then, it was a much more social and polite experience,” says Policastro, who worked as a flight attendant with Delta before retiring in 2019.

Suzy Smith, who was also a flight attendant with Pan Am starting in 1967, remembers sharing moments with passengers in the lounge, including celebrities like actors Vincent Price and Raquel Welch, anchorman Walter Cronkite and Princess Grace of Monaco.