
What if every pet was vegan? Here’s how much it would help the planet

At least a quarter of all human-generated greenhouse gas emissions to date can be traced to the livestock industry. Vast tracts of land are used to grow feed crops and to graze the world’s 92 billion cows, pigs, chickens and other animals slaughtered each year. This hunger for land means livestock farming is a leading cause of deforestation, as well as a significant drain on freshwater.

A global transition towards plant-based diets is urgently needed. But should this include our pets?

Researchers have struggled to determine how pet food affects the planet as there is not much data on what goes into it. Fortunately, a report published in 2020 detailed over 500 ingredients used by the pet food industry in the US – a country with more pets than any other.

I calculated the environmental impact of meat-based pet food based on this information. Then I asked what would happen if the entire world population of pet dogs and cats ate plant-based alternatives instead.

The results (summarised here) shatter the assumption that it is just people who need to change their diets for the sake of the environment.

Massive climate and nature benefits

If the world’s pet dogs were transitioned onto nutritious diets which excluded all animal products, it would save greenhouse gas emissions equivalent to 0.57 gigatonnes (1 gigatonne is 1 billion tonnes) of CO₂ a year – much more than the UK emitted in 2023 (0.38 gigatonnes) – and liberate an area of land larger than Mexico, potentially for habitat restoration which would boost carbon capture and biodiversity.

And what if the calories fed to animals meant for slaughter were instead used to create plant-based foods for pets? Most of the plant-based calories fed to livestock animals are lost during conversion to meat, milk or eggs. This is highly inefficient. A similar quantity of plant-based calories could feed many millions more people. In fact, a nutritious vegan diet for every pet dog would save enough calories to feed 450 million people – more than the entire EU population. At least six billion land-based “food animals” would also be spared from slaughter annually.

An indoor chicken farm. A vegan diet could free billions of animals from suffering. David Tadevosian/Shutterstock

Pet cats eat one billion land-based food animals annually, and vast numbers of fish. Feeding them nutritious vegan diets instead would eliminate greenhouse gas emissions equivalent to 0.09 gigatonnes of CO₂ – more than New Zealand’s annual emissions (with its very large methane-emitting dairy industry) – and would save an area of land larger than Germany. Seventy million additional people could also be fed using the food energy savings – more than the entire UK population.

Around 75% of the animal-based ingredients of pet food are byproducts of making food for humans. These byproducts include ears, snouts and internal organs, and are usually considered inedible by people. Some are sold cheaply to pet food manufacturers, and it’s long been assumed that this lowers pet food’s environmental impact by curbing the number of livestock animals that need to be killed.

However, my research using additional meat industry data demonstrates the opposite. I found that a smaller proportion of each carcass is used to make byproducts than meat. This increases the number of carcasses required to produce the same quantity of pet food ingredients. Demand for byproducts from the pet food industry therefore increases the number of livestock animals killed. More livestock means more land, water and waste, including the greenhouse gas emissions heating Earth’s climate to dangerous levels.
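As a rough illustration of that arithmetic, consider the short sketch below. The carcass weight and yield fractions are hypothetical assumptions chosen for clarity, not figures from the study.

# Hypothetical illustration: if each carcass yields less byproduct than meat,
# more carcasses are needed per tonne of byproduct-based pet food ingredient.
# None of these numbers come from the study; they are assumptions for clarity.

CARCASS_WEIGHT_KG = 250    # assumed average carcass weight
MEAT_YIELD = 0.55          # assumed fraction of a carcass sold as human-edible meat
BYPRODUCT_YIELD = 0.25     # assumed fraction usable as pet food byproducts

def carcasses_needed(tonnes_of_product, yield_fraction):
    """Carcasses required to supply a given tonnage of one product."""
    return (tonnes_of_product * 1000) / (CARCASS_WEIGHT_KG * yield_fraction)

print(carcasses_needed(100, MEAT_YIELD))        # about 727 carcasses for 100 tonnes of meat
print(carcasses_needed(100, BYPRODUCT_YIELD))   # 1,600 carcasses for 100 tonnes of byproducts

Under these assumed yields, meeting a given demand for byproduct ingredients takes roughly twice as many carcasses as meeting the same demand for meat, which is the core of the argument.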

Are vegan diets safe for pets?

Dogs are biologically omnivorous, and cats carnivorous. This means that they would naturally hunt and kill a variety of small mammals, birds and insects to obtain the nutrients needed for survival.

Of course, this is of little relevance to modern domesticated dogs and cats that normally eat commercial diets. Almost 50% of these diets comprise plant materials like grains, soy, fruits and vegetables. These are mixed with body parts from species dogs and cats would never naturally hunt (consider fish in cat food), and chemical flavourants, colourants and preservatives. The product, such as dry kibble, is fed at predictable times daily, and bears little resemblance to the natural diet of an ancestral dog or cat.

What dogs, cats, and indeed all species actually need is a set of nutrients including certain amino acids, vitamins and minerals, as well as macronutrients such as protein and carbohydrates. There is no biological requirement for meat. Provided manufacturers ensure all necessary nutrients are added, in the right proportions, modern commercial vegan diets are normally nutritionally sound.

A bowl of dog biscuits surrounded by vegetables. Vegan pet food can meet the nutritional needs of dogs and cats. Darya Lavinskaya/Shutterstock

By late 2024, 11 studies in dogs, three in cats, and one systematic review covering both had all demonstrated that dogs and cats thrive on modern vegan or vegetarian diets. Certain health benefits appear consistent across the research, such as a reduction in obesity and of conditions that may be triggered by animal-sourced allergens, like itchy skin and ears and gastrointestinal problems.

But do these naturally carnivorous animals (or omnivorous, in the case of dogs), actually enjoy vegan diets? Apparently so, according to a detailed analysis of their feeding behaviour. Dr Liam Satchell and I studied every known indicator of enjoyment, including jumping, barking, purring, licking, sniffing and salivating in 2,308 dogs and 1,135 cats. This was the largest study of its kind, and we found that, on average, pets seemed to enjoy vegan meals as much as meaty ones.

To address climate change and nature loss, UN secretary-general Antonio Guterres has said action is needed “on all fronts: everything, everywhere, all at once”. This certainly includes the livestock sector, which feeds not only us but our pets.

Fortunately, we can do this while providing a nutritious diet for the animals we love. To safeguard pet health, all diets, including vegan diets, should be manufactured by reputable pet food companies which carefully formulate their food to be nutritionally sound. For further advice, visit www.SustainablePetFood.info.

Andrew Knight, Adjunct Professor (Animal Welfare), Murdoch University and Griffith University, Visiting Lecturer, University of Winchester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Jimmy Carter’s idealism and humility left a lasting imprint on American life

Former US president Jimmy Carter, a man defined by his humility and idealism, has died at 100.

Many US presidents come from modest upbringings. Born in Plains, Georgia, Jimmy Carter’s Depression-era childhood was no exception. His home lacked running water and electricity, while his rural high school lacked a 12th grade.

What made Carter exceptional was the degree to which these humble beginnings would influence his life, most notably his time as America’s 39th president from 1977-1981.

How a peanut farmer became president

A farmer, nuclear submarine officer, state governor and proud Christian, Carter assumed office during a tumultuous time in American history. Three crises in particular are not only widely credited with helping elect the former peanut farmer into the Oval Office, but also still influence how Americans think about American power and politicians half a century later.

The first crisis occurred in March 1973, when newscasts on living room TVs across the country displayed what appeared to be the previously undefined limits of American power: the chaotic – and some would say humiliating – US withdrawal from Vietnam.

The second crisis began in October 1973, when members of the Organisation of Arab Petroleum Exporting Countries (OAPEC) imposed an embargo on oil exports to the United States. It caused the price of oil per barrel to quadruple, the US economy to shrink by as much as 2.5%, and unemployment and inflation to rise dramatically.

The third and most prominent crisis, the Watergate scandal, forced President Richard Nixon to resign – the first presidential resignation in US history – amid considerable evidence that he committed crimes and abuses of power while in office. Nixon’s successor, and Carter’s Republican opponent in the 1976 presidential election, Gerald Ford, famously pardoned Nixon for any crimes he had committed in office.

The combination of Carter’s humility and idealism amid three major US crises – and his surprise victory in the early Democratic primary state of Iowa – created the unique conditions for a relatively unknown Georgia governor to win the 1976 election. His commitment to restore morality to the White House and US foreign policy, along with his campaign pledge to never lie to the American people, was exactly what many Americans sought from their president after such a turbulent period.

The presidency, 1977-1981

Carter entered the White House engulfed by existing crises, but his time in office undoubtedly featured plenty of its own. Historians continue to debate how much Carter was responsible for the challenges he faced in office. However, his public approval ratings – 75% when he entered office in 1977 and 34% when he left office in 1981 – give an indication of where the American people placed their blame.

While early in his presidency much of the focus was on addressing the lingering energy crisis, Carter outlined his broader vision and policy agenda in his inaugural address on January 20 1977.

Carter first thanked outgoing President Ford for all that he had “done to heal our land” — a remarkable statement from a man who sharply criticised Ford’s pardon of Nixon. He went on to speak of “our recent mistakes”, the idea “if we despise our own government, we have no future”, and his hope for Americans to be “proud of their own government once again”.

Two years later, he echoed these sentiments in the most well-known speech of his presidency. Amid yet another oil shock that led to long lines at petrol stations, high inflation and an economic recession, Carter’s televised address to the nation decried a “crisis of confidence” amid “growing doubt about the meaning of our own lives”.

It was this speech, which posited that “all the legislation in the world can’t fix what’s wrong with America”, combined with his firing of five cabinet members a few days later, that many now point to as a turning point for the Carter administration from which it would never fully recover.

Carter’s righteous criticism of the Nixon and Ford administrations had been refreshing to voters when he was an outsider candidate. But such moralising lost its appeal and some perceived it as an abdication of responsibility after Carter had occupied the office for more than two years.

Ted Kennedy, the Democratic senator from Massachusetts, would go on to criticise Carter’s speech as one that dismissed “the golden promise that is America” and instead embraced a pessimistic vision in which Americans were “blamed for every national ill, scolded as greedy, wasteful and mired in malaise”.

Jimmy Carter with his wife, Rosalynn Carter, and mother-in-law, Allie Smith, in 1981. Wayne Perkins/AP

Only four months after Carter’s infamous speech, yet another crisis erupted. Supporters of Iranian leader Ayatollah Khomeini took 52 US diplomats hostage in Iran. They would be held captive for the rest of Carter’s term in office, and the US government’s failed rescue mission in April 1980 only worsened the situation.

Carter undoubtedly racked up foreign policy successes in his normalisation of ties with China and his facilitating of an unprecedented peace agreement between the Israeli and Egyptian governments, known as the Camp David Accords. Ultimately, however, the perception of a failed presidency weighed so heavily on his administration that Ted Kennedy chose to challenge Carter for the 1980 Democratic presidential nomination.

Carter would end up defeating Kennedy for the Democratic nomination but the damage done to Carter’s presidency allowed a far more optimistic Ronald Reagan to win in a landslide victory over the sitting president in November 1980.

The lasting significance of Jimmy Carter

After the 56-year-old president failed to win a second term, Carter in many ways came to exemplify what a post-presidential life could entail. This included diplomatic and humanitarian efforts that would win him the 2002 Nobel Peace Prize but also public commentary that would sometimes frustrate his successors in the Oval Office.

From his own organisation’s work championing human rights overseas to his commitment to building homes with Habitat for Humanity, Carter’s staunch Christian faith and idealism continued to define his life.

Today, most Americans may take it as unremarkable for a US president to champion human rights, but Carter was the first US president to posit that human rights were central to US foreign policy. While human rights have not always remained central to the policies of his presidential successors, Carter’s emphasis has undoubtedly influenced them. This includes Ronald Reagan, who criticised Carter’s human rights emphasis during the 1980 presidential campaign but would later take a strong stance against Soviet human rights abuses.

Most living Americans were not yet born on Carter’s last day in office. As a result, the former president is perhaps best known for his rich post-presidential life based out of the small rural town in Georgia he was born in – and where his secret service detail’s armoured vehicles were worth more than the home the former president lived in after departing the White House.

Regardless of whether they realise it or not, the humility, morality and idealism with which Jimmy Carter lived and governed continue to have an impact on Americans and American thinking to this day.

Jared Mondschein, Director of Research, US Studies Centre, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Jimmy Carter’s lasting Cold War legacy: His human rights focus helped dismantle the Soviet Union

Former President Jimmy Carter, who died on Dec. 29, 2024, at age 100 at his home in Plains, Georgia, was a dark horse Democratic presidential candidate with little national recognition when he beat Republican incumbent Gerald Ford in 1976.

The introspective former peanut farmer pledged a new era of honesty and forthrightness at home and abroad, a promise that resonated with voters eager for change following the Watergate scandal and the Vietnam War.

His presidency, however, lasted only one term before Ronald Reagan defeated him. Since then, scholars have debated – and often maligned – Carter’s legacy, especially his foreign policy efforts that revolved around human rights.

Critics have described Carter’s foreign policies as “ineffectual” and “hopelessly muddled,” and have claimed their formulation demonstrated “weakness and indecision.”

As a historian researching Carter’s foreign policy initiatives, I conclude his overseas policies were far more effective than critics have claimed.

President Jimmy Carter listens to Sen. Joseph R. Biden, D-Del., as they wait to speak at a fundraising reception in Wilmington, Del., on Feb. 20, 1978. AP Photo/Barry Thumma, File


A Soviet strategy

The criticism of Carter’s foreign policies seems particularly mistaken when it comes to the Cold War, a period defined by decades of hostility, mutual distrust and arms buildup after World War II between the U.S. and Russia, then known as the Soviet Union or Union of Soviet Socialist Republics (USSR).

By the late 1970s, the Soviet Union’s economy and global influence were weakening. With the counsel of National Security Advisor Zbigniew Brzezinski, a Soviet expert, Carter exploited these weaknesses.

During his presidency, Carter insisted nations provide basic freedoms for their people – a moral weapon against which repressive leaders could not defend.

Carter soon openly criticized the Soviets for denying Russian Jews their basic civil rights, a violation of human rights protections outlined in the diplomatic agreement called the Helsinki Accords.

Carter’s team underscored these violations in arms control talks. The CIA flooded the USSR with books and articles to incite human rights activism. And Carter publicly supported Russian dissidents – including pro-democracy activist Andrei Sakharov – who were fighting an ideological war against socialist leaders.

Human rights were a cornerstone of President Jimmy Carter’s foreign policy. Here, a billboard with his picture on it in Liberia. AP Photo/Michel Lipchitz


Carter adviser Stuart Eizenstat argues that the administration attacked the Soviets “in their most vulnerable spot – mistreatment of their own citizens.”

This proved effective in sparking Soviet leader Mikhail Gorbachev’s social and political reforms of the late 1980s, best known by the Russian word “glasnost,” or “openness.”

The Afghan invasion

In December 1979, the Soviets invaded Afghanistan in response to the assassination of the Soviet-backed Afghan leader, Nur Mohammad Taraki. The invasion effectively ended an existing détente between the U.S. and USSR.

Beginning in July 1979, the U.S. was providing advice and nonlethal supplies to the mujahideen rebelling against the Soviet-backed regime. After the invasion, National Security Advisor Brzezinski advised Carter to respond aggressively to it. So the CIA and U.S. allies delivered weapons to the mujahideen, a program later expanded under Reagan.

Afghan rebels examine a Soviet-built armored personnel carrier and scores of other military vehicles left behind when the Mujahedeen fighters overran a Soviet-Afghan garrison. AP Photo/Joe Gaal


Carter’s move effectively engaged the Soviets in a proxy war that began to bleed the Soviet Union.

By providing the rebels with modern weapons, the U.S. was “giving to the USSR its Vietnam war,” according to Brzezinski: a progressively expensive war, a strain on the socialist economy and an erosion of their authority abroad.

Carter also imposed an embargo on U.S. grain sales to the Soviets in 1980. Agriculture had been the USSR’s greatest economic weakness since the 1960s. The country’s unfavorable weather and climate contributed to successive poor growing seasons, and its heavy emphasis on industrial development left the agricultural sector underfunded.

Economist Elizabeth Clayton concluded in 1985 that Carter’s embargo was effective in exacerbating this weakness.

Census data compiled between 1959 and 1979 show that 54 million people were added to the Soviet population. Clayton estimates that 2 to 3 million more people were added in each subsequent year. The Soviets were overwhelmed by the population boom and struggled to feed their people.

At the same time, Clayton found that monthly wages increased, which led to an increased demand for meat. But by 1985, there was a meat shortage in the USSR. Why? Carter’s grain embargo, although ended by Reagan in 1981, had a lasting impact on livestock feed that resulted in Russian farmers decreasing livestock production.

The embargo also forced the Soviets to pay premium prices for grain from other countries, nearly 25 percent above market prices.

For years, Soviet leaders promised better diets and health, but now their people had less food. The embargo battered a weak socialist economy and created another layer of instability for the growing population.

The Olympic boycott

In 1980, Carter pushed further to punish the Soviets. He convinced the U.S. Olympic Committee to refrain from competing in the upcoming Moscow Olympics while the Soviets repressed their people and occupied Afghanistan.

Carter not only promoted a boycott, but he also embargoed U.S. technology and other goods needed to produce the Olympics. He also stopped NBC from paying the final US$20 million owed to the USSR to broadcast the Olympics. China, West Germany, Canada and Japan – superpowers of sport – also participated in the boycott.

Historian Allen Guttmann said, “The USSR lost a significant amount of international legitimacy on the Olympic question.” Dissidents relayed to Carter that the boycott was another jab at Soviet leadership. And in America, public opinion supported Carter’s bold move – 73% of Americans favored the boycott.

The Carter doctrine

In his 1980 State of the Union address, Carter revealed an aggressive Cold War military plan. He declared a “Carter doctrine,” which said that the Soviets’ attempt to gain control of Afghanistan, and possibly the wider region, would be regarded as a threat to U.S. interests. And Carter was prepared to meet that threat with “military force.”

Carter also announced in his speech a five-year spending initiative to modernize and strengthen the military because he recognized the post-Vietnam military cuts weakened the U.S. against the USSR.

Ronald Reagan argued during the 1980 presidential campaign that “Jimmy Carter risks our national security – our credibility – and damages American purposes by sending timid and even contradictory signals to the Soviet Union.” Carter’s policy, Reagan charged, was based on “weakness and illusion” and should be replaced “with one founded on improved military strength.”

In 1985, however, President Reagan publicly acknowledged that his predecessor demonstrated great timing in modernizing and strengthening the nation’s forces, which further increased economic and diplomatic pressure on the Soviets.

Reagan admitted that he felt “very bad” for misstating Carter’s policies and record on defense.

Carter is most lauded today for his post-presidency activism, public service and defending human rights. He was awarded the Nobel Peace Prize in 2002 for such efforts.

But that praise leaves out a significant portion of Carter’s presidential accomplishments. His foreign policy, emphasizing human rights, was a key instrument in dismantling the power of the Soviet Union.

This is an updated version of a story that was originally published on May 2, 2019.

Robert C. Donnelly, Associate Professor of History, Gonzaga University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What brought about the decline of the eastern Roman Empire – and what can we learn from it?

Why empires fall is a question that fascinates many. But in the search for an answer, imagination can run wild. Suggestions have emerged in recent decades that attribute the rise and fall of ancient empires such as the Roman Empire to climate change and disease. This has prompted discussions over whether “536 was the worst year to be alive”.

That year, a volcanic eruption created a dust veil that blocked the sun in certain regions of the world. This, combined with a series of volcanic eruptions in the following decade, is claimed to have caused a decrease in the global temperature. Between 541 and 544, there was also the first and most severe documented occurrence of the Justinianic plague in the eastern Roman Empire (also referred to as the Byzantine Empire), in which millions of people died.

Studies show that there is no textual evidence for the effects of the dust veil in the eastern Mediterranean, and there is an extensive debate over the extent and length of the Justinianic plague. But, despite this, there are still many in academia who claim that changes to the climate and the outbreak of plague were catastrophic for the eastern Roman Empire.

Our research, which was published in November, shows that these claims are incorrect. They were derived from isolated finds and small case studies that were projected onto the entire Roman Empire.

The use of large datasets from vast territories previously ruled by the Roman Empire presents a different scenario. Our findings reveal that there was no decline in the 6th century, but rather a new record in population and trade in the eastern Mediterranean.

We used both micro- and large-scale data from various countries and regions. The micro-scale data involved examining small regions and identifying when decline occurred in each region or site. Case studies, such as the site of the ancient city of Elusa in the north-western Negev desert in today’s Israel, were reexamined.

Previous research claimed that this site declined in the middle of the 6th century. A reanalysis of the carbon-14 dates (a method for determining the age of objects made of organic material) and the ceramic data used to date the site showed that this conclusion was incorrect. The decline only started in the 7th century.

Large-scale data included new databases compiled using archaeological survey, excavation and shipwreck finds. The survey and excavation databases, which were made up of tens of thousands of sites, were used to map the general changes in the size and number of sites for each historical period.

The shipwreck database showed the number of shipwrecks for each half century. This was used to highlight the shift in the volume of naval commerce.

Changes to naval commerce (150–750)

Our results showed that there was a high correlation in the archaeological record for numerous regions, covering modern-day Israel, Tunisia, Jordan, Cyprus, Turkey, Egypt and Greece. There was also a strong correlation between the different types of data.

Both the smaller case studies, and the larger datasets, showed there was no decrease in population or economy in the 6th century eastern Roman Empire. In fact, there seems to have been an increase in prosperity and demography. The decline occurred in the 7th century, and so cannot be connected to sudden climate change or the plague which happened more than half a century before.

It seems the Roman Empire entered the 7th century at the peak of its power. But Roman miscalculations, and their failure against their Persian opponents, brought the entire area into a downward spiral. This left the two empires weak and allowed Islam to rise.

This is not to say that there was no change in the climate during this period in some regions of the world. For example, there was a visible change in material culture and a general decline and abandonment of sites throughout Scandinavia in the middle of the 6th century, where this change in the climate was more extensive.

And today’s climate crisis is on course to bring much greater changes than those seen in the past. The sharp departure from historical environmental fluctuations has the power to irreversibly change the world as we know it.

Lev Cosijns, PhD Candidate in the School of Archaeology, University of Oxford and Haggai Olshanetsky, Assistant Professor in the Department of History, University of Warsaw

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why your New Year’s resolution to go to the gym will fail

Come January, 40% of Americans will make New Year’s resolutions, and nearly half of them will aim to lose weight or get in shape.

But 80% of New Year’s resolutions fail by February, and gyms will experience a decrease in traffic after the first and second months of the year as those who made New Year’s resolutions to get in shape lose steam.

As a lecturer at Binghamton University and a former Olympic weightlifter, world champion powerlifter and strength coach, I have spent much of my life in training halls and gyms around the country. People often ask me, “How do I stay motivated to work out?”

Motivation and short-term objectives

Years back, when I was at the Olympic Training Center in Colorado Springs, Colorado, one of the sports psychologists told me that motivation is a lie.

It took me years of experience and research to figure out why, but I believe she was right.

Personally, I have no issues getting up on a cold and dark morning to train when a competition is drawing near. But when there is no immediate objective or goal in sight, getting up that early is much harder.

Motivation is driven by emotion and that can be positive, as long as it is used for a short-term objective. For some, a New Year’s resolution can serve as a motivator. But since motivation is based on emotion, it can’t last long.

Think of it this way: No one can laugh or cry indefinitely, and that is exactly how we know that motivation will fail.

Emotion is a chemical release yielding a physiological response. If someone attempting to get in shape is reliant upon this reaction to propel them towards working out, they are almost sure to burn out, just like with a resolution.

When people buy gym memberships, they have the best of intentions in mind, but the commitments are made in a charged emotional state. Motivation helps with short-term objectives, but is virtually useless for objectives that require a greater length of time to accomplish.

In other words, don’t totally discount the value of motivation, but don’t count on it to last long either because it won’t.

Motivation will only get you so far. Rawpixel.com/Shutterstock.com


Discipline yields results

If motivation won’t help you reach your goals, what will?

The answer is discipline. Discipline, as I define it, is the ability to do what is necessary for success when it is hardest to do so. Another way to think of it is having the ability, not necessarily the desire, to do what you need to when you least want to.

Failing to get up when the alarm rings, being unable to walk away from a late night of partying before game day or eating a doughnut when you have committed to no processed sugar are all failures of discipline – not motivation.

The keys to discipline are practice and consistency. Discipline means repetitive – and sometimes boring – action. There are no shortcuts. You can thank motivation for the first three weeks or so of your successful gym attendance, but after that you need to credit discipline.

There is another clear line defining the difference between motivation and discipline. Motivation in and of itself typically fails to build other qualities necessary for advancement, but discipline does. Discipline develops confidence and patience.

Discipline builds consistency and consistency yields habits. It is those habits that will ultimately define success.


William Clark, Adjunct Lecturer of Health and Wellness Studies, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What America’s first board game can teach us about the aspirations of a young nation

In 2023 alone, the board game industry topped US$16.8 billion and is projected to reach $40.1 billion by 2032.

Classics like “Scrabble” are being refreshed and transformed, while newer inventions such as “Pandemic” and “Wingspan” have garnered millions of devotees.

This growing cardboard empire was on my mind when I visited the American Antiquarian Society in August 2023 to research its collection of early games.

As I sat in that archive, which houses such treasures as the 1640 Bay Psalm Book, the first book printed in British America, I beheld another first in American printing: a board game called “The Travellers’ Tour Through the United States.”

This forgotten game, printed the year after Missouri became a state, has a lot to say about America’s nascent board game industry, as well as how a young country saw itself.

An archival find

Produced by the New York cartography firm of F. & R. Lockwood, “The Travellers’ Tour Through the United States” was an imitation of earlier European geography games, a genre of educational game. Geography games generally used a map for a board, and the rules involved players reciting geographic facts as they raced toward the finish.

“The Travellers’ Tour” first appeared in 1822, making it the earliest known board game printed in the U.S.

But for almost a century another game held that honor.

In 1894, the game manufacturer Parker Brothers acquired the rights to “The Mansion of Happiness,” an English game first produced in the U.S. in 1843. In its promotional materials, the company declared it “The first board game ever published in America.”

That distinction ended in 1991 when a game collector found the copy of “The Travellers’ Tour” in the archives of the American Antiquarian Society.

The title and printer’s address for the game. The copyright notice of July 12, 1822, appears in small type at the bottom. Library of Congress


A new game for the new year

By 1822 the American market for board games was already becoming established, and middle- and upper-class parents would buy games for their families to enjoy around the parlor table.

At that time, New Year’s – not Christmas – was the holiday for gift giving. Many booksellers, who earned money from the sale of books, playing cards and other paper goods throughout the year, would sell special wares to give as presents.

These items included holiday-themed books, puzzles – then called “dissected maps” – and paper dolls, as well as games imported from England such as “The New Game of Human Life” and “The Royal And Entertaining Game of Goose.”

Since “The Travellers’ Tour” was the first board game to employ a map of the U.S., it might have been an especially interesting gift to American consumers.

It’s difficult, however, to gauge just how popular “The Travellers’ Tour” was in its time. No sales records are known to exist, and since so few copies remain, it likely wasn’t a big seller.

A global database of library holdings shows only five copies of “The Travellers’ Tour” in institutions around the U.S. And while a handful of additional copies are housed in museums and private archives, the game is certainly a rarity.

Teetotums and travelers

Announcing itself as a “pleasing and instructive pastime,” “The Travellers’ Tour” consists of a hand-colored map of the then-24 states and a numbered list of 139 towns and cities, ranging from New York City to New Madrid, Missouri. Beside each number is the name and description of the corresponding town.

The ‘stop’ at Bennington, Vt., highlights the town’s Revolutionary War history, while Philadelphia’s entry points to the city’s educational institutions. Library of Congress
Using a variant spelling for the device, the instructions stipulate the game should be “performed with a Tetotum.” Small top-like devices with numbers around their sides called teetotums functioned as alternatives to dice, which were associated with immoral games of chance.

Once spun, the teetotum lands with a random side up, revealing a number. The player looks ahead that number of spaces on the map.

If they can recite from memory the name of the town or city, they move their token, or traveler, to that space. Whoever gets to New Orleans first wins.
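For readers who find rules easier to follow as code, the turn structure just described can be sketched in a few lines of Python. This is a loose modern paraphrase: the eight-sided teetotum and the handling of a failed recitation are assumptions, not details recorded in the 1822 rules.

import random

# A loose modern paraphrase of the turn structure described above.
# The eight-sided teetotum and the rule that a failed recitation forfeits
# the move are assumptions for illustration, not details from the 1822 rules.

# Illustrative subset of the game's 139 numbered stops; the real map
# ends at New Orleans.
TOWNS = ["New York City", "Philadelphia", "Baltimore", "Richmond",
         "Charleston", "Savannah", "New Orleans"]

def take_turn(position, knows_town, sides=8):
    spin = random.randint(1, sides)                # spin the teetotum
    target = min(position + spin, len(TOWNS) - 1)  # look ahead that many spaces
    if knows_town(TOWNS[target]):                  # recite the town's name from memory
        return target                              # the traveler advances
    return position                                # otherwise the traveler stays put

# A player who knows every town reaches the finish, New Orleans, quickly.
pos = 0
while pos < len(TOWNS) - 1:
    pos = take_turn(pos, knows_town=lambda name: True)
print("Winner reaches", TOWNS[pos])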

‘New-Orleans’ is the game’s ‘finish line.’ Library of Congress


An idealized portrait of a young country

Though not necessary to play “The Travellers’ Tour,” the descriptions provided for each location tell historians a lot about America’s national aspirations.

These accounts coalesce into a flattering portrait of the nation’s agricultural, commercial, historical and cultural character.

Teetotums were used in an era when dice were associated with vice. Museum Rotterdam/Wikimedia Commons, CC BY-SA


Promoting the value of education, the game highlights institutions of learning. For example, Philadelphia’s “literary and benevolent institutions are numerous and respectable.” Providence boasts “Brown University, a respectable literary institution.” And Boston’s “citizens … are enterprising and liberal in the support of religious and literary institutions.”

As the game pieces meander toward New Orleans, players learn about Richmond’s “fertile backcountry” and about the “polished manners and unaffected hospitality” of the citizens of Charleston. Savannah “contains many splendid edifices” and Columbia’s “South Carolina College bids fair to be a valuable institution.”

Absent from any corresponding descriptions, however, is any mention of what John C. Calhoun called America’s “peculiar institution” of slavery and its role in the fabric of the nation.

And while four entries briefly reference American Indians, no mention is made of the ongoing dispossession and genocide of millions of Indigenous people.

Though it promotes an American identity based on a sanitized version of the nation’s economic might and intellectual rigor, “The Travellers’ Tour” nonetheless represents an important step toward what has become a burgeoning American board game industry.

Two centuries later, board game culture has matured to the point that new titles such as “Freedom: The Underground Railroad” and “Votes for Women” push the genre to new heights, using the joy of play to teach the history of the era that spawned America’s first board game.

Matthew Wynn Sivils, Professor of American Literature, Iowa State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Trump’s expiring 2017 tax cuts made income inequality worse and especially hurt Black Americans: study

The Tax Cuts and Jobs Act, a set of tax cuts Donald Trump signed into law during his first term as president, will expire on Dec. 31, 2025. As Trump and Republicans prepare to negotiate new tax cuts in 2025, it’s worth gleaning lessons from the president-elect’s first set of cuts.

The 2017 cuts were the most extensive revision to the Internal Revenue Code since the Ronald Reagan administration. The changes it imposed range from the tax that corporations pay on their foreign income to limits on the deductions individuals can take for their state and local tax payments.

Trump promised middle-class benefits at the time, but in practice more than 80% of the cuts went to corporations, tax partnerships and high-net-worth individuals. The cost to the U.S. deficit was huge – a total increase of US$1.9 trillion from 2018 to 2028, according to estimates from the Congressional Budget Office. The tax advantage to the middle class was small.

Advantages for Black Americans were smaller still. As a scholar of race and U.S. income taxation, I have analyzed the impact of Trump’s tax cuts. I found that the law has disadvantaged middle-income, low-income and Black taxpayers in several ways.

Cuts worsened disparities

These results are not new. They were present nearly 30 years ago when my colleague William Whitford and I used U.S. Census Bureau data to show that Black taxpayers paid more federal taxes than white taxpayers with the same income. In large part that’s because the legacy of slavery, Jim Crow and structural racism keeps Black people from owning homes.

The federal income tax is full of advantages for home ownership that many Black taxpayers are unable to reach. These benefits include the ability to deduct home mortgage interest and local property taxes, and the right to avoid taxes on up to $500,000 of profit on the sale of a home.

It’s harder for middle-class Black people to get a mortgage than it is for low-income white people. This is true even when Black Americans with high credit scores are compared with white Americans with low credit scores.

When Black people do get mortgages, they are charged higher rates than their white counterparts.

It’s harder for middle-class Black people to get a mortgage than it is for low-income white people. MoMo Productions/Getty Images

Trump did not create these problems. But instead of closing these income and race disparities, his 2017 tax cuts made them worse.

Black taxpayers paid higher taxes than white taxpayers who matched them in income, employment, marriage and other significant factors.

Broken promises, broken trust

Fairness is an article of faith in American tax policy. A fair tax structure means that those earning similar incomes should pay similar taxes and stipulates that taxes should not increase income or wealth disparities.

Trump’s tax cuts contradict both principles.

Proponents of Trump’s cuts argued the corporate rate cut would trickle down to all Americans. This is a foundational belief of “supply side” economics, a philosophy that President Ronald Reagan made popular in the 1980s.

From the Reagan administration on, the benefits of every major tax cut have skewed toward the wealthy.

Just like prior “trickle down” plans, Trump’s corporate tax cuts did not produce higher wages or increased household income. Instead, corporations used their extra cash to pay dividends to their shareholders and bonuses to their executives.

Over that same period, the bottom 90% of wage earners saw no gains in their real wages. Meanwhile, the AFL-CIO, a labor group, estimates that 51% of the corporate tax cuts went to business owners and 10% went to the top five highest-paid senior executives in each company. Fully 38% went to the top 10% of wage earners.

In other words, the income gap between wealthy Americans and everyone else has gotten much wider under Trump’s tax regime.

Stock market inequality

Trump’s tax cuts also increased income and wealth disparities by race because those corporate tax savings have gone primarily to wealthy shareholders rather than spreading throughout the population.

The reasons are simple. In the U.S., shareholders are mostly corporations, pension funds and wealthy individuals. And wealthy people in the U.S. are almost invariably white.

Sixty-six percent of white families own stocks, while less than 40% of Black families and less than 30% of Hispanic families do. Even when comparing Black and white families with the same income, the race gap in stock ownership remains.

These disparities stem from the same historical disadvantages that result in lower Black homeownership rates. Until the Civil War, virtually no Black person could own property or enter into a contract. After the Civil War, Black codes – laws that specifically controlled and oppressed Black people – forced free Black Americans to work as farmers or servants.

State prohibitions on Black people owning property, and public and private theft of Black-owned land, kept Black Americans from accumulating wealth.

A woman protests outside Trump Tower over the Trump administration’s proposed tax cut on Nov. 30, 2017, in New York City. Spencer Platt/Getty Images


Health care hit

That said, the Trump tax cuts hurt low-income taxpayers of all races.

One way they did so was by abolishing the individual mandate requiring all Americans to have basic health insurance. The Affordable Care Act, passed under President Barack Obama, launched new, government-subsidized health plans and penalized people for not having health insurance.

Department of the Treasury data shows that almost 50 million Americans have been covered by the Affordable Care Act since 2014. After the individual mandate was revoked, between 3 million and 13 million fewer people purchased health insurance in 2020.

Ending the mandate triggered a large drop in health insurance coverage, and research shows it was primarily lower-income people who stopped buying subsidized insurance from the Obamacare exchanges. These are the same people who are the most vulnerable to financial disaster from unpaid medical bills.

Going without insurance hurt all low-income Americans. But studies suggest the drop in Black Americans’ coverage under Trump’s plan outpaced that of white Americans. The rate of uninsured Black Americans rose from 10.7% in 2016 to 11.5% in 2018, following the mandate’s repeal.

The consumer price index conundrum

The Trump tax cuts also altered how the Internal Revenue Service calculates inflation adjustments for over 60 different provisions. These include the earned income tax credit and the child tax credit – both of which provide cash to low-wage workers – and the wages that must pay Social Security taxes.

Previously, the IRS calculated inflation using the consumer price index for urban consumers, which tracks prices by following the cost of the same basket of goods over time. The government then used that inflation number to adjust Social Security payments and earned income tax credit eligibility. It used the same figure to set the amount of income that is taxed at a given rate.

The Trump tax cuts ordered the IRS to calculate inflation adjustments using the chained consumer price index for urban consumers instead.

The difference between these two indexes is that the second one assumes people substitute cheaper goods as prices rise. For example, the chained consumer price index assumes shoppers will buy pork instead of beef if beef prices go up, easing the impact of inflation on a family’s overall grocery prices.
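To make the difference concrete, here is a minimal numerical sketch with invented prices and quantities. A Fisher-type average stands in here for the chained CPI’s actual formula, but the substitution logic is the same.

# Hypothetical two-good example showing why an index that allows for
# substitution registers less inflation than a fixed-basket index.
# All prices and quantities below are invented for illustration.

p1 = {"beef": 5.00, "pork": 4.00}   # year 1 prices per pound
q1 = {"beef": 10,   "pork": 10}     # year 1 purchases
p2 = {"beef": 7.00, "pork": 4.20}   # year 2: beef jumps, pork barely moves
q2 = {"beef": 4,    "pork": 16}     # year 2: the household substitutes pork for beef

def cost(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

# Fixed-basket approach: reprice the old basket at new prices.
laspeyres = cost(p2, q1) / cost(p1, q1)        # ~1.244, i.e. 24.4% inflation

# Substitution-aware approach: average the old-basket and new-basket
# ratios (a Fisher-type index, used as a rough stand-in for the chained CPI).
paasche = cost(p2, q2) / cost(p1, q2)          # ~1.133
chained_style = (laspeyres * paasche) ** 0.5   # ~1.188, i.e. 18.8% inflation

print(f"Fixed basket:       {laspeyres - 1:.1%}")
print(f"Substitution-aware: {chained_style - 1:.1%}")

# The lower figure means smaller annual adjustments to tax brackets,
# the earned income tax credit and similar inflation-indexed amounts.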

The IRS makes smaller inflation adjustments based on that assumption. But low-income neighborhoods have less access to the kind of budget-friendly options envisioned by the chained consumer price index.

And since even middle-class Black people are more likely than poor white people to live in low-income neighborhoods, Black taxpayers have been hit harder by rising prices.

What cost $1 in 2018 now costs $1.26. That’s a painful hike that Black families are less able to avoid.

The imminent expiration of the Trump tax cuts gives the upcoming GOP-led Congress the opportunity to undertake a thorough reevaluation of their effects. By prioritizing policies that address the well-known disparities exacerbated by these recent tax changes, lawmakers can work toward a fairer tax system that helps all Americans.

Beverly Moran, Professor Emerita of Law, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Was Jesus really born in Bethlehem?

Every Christmas, a relatively small town in the Palestinian West Bank comes center stage: Bethlehem. Jesus, according to some biblical sources, was born in this town some two millennia ago.

Yet the New Testament Gospels do not agree about the details of Jesus’ birth in Bethlehem. Some do not mention Bethlehem or Jesus’ birth at all.

The Gospels’ different views might be hard to reconcile. But as a scholar of the New Testament, I argue that the Gospels offer an important insight into Greco-Roman views of ethnic identity, including genealogies.

Today, genealogies may bring more awareness of one’s family medical history or help uncover lost family members. In the Greco-Roman era, birth stories and genealogical claims were used to establish rights to rule and link individuals with purported ancestral grandeur.

Gospel of Matthew

According to the Gospel of Matthew, the first Gospel in the canon of the New Testament, Joseph and Mary were in Bethlehem when Jesus was born. The story begins with wise men who come to the city of Jerusalem after seeing a star that they interpreted as signaling the birth of a new king.

It goes on to describe their meeting with the local Jewish king named Herod, of whom they inquire about the location of Jesus’ birth. The Gospel says that the star of Bethlehem subsequently leads them to a house – not a manger – where Jesus has been born to Joseph and Mary. Overjoyed, they worship Jesus and present gifts of gold, frankincense and myrrh. These were valuable gifts, especially frankincense and myrrh, which were costly fragrances that had medicinal use.

The Gospel explains that after their visit, Joseph has a dream in which he is warned of Herod’s plan to kill baby Jesus. When the wise men went to Herod with the news that a child had been born to be the king of the Jews, Herod plotted to kill all young children to remove the threat to his throne. Joseph, Mary and the infant Jesus then leave for Egypt to escape the massacre.

Matthew also says that after Herod dies from an illness, Joseph, Mary and Jesus do not return to Bethlehem. Instead, they travel north to Nazareth in Galilee, which is modern-day Nazareth in Israel.

Gospel of Luke

The Gospel of Luke, an account of Jesus’ life which was written during the same period as the Gospel of Matthew, has a different version of Jesus’ birth. The Gospel of Luke starts with Joseph and a pregnant Mary in Galilee. They journey to Bethlehem in response to a census that the Roman emperor Caesar Augustus required for all the Jewish people. Since Joseph was a descendant of King David, Bethlehem was the hometown where he was required to register.

The Gospel of Luke includes no flight to Egypt, no paranoid King Herod, no murder of children and no wise men visiting baby Jesus. Jesus is born in a manger because all the travelers overcrowded the guest rooms. After the birth, Joseph and Mary are visited not by wise men but shepherds, who were also overjoyed at Jesus’ birth.

Luke says these shepherds were notified about Jesus’ location in Bethlehem by angels. There is no guiding star in Luke’s story, nor do the shepherds bring gifts to baby Jesus. Luke also mentions that Joseph, Mary and Jesus leave Bethlehem eight days after his birth and travel to Jerusalem and then to Nazareth.

The differences between Matthew and Luke are nearly impossible to reconcile, although they do share some similarities. John Meier, a scholar on the historical Jesus, explains that Jesus’ “birth at Bethlehem is to be taken not as a historical fact” but as a “theological affirmation put into the form of an apparently historical narrative.” In other words, the belief that Jesus was a descendant of King David led to the development of a story about Jesus’ birth in Bethlehem.

Raymond Brown, another scholar on the Gospels, also states that “the two narratives are not only different – they are contrary to each other in a number of details.”

Mark’s and John’s Gospels

A Nativity scene showing the birth of Jesus in a manger. Swen Pförtner/picture alliance via Getty Images


What makes it more difficult is that neither of the other Gospels, those of Mark and John, mentions Jesus’ birth or his connection to Bethlehem.

The Gospel of Mark is the earliest account of Jesus’ life, written around A.D. 60. The opening chapter of Mark says that Jesus is from “Nazareth of Galilee.” This is repeated throughout the Gospel on several occasions, and Bethlehem is never mentioned.

A blind beggar in the Gospel of Mark describes Jesus as both from Nazareth and the son of David, the second king of Israel and Judah, who reigned from about 1010 to 970 B.C. But King David was not born in Nazareth, nor associated with that city. He was from Bethlehem. Yet Mark doesn’t identify Jesus with the city Bethlehem.

The Gospel of John, written approximately 15 to 20 years after that of Mark, also does not associate Jesus with Bethlehem. Galilee is Jesus’ hometown. Jesus finds his first disciples, does several miracles and has brothers in Galilee.

This is not to say that John was unaware of Bethlehem’s significance. John mentions a debate where some Jewish people referred to the prophecy which claimed that the messiah would be a descendant of David and come from Bethlehem. But Jesus according to John’s Gospel is never associated with Bethlehem, but with Galilee, and more specifically, Nazareth.

The Gospels of Mark and John reveal that they either had trouble linking Bethlehem with Jesus, did not know his birthplace, or were not concerned with this city.

These were not the only ones. Apostle Paul, who wrote the earliest documents of the New Testament, considered Jesus a descendant of David but does not associate him with Bethlehem. The Book of Revelation also affirms that Jesus was a descendant of David but does not mention Bethlehem.

An ethnic identity

During the period of Jesus’ life, there were multiple perspectives on the Messiah. In one stream of Jewish thought, the Messiah was expected to be an everlasting ruler from the lineage of David. Other Jewish texts, such as the book 4 Ezra, written in the same century as the Gospels, and the Jewish sectarian Qumran literature, which is written two centuries earlier, also echo this belief.

But within the Hebrew Bible, a prophetic book called Micah, thought to be written around 722 B.C., prophesies that the messiah would come from David’s hometown, Bethlehem. This text is repeated in Matthew’s version. Luke mentions that Jesus is not only genealogically connected to King David, but also born in Bethlehem, “the city of David.”


Genealogical claims were made for important ancient founders and political leaders. For example, Ion, the founder of the Greek colonies in Asia, was considered to be a descendant of Apollo. Alexander the Great, whose empire reached from Macedonia to India, was claimed to be a son of Hercules. Caesar Augustus, who was the first Roman emperor, was proclaimed as a descendant of Apollo. And a Jewish writer named Philo who lived in the first century wrote that Abraham and the Jewish priest and prophets were born of God.

Regardless of whether these claims were accepted at the time to be true, they shaped a person’s ethnic identity, political status and claims to honor. As the Greek historian Polybius explains, the renowned deeds of ancestors are “part of the heritage of posterity.”

Matthew and Luke’s inclusion of the city of Bethlehem contributed to the claim that Jesus was the Messiah from a Davidic lineage. They made sure that readers were aware of Jesus’ genealogical connection to King David with the mention of this city. Birth stories in Bethlehem solidified the claim that Jesus was a rightful descendant of King David.

So today, when the importance of Bethlehem is heard in Christmas carols or displayed in Nativity scenes, the name of the town connects Jesus to an ancestral lineage and the prophetic hope for a new leader like King David.


Rodolfo Galvan Estrada III, Adjunct Assistant Professor of the New Testament, Fuller Theological Seminary

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Octopuses and their relatives are a new animal welfare frontier

We named him Squirt – not because he was the smallest of the 16 cuttlefish in the pool, but because anyone with the audacity to scoop him into a separate tank to study him was likely to get soaked. Squirt had notoriously accurate aim.

As a comparative psychologist, I’m used to assaults from my experimental subjects. I’ve been stung by bees, pinched by crayfish and battered by indignant pigeons. But, somehow, with Squirt it felt different. As he eyed us with his W-shaped pupils, he seemed clearly to be plotting against us.

A brown and white invertebrate swims over rocks and seaweed. A common cuttlefish (Sepia officinalis) in Portugal’s Arrábida Natural Park. Diego Delso/Wikipedia, CC BY-SA

Of course, I’m being anthropomorphic. Science does not yet have the tools to confirm whether cuttlefish have emotional states, or whether they are capable of conscious experience, much less sinister plots. But there’s undeniably something special about cephalopods – the class of ocean-dwelling invertebrates that includes cuttlefish, squid and octopus.

As researchers learn more about cephalopods’ cognitive skills, there are calls to treat them in ways better aligned with their level of intelligence. California and Washington state both approved bans on octopus farming in 2024. Hawaii is considering similar action, and a ban on farming octopus or importing farmed octopus meat has been introduced in Congress. A planned octopus farm in Spain’s Canary Islands is attracting opposition from scientists and animal welfare advocates.

Critics offer many arguments against raising octopuses for food, including possible releases of waste, antibiotics or pathogens from aquaculture facilities. But as a psychologist, I see intelligence as the most intriguing part of the equation. Just how smart are cephalopods, really? After all, it’s legal to farm chickens and cows. Is an octopus smarter than, say, a turkey?

A bright orange octopus attached to the arm of an underwater research vehicle. A deepwater octopus investigates the port manipulator arm of the ALVIN submersible research vessel. NOAA, CC BY

A big, diverse group

Cephalopods are a broad class of mollusks that includes the coleoids – cuttlefish, octopus and squid – as well as the chambered nautilus. Coleoids range in size from adult squid only a few millimeters long (Idiosepius) to the largest living invertebrates, the giant squid (Architeuthis) and colossal squid (Mesonychoteuthis), which can grow to over 40 feet in length and weigh over 1,000 pounds.

Some of these species live alone in the nearly featureless darkness of the deep ocean; others live socially on active, sunny coral reefs. Many are skilled hunters, but some feed passively on floating debris. Because of this enormous diversity, the size and complexity of cephalopod brains and behaviors also varies tremendously.

Almost everything that’s known about cephalopod cognition comes from intensive study of just a few species. When considering the welfare of a designated species of captive octopus, it’s important to be careful about using data collected from a distant evolutionary relative.

Marine biologist Roger Hanlon explains the distributed structure of cephalopod brains and how they use that neural power.

Can we even measure alien intelligence?

Intelligence is fiendishly hard to define and measure, even in humans. The challenge grows exponentially in studying animals with sensory, motivational and problem-solving skills that differ profoundly from ours.

Historically, researchers have tended to focus on whether animals think like humans, ignoring the abilities that animals may have that humans lack. To avoid this problem, scientists have tried to find more objective measures of cognitive abilities.

One option is a relative measure of brain-to-body size. The best-studied species of octopus, Octopus vulgaris, has about 500 million neurons; that’s relatively large for its small body size and similar to a starling, rabbit or turkey.

More accurate measures may include the size, neuron count or surface area of specific brain structures thought to be important for learning. While this is useful in mammals, the nervous system of an octopus is built completely differently.

Over half of the neurons in Octopus vulgaris, about 300 million, are not in the brain at all, but distributed in “mini-brains,” or ganglia, in the arms. Within the central brain, most of the remaining neurons are dedicated to visual processing, leaving less than a quarter of its neurons for other processes such as learning and memory.

In other species of octopus, the general structure is similar, but complexity varies. Wrinkles and folds in the brain increase its surface area and may enhance neural connections and communication. Some species of octopus, notably those living in reef habitats, have more wrinkled brains than those living in the deep sea, suggesting that these species may possess a higher degree of intelligence.

Holding out for a better snack

Because brain structure is not a foolproof measure of intelligence, behavioral tests may provide better evidence. One of the highly complex behaviors that many cephalopods show is visual camouflage. They can open and close tiny sacs just below their skin that contain colored pigments and reflectors, revealing specific colors. Octopus vulgaris has up to 150,000 chromatophores, or pigment sacs, in a single square inch of skin.

Like many cephalopods, the common cuttlefish (Sepia officinalis) is thought to be colorblind. But it can use its excellent vision to produce a dizzying array of patterns across its body as camouflage. The Australian giant cuttlefish, Sepia apama, uses its chromatophores to communicate, creating patterns that attract mates and warn off aggressors. This ability can also come in handy for hunting; many cephalopods are ambush predators that blend into the background or even lure their prey.

The hallmark of intelligent behavior, however, is learning and memory – and there is plenty of evidence that some octopuses and cuttlefish learn in a way that is comparable to learning in vertebrates. The common cuttlefish (Sepia officinalis), as well as the common octopus (Octopus vulgaris) and the day octopus (Octopus cyanea), can all form simple associations, such as learning which image on a screen predicts that food will appear.

Some cephalopods may be capable of more complicated forms of learning, such as reversal learning – learning to flexibly adjust behavior when different stimuli signal reward. They may also be able to inhibit impulsive responses. In a 2021 study that gave common cuttlefish a choice between a less desirable but immediate snack of crab and a preferred treat of live shrimp after a delay, many of the cuttlefish chose to wait for the shrimp.

Cuttlefish perform in an experiment adapted from the Stanford “marshmallow test,” which was designed to see whether children could practice delayed gratification.

A new frontier for animal welfare

Considering what’s known about their brain structures, sensory systems and learning capacity, it appears that cephalopods as a group may be similar in intelligence to vertebrates as a group. Since many societies have animal welfare standards for mice, rats, chickens and other vertebrates, logic would suggest that there’s an equal case for regulations enforcing humane treatment of cephalopods.

Such rules generally specify that when a species is held in captivity, its housing conditions should support the animal’s welfare and natural behavior. This view has led some U.S. states to outlaw confined cages for egg-laying hens and crates too narrow for pregnant sows to turn around.

Animal welfare regulations say little about invertebrates, but guidelines for the care and use of captive cephalopods have started to appear over the past decade. In 2010, the European Union required considering ethical issues when using cephalopods for research. And in 2015, AAALAC International, an international accreditation organization for ethical animal research, and the Federation of European Laboratory Animal Science Associations promoted guidelines for the care and use of cephalopods in research. The U.S. National Institutes of Health is currently considering similar guidelines.

The “alien” minds of octopuses and their relatives are fascinating, not least because they provide a mirror through which we can reflect on more familiar forms of intelligence. Deciding which species deserve moral consideration requires selecting criteria, such as neuron count or learning capacity, to inform those choices.

Once these criteria are set, it may be worth also considering how they apply to the rodents, birds and fish that occupy more familiar roles in our lives.The Conversation

Rachel Blaser, Professor of Neuroscience, Cognition and Behavior, University of San Diego

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Climate of fear driving local officials to quit as new study finds threats and abuse rampant

Threats and harassment are pushing some politicians out of office, scaring off some would-be candidates and even compelling some elected officials to change their vote.

Those are some of the conclusions of a new study I led on political violence in Southern California.

The rise in threats against public officials is a national problem.

Between 2013 and 2016, there were, on average, 38 federal charges involving threats to public officials per year, according to the National Counterterrorism Innovation, Technology and Education Center, a research center. That average sharply increased between 2017 and 2022, when an average of 62 federal charges were brought annually for threats to public officials.

When elected officials worry for their safety, it has implications for all Americans. Democracy suffers when people are governed by fear.

‘Respectful discourse has been lost’

I am the founder and director of the Violence, Inequality and Power Lab, or VIP Lab, housed at the University of San Diego’s Kroc Institute for Peace and Justice. Over the past two years, the VIP Lab has been collecting data to understand the frequency and severity of threats against local elected officials in Southern California.

Our research focused on California’s three southernmost counties – San Diego, Riverside and Imperial. Together, these counties have just under 6 million residents, or roughly 15% of California’s population.

To capture as complete a picture as possible, we did a survey and interviews, reviewed news coverage and social media accounts, and scoured literature nationwide.

The first year, we focused only on San Diego County, surveying 330 mayors, city council members, county supervisors, and school board and community college board members. Over 25% of survey recipients responded. Of them, 75% reported being threatened or harassed at least once in the past five years. Roughly half said the abuse occurred at least monthly.

Respondents had found their name shared on the dark web and seen cars drive past their homes in an intimidating manner. They’d been followed after public meetings and blocked from leaving. In some cases, their families were harassed.

“As a parent, [I] feel vulnerable,” one city council member said, adding that he’s become “very guarded with [my] kid in public.”

Topics that were most likely to prompt threats and harassment included COVID-19, gun control, school curricula and LGBTQ+ rights.

“Since the pandemic, people have been mobilized into different silos or groups of people,” said a school board member interviewed in 2023. “[R]espectful discourse has been lost in all of this.”

In year two, we sent surveys to 785 elected officials in all three counties. Two-thirds of respondents reported having been threatened or harassed at least once in the previous five years. Roughly the same number said verbal attacks had become a routine part of public service.

These attacks come from the public, they told us, and from other elected officials. Officials have been accused of corruption, called idiots and told they should die. School board members face allegations that they “don’t care about kids.”

The threats “are verbal, at council meetings, outside of meetings, during breaks,” said one interviewee serving on a city council. “I’ve been harassed by city council members, staff members, the city manager and the city attorney.”

A troubling trend

In simple terms, our research suggests that at least two of every three people who serve in public office in Southern California will be threatened, intimidated or harassed during their tenure.

Survey results suggest the average female elected official who experiences abuse is threatened or harassed at least six times as often as her male peers. Men reported being on the receiving end of abuse about once a year, while women reported suffering it almost monthly.

The attacks against women are more likely to be personalized – referring to their looks or their family members – and to be sexual in nature.

It was “slanderous stuff,” one school board member told us of abusive text messages that started in 2022 after many years of service. “Language of being evil … of not being a Christian woman.”

Her husband was also followed by a car, and her home was circled by the same vehicle. No one else on her board reported similar abuse.

We heard many accounts like this from female elected officials in Southern California. One city councilwoman filed two police reports against men who threatened, harassed and stalked her. A second was threatened throughout her campaign and time in office, including by a man who used a racial slur and threatened to “take care of” her with his AK-47.

Even so, our most recent survey revealed that male elected officials are more concerned than women about political violence: 64% of men reported that things had become worse during their time in office, compared with 50% of women.

Counterintuitively, white, male, rural and conservative respondents all reported that threats and harassment had gotten worse more often than their nonwhite, female, urban and liberal counterparts – even though nonwhite, female, urban and liberal respondents reported more threats and harassment overall.

This finding may reflect a meaningful shift in how threats are used in politics. We believe that those responsible for abuse previously targeted the most vulnerable elected officials – namely women and other underrepresented groups.

But as it becomes more common to use threats and harassment as a means to influence decision-making, everyone is a target.

Most of the abuse we documented is, thankfully, not physical. But “hostile, aggressive or violent acts motivated by political objectives or a desire to directly or indirectly affect political change or change in governance” is, by definition, political violence.

And our research shows that this constant, low-level abuse is taking its toll on people and communities.

Fear-based governing

Our study results mirror findings from other research on growing political violence in the U.S.

The number of threats targeting members of Congress went up roughly 85% between 2018 and 2021, from 5,206 in 2018 to 9,625 in 2021.

Meanwhile, a 2023 study on state legislators by the nonprofit Brennan Center for Justice found that 89% had been threatened, harassed or insulted at some point over the previous three years. That means roughly 6,000 of the approximately 7,000 state legislators in the U.S. have been abused or intimidated since 2020.

Armed mob bursts through a door. The Capitol insurrection of Jan. 6, 2021, demonstrated for many Americans the threat of political violence. Brent Stirton/Getty Images

Most Americans don’t need these data points: Three-quarters of Americans already believe political violence is a problem, according to the States United Democracy Center.

Constituents have a right, even an imperative, to make their opinions known to the individuals they elect. Accountability and representation are essential to democracy. But there is a line between expressing disagreement and using intimidation or violence to influence policy decisions. And the latter can have some distinctly undemocratic outcomes.

Six percent of the elected officials we interviewed said they had actually changed their vote on a specific issue due to the climate of fear. And 43% of our survey respondents said that threats and harassment have caused them to consider leaving their post.

“I don’t think it’s fair to have to fight so hard,” said one relatively new school board member. “I’m mad at myself for letting the bullies win.”

The climate of fear is also keeping people from serving. Nationwide, 69% of mayors surveyed by the Mayors Innovation Project said they knew someone who had decided not to run for office due to threats or fear of violence.

When fear – rather than the needs of community – becomes a driving force in politics, democracy loses. That’s rule by the powerful, not rule by the people.The Conversation

Rachel Locke, Director, Violence, Inequality and Power Lab, Kroc Institute for Peace and Justice, University of San Diego

This article is republished from The Conversation under a Creative Commons license. Read the original article.

What 2025 holds for interest rates, inflation and your pocketbook

Heading into 2024, we said the U.S. economy would likely continue growing, in spite of pundits’ forecasts that a recession would strike. The past year showcased strong economic growth, moderating inflation and efficiency gains, leading most economists and the financial press to stop expecting a downturn.

But what economists call “soft landings” – when an economy slows just enough to curb inflation, but not enough to cause a recession – are only soft until they aren’t.

As we turn to 2025, we’re optimistic the economy will keep growing. But that’s not without some caveats. Here are the key questions and risks we’re watching as the U.S. rings in the new year.

The Federal Reserve and interest rates

Some people expected a downturn in 2022 – and again in 2023 and 2024 – due to the Federal Reserve’s hawkish interest-rate decisions. The Fed raised rates rapidly in 2022 and held them high throughout 2023 and much of 2024. But in the last four months of 2024, the Fed slashed rates three times – most recently on Dec. 18.

While the recent rate cuts mark a strategic shift, the pace of future cuts is expected to slow in 2025, as Fed Chair Jerome Powell suggested at the December meeting of the Federal Open Market Committee. Markets have expected this change of pace for some time, but some economists remain concerned about heightened risks of an economic slowdown.

When Fed policymakers set short-term interest rates, they consider whether inflation and unemployment are too high or low, which affects whether they should stimulate the economy or pump the brakes. The interest rate that neither stimulates nor restricts economic activity, often referred to as R* or the neutral rate, is unknown, which makes the Fed’s job challenging.

However, the terminal rate – which is where Fed policymakers expect rates to settle in for the long run – is now at 3%, the highest since 2016. This has led futures markets to wonder whether the end of the cutting cycle is coming into focus, while others ask if the era of low rates is over.

Inflation and economic uncertainty

This shift in the Federal Reserve’s approach underscores a key uncertainty for 2025: While some economists are concerned the recent uptick in unemployment may continue, others worry about sticky inflation. The Fed’s challenge will be striking the right balance — continuing to support economic activity while ensuring inflation, currently hovering around 2.4%, doesn’t reignite.

We do anticipate that interest rates will stay elevated amid slowing inflation, which remains above the Fed’s 2% target rate. Still, we’re optimistic this high-rate environment won’t weigh too heavily on consumers and the economy.

While gross domestic product growth for the third quarter was revised up to 3.1% and the fourth quarter is projected to grow similarly quickly, in 2025 it could finally show signs of slowing from its recent pace. However, we expect it to continue to exceed consensus forecasts of 2.2% and longer-run expectations of 2%.

Fiscal policy, tariffs and tax cuts: risks or tailwinds?

While inflation has declined from 9.1% in June 2022 to less than 3%, the Federal Reserve’s 2% target remains elusive.

Amid this backdrop, several new risks loom on the horizon. Key among them are potential tariff increases, which could disrupt trade, push up the prices of goods and even strengthen the U.S. dollar.

The average effective U.S. tariff rate is 2%, but even a fivefold increase to 10% could escalate trade tensions, create economic challenges and complicate inflation forecasts. Consider that, historically, every 1-percentage-point increase in tariff rates has resulted in a roughly 0.1-percentage-point higher annual inflation rate, on average.
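
To make that rule of thumb concrete, here is a minimal back-of-envelope sketch in Python. It is our illustration rather than the authors’ model: the function name and the assumption that the 0.1-point historical average applies linearly are ours.

```python
# Rule of thumb cited above: each 1-percentage-point rise in the average
# effective tariff rate has historically added about 0.1 percentage points
# to annual inflation. Applying it linearly is a simplifying assumption.

INFLATION_PER_TARIFF_POINT = 0.1  # percentage points of extra inflation per tariff point

def extra_inflation(current_tariff_pct: float, new_tariff_pct: float) -> float:
    """Estimate added annual inflation (percentage points) from a tariff increase."""
    return (new_tariff_pct - current_tariff_pct) * INFLATION_PER_TARIFF_POINT

# The scenario in the text: a fivefold jump from 2% to 10%.
print(f"{extra_inflation(2.0, 10.0):.1f} percentage points")  # -> 0.8 percentage points
```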

Still, we hope tariffs serve as more of a negotiating tactic for the incoming administration than an actual policy proposal.

Tariffs are just one of several proposals from the incoming Trump administration that present further uncertainty. Stricter immigration policies could create labor shortages and increase prices, while government spending cuts could weigh down economic growth.

Tax cuts – a likely policy focus – may offset some risk and spur growth, especially if coupled with productivity-enhancing investments. However, tax cuts may also result in a growing budget deficit, which is another risk to the longer-term economic outlook.

Count us as two financial economists hoping that only certain inflation measures fall more slowly than expected, and that everyone’s expectations for future inflation remain low. If so, the Federal Reserve should be able to look beyond short-term changes in inflation and focus on metrics that are more useful for predicting long-term inflation.

Consumer behavior and the job market

Labor markets have softened but remain resilient.

Hiring rates are normalizing, while layoffs and unemployment – 4.2%, up from 3.7% at the start of 2024 – remain low despite edging up. The U.S. economy could remain resilient into 2025, with continued growth in real incomes bolstering purchasing power. This income growth has supported consumer sentiment and reduced inequality, since low-income households have seen the greatest benefits.

However, elevated debt balances, given increased consumer spending, suggest some Americans are under financial stress even though income growth has outpaced increases in consumer debt.

While a higher unemployment rate is a concern, this risk so far appears limited, potentially due to labor hoarding – when employers hold on to workers they no longer need because hiring replacements is difficult. Higher unemployment is also an issue the Fed has the tools to address – if it must.

This leaves us cautiously optimistic that resilient consumers will continue to retain jobs, supporting their growing purchasing power.

Equities and financial markets

The outlook for 2025 remains promising, with continued economic growth driven by resilient consumer spending, steadying labor markets, and less restrictive monetary policy.

Yet current price targets for stocks are at historic highs for a post-rally period, which is surprising and may offer reasons for caution. Higher-for-longer interest rates could put pressure on corporate debt levels and rate-sensitive sectors, such as housing and utilities.

Corporate earnings, however, remain strong, buoyed by cost savings and productivity gains. Stock performance may be subdued, but underperforming or discounted stocks could rebound, presenting opportunities for gains in 2025.

Artificial intelligence provides a bright spot, leading to recent outperformance in the tech-heavy NASDAQ and related investments. And onshoring continues to provide growth opportunities for companies reshaping supply chains to meet domestic demand.

To be fair, uncertainty persists, and economists know that forecasting is best left to the weather. That’s why investors should always remain well-diversified.

But with inflation closer to the Fed’s target and wages rising faster than inflation, we’re optimistic that continued economic growth will pave the way for a financially positive year ahead.

Here’s hoping we get even more right about 2025 than we did this past year.The Conversation

D. Brian Blank, Associate Professor of Finance, Mississippi State University and Brandy Hadley, Associate Professor of Finance and Distinguished Scholar of Applied Investments, Appalachian State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI has a stupid secret

Two of San Francisco’s leading players in artificial intelligence have challenged the public to come up with questions capable of testing the capabilities of large language models (LLMs) like Google Gemini and OpenAI’s o1. Scale AI, which specialises in preparing the vast quantities of data on which LLMs are trained, teamed up with the Center for AI Safety (CAIS) to launch the initiative, Humanity’s Last Exam.

Featuring prizes of US$5,000 (£3,800) for those who come up with the top 50 questions selected for the test, Scale and CAIS say the goal is to test how close we are to achieving “expert-level AI systems” using the “largest, broadest coalition of experts in history”.

Why do this? The leading LLMs are already acing many established tests in intelligence, mathematics and law, but it’s hard to be sure how meaningful this is. In many cases, they may have pre-learned the answers due to the gargantuan quantities of data on which they are trained, including a significant percentage of everything on the internet.

Data is fundamental to this whole area. It is behind the paradigm shift from conventional computing to AI, from “telling” to “showing” these machines what to do. This requires good training datasets, but also good tests. Developers typically do this using data that hasn’t already been used for training, known in the jargon as “test datasets”.

If LLMs are not already able to pre-learn the answers to established tests like bar exams, they probably will be soon. The AI analytics site Epoch estimates that 2028 will mark the point at which AIs will effectively have read everything ever written by humans. An equally important challenge is how to keep assessing AIs once that Rubicon has been crossed.

Of course, the internet is expanding all the time, with millions of new items being added daily. Could that take care of these problems?

Perhaps, but this bleeds into another insidious difficulty, referred to as “model collapse”. As the internet becomes increasingly flooded by AI-generated material which recirculates into future AI training sets, this may cause AIs to perform increasingly poorly. To overcome this problem, many developers are already collecting data from their AIs’ human interactions, adding fresh data for training and testing.

Some specialists argue that AIs also need to become “embodied”: moving around in the real world and acquiring their own experiences, as humans do. This might sound far-fetched until you realise that Tesla has been doing it for years with its cars. Another opportunity is human wearables, such as Meta’s popular Ray-Ban smart glasses. These are equipped with cameras and microphones, and can be used to collect vast quantities of human-centric video and audio data.

Narrow tests

Yet even if such products guarantee enough training data in future, there is still the conundrum of how to define and measure intelligence – particularly artificial general intelligence (AGI), meaning an AI that equals or surpasses human intelligence.

Traditional human IQ tests have long been controversial for failing to capture the multifaceted nature of intelligence, encompassing everything from language to mathematics to empathy to sense of direction.

There’s an analogous problem with the tests used on AIs. There are many well-established tests covering such tasks as summarising text, understanding it, drawing correct inferences from information, recognising human poses and gestures, and machine vision.

Some tests are being retired, usually because the AIs are doing so well at them, but they’re so task-specific as to be very narrow measures of intelligence. For instance, the chess-playing AI Stockfish is way ahead of Magnus Carlsen, the highest-rated human player of all time, on the Elo rating system. Yet Stockfish is incapable of other tasks, such as understanding language. Clearly it would be wrong to conflate its chess capabilities with broader intelligence.

But with AIs now demonstrating broader intelligent behaviour, the challenge is to devise new benchmarks for comparing and measuring their progress. One notable approach has come from French Google engineer François Chollet. He argues that true intelligence lies in the ability to adapt and generalise learning to new, unseen situations. In 2019, he came up with the “abstraction and reasoning corpus” (ARC), a collection of puzzles in the form of simple visual grids designed to test an AI’s ability to infer and apply abstract rules.

Unlike previous benchmarks that test visual object recognition by training an AI on millions of images, each with information about the objects contained, ARC gives it minimal examples in advance. The AI has to figure out the puzzle logic and can’t just learn all the possible answers.
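
To make the setup concrete, here is a minimal sketch of an ARC-style task and a naive rule-inference check in Python. This is our illustration, not Chollet’s code: the "train"/"test" and "input"/"output" field names follow our understanding of the public ARC dataset’s JSON layout, and the toy mirror rule is invented purely for the example.

```python
# A toy ARC-style task: a few demonstration pairs plus a held-out test input.
# Grids are small lists of lists of integers, as in the public ARC dataset.
toy_task = {
    "train": [  # demonstration pairs the solver may inspect
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 3, 0]], "output": [[0, 3, 3]]},
    ],
    "test": [  # the solver must predict the output for this input
        {"input": [[0, 5], [6, 0]]},
    ],
}

def mirror(grid):
    """Candidate rule: reflect each row left to right."""
    return [list(reversed(row)) for row in grid]

# If the candidate rule explains every demonstration pair, apply it to the test input.
if all(mirror(pair["input"]) == pair["output"] for pair in toy_task["train"]):
    print(mirror(toy_task["test"][0]["input"]))  # -> [[5, 0], [0, 6]]
```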

Though the ARC tests aren’t particularly difficult for humans to solve, there’s a prize of US$600,000 for the first AI system to reach a score of 85%. At the time of writing, we’re a long way from that point. Two recent leading LLMs, OpenAI’s o1-preview and Anthropic’s Claude 3.5 Sonnet, both score 21% on the ARC public leaderboard (known as the ARC-AGI-Pub).

Another recent attempt using OpenAI’s GPT-4o scored 50%, but somewhat controversially because the approach generated thousands of possible solutions before choosing the one that gave the best answer for the test. Even then, this was still reassuringly far from triggering the prize – or matching human performances of over 90%.

While ARC remains one of the most credible attempts to test for genuine intelligence in AI today, the Scale/CAIS initiative shows that the search continues for compelling alternatives. (Fascinatingly, we may never see some of the prize-winning questions. They won’t be published on the internet, to ensure the AIs don’t get a peek at the exam papers.)

We need to know when machines are getting close to human-level reasoning, with all the safety, ethical and moral questions this raises. At that point, we’ll presumably be left with an even harder exam question: how to test for a superintelligence. That’s an even more mind-bending task that we need to figure out.The Conversation

Andrew Rogoyski, Innovation Director - Surrey Institute of People-Centred AI, University of Surrey

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Cats, whales and even robotic catfish: Inside the world's most bizarre secret spy weapons

The death of a spy is rarely newsworthy, due to the secrecy surrounding it. But when a white beluga whale suspected of spying for Moscow was found dead in Norwegian waters in September, the animal soon became a minor celebrity.

Hvaldimir (a play on hval, the Norwegian word for whale, and the first name of the Russian president, Vladimir Putin) was even given an official autopsy by the Norwegian Directorate of Fisheries.

The whale had been uncovered as a spy in 2019, and is one in a long line of animals which have been used by intelligence services. Among these efforts was a Soviet programme to train marine animals as spies and assassins, which collapsed in 1991.

The US ran similar experiments with animals, some dating back to the 1960s. One of the CIA’s more unusual attempts to use animals as spies was Operation Acoustic Kitty.

The idea was to implant a microphone and antenna into the cat and use it to eavesdrop on potentially interesting conversations. The test of the “prototype” went horribly wrong when the cat wandered off and was run over by a taxi, leading to the programme being quickly abandoned.

The history of spy pigeons

A more successful example was the use of spy pigeons. Equipped with tiny cameras, pigeons could easily access otherwise restricted areas and “take photos” without arousing suspicion before safely returning to home base using their extraordinary homing ability.

What became a very successful CIA programme during the cold war took its inspiration from British efforts during the second world war.

Over time, technology created opportunities to exploit the stealthiness of animals while eliminating their unpredictability. Project Aquiline aimed to create a bird-like drone fully equipped in the style of more traditional spy planes, but smaller and more versatile so it could get closer to its targets.

Another, even more miniature version was the Insectothopter, which the CIA developed in the 1970s. Although neither the Aquiline nor the Insectothopter designs ever became fully operational, they are acknowledged as forerunners of today’s drones.

Fast-forward to the 1990s, and the CIA’s robotic catfish Charlie emerges as one in a longer line of successfully operationalised underwater drones that are more effective and less vulnerable than the hapless Hvaldimir.

Exploding rat carcasses

But effectiveness is not always best measured in the success of an unusual spy method.

A British second world war plan to fill rat carcasses with explosives and distribute them among the boiler rooms of German factories – where they would explode once shoved into a boiler – appeared to be doomed when the first consignment of about 100 dead rats was intercepted by the Germans.

But the discovery of the rats, and the sheer ingenuity behind the plan, led to such paranoia that the “trouble caused to them was a much greater success … than if the rats had actually been used”.

A history of spy animals from the CIA.

While working with animals often proved problematic, attempts to gain advantage by disguising devices as inanimate objects have also proved a source of embarrassment. One such effort involved the MI6 station in Moscow trying to improve on the “dead letter drop” technique of obtaining secret information from spies in Russia.

Rather than risk leaving secret information in a pre-arranged location, MI6’s version of James Bond’s Q came up with the idea that the information could be transmitted electronically to a receiver hidden in a fake rock placed near the ministry in question, from which it could later be downloaded by an officer simply walking past.

The focused activity of many men in suits in one part of the park where the rock was hidden, however, led to its discovery. The revelation of the operation in 2006 caused massive embarrassment to the UK government. That this was not MI6’s finest hour was suggested by headlines ridiculing the Moscow spy-rock as “more Johnny English than James Bond”.

While intelligence organisations are always looking for innovative means to enhance their spy craft, arguably the most successful application of intelligence comes in the form of human improvisation. A notable example was the clandestine extraction in 1985 of Oleg Gordievsky, one of the west’s most valuable double agents working for British intelligence, after his cover was blown.

A useful bag of crisps

The team of two British diplomats and their wives had to negotiate three Soviet and two Finnish checkpoints. As the first guard dog approached, one of the party offered the sniffing Alsatian a cheese and onion crisp, duly taking it off the scent of Gordievsky, who was hiding in the boot of the car.

When another dog began sniffing at the boot, a most ingenious and successful method of spy craft was brought into play. The wife of one of the diplomats placed her 18-month old baby on the car boot, changed the baby’s nappy, and then dropped the freshly filled and steaming deposit on the ground, successfully distracting the dog and its handler.

These actions were never part of the extraction plan for Gordievsky, but were equally instinctive and ingenious improvisations by those used to operating in hostile environments and practised at deflecting the unwanted attentions of enemy agents.

Expensive research budgets and promising technological advances provide an edge in certain circumstances, but the most effective spy techniques may still rely on the application of quick thinking and bold, fearless action.The Conversation

Stefan Wolff, Professor of International Security, University of Birmingham and David Hastings Dunn, Professor of International Politics in the Department of Political Science and International Studies, University of Birmingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Julia Child’s France, pig slaughter in Portugal and a culinary detective: 5 delicious food writing classics

Holidays are traditionally a time of celebration and feasting. So, as our minds turn to food and our stomachs rumble, why not read about it?

These five food titles, ranging from a chef’s memoir to a foodie crime novel, offer a smorgasbord of perspectives on the ways food shapes our culture, our identities, our environment and our selves. All of them will leave you hungry!

A Cook’s Tour by Anthony Bourdain

A Cook’s Tour (2001) follows late chef and TV personality Anthony Bourdain on a global culinary adventure as he searches for “the perfect meal”. While Bourdain doesn’t find perfection, he does discover the centrality of food in preserving culture and building relationships.

In Portugal, he gets involved in the yearly pig slaughter – visceral and confronting, despite his experience as a chef – and revels in the celebration, conviviality and hospitality that accompanies this centuries-old tradition. In Vietnam, he builds tentative relationships with locals by joining them in drinking “moonshine from a plastic cola bottle” on the banks of the Mekong.

The book is engaging, witty and sharp, but also poignant. It encourages us to not only think about where our food comes from, but about the meanings we ascribe to it and the communities we build around it.

My Life in France by Julia Child (with Alex Prud’homme)

Julia Child was an unlikely culinary icon. She didn’t really learn to cook until she moved from the United States to France with her husband, Paul, in 1948. On her return, she introduced not just her home country but the English-speaking world to the art of French cooking.

My Life in France (2005), co-written with journalist Alex Prud’homme, tells the story of “a crucial period of transformation” in which she found her “true calling” and started writing Mastering the Art of French Cooking (1961) with Simone Beck and Louisette Bertholle.

My Life in France is bursting at the seams with Child’s signature joie de vivre: she certainly doesn’t take herself too seriously. It is also a snapshot of postwar French cuisine, as experienced by someone encountering something completely transformative – and deciding to share her experience with the world, despite the obstacles.

Salt, Fat, Acid, Heat by Samin Nosrat

Judging by its subtitle, Mastering the Elements of Good Cooking, Samin Nosrat’s 2016 book Salt, Fat, Acid, Heat took some inspiration from Mastering the Art of French Cooking. However, it is far more beginner-friendly. While the book has recipes (good ones), it is not a recipe book per se. Rather, it is a set of instructions on how to cook: or, if you already have the basics down, how to cook better. Yet, unlike other cooking reference books, it tells a story.

Iranian–American Nosrat, who trained at the acclaimed restaurant Chez Panisse, introduces her readers to her four elements of good cooking, one at a time. She introduces culinary theory, scientific principles and tips and tricks, in an accessible and engaging way.

This information is interspersed with vignettes from Nosrat’s culinary life and supported by excellent illustrations. It is not only a good read, but a cookbook you will reach for time and again.

Death in the Dordogne by Martin Walker

It may be strange to see a mystery novel on this list, but sometimes we want a palate cleanser, a sweet treat to end a meal. Martin Walker’s Death in the Dordogne (2009) is just the thing.

Bruno Courrèges is chief of police in the small town of St. Denis in the Dordogne, in south-west France. While there is a murder to be solved (the death of an elderly war veteran), Bruno’s other major obsession is the food and wine of the Périgord region, which Walker describes in delicious detail.

As Bruno travels around the countryside solving the mystery, he eats: omelettes scented with black truffle, ripe red strawberries, flaky croissants, and fresh trout cooked in the open air. Alongside this feast, the book also probes the complexities of a changing, modern France – including the impact of immigration and the rise of right-wing politics. The perfect Boxing Day read.

Cod by Mark Kurlansky

Cod: A Biography of the Fish That Changed the World (1997) is a book about the voracious appetite of the human race and the effects of that appetite.

The story Kurlansky tells is not just the millennia-long saga of the low-fat, white-fleshed fish that was indispensable to cuisines across Europe. It is that, of course – but it’s also a story about the rise of colonialism and capitalism, international conflict, the slave trade, the insatiable search for commodities, and the environmental legacy of new technologies.

Cod was first published almost 30 years ago, soon after the North Atlantic cod fishing industry had reached a point of collapse due to overfishing. In 2024, for the first time since the early 1990s, the Canadian government lifted its moratorium on commercial cod fishing off the coast of Newfoundland and Labrador, in light of improved cod stocks.

Kurlansky’s writing is evocative – you can feel the chill and the fog of the cod banks. Intrepid cooks may even attempt some of the recipes.The Conversation

Lauren Samuelsson, Honorary Fellow in History, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

From the Big Bogan to Larry the Lobster, why do towns build big things?

Big Things first appeared in Australia in the 1960s, beginning with the Big Scotsman (1962) in Medindie, South Australia, the Big Banana (1964) in Coffs Harbour, New South Wales, and the Big Murray Cod (1968) in Tocumwal, NSW.

These structures were inspired by earlier North American examples, such as Lucy the Elephant (1882) in New Jersey, and several big doughnuts in California.

While they differed in subject matter, all aimed to attract the attention of passing motorists: in the 1950s and 1960s, private car ownership soared and highway construction spread.

Towns and regions across Australia, New Zealand and North America used oversized landmarks to get travellers to stop, take a photo and hopefully spend money at local businesses.

As awareness of these giant landmarks grew, so did the desire of other communities to have their own.

Within a few decades, Australia’s Big Things had become a beloved fixture of road trips and summer holidays.

A big cultural impact

My research shows the number of Big Things being constructed in Australia hit an initial peak in the 1980s before experiencing a temporary decline.

By the 2000s, however, towns as far afield as Tully in Queensland (Big Golden Gumboot), Cressy in Tasmania (Big Trout), and Exmouth in Western Australia (Big Prawn) were reviving the tradition.

Soon, Big Things became firmly entrenched in Australian popular culture: featuring on limited edition Redheads matchboxes (2010), and on sets of Australia Post stamps (2007 and 2023).

But some of the older structures experienced declining popularity: the Big Wool Bales in Hamilton, Victoria (closed 2020), Victoria’s Giant Gippsland Earth Worm in Bass (closed 2020) and the Big Cask Wine in Mourquong, NSW (closed 2012), survive only in holiday photos and people’s memories.

Icons like Larry the Lobster (Kingston, SA), the Big Prawn (Ballina, NSW), and the Big Pineapple (Nambour, Queensland) have battled changes in ownership, threat of demolition, and closure.

Despite these challenges, and debates over heritage conservation, construction of these giant landmarks has not slowed.

The Big Bogan was erected in 2015 in Nyngan, NSW, by community members who were eager to encourage visitors to the area.

A local progress association in the small town of Thallon in Queensland unveiled William the Big Wombat in 2018, also with the aim to bring attention to the area.

Similar hopes were held for the Big Watermelon erected in 2018 (Chinchilla, Queensland), and the Big Tractor (Carnamah, WA) which opened this year.

Through my research, I spoke with many people involved with projects such as these, and they said they’d selected objects that were iconic to their area.

This could be a product they specialise in, a local native animal, or, in the case of the Big Bogan, a joke based on the name of nearby Bogan River.

Most builders openly acknowledge their primary motivation is to promote the region, attract tourist dollars and investment, and revive towns that have seen better days.

But do Big Things actually achieve these goals? Unfortunately, there is no easy answer.

An economic return?

Local economies are complex, as are the reasons people choose to visit. Many Big Things are constructed on the sides of highways that connect Australia’s numerous regional towns.

People who stop for photos may not set out with the goal of visiting that Big Thing – it may simply be convenient to take a break there while on the way somewhere else.

And if people do stop, it doesn’t guarantee they will spend more than the cost of filling up their car with petrol, if that.

Over the years, tourism researchers have developed several different models for calculating the impact of rural and regional tourism on local economies.

However, none of these approaches has proven to be universally effective. Most scholars agree tourists aren’t likely to travel long distances for any one reason.

They will consider a range of factors including food and accommodation, and the closeness of numerous attractions. In other words: building a Big Thing won’t guarantee a sustained increase in tourism to the area on its own.

Communities should factor this in when considering erection of a Big Thing, especially given the cost of construction.

The Big Mango in Bowen reportedly cost $A90,000 when it was built in 2002, while the organisers of the Big Tractor in Carnamah raised more than $600,000 to cover its price tag.

The spread of social media and easy access to media outlets via the internet offer communities another reason to build Big Things, however.

Australians are not the only ones fascinated by Big Things, and when a new one is unveiled — or an existing one goes “missing”, as the Big Mango did in 2014 — it is often covered by the press and then shared online.

These giant landmarks are also highly “Instagrammable”: a 2015 survey revealed that six of Australia’s 20 most Instagrammed tourist attractions were Big Things.

This sort of coverage doesn’t necessarily guarantee the long-term revival of a town’s economy.

But it can help to remind people of the town’s existence, and it gives locals a memorable image on which to build.The Conversation

Amy Clarke, Senior Lecturer in History, specialising in built heritage and material culture, University of the Sunshine Coast

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why sending a belated gift is not as bad as you probably think − and late is better than never

If finding the right present and making sure the recipient gets it on time leaves you feeling anxious, you’re not alone. More than half of Americans say that gift-giving stresses them out.

Concerns about on-time delivery are so common that people share holiday deadlines for each shipping service. And in the event that you can’t meet these deadlines, there are now handy etiquette guides offering advice for how to inform the recipient.

If you’ve sent late gifts thanks to shipping delays, depleted stocks or even good old-fashioned procrastination, our new research may offer some welcome news.

In a series of studies that will soon be published in the Journal of Consumer Psychology, we found that people overestimate the negative consequences of sending a late gift.

Trying to follow norms

Why do people tend to overestimate these consequences? Our findings indicate that when people give presents, they pay more attention to norms about gifting than the recipients do.

For example, other researchers have found that people tend to be reluctant to give used products as presents because there’s a norm that gifts should be new. In reality, though, many people are often open to receiving used stuff.

We found that this mismatch also applies to beliefs about the importance of timing. Many people worry that a late gift will signal that they don’t care about the recipient. They then fear their relationship will suffer.

In reality, though, these fears are largely unfounded. Gift recipients are much less worried about when the gift arrives.

Unfortunately, aside from causing unnecessary worry, being overly sensitive about giving a late present can also influence the gift you choose to buy.

A U.S. Postal Service worker places packages on a parcel sorting machine on Dec. 12, 2022. Alejandra Villa Loarca/Newsday RM via Getty Images

Compensating for lateness

To test how lateness concerns affect gift choice, we conducted an online study before Mother’s Day in 2021. We had 201 adults participate in a raffle. They could choose to send their mother either a cheaper gift basket that would arrive in time for the occasion or a more expensive one that would arrive late.

Concerns about lateness led nearly 70% of the participants to choose the less expensive and more prompt option.

In another study, we conducted the same kind of raffle for Father’s Day and got similar results.

Aside from finding that people will choose inferior items to ensure speedier delivery, we also found that givers may feel that they can compensate for lateness with effort.

In another online study of 805 adults, we discovered that participants were less likely to expect a late delivery to damage a relationship if they signaled their care for the recipient in a different way. For example, they believed that putting an item together by hand, versus purchasing it preassembled, could compensate for a present being belated.

Better late than never?

If sending something late isn’t as bad as expected, you may wonder whether it’s OK to simply not send anything at all.

We’d caution against going that route.

In another online study of 903 participants, we found that recipients believed that not receiving anything at all was more likely to harm a relationship than receiving something as much as two months late.

That is, late is better than never as far as those receiving gifts are concerned.

You may want to keep that in mind, even if that new gaming console, action figure or virtual reality headset is sold out this holiday season. It could still be a welcome surprise if it arrives in January or February.The Conversation

Rebecca Walker Reczek, Professor of Marketing, The Ohio State University; Cory Haltman, Ph.D. Candidate in Marketing, The Ohio State University, and Grant Donnelly, Assistant Professor of Marketing, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Here are 5 of the most frustrating health insurer tactics — and why they exist

The U.S. has made great progress in getting more people insured since the Affordable Care Act took effect in 2014. The share of uninsured Americans ages 18 to 64 fell from 18% before the ACA to 9.5% in 2022. And preexisting conditions no longer prevent coverage or lead to an increase in premiums.

Yet even for those with health insurance, coverage does not ensure access to care, much less high-quality and affordable care. Research shows that 1 in 3 Americans seeking care report delaying or forgoing treatment because of the “administrative burdens” of dealing with health insurance and the health care system, creating additional barriers beyond costs.

Some of these are basic tasks, such as scheduling appointments. But others relate to strategies that health insurers use to shape the care that their patients are able to receive – tactics that are often unpopular with both doctors and patients.

In addition, more than 40% of Americans under 65 have high-deductible plans, meaning patients face significant upfront costs to using care. As a result, nearly a quarter are unable to afford care despite being insured.

As scholars of health care quality and policy, we study how the affordability and design of health insurance affects people’s health as well as their out-of-pocket costs.

We’d like to unpack five of the most common strategies used by health insurers to ensure that care is medically necessary, cost-effective or both.

At best, these practices help ensure appropriate care is delivered at the lowest possible cost. At worst, these practices are overly burdensome and can be counterproductive, depriving insured patients of the care they need.

Claim denials

The strategy of denial of claims has gotten a lot of attention in the aftermath of the killing of UnitedHealthcare chief executive officer Brian Thompson, partly because the insurer has higher rates of denials than its peers. Overall, nearly 20% of Americans with coverage through health insurance marketplaces created by the ACA had a claim denied in 2021.

While denial may be warranted in some cases, such as if a particular service isn’t covered by that plan – amounting to 14% of in-network claim denials – more than three-quarters of denials in 2021 did not list a specific reason. This happens after the service has already taken place, meaning that patients are sent a bill for the full amount when claims are denied.

Although the ACA required standardized processes for appealing claims, patients don’t often understand or feel comfortable navigating an appeal. Even if you understand the process, navigating all of the paperwork and logistics of an appeal is time-consuming. Gaps by income and race in pursuing and winning appeals only deepen mistrust among those already struggling to get appropriate care and make ends meet.

Middle-aged couple sits on couch with bills and planner in front of them, a laptop in the foreground.
Patients receive a bill for the full amount after a claim is denied. Ridofranz/iStock via Getty Images Plus

Prior authorization

Prior authorization requires providers to get approval in advance from the insurer before delivering a procedure or medication – under the guise of “medical necessity” as well as improving efficiency and quality of care.

Although being judicious with high-cost procedures and drugs makes intuitive sense, in practice these policies can lead to delays in care or even death.

In addition, the growing use of artificial intelligence in recent years to streamline prior authorization has come under scrutiny. This includes a 2023 class action lawsuit filed against UnitedHealthcare for algorithmic denials of rehabilitative care, which prompted the federal government to issue new guidelines.

The American Medical Association found that 95% of physicians report that dealing with prior authorization “somewhat” or “significantly” increases physician burnout, and over 90% believe that the requirement negatively affects patients. The physicians surveyed by the association also reported that over 75% of patients “often” or “sometimes” failed to follow through on recommended care due to challenges with prior authorizations.

Doctors and their staff may deal with dozens of prior authorization requests per week on average, which take time and attention away from patient care. For example, there were nearly two prior-authorization requests per Medicare Advantage enrollee in 2022, or more than 46 million in total.

Prior authorization can be a time-consuming, multistep process that slows down and often blocks patients from receiving care.

Smaller networks

Health insurance plans contract with physicians and hospitals to form their networks, with the ACA requiring them to “ensure a sufficient choice of providers.”

If a plan has too small of a network, patients can have a hard time finding a doctor who takes their insurance, or they may have to wait longer for an appointment.

Despite state oversight and regulation, the breadth of plan networks has significantly narrowed over time. Nearly 15% of HealthCare.gov plans had no in-network physicians for at least one of nine major specialties, and over 15% of physicians listed in Medicaid managed-care provider directories saw no Medicaid patients. Inaccurate provider directories amplify the problem, since patients may choose a plan based on bad information and then have trouble finding care.

Surprise billing

The No Surprises Act went into effect in 2022 to protect consumers against unexpected bills from care received out of network. Out-of-network care usually comes with a higher deductible, an out-of-pocket maximum that is typically twice as high as for in-network care, and higher coinsurance rates.

Prior to that law, 18% of emergency visits and 16% of in-network hospital stays led to at least one surprise bill.

While the No Surprises Act has helped address some problems, a notable gap is that it does not apply to ambulance services. Nearly 30% of emergency transports and 26% of nonemergency transports may have resulted in a surprise bill between 2014 and 2017.

Pharmacy benefit managers

The largest health insurance companies all have their own pharmacy benefit managers.

Three of them – Aetna’s CVS Caremark, Cigna’s Express Scripts and UnitedHealthcare’s Optum Rx – processed almost 80% of the total prescriptions dispensed by U.S. pharmacies in 2023.

Beyond how market concentration affects competition and prices, insurers’ owning pharmacy benefit managers exploits a loophole in how much insurers are required to spend on patient care.

The ACA requires insurers to maintain a medical loss ratio of 80% to 85%, meaning they should spend 80 to 85 cents of every dollar of premiums for medical care. Pharmaceuticals account for a growing share of health care spending, and plans are able to keep that money within the parent company through the pharmacy benefit managers that they own.

Moreover, pharmacy benefit managers inflate drug costs to overpay their own vertically integrated pharmacies, which in turn means higher out-of-pocket costs based on the inflated prices. Most pharmacy benefit managers also prevent drug manufacturer co-pay assistance programs from counting toward patients’ cost sharing, such as deductibles, which prolongs how long patients have to pay out of pocket.
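To make the loss-ratio arithmetic concrete, here is a minimal, hypothetical sketch in Python of how routing drug spending through an insurer-owned pharmacy benefit manager can satisfy the 80% to 85% requirement while keeping part of each premium dollar inside the parent company. All figures, and the simplified calculation itself, are invented for illustration; real medical loss ratio reporting involves adjustments that are omitted here.

```python
# Hypothetical illustration only: numbers are invented, and real MLR
# calculations include adjustments (quality-improvement spending, taxes,
# credibility corrections) that are left out of this sketch.

def medical_loss_ratio(premiums: float, care_spending: float) -> float:
    """Share of premium revenue reported as spending on medical care."""
    return care_spending / premiums

premiums = 1_000_000_000             # premiums collected by the insurer
hospital_and_physician_claims = 600_000_000
payments_to_owned_pbm = 250_000_000  # what the insurer pays its own PBM for drugs
pbm_actual_drug_cost = 200_000_000   # what that PBM actually pays out for the drugs

# From a reporting standpoint, the full PBM payment counts as care spending.
mlr = medical_loss_ratio(premiums, hospital_and_physician_claims + payments_to_owned_pbm)
print(f"Reported medical loss ratio: {mlr:.0%}")  # 85% -- meets the requirement

# The spread between the PBM payment and the underlying drug cost
# stays within the corporate family rather than going to outside providers.
retained = payments_to_owned_pbm - pbm_actual_drug_cost
print(f"Retained inside the parent company: ${retained:,}")  # $50,000,000
```

In this toy example the insurer clears the 85% bar even though $50 million of the reported “care” spending never leaves the corporate family – the dynamic described above.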

Policy goals versus reality

Despite how far the U.S. has come in making sure most Americans have access to affordable health insurance, being insured increasingly isn’t enough to guarantee that people can get the care and medications they need.

The industry reports that profit margins are only 3% to 6%, yet the billions of dollars in profits they earn every year may feel to many like a direct result of the day-to-day struggles that patients face getting the care they need.

These insurer tactics can adversely affect patients’ health and their trust in the health care system, leaving patients in unthinkably difficult circumstances. They also undercut the government’s goal of bringing affordable health care to all.

Monica S. Aswani, Assistant Professor of Health Services Administration, University of Alabama at Birmingham and Paul Shafer, Assistant Professor of Health Law, Policy and Management, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How the government can stop ‘churches’ from getting treated like real churches by the IRS

The Family Research Council is a conservative advocacy group with a “biblical worldview.” While it has a church ministries department that works with churches from several evangelical Christian denominations that share its perspectives, it does not represent a single denomination. Although its activities are primarily focused on policy, advocacy, government lobbying and public communication, the Internal Revenue Service granted the council’s application to be treated as “an association of churches” in 2020.

Concerned that the IRS had erred in allowing the council and similar groups to be designated churches or associations of churches, Democratic members of the House of Representatives sent the Treasury secretary and the IRS commissioner letters in 2022 and 2024 expressing alarm. The House Democrats pointed to what appeared to be “abuse” of the tax code and asked the IRS to “determine whether existing guidance is sufficient to prevent abuse and what resources or Congressional actions are needed.”

As a professor of nonprofit law, I believe some groups that aren’t churches or associations of churches want to be designated that way to avoid the scrutiny being a charitable organization otherwise requires. At the same time, some other groups that should qualify as churches may have difficulty doing so because of the IRS’ outdated test for that status.

Together with my colleague Ellen P. Aprill, I recently published a paper outlining two main arguments in favor of revising the federal government’s definitions of churches as they pertain to tax law.

No 990s means less scrutiny

All charitable nonprofits, including churches, get the same basic benefits under federal tax law. This means they don’t have to pay taxes on their revenue and that donors can deduct the value of their gifts from their taxable income – as long as they itemize deductions on their tax return.

Unlike other tax-exempt charities, churches don’t have to file 990 forms. That means the public does not have access to churches’ staff pay, board membership and funding details, which are in this publicly available tax form that all other charities must complete every year. The availability of 990 forms enhances the transparency and accountability of the nonprofit sector.

And churches and associations of churches are unlikely to get audited by the IRS. Federal law requires that a senior IRS official “reasonably believes” the church or association has violated federal tax rules before beginning an investigation. This means that an official must have reason to believe the organization has violated federal tax law before obtaining any information from the organization.

This standard is higher than what’s needed before an audit can begin for all other tax-exempt organizations and indeed all taxpayers. For everyone else, the IRS is free to begin an examination based only on a suspicion of a violation or even based on random selection.

Also, unlike other tax-exempt charities, churches and church associations are automatically eligible for their tax-exempt status. They don’t have to apply for it.

Why churches get special treatment

Congress has passed laws granting churches and what it calls “integrated auxiliaries” and “conventions or associations of churches” special protections because the First Amendment to the U.S. Constitution protects religious freedom.

Churches include houses of worship ranging in size from a handful of parishioners to megachurches with 10,000 or more people attending weekly services. Houses of worship of all faiths, including synagogues, mosques and temples, count as churches, according to the IRS.

Integrated auxiliaries are church schools and other organizations affiliated with churches or conventions and primarily supported by internal church sources, as opposed to by the public or government.

Conventions or associations of churches are organizations that have houses of worship from either a single denomination or from multiple denominations as their members. Most denominational bodies, such as the executive committee of the Southern Baptist Convention and the U.S. Conference of Catholic Bishops, are likely conventions or associations of churches, although the IRS does not publish a list of such entities.

Not every religious nonprofit belongs in one of these categories.

For example, the University of Notre Dame, where I teach law students and conduct legal research, and World Vision, a global humanitarian group, are both religious organizations that do not fall into any of these categories. This makes sense, because Notre Dame and World Vision are primarily engaged in activities other than fostering a religious congregation or coordinating the activities of churches within a single denomination.

The IRS has long relied on a 14-factor test to distinguish churches from the other religious nonprofits. Examples of those factors include having ordained ministers, a formal doctrine, a distinct membership and a regular congregation attending religious services.

It’s not necessary for all the factors to apply to pass this test.

Yet for almost as long, courts have been uncomfortable with this test because it draws heavily on the traditional characteristics of Protestant Christian churches, as the U.S. Court of Federal Claims explained in a 2009 ruling. This system therefore may be a poor fit for houses of worship of other faiths, especially given the increasing diversity of faith communities.

These courts have instead adopted an “associational test.” It focuses on whether the organization’s congregants hold religious services on a regular basis and gather in person on other occasions.

With the growth of virtual and televised religious services, an update of this test is overdue.

An older couple gets married over Zoom in a mostly empty church with people wearing masks. A couple get married in May 2020 in a mostly empty church, with a screen set up so guests can watch over Zoom. Andrew Caballero-Reynolds/AFP via Getty Images

Proposed solutions

Aprill and I recommend that the IRS change its definition for churches to the associational one adopted by some courts in rulings as early as 1980. As the U.S. Court of Federal Claims explained in that 2009 ruling, this test focuses on whether a body of believers assembles regularly to worship. Given technological advances, the IRS should also make it clear that this test can be satisfied through remote participation in religious services using interactive, teleconferencing apps such as Zoom.

This definition would also be better suited for congregations of all faiths, because some faiths do not prioritize many of the factors included in the IRS test, such as having a formal code of doctrine or requiring members not to be associated with other houses of worship or faiths. And it would better reflect how some Americans participate in religious services today.

We recommend that the IRS revisit its test for being a church and that Congress pass a law that would change the definition of church associations. The new law could limit associations of churches to organizations that represent a single denomination, as Congress likely initially intended.

This latter change would make it harder for religious organizations that primarily bring churches from multiple faiths together for advocacy or other activities to obtain this status – and the reduced transparency and accountability that come with it. We believe Congress, not the IRS, should make this change because of the potential political tensions that narrowing the definition could create.

We don’t think the changes would impinge upon the special role that churches have in our society. Indeed, the revised test for qualifying as a church would better fit with both the increasing variety of faiths in our country and technological advancements.

Lloyd Hitoshi Mayer, Professor of Law, University of Notre Dame

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Retailers that make it harder to return stuff face backlash from their customers

In 2018, L.L. Bean ended its century-old “lifetime” return policy, limiting returns to one year after purchase and requiring receipts. The demise of this popular policy sparked backlash, with several customers filing lawsuits.

It also inspired my team of operations management researchers to study how customers respond when retailers make their return policies more strict. Our key finding: Whether they often or rarely return products they’ve purchased, consumers object – unless those retailers explain why.

I work with a group of researchers examining product return policies and how they affect consumers and retailers.

As we explained in an article published in the Journal of Operations Management, we designed experiments to study whether and why return policy restrictions irk customers. We also wanted to understand what retailers can do to minimize backlash after making it harder for customers to return stuff.

We conducted three experiments in which we presented scenarios to 1,500 U.S. consumers who played the role of loyal customers of a fictional retailer. We examined their reactions to the fictional retailer’s return policy restrictions, such as charging a 15% restocking fee and limiting open-ended return windows to 365, 180 and 30 days.

Participants became less willing to buy anything from the fictional retailer after it restricted its long-standing lenient return policy. They also said they would become less willing to recommend the retailer to others.

This occurred because the customers began to distrust the retailer and its ability to offer a high-quality service. The backlash was stronger when the restriction was more severe. Even consumers who said they rarely return products reacted negatively.

When the fictional retailer announced its new, harsher return policy using official communication channels and provided a rationale, there was less backlash. Consumers found the changes more justified if the retailer highlighted increased “return abuse,” in which customers return products they’ve already used, or the high cost of processing returns.

You might presume that making it harder and more costly to return stuff could drive some shoppers away. Our research shows that the concern is valid and explains why. It also shows how communicating return policy changes directly with customers can help prevent or reduce backlash against retailers.

A big department store decked out for the holiday season in red and white colors. Customers visit Macy’s department store on Nov. 29, 2024, in Chicago for holiday shopping. Kamil Krzaczynski/Getty Images

Why it matters

Americans returned products worth an estimated US$890 billion to retailers in 2024. Processing a single item typically costs $21 to $46. Most of this merchandise ends up in landfills.

The rise of e-commerce and other technological changes have contributed to this trend. Another factor is the ease with which consumers may return stuff long after making a purchase and get a full refund.

Many other retailers besides L.L. Bean have done away with their long-standing lenient return policies. Over the past decade, for example, Macy’s, a department store chain, and Kohl’s, a big-box clothing store chain, have shortened the time frames for returns.

Macy’s restricted its open-ended return window to one year in 2016, further winnowed it to 180 days in 2017, then to 90 days in 2019. It then stopped accepting returns after 30 days in 2023. Kohl’s didn’t have any time limit on returns it would accept until 2019. Then it imposed a 180-day limit. Others, such as fast-fashion giants Zara and H&M, now charge their customers fees when they return merchandise.

However, research shows that customers value no-questions-asked return policies and see them as a sign of high-quality service. And when these arrangements become the industry standard, customers can get angry if retailers fail to meet it.

Interestingly, most retailers that restricted their policies didn’t tell customers directly. Instead, they quietly updated the new policies on websites, store displays and receipts. Although not drawing attention to bad news might appear prudent – as most customers wouldn’t notice the changes that way – dozens of threads on Reddit about these changes suggest that this isn’t always true.

What still isn’t known

We focused on restrictions on refunds and how long after a purchase customers could return merchandise. Other restrictions, such as retailers making heavily discounted items ineligible for returns, could also be worth investigating.

The Research Brief is a short take about interesting academic work.

Huseyn Abdulla, Assistant Professor of Supply Chain Management, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Here’s what we've learned in the 20 years since the deadliest natural disaster in modern history

On Boxing Day 2004, an earthquake in the Indian Ocean near Indonesia set off a tsunami which killed almost 250,000 people. It was the deadliest natural disaster this century, and was probably the deadliest tsunami in human history.

As coastal engineers who specialise in tsunamis and how to prepare for them, we have seen how the events of 2004 reshaped our global disaster management systems. Among the lessons learned since that day, three themes stand out.

First, the importance of early warning systems, providing time to escape impact zones. Second, the importance of local preparations and educating people about the risks. Finally, the ongoing need for – but not overreliance on – coastal defences.

The evolution of early warning systems

The absence of a comprehensive early warning system contributed to the devastating loss of life in 2004. About 35,000 people died in Sri Lanka, for instance, which wasn’t hit until two hours after the earthquake.

Significant investment has been made in the years since, including the Indian Ocean tsunami warning system which operates across 27 member states. This system was able to issue warnings within eight minutes when another earthquake struck the same part of Indonesia in 2012. Similarly, when an earthquake hit Noto, Japan, in January 2024, swiftly issued tsunami warnings and evacuation orders undoubtedly saved lives.

However, these systems are not in use globally and weren’t able to detect the tsunamis that swept the Tongan islands in 2022 following the eruption of an undersea volcano in the South Pacific. In this instance, better monitoring of the volcano would have helped detect the early signs of a tsunami.

Championing community resilience

But early warning systems alone are not enough. We still need education and awareness campaigns, evacuation drills, and disaster response plans.

This sort of planning proved effective in the village of Jike, Japan, which was hit by the Noto tsunami in January 2024. Having learned from a major tsunami in 2011 (the one that hit Fukushima nuclear power plant), engineers constructed new evacuation routes to tsunami shelters. Though the village was destroyed, residents evacuated up a steep stairway and no casualties were reported in Jike.

Photos of harbour and stairs Left: The coastline near Jike, Japan. Right: The lifesaving evacuation route to the top of the hill behind Jike. Tomoya Shibayama

The role of engineering defences

In the years since the Boxing Day tsunami, countries at risk have invested in “hard” engineering defences including seawalls, offshore breakwaters and flood levees. While these structures offer a measure of protection, their effectiveness is limited.

In Japan, the idea that hard measures can protect against the loss of life has been discarded, with the view that large-scale tsunamis can overwhelm even the most robust defences. For instance, in 2011, even a rubble breakwater followed by a five-metre-high wall could not protect the city of Watari. The tsunami covered half the city and hundreds of people died.

Tsunamis in the past decade or two have exposed vulnerabilities in existing protection strategies, with our field surveys showing breakwaters and other structures having suffered severe damage. While complete failure is expected in the face of extreme events, it’s crucial that certain critical infrastructure, such as power plants, are designed to withstand the biggest tsunamis. This requires further research into resilient engineering designs that may be able to partially fail but remain functional.

Man measures house Measuring inundation depth at a house damaged by the 2004 tsunami, during the authors’ field survey in Polhena, Sri Lanka. Ravindra Jayaratne

After the 2011 tsunami, Japanese engineers created two tsunami measurement levels. Level one tsunamis are more frequent, occurring perhaps once every century, but less dangerous.

Level two tsunamis are the big ones that any given bit of coastline might expect only once every thousand or so years: Indian Ocean 2004, Japan 2011. It is these tsunamis that critical infrastructure like power plants must prepare for. Nothing will entirely hold back a 2004-sized tsunami, but the goal is for structures to overflow without being destroyed. They should still be able to assist the evacuation process by reducing the tsunami’s height and delaying how quickly the water reaches people, buying extra time to escape.

Sand and water machine in a lab. In the labs, the authors work on modelling how seawalls will respond to a tsunami. Ravindra Jayaratne

Despite evolving views on hard defences, there remains value in building and planning coastal urban areas in more sustainable and responsible ways. In particular, critical infrastructure and densely populated areas in tsunami-threatened regions should be built on higher ground where possible.

Engineering advancements must also account for environmental consequences, including damage to ecosystems and disruption of natural coastal processes, with consideration given to nature-based solutions. Strengthening coral reefs with rock armour or heavy sandbags, and planting coastal forests as buffer zones may be a cheaper and more ecologically sensitive option than building high walls.

Climate change and the road ahead

The progress is undeniable. However, tsunami and earthquake data still isn’t shared widely around the world, and local authorities and experts often don’t communicate the risk to residents of flood-prone communities. The passage of time can erode the memory of best practice when it comes to people’s disaster preparedness.

Added to that, rapid climate change is making sea levels rise and extreme weather, such as storms, more frequent. This doesn’t cause more tsunamis, but it can make them worse, and it does make “hard” defences less sustainable in the long term.

While significant and urgent challenges remain, they are not insurmountable. By continuing to learn more about tsunamis and to prepare for the worst, we can minimise their impact and protect millions of lives.


The myths of Ayn Rand pose real threats to democracy

Coinbase's plan to go public last April highlights a troubling trend among tech companies: Its founding team will maintain voting control, making it mostly immune to the wishes of outside investors.

The best-known U.S. cryptocurrency exchange is doing this by creating two classes of shares. One class will be available to the public. The other is reserved for the founders, insiders and early investors, and will wield 20 times the voting power of regular shares. That will ensure that after all is said and done, the insiders will control 53.5% of the votes.

Coinbase will join dozens of other publicly traded tech companies – many with household names such as Google, Facebook, Doordash, Airbnb and Slack – that have issued two types of shares in an effort to retain control for founders and insiders. The reason this is becoming increasingly popular has a lot to do with Ayn Rand, one of Silicon Valley's favorite authors, and the “myth of the founder” her writings have helped inspire.

Engaged investors and governance experts like me generally loathe dual-class shares because they undermine executive accountability by making it harder to rein in a wayward CEO. I first stumbled upon this method executives use to limit the influence of pesky outsiders while working on my doctoral dissertation on hostile takeovers in the late 1980s.

But the risks of this trend are greater than simply entrenching bad management. Today, given the role tech companies play in virtually every corner of American life, it poses a threat to democracy as well.

All in the family

Dual-class voting structures have been around for decades.

When Ford Motor Co. went public in 1956, its founding family used the arrangement to maintain 40% of the voting rights. Newspaper companies like The New York Times and The Washington Post often use the arrangement to protect their journalistic independence from Wall Street's insatiable demands for profitability.

In a typical dual-class structure, the company will sell one class of shares to the public, usually called class A shares, while founders, executives and others retain class B shares with enough voting power to maintain majority voting control. This allows the class B shareholders to determine the outcome of matters that come up for a shareholder vote, such as who is on the company's board.
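To make the voting arithmetic concrete, here is a minimal sketch in Python of how a relatively small block of super-voting class B shares can carry majority control. The share counts are hypothetical, and the 20-votes-per-share multiplier simply mirrors the kind of structure described above.

```python
# Hypothetical illustration of dual-class voting power.
# Share counts are invented; 20 votes per class B share mirrors the kind of
# super-voting structure described in the article.

CLASS_A_VOTES_PER_SHARE = 1
CLASS_B_VOTES_PER_SHARE = 20

class_a_shares = 175_000_000   # sold to the public
class_b_shares = 10_000_000    # held by founders, executives and early investors

total_shares = class_a_shares + class_b_shares
total_votes = (class_a_shares * CLASS_A_VOTES_PER_SHARE
               + class_b_shares * CLASS_B_VOTES_PER_SHARE)

insider_equity = class_b_shares / total_shares
insider_votes = class_b_shares * CLASS_B_VOTES_PER_SHARE / total_votes

print(f"Insider economic stake: {insider_equity:.1%}")  # ~5.4% of all shares
print(f"Insider voting power:   {insider_votes:.1%}")   # ~53.3% of all votes
```

In this toy example, insiders hold barely 5% of the equity yet decide every matter put to a shareholder vote – the imbalance that critics of these structures object to.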

Advocates see a dual-class structure as a way to fend off short-term thinking. In principle, this insulation from investor pressure can allow the company to take a long-term perspective and make tough strategic changes even at the expense of short-term share price declines. Family-controlled businesses often view it as a way to preserve their legacy, which is why Ford remains a family company after more than a century.

It also makes a company effectively immune from hostile takeovers and the whims of activist investors.

Checks and balances

But this insulation comes at a cost for investors, who lose a crucial check on management.

Indeed, dual-class shares essentially short-circuit almost all the other means that limit executive power. The board of directors, elected by shareholder vote, is the ultimate authority within the corporation that oversees management. Voting for directors and proposals on the annual ballot are the main methods shareholders have to ensure management accountability, other than simply selling their shares.

Recent research shows that the value and stock returns of dual-class companies are lower than other businesses, and they're more likely to overpay their CEO and waste money on expensive acquisitions.

Companies with dual-class shares rarely made up more than 10% of public listings in a given year until the 2000s, when tech startups began using them more frequently, according to data collected by University of Florida business professor Jay Ritter. The dam began to break after Facebook went public in 2012 with a dual-class stock structure that kept founder Mark Zuckerberg firmly in control – he alone controls almost 60% of the company.

In 2020, over 40% of tech companies that went public did so with two or more classes of shares with unequal voting rights.

This has alarmed governance experts, some investors and legal scholars.

Ayn Rand and the myth of the superhuman founder

If the dual-class structure is bad for investors, then why are so many tech companies able to convince them to buy their shares when they go public?

I attribute it to Silicon Valley's mythology of the founder – what I would dub an “Ayn Rand theory of corporate governance” that credits founders with superhuman vision and competence that merit deference from lesser mortals. Rand's novels, most notably “Atlas Shrugged,” portray an America in which titans of business hold up the world by creating innovation and value but are beset by moochers and looters who want to take or regulate what they have created.

Perhaps unsurprisingly, Rand has a strong following among tech founders, whose creative genius may be “threatened” by any form of outside regulation. Elon Musk, Coinbase founder Brian Armstrong and even the late Steve Jobs all have recommended “Atlas Shrugged.”

Her work is also celebrated by the venture capitalists who typically finance tech startups – many of whom were founders themselves.

The basic idea is simple: Only the founder has the vision, charisma and smarts to steer the company forward.

It begins with a powerful founding story. Michael Dell and Zuckerberg created their multibillion-dollar companies in their dorm rooms. Founding partner pairs Steve Jobs and Steve Wozniak and Bill Hewlett and David Packard built their first computer companies in the garage – Apple and Hewlett-Packard, respectively. Often the stories are true, but sometimes, as in Apple's case, less so.

And from there, founders face a gantlet of rigorous testing: recruiting collaborators, gathering customers and, perhaps most importantly, attracting multiple rounds of funding from venture capitalists. Each round serves to further validate the founder's leadership competence.

The Founders Fund, a venture capital firm that has backed dozens of tech companies, including Airbnb, Palantir and Lyft, is one of the biggest proselytizers for this myth, as it makes clear in its “manifesto.”

“The entrepreneurs who make it have a near-messianic attitude and believe their company is essential to making the world a better place,” it asserts. True to its stated belief, the fund says it has “never removed a single founder,” which is why it has been a big supporter of dual-class share structures.

Another venture capitalist who seems to favor giving founders extra power is Netscape founder Marc Andreessen. His venture capital firm Andreessen Horowitz is Coinbase's biggest investor. And most of the companies in its portfolio that have gone public also used a dual-class share structure, according to my own review of their securities filings.

Bad for companies, bad for democracy

Giving founders voting control disrupts the checks and balances needed to keep business accountable and can lead to big problems.

WeWork founder Adam Neumann, for example, demanded “unambiguous authority to fire or overrule any director or employee.” As his behavior became increasingly erratic, the company hemorrhaged cash in the lead-up to its ultimately canceled initial public offering.

Investors forced out Uber's Travis Kalanick in 2017, but not before he allegedly created a workplace culture that allowed sexual harassment and discrimination to fester. When Uber finally went public in 2019, it shed its dual-class structure.

There is some evidence that founder-CEOs are less gifted at management than other kinds of leaders, and their companies' performance can suffer as a consequence.

But investors who buy shares in these companies know the risks going in. There's much more at stake than their money.

What happens when powerful, unconstrained founders control the most powerful companies in the world?

The tech sector is increasingly laying claim to central command posts of the U.S. economy. Americans' access to news and information, financial services, social networks and even groceries is mediated by a handful of companies controlled by a handful of people.

Recall that in the wake of the Jan. 6 Capitol insurrection, the CEOs of Facebook and Twitter were able to eject former President Donald Trump from his favorite means of communication – virtually silencing him overnight. And Apple, Google and Amazon cut off Parler, the right-wing social media platform used by some of the insurrectionists to plan their actions. Not all of these companies have dual-class shares, but this illustrates just how much power tech companies have over America's political discourse.

One does not have to disagree with their decision to see that a form of political power is becoming increasingly concentrated in the hands of companies with limited outside oversight.


Jerry Davis, Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford and Professor of Management and Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

More than 60 years later, Langston Hughes’ ‘Black Nativity’ is still a pillar of African American theater

Toward the end of every calendar year, a particular holiday performance pops up in African American communities and cultural centers across the nation. “Black Nativity” is a cherished cultural tradition to some and completely unknown to others.

One wonderful yet confounding thing about this show is that depending on where you see it, you will see significantly different productions – from Intiman Theatre in Seattle to Penumbra Theatre in St. Paul or the National Center of Afro-American Artists in Boston.

This might seem counterintuitive, but it is exactly what was intended by the author: Langston Hughes.

A man in a long, bright blue tunic poses elegantly on stage against an orange backdrop. The poet’s Christmas play was first produced in 1961. Charles A. Smith/Jackson State University/Historically Black Colleges & Universities via Getty Images

1 artist, 2 movements

Hughes, a noted although still underappreciated writer, is often associated with the Harlem Renaissance just after World War I, which spurred the growth of jazz. This era – when he penned some of his most famous poems, such as “The Negro Speaks of Rivers” – was the first African American arts movement since Emancipation.

But Hughes is one of a handful of artists whose work spanned both the Harlem Renaissance and the Black Arts Movement of the 1960s and ’70s, which partnered with the modern Civil Rights Movement. In 1961, when Hughes created “Black Nativity,” the Black Arts Movement was still in its infancy, but its early ethos was in the air.

Back in the 1920s, civil rights leader W.E.B. DuBois developed the Krigwa Players, a group that originated in Harlem but had satellite organizations in Cleveland, Baltimore, Philadelphia and Washington, D.C. The objectives of the Krigwa Players, published in the NAACP’s Crisis magazine, were that African American communities create art “for us,” “by us,” “about us” and “near us.” As Black consciousness grew and evolved in the 1960s, however, Black artists wanted to go beyond those criteria. They wanted to place African American life in all corners of existence, including by taking ideas that had been imposed on Black culture and transforming them to empower Black people.

Hughes’ desire to write “Black Nativity” was his attempt to reclaim the story of Jesus’ birth for African Americans – to show the son of God, the ultimate salvation, emerging from the Black community. American depictions of Jesus were almost always white, with just a few exceptions. Hughes’ play, on the other hand, called for an entirely Black cast, including the mother and father of Jesus.

A woman in a white dress and black stockings dances among a crowd of people sitting outside in a city plaza. Dancer Cristyne Lawson, who performed in a London production of ‘Black Nativity’ in 1962. Daily Express/Pictorial Parade/Archive Photos/Getty Images

Freedom and flexibility

Moving people from the margins of a story to the center can prompt artists to find more creative forms. What Hughes developed was less like a simple, straightforward narrative and more like jazz, with improvisation at its center.

The playwright wanted to make a production with elasticity: a ritual with a basic frame, but plenty of flexibility. Hughes started to experiment with this ritual form in his 1938 play “Don’t You Want to Be Free?” performed by the Harlem Suitcase Theater. The play used African American history as a frame, calling to unite poor Black and white people to fight the exploitation of the rich.

“Black Nativity,” originally titled “Wasn’t That a Mighty Day,” is rooted in gospel music. The 27 songs in the original text serve as a sonic framing tool. It was to have a large choir – 160 singers strong, in the first production – as well as a narrator, and two dancers to embody Mary and Joseph. The script calls for “no set (only a platform of various levels,) a star and a place for a manger.”

A dozen performers in white robes stand with arms outstretched around a couple sitting on the floor of an empty stage. A production of ‘Black Nativity’ in Rotterdam, the Netherlands, in 1962. Eric Koch/Dutch National Archives via Wikimedia Commons

Hughes was an appreciator of modern dance and enlisted two of the best to hold the roles of Joseph and Mary: Alvin Ailey and Carmen de Lavallade. Yes, that Alvin Ailey, who went on to found one of the country’s most famous dance ensembles.

By all accounts, the dances that Ailey and de Lavallade constructed were brilliant – but were never seen by the public. The pair quit the show at the last minute and were replaced by new dancers who could not use their choreography.

My former professor, the late George Houston Bass, was once Hughes’ secretary. Bass told me that Ailey and de Lavallade left in dispute over the title, which Hughes wanted to change to “Black Nativity.”

Ailey and de Lavallade, however, thought that “Wasn’t That a Mighty Day” was more inclusive. The dancers felt the show told the story of Jesus, and there was no need to emphasize race – not entirely different from debates today. Should we emphasize that Barack Obama was a Black president, or a president who happened to be Black?

‘Black Nativity’ in the 21st century

I directed “Black Nativity” for Penumbra Theatre for a few years, starting in 2008, partnering with the Twin Cities’ TU Dance company.

Lou Bellamy, the founder and artistic director of the theater, told me there were audience members who came back every year. It was a tradition for many families originally from the Twin Cities to come from the four corners of the earth to see “Black Nativity” and visit their relatives – in that order of importance.

He went on to tell me that the audiences liked when we tweaked the show, but we had to keep the frame – including many of the gospel classics from the original, such as “Go Tell It On the Mountain.”

A man in a tank top lifts a woman in black pants and a black tank top as they rehearse a dance. Marion Willis, playing Joseph, and Karah Abiog, playing Mary, rehearse for a Penumbra Theatre Company production of ‘Black Nativity’ in 2000. David Brewster/Star Tribune via Getty Images

In the original text of the play, the narrator told the story of Joseph traveling to Bethlehem with his pregnant wife, because Emperor Caesar Augustus had required that everyone be taxed. It starts with Mary and Joseph looking for a room.

In my version, the interior narrative centers on an upper-middle-class Black family that is visited by a stranger who helps them find the true meaning of Christmas. Mary and Joseph are a truly extended part of the family who show up with the stranger and need a place for the holidays. They are not left to a manger, but brought into a home – prompting the audience to reexamine how to welcome the Lord into their homes and hearts.

Bellamy, the artistic director, also said that I had a responsibility to the theater’s bottom line. He pulled out a spreadsheet and showed me the proceeds of the previous year’s mounting. I believe Bellamy’s quote to me was, “I don’t care if you put the devil in the middle of it. We have to make this number.”

Financially, Hughes’ ritual play has become fuel for African American cultural institutions to maintain themselves. Penumbra, for example, has been an anchor in the Rondo neighborhood of St. Paul for almost half a century. “Black Nativity” is also an anchor for the National Center of Afro-American Artists, which has produced its own version since 1968; Karamu House in Cleveland; the Black Theatre Troupe in Phoenix; and many more.

The flexibility of this play and the resilience of these institutions are why “Black Nativity” is still here – and will stay for a long time. Hughes’ vision allows for African American theaters as old as Karamu House, the nation’s oldest, or the newest playhouse today to make their own “Joy to the World.”

Dominic Taylor, Acting Chair of Theater, School of Theater, Film and Television, University of California, Los Angeles

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Ten films that bend, stretch and play with time — from Citizen Kane to Memento

The festive season can have a strange effect on our perception of time. Days blur together, hours stretch or vanish, and a sense of timelessness sets in. So, what better period to enjoy films that help us to reflect on time itself?

From mind-bending narratives to meditative explorations on time’s passage, these films are perfect for losing yourself – and finding new perspectives on time.

1. Citizen Kane (1941)

Orson Welles’ cinematic masterpiece doesn’t just tell the story of publishing tycoon Charles Foster Kane, it fragments it. It begins with Kane’s death and enigmatic final word, “Rosebud”. The film then unfolds in flashbacks narrated by those who knew him as they seek to discover the word’s meaning.

Each perspective adds a layer to his life while challenging the idea of a singular truth. Welles uses time as a puzzle, showing how memory and perception overlap to shape our understanding of the past.

Citizen Kane trailer.

2. Memento (2000)

Christopher Nolan’s breakthrough film has a reverse chronological structure, intercut with black-and-white sequences moving forward in time. The story is told through a series of scenes that move backwards while the protagonist, Leonard Shelby (Guy Pearce), moves forward with no short-term memory.

The film opens with the end so we know what happens but we don’t know why or how we got there. Each scene ends where the previous scene began, creating a sense of disorientation that mirrors Leonard’s condition.

3. The Clock (2010)

Christian Marclay’s 24-hour video installation turns time itself into art. It includes a stunning montage of scenes from film and television that feature clocks, timepieces or people waiting. More than 12,000 clips are meticulously assembled to create an artwork that itself functions as a clock.

The film’s presentation is synchronised with the local time, resulting in the time shown in any scene being the actual time. This makes viewers acutely aware of time’s passage while simultaneously losing themselves in a hypnotic stream of cinematic moments.

Cinematic and actual time run parallel in a 24-hour montage in The Clock.

4. High Noon (1952)

This landmark Western film collapses real time with screen time. Marshal Will Kane (Gary Cooper) is preparing to retire and leave town with his new wife, Amy (Grace Kelly). But he receives news that Frank Miller, a criminal he sent to prison, has been pardoned and is arriving on the noon train seeking revenge.

Despite pleas from his wife and townspeople to flee, Kane decides to stay and face Miller and his gang. He then finds himself increasingly isolated as the town abandons him. The film unfolds in approximate real time (85 minutes) between 10.40am and noon.

5. The Killing (1956)

Stanley Kubrick’s non-linear “one-last job” heist movie fragments time to brilliant effect. The narrative unfolds in a series of progressive flashbacks and even “flash sideways”, in which the actions and events are repeated from different characters’ points of view.

The studio hated it and asked him to cut it in a conventional fashion. But Kubrick abandoned the re-edit and returned the film to its original structure. As he told film critic Alexander Walker in 1971: “It was the handling of time that may have made this more than just a good crime film.”

The Killing’s official trailer from 1956.

6. Donnie Darko (2001)

This cult favourite merges teenage alienation and mental health with metaphysical time travel. Jake Gyllenhaal’s Donnie is haunted by visions and drawn into a “tangent universe” where time corrupts and loops back on itself. The film’s complex temporal structure involves parallel universes, predestination and sacrifice.

Its ambiguous ending leaves viewers debating whether Donnie’s actions were heroic sacrifice or delusion, making time itself an unreliable narrator.

7. Groundhog Day (1993)

Bill Murray’s cynical weatherman wakes up to the same day – again and again. As he relives February 2’s Groundhog Day in an endless loop, he is able to improve himself. He eventually evolves from selfishness and cynicism to empathy and kindness.

Interestingly, the film never explains why its protagonist relives the same day over and over again – it simply accepts the premise.

8. Run Lola Run (1998)

This German-language thriller tells the same story three times, each with a different outcome. It presents alternative scenarios of Lola’s (Franka Potente) attempt to save her boyfriend’s life.

The film explores chaos theory and the butterfly effect through kinetic storytelling, with tiny variations in Lola’s choices rippling into dramatically different futures. The film’s use of different media, including animation and still photography, for different temporal states adds visual sophistication to its exploration of chance and choice.

9. Arrival (2016)

Time is not linear, at least not for the alien visitors in Denis Villeneuve’s sci-fi drama. As linguist Louise Banks (Amy Adams) learns to decode their language, she begins to experience time as they do – all at once.

The “Heptapod” language requires understanding the entire sentence before beginning it. This serves as a metaphor for how we might experience time if we could see it all at the same time.

10. Back to the Future (1985)

Movie poster for Back to the Future. Marty McFly races through time. Ralf Liebhold/Shutterstock

Few films play with the concept of time as joyfully as Robert Zemeckis’s 1980s classic, and no list of this type would be complete without it. Marty McFly (Michael J. Fox) adventures between the 1980s and 1950s using a DeLorean car retrofitted as a time machine.

It explores time, space and consequence, as Marty races to ensure his teenage parents fall in love to restore the future. It also spawned two popular sequels.

All of these films remind us that time isn’t just a backdrop. It’s a force that shapes our lives, memories and stories. As you sink into the cosy limbo of the season, let these cinematic journeys through time inspire reflection on your own.

Nathan Abrams, Professor of Film Studies, Bangor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Which infectious disease is likely to be the biggest emerging problem in 2025?

COVID emerged suddenly, spread rapidly and killed millions of people around the world. Since then, I think it’s fair to say that most people have been nervous about the emergence of the next big infectious disease – be that a virus, bacterium, fungus or parasite.

With COVID in retreat (thanks to highly effective vaccines), the three infectious diseases causing public health officials the greatest concern are malaria (a parasite), HIV (a virus) and tuberculosis (a bacterium). Between them, they kill around 2 million people each year.

And then there are the watchlists of priority pathogens – especially those that have become resistant to the drugs usually used to treat them, such as antibiotics and antivirals.

Scientists must also constantly scan the horizon for the next potential problem. While this could come in any form of pathogen, certain groups are more likely than others to cause swift outbreaks, and that includes influenza viruses.

One influenza virus is causing great concern right now and is teetering on the edge of being a serious problem in 2025. This is influenza A subtype H5N1, sometimes referred to as “bird flu”. This virus is widespread in both wild and domestic birds, such as poultry. Recently, it has also been infecting dairy cattle in several US states and has been found in horses in Mongolia.

When influenza cases start increasing in animals such as birds, there is always a worry that the virus could jump to humans. Indeed, bird flu can infect humans: there have already been 61 cases in the US this year, mostly resulting from farm workers coming into contact with infected cattle and from people drinking raw milk.

Compared with only two cases in the Americas in the previous two years, this is quite a large increase. Coupled with a 30% mortality rate from human infections, this increase is pushing bird flu quickly up the list of public health officials’ priorities.

Luckily, H5N1 bird flu doesn’t seem to transmit from person to person, which greatly reduces its likelihood of causing a pandemic in humans. Influenza viruses have to attach to molecular structures called sialic acid receptors on the outside of cells in order to get inside and start replicating.

Flu viruses that are highly adapted to humans recognise these sialic acid receptors very well, making it easy for them to get inside our cells, which contributes to their spread between humans. Bird flu, on the other hand, is highly adapted to bird sialic acid receptors and has some mismatches when “binding” (attaching) to human ones. So, in its current form, H5N1 can’t easily spread in humans.

However, a recent study showed that a single mutation in the flu genome could make H5N1 adept at spreading from human to human, which could jump-start a pandemic.

If this strain of bird flu makes that switch and can start transmitting between humans, governments must act quickly to control the spread. Centres for disease control around the world have drawn up pandemic preparedness plans for bird flu and other diseases that are on the horizon.

For example, the UK has bought 5 million doses of H5 vaccine that can protect against bird flu, in preparation for that risk in 2025.

Even without the ability to spread between humans, bird flu is likely to affect animal health even more in 2025. This not only has large animal welfare implications but also the potential to disrupt food supplies and cause economic harm.

Cows on a dairy farm. Bird flu has been spreading in dairy herds in the US. BearFotos/Shutterstock

Everything is connected

This work all falls under the umbrella of “one health”: looking at human, animal and environmental health as interconnected entities, all with equal importance and effect on each other.

By understanding and preventing disease in our environment and the animals around us, we can better prepare and combat those diseases entering humans. Similarly, by surveying and disrupting infectious diseases in humans, we can protect our animals and the environment’s health too.

However, we must not forget about the continuing “slow pandemics” in humans, such as malaria, HIV, tuberculosis and other pathogens. Tackling them is paramount alongside scanning the horizon for any new diseases that might yet come.

Conor Meehan, Associate Professor of Microbial Bioinformatics, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Increased surveillance at the Canada-U.S. border means more asylum seekers could die

At a press conference on Dec. 17, the Canadian federal government announced proposed new measures to expand its management of Canada’s border with the United States. These measures were intended to appease the incoming Trump administration and to avoid a threatened 25 per cent import tariff.

The proposal includes expansions of border technologies, including RCMP counterintelligence, 24/7 surveillance between ports of entry, helicopters, drones and mobile towers. But what will this mean for people seeking asylum?

If the U.S.-Mexico border is any indication, it will mean more death.

a barbed-wire topped wall running through a desert The U.S.-Mexico border wall in California. (Shutterstock)

Criminalizing migration

At the press conference, Dominic LeBlanc, the minister of finance and intergovernmental affairs, reaffirmed Canada’s relationship with the incoming Trump administration. Framed around politics of difference, and relying on the fearmongering trope of migration as a “crisis,” Canada’s new border plan will also cost taxpayers $1.3 billion.

During the press conference, LeBlanc’s remarks conflated migration with trafficking and crime, relying on “crimmigration,” or the use of criminalization to discipline, exclude, or expel migrants or others seen as not entitled to be in a country. LeBlanc also made direct reference to preventing fraud in the asylum system, with the driving forces behind this new border plan being “minimizing border volumes” and “removing irritants” to the U.S.

Minister LeBlanc details Canada’s border security plan on Dec. 17, 2024.

However, these framings weaken the global right to asylum – an internationally protected right guaranteed by the 1951 Refugee Convention and by sections 96 and 97 of Canada’s own Immigration and Refugee Protection Act.

Canada’s own courts have also found that the U.S. is not a safe country for some refugees.

Deadly borders

Since 2018, I have been researching technology and migration. I have worked at and studied various borders around the world, starting in Canada, moving south to the U.S.-Mexico border and including various countries in Europe and East Africa, as well as the Palestinian territories. Over the years, I have worked with hundreds of people seeking safety and witnessed the horrific conditions they have to survive.

The Sonoran Desert, which spans the U.S.-Mexico border, has become what anthropologist Jason de Leon calls “the land of open graves.” Researchers have shown that deaths have increased every year as a result of growing surveillance and deterrence mechanisms. I have witnessed these spaces of death in the Sonoran Desert and at European borders, with people on the move succumbing to these sharpening borders.

two wooden crosses marking graves in the desert Author’s photograph of graves in the Sonoran Desert — research has shown that more people die every year crossing into the U.S. through Mexico. (P. Molnar), CC BY

Canadian borders are not devoid of death. Families have frozen and drowned attempting to enter Canada. Others, like Seidu Mohammed and Razak Iyal, nearly froze to death and lost limbs as a result of frostbite; they later received refugee status and became Canadian citizens in 2023.

‘Extreme vulnerability’

Throughout the press conference, a clear theme emerged again and again: Canada’s border plan will “expand and deepen the relationship” between Canada and U.S. through border management, including both data sharing and operational support. The border management plan will include an aerial intelligence task force to provide non-stop surveillance. The mandate of the Canada Border Services Agency will also expand, and include a joint operational strike force.

In November, president-elect Donald Trump named former Immigration and Customs Enforcement director Tom Homan as his administration’s “border czar.” Homan explicitly called out Canada after his appointment, calling the Canadian border “an extreme vulnerability.”

Trump has also made pointed comments directed at Justin Trudeau, referring to him as “governor” and to Canada as the 51st state. And with Trump’s aggressive “America First” policies and the 25 per cent tariff threat, appeasing the incoming administration by strengthening border surveillance at the Canada-U.S. border is the lowest hanging fruit for the Trudeau administration to strengthen its hand.

Creeping surveillance

Border surveillance technologies do not remain at the border. As early as 2021, communities in Vermont and New York raised concerns about possible privacy infringements from the installation of surveillance towers.

There are also fears of growing surveillance and repression of journalists and the migrant justice sector as a whole.

And surveillance technologies used at the border have also been repurposed: for example, robo-dogs first employed at the U.S.-Mexico border have appeared in New York City and facial recognition technologies ubiquitous at airports are also being used on sports fans in stadiums.

a robot dog surrounded by people A remote-controlled robot dog in San Bernardino, Calif. used for search-and-rescue operations and law enforcement use. (Shutterstock)

The big business of borders

Taxpayers will foot the bill for this new border strategy to the hefty tune of $1.3 billion. This amount is part of a growing and lucrative border industrial complex that is now worth a staggering US$68 billion and is projected to grow to nearly a trillion dollars by 2031.

But taxpayers do not benefit. Instead, the private sector makes up the marketplace of technical solutions to the so-called “problem” of migration. In this lucrative ecosystem built on fear of “the migrant other,” it is private sector actors, not taxpayers, who profit.

Instead of succumbing to the exclusionary politics of the incoming U.S. administration, we should call for transparency and accountability in the development and deployment of new technologies. There is also a need for more governance and laws to curtail these high-risk tech experiments before more people die at Canada’s borders.

Instead of spending $1.3 billion on surveillance technologies that infringe upon people’s rights, Canada should strengthen its asylum system and civil society support. Canada should also remember its international human rights obligations, and resist the U.S. political rhetoric that dehumanizes people seeking safety and protection.

Petra Molnar, Associate Director, Refugee Law Lab, York University, Canada

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Freemasons, homosexuals and corrupt elites in Cameroon: Inside an African conspiracy theory

Two anthropologists have written an unusual and fascinating new book, Conspiracy Narratives from Postcolonial Africa: Freemasonry, Homosexuality, and Illicit Enrichment. It explores an ongoing conspiracy theory in Cameroon and neighbouring Gabon that corrupt elites spread homosexuality through their connections to secret orders like the Freemasons. The authors trace the origins of the conspiracy theory to a moral panic in Cameroon in 2005, then move back in time to understand what it all means. We asked them to tell us more.

What sparked the moral panic in Cameroon?

On Christmas Day in 2005, the archbishop of Yaoundé, Cameroon’s capital, the famously homophobic Victor Tonye Bakot, surprised the nation with a sermon attacking the national elite. He accused them of spreading homosexuality by forcing anal sex on young men eager to get a job.

The sermon was all the more surprising since the archbishop spoke in the Yaoundé cathedral with the nation’s leaders right in front of him, including then president Paul Biya. It offered a new variation of the Catholic church’s attacks on Freemasonry.

Freemasonry is a male-only organisation that engages in secretive rituals and promotes a moral order but is not a religion. It emerged around 1700 in Scotland when liberal thinkers joined old guilds of masons. A Grand Lodge was established in London in 1717. Freemasonry then spread to France and French colonies.

In France, Freemasonry was fiercely attacked by the Catholic church, worried by the increasingly secular tendencies of the brotherhood and its supposedly central role in the French Revolution. So, even though the 2005 attack by the Cameroonian archbishop was nothing new, it hit like a wave in this specific context and created a conspiracy theory that lives on today.

Only a month after the sermon, several newspapers started publishing lists of “supposed” or even “prominent” homosexuals. (The so-called Affaire des listes – the list affair).

They named ministers and other politicians, sports and music stars, and even some senior religious leaders. Denouncing the elite as homosexuals corrupting the nation had become an outlet for people’s dissatisfaction with the regime.

The elite did not know how to defend itself against this attack. At first Biya asked for respect for people’s privacy. But when new rumours and hints were published, the government launched a witch-hunt against supposed homosexuals.

Same-sex practices had been criminalised by presidential decree in Cameroon in 1972. But until 2005 this was seldom applied. Since then, however, people suspected of such “criminal” behaviour have been harassed by arbitrary arrests and imprisonment.

Since 2000, homophobia has been on the rise across the African continent but, as Cameroonian sociologist Patrick Awondo emphasised, its “politicisation” takes different forms in each country. In Cameroon – and to a lesser extent Gabon – the homosexuals targeted by popular outrage are politicians and the elite. The supposed omnipresence of Freemasonry and other global associations in higher circles is a key factor here.

You view this as a conspiracy theory?

Our analysis of this powerful attack as a conspiracy narrative addresses what might be one of the major challenges for academics today: how to deal with the tsunami of conspiracy theories that haunt politics globally.

These range from Trump and QAnon to the “street parliaments” in the early 2000s in Côte d’Ivoire (well-known public spaces used to defend the rule of President Laurent Gbagbo).

Academics used to see it as their first task to refute such conspiracy theories, but there is an increasing realisation of the futility of such an approach. Supporters may resent claims of scientific knowledge as superior and stick all the more to their convictions. Sociologists have noted it might be more urgent to first try to understand why these often improbable stories can gain such power.

We propose in our book that historicising conspiracy theories might be an answer. That is, studying them as products of specific historical settings.

So what is the historical background of the panic?

The first chapters of our book deal with the histories of masonism and anti-masonism in European and African settings. We try to understand why the view that Freemasonry is tied to same-sex practices has proved particularly persistent in French-speaking Africa.

We also consider the changing balance between the secrecy of the brotherhood and public display in post-colonial Africa. A good example is the leaked 2009 video showing the inauguration of Gabon’s president Ali Bongo as the Grand Master of the Grand Lodge of Gabon.

Going public like this is quite exceptional for a Freemason. Bongo probably hoped to impress his numerous adversaries by boasting of his access to special forms of power. But it also showed him as a neocolonial stooge, led as he was by representatives of the Grand Lodge Nationale de France.

The leaked video of Gabon’s President Ali Bongo.

Of course, linking homosexuality to Freemasonry strengthens the claim – now made by many in the continent – that homosexuality is un-African and imposed by colonialism. But for Cameroon and Gabon, there are hard-to-ignore signals that this was not the case.

Take, for example, the work of German ethnographer Günther Tessmann. He worked among the Fang people on the border between Cameroon and Gabon just after 1900, before the establishment of colonial authority. A recurring concept he encountered was biang akuma (the “medicine” of riches), which to Tessmann’s surprise was associated with sex between men. As early as 1913, he highlighted two dimensions of popular perceptions of homosexuality in many African contexts: the association with “witchcraft” and with enrichment.

This last element came out strongly from our comparisons with Côte d'Ivoire, Senegal and Nigeria. The idea of the anus as a source of enrichment has a long history in Africa. This puts the complaints of present-day Cameroonians that they live under anusocratie (rule of the anus) in a broader perspective.

It also helped us contribute to debates about the need to further decolonise queer studies.

Clearly, to understand the complexities of the puzzling links between Freemasonry, homosexuality and illicit enrichment, we must move beyond ideas of fixed identities.

A crucial contribution to the debate on homosexuality that exploded after 2005 came from Cameroonian anthropologist Sévérin Abega. He insisted that to understand perceptions of homosexuality, one has to take into account a belief, held in some Cameroonian communities, that every person has a double.

Thus Abega foreshadowed recent debates by Cameroonian scholars like Francis Nyamnjoh on African personhood as incomplete and Achille Mbembe on the return of animism.

Another decisive factor in understanding why linking Freemasonry to homosexuality became such a hot political issue after 2000 is the internet. Internet access was a watershed, bringing relief for LGBTIQ+ people in Cameroon and Gabon. But it also strengthened a backlash against ideas of a gay identity associated with the west. (In a twist to these developments, Biya’s daughter, Brenda Biya, caused fierce debate by coming out as lesbian in July 2024.)

What do you hope readers will take away?

The book offers insight into the role – as omnipresent as it is understudied – of Freemasonry and similar global orders in Africa. It adds to studies of the association of same-sex unions with “witchcraft” and illicit enrichment in west Africa.

Such aspects are mostly absent from activist-oriented studies – no doubt for good reasons – but essential for understanding the popular debates and struggles over same-sex issues in Africa today.

But the main contribution of the book might be in our attempt to analyse a powerful conspiracy narrative, not by trying to refute it but by historicising it. The question is whether African visions of the person as fluid and frontiers as porous – also when it comes to sexuality – can overcome the tendency to think of identities as fixed.The Conversation

Peter Geschiere, Professor Emeritus of African Anthropology, University of Amsterdam and Rogers Orock, Assistant Professor of Africana Studies, Lafayette College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Alice Doesn’t Live Here Anymore at 50: the film that marks a path not taken in Scorsese’s career

Alice Doesn’t Live Here Anymore, released on December 9 1974, is a fascinating composite of both 1970s New Hollywood and the legacy of the women-centred melodrama of the 1930s and ‘40s.

It is now mostly remembered as an early film directed by Martin Scorsese. But it was actually a project initiated by its lead actor, Ellen Burstyn, fresh off a series of acclaimed films including The Last Picture Show (1971), The King of Marvin Gardens (1972) and The Exorcist (1973).

The film would go on to be a significant commercial success, earn Burstyn the Academy Award for Best Actress, and inspire a much less gritty and profane sitcom that would run for nine seasons and feature only one (male) member of the original cast.

A step toward Hollywood

The subsequent critical reputation of Alice Doesn’t Live Here Anymore is somewhat skewed by its status as an atypical Scorsese film.

The director had only made three features: Who’s That Knocking at My Door (1967), Boxcar Bertha (1972) and Mean Streets (1973). Largely working outside the mainstream, he already had a significant critical reputation as a chronicler of flawed urban ethnic masculinity.

It is also fascinating to hear, this early in his career, Scorsese reminisce about how conscious he was of his growing reputation and of not wanting to be pigeonholed into a particular mode of cinema. He actively embraced the opportunity to make his first true Hollywood film.

He also felt the need to reorientate his focus away from men – though they still appear prominently – and embrace a female-centred narrative. There was also an insistence on working with women in key creative roles, and Scorsese followed Burstyn’s lead in terms of adjusting the script, encouraging improvisation and the nuance of performance.

Although women do feature prominently in subsequent Scorsese films such as New York, New York (1977), The Age of Innocence (1993) and Killers of the Flower Moon (2023), it can be argued Alice Doesn’t Live Here Anymore is Scorsese’s only narrative feature that centres on female experience.

It has been criticised for its overly mild feminism. But Burstyn was keen to make a movie that focused on the everyday pressures and desires of its carefully grounded female characters.

In the relatively inhospitable masculine terrain of New Hollywood, Alice Doesn’t Live Here Anymore is an outlier.

Scorsese is most commonly talked about as an iconoclast. But a key element of his career has also seen him operate within the system and maintain a capacity to work on large budgets and projects.

His desire to work with technologies such as 3D, large streaming companies, and actors like Leonardo DiCaprio (one of the few truly bankable actors in 21st-century cinema) have their roots in Scorsese’s employment by Warner Bros on this project.

He even expressed excitement about using the old Columbia Pictures sound stages. Alice Doesn’t Live Here Anymore would allow him to fuse contemporary – arguably feminist – sensibilities with the kind of star “package” designed in earlier times for actors such as Bette Davis and Joan Crawford.

Scorsese constantly toggles between cinema’s present and past, seeing them as inextricably entwined.

The path not taken

The film follows Alice (Burstyn) and her son Tommy as they travel from New Mexico to Arizona in pursuit of her dream of becoming a singer. It is one of many road movies made during this era and provides a fascinating time-capsule portrait of the desert and often ugly urban landscapes it travels through.

Although her pursuit of a career bubbles beneath the surface, the story is more concerned with the men Alice encounters and the camaraderie she forges with her fellow waitresses in a restaurant (the inevitable focus of the subsequent sitcom).

There is nothing particularly new or groundbreaking about this, but the film is most memorable for the small, often idiosyncratic scenes between Alice and her son. For the surprising moments of kindness, hard-won connection and violence Alice encounters. For the genuinely offbeat performance by Jodie Foster as Tommy’s worldly young friend. And for the needle drops of particular songs on the soundtrack.

Kris Kristofferson also provides an uncommonly soulful, weathered and comparatively gentle representation of masculinity.

Alice Doesn’t Live Here Anymore represents an important watershed in Scorsese’s career, and also a path not taken.

Although he has continued to work within and to the side of the mainstream, he has rarely produced a subsequent film with such warmth and sympathy for its central characters.

As a portrait of flawed humanity, it is miles away from his next feature, Taxi Driver (1976). After that, there was perhaps no turning back. Both for better and for worse.The Conversation

Adrian Danks, Associate professor in Cinema and Media Studies, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Hanukkah came to America

Hanukkah may be the best known Jewish holiday in the United States. Yet despite its popularity there, Hanukkah is ranked among Judaism’s minor festivals, and nowhere else does it garner such attention. The holiday is mostly a domestic celebration, although special holiday prayers also expand synagogue worship.

So how did Hanukkah attain its special place in America?

Hanukkah’s back story

The word “Hanukkah” means dedication. It commemorates the rededication of the ancient Temple in Jerusalem in 165 B.C., when Jews – led by a band of brothers called the Maccabees – tossed out statues of Hellenic gods that had been placed there by King Antiochus IV when he conquered Judea. Antiochus aimed to plant Hellenic culture throughout his kingdom, and that included worshipping its gods.

Legend has it that during the dedication, as people prepared to light the Temple’s large oil lamps to signify the presence of God, only a tiny bit of holy oil could be found. Yet that little bit of oil remained alight for eight days, until more could be prepared. Thus, on each of Hanukkah’s eight evenings, Jews light candles, adding one more each night as the festival progresses.

Hanukkah’s American story

Today, America is home to almost 7 million Jews. But Jews did not always find it easy to be Jewish in America. Until the late 19th century, America’s Jewish population was very small, reaching only about 250,000 by 1880. The basic goods of Jewish religious life – such as kosher meat and candles, Torah scrolls, and Jewish calendars – were often hard to find.

In those early days, major Jewish religious events took special planning and effort, and minor festivals like Hanukkah often slipped by unnoticed.

My own study of American Jewish history has recently focused on Hanukkah’s development.

It began with a simple holiday hymn written in 1840 by Penina Moise, a Jewish Sunday school teacher in Charleston, South Carolina. Her evangelical Christian neighbors worked hard to bring the local Jews into the Christian fold. They urged Jews to agree that only by becoming Christian could they attain God’s love and ultimately reach Heaven.

Moise, a famed poet, saw the holiday celebrating dedication to Judaism as an occasion to inspire Jewish dedication despite Christian challenges. Her congregation, Beth Elohim, publicized the hymn by including it in their hymnbook.

This English language hymn expressed a feeling common to many American Jews living as a tiny minority. “Great Arbiter of human fate whose glory ne'er decays,” Moise began the hymn, “To Thee alone we dedicate the song and soul of praise.”

It became a favorite among American Jews and could be heard in congregations around the country for another century.

Shortly after the Civil War, Cincinnati Rabbi Max Lilienthal learned about special Christmas events for children held in some local churches. To adapt them for children in his own congregation, he created a Hanukkah assembly where the holiday’s story was told, blessings and hymns were sung, candles were lighted and sweets were distributed to the children.

His friend, Rabbi Isaac M. Wise, created a similar event for his own congregation. Wise and Lilienthal edited national Jewish magazines where they publicized these innovative Hanukkah assemblies, encouraging other congregations to establish their own.

Lilienthal and Wise also aimed to reform Judaism, streamlining it and emphasizing the rabbi’s role as teacher. Because they felt their changes would help Judaism survive in the modern age, they called themselves “Modern Maccabees.” Through their efforts, special Hanukkah events for children became standard in American synagogues.

20th-century expansion

By 1900, industrial America produced the abundance of goods exchanged each Dec. 25. Christmas’ domestic celebrations and gifts to children provided a shared religious experience to American Christians otherwise separated by denominational divisions. As a home celebration, it sidestepped the theological and institutional loyalties voiced in churches.

For the 2.3 million Jewish immigrants who entered the U.S. between 1881 and 1924, providing their children with gifts in December proved they were becoming American and obtaining a better life.

But by giving those gifts at Hanukkah, instead of adopting Christmas, they also expressed their own ideals of American religious freedom, as well as their own dedication to Judaism.

A Hanukkah religious service and party in 1940. Center for Jewish History, NYC

After World War II, many Jews relocated from urban centers. Suburban Jewish children often comprised small minorities in public schools and found themselves coerced into participating in Christmas assemblies. Teachers, administrators and peers often pressured them to sing Christian hymns and assert statements of Christian faith.

From the 1950s through the 1980s, as Jewish parents argued for their children’s right to freedom from religious coercion, they also embellished Hanukkah. Suburban synagogues expanded their Hanukkah programming.

As I detail in my book, Jewish families embellished domestic Hanukkah celebrations with decorations, nightly gifts and holiday parties to enhance Hanukkah’s impact. In suburbia, Hanukkah’s theme of dedication to Judaism shone with special meaning. Rabbinical associations, national Jewish clubs and advertisers of Hanukkah goods carried the ideas for expanded Hanukkah festivities nationwide.

In the 21st century, Hanukkah accomplishes many tasks. Amid Christmas, it reminds Jews of Jewish dedication. Its domestic celebration enhances Jewish family life. In its similarity to Christmas domestic gift-giving, Hanukkah makes Judaism attractive to children and – according to my college students – relatable to Jews’ Christian neighbors. In many interfaith families, this shared festivity furthers domestic tranquility.

In America, this minor festival has attained major significance.The Conversation

Dianne Ashton, Professor of Religion, Rowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

From dead galaxies to mysterious red dots, here’s what the James Webb telescope has found in just 3 years

On this day three years ago, we witnessed the nail-biting launch of the James Webb Space Telescope (JWST), the largest and most powerful telescope humans have ever sent into space.

It took 30 years to build, but in three short years of operation, JWST has already revolutionised our view of the cosmos.

It’s explored our own Solar System, studied the atmospheres of distant planets in search of signs of life and probed the farthest depths to find the very first stars and galaxies formed in the universe.

Here’s what JWST has taught us about the early universe since its launch – and the new mysteries it has uncovered.

Eerie blue monsters

JWST has pushed the boundary of how far we can look into the universe to find the first stars and galaxies. With Earth’s atmosphere out of the way, its location in space makes for perfect conditions to peer into the depths of the cosmos with infrared light.

The current record for the most distant galaxy confirmed by JWST dates back to a time when the universe was only about 300 million years old. Surprisingly, within this short window, the galaxy had already formed stars totalling about 400 million times the mass of our Sun.

This indicates star formation in the early universe was extremely efficient. And this galaxy is not the only one.

When galaxies grow, their stars explode, creating dust. The bigger the galaxy, the more dust it has. This dust makes galaxies appear red because it absorbs the blue light. But here’s the catch: JWST has shown these first galaxies to be shockingly bright, massive and very blue, with no sign of any dust. That’s a real puzzle.

There are many theories to explain the weird nature of these first galaxies. Do they have huge stars that just collapse due to gravity without undergoing massive supernova explosions?

Or do they have such large explosions that all dust is pushed away far from the galaxy, exposing a blue, dust-free core? Perhaps the dust is destroyed due to the intense radiation from these early exotic stars – we just don’t know yet.

Artist’s impression of what a blue galaxy in the early universe would look like. ESO/M. Kornmesser.

Unusual chemistry in early galaxies

The early stars were the key building blocks of what eventually became life. The universe began with only hydrogen, helium and a small amount of lithium. All other elements, from the calcium in our bones to the oxygen in the air we breathe, were forged in the cores of these stars.

JWST has discovered that early galaxies also have unusual chemical features.

They contain a significant amount of nitrogen, far more than what we observe in our Sun, while most other metals are present in lower quantities. This suggests there were processes at play in the early universe we don’t yet fully understand.

JWST has shown our models of how stars drive the chemical evolution of galaxies are still incomplete, meaning we still don’t fully understand the conditions that led to our existence.

Different chemical elements observed in one of the first galaxies in the universe uncovered by JWST. Adapted from Castellano et al., 2024, The Astrophysical Journal; JWST-GLASS and UNCOVER Teams

Small things that ended the cosmic dark ages

Using massive clusters of galaxies as gigantic magnifying glasses, JWST’s sensitive cameras can also peer deep into the cosmos to find the faintest galaxies.

We pushed further to find the point at which galaxies become so faint, they stop forming stars altogether. This helps us understand the conditions under which galaxy formation comes to an end.

JWST is yet to find this limit. However, it has uncovered many faint galaxies, far more than anticipated, emitting more than four times as many energetic photons (light particles) as expected.

The discovery suggests these small galaxies may have played a crucial role in ending the cosmic “dark ages” not long after the Big Bang.

The faintest galaxies uncovered by JWST in the early cosmos. Rectangles highlight the apertures of JWST’s near infrared spectrograph array, through which light was captured and analysed to unravel the mysteries of the galaxies’ chemical compositions. Atek et al., 2024, Nature

The mysterious case of the little red dots

The very first images from JWST resulted in another dramatic, unexpected discovery. The early universe is inhabited by an abundance of “little red dots”: extremely compact, red sources of unknown origin.

Initially, they were thought to be massive super-dense galaxies that shouldn’t be possible, but detailed observations in the past year have revealed a combination of deeply puzzling and contradictory properties.

Bright hydrogen gas is emitting light at enormous speeds, thousands of kilometres per second, characteristic of gas swirling around a supermassive black hole.

This phenomenon, called an active galactic nucleus, usually indicates a feeding frenzy where a supermassive black hole is gobbling up all the gas around it, growing rapidly.

But these are not your garden-variety active galactic nuclei. For starters, they don’t emit any detectable X-rays, as would normally be expected. Even more intriguingly, they also seem to show the features of star populations.

Could these galaxies be both stars and active galactic nuclei at the same time? Or some evolutionary stage in between? Whatever they are, the little red dots are probably going to teach us something about the birth of both supermassive black holes and stars in galaxies.

In the background, the JWST image of the Pandora Cluster (Abell 2744) is displayed, with a little red dot highlighted in a blue inset. The foreground inset on the left showcases a montage of several little red dots discovered by JWST. Adapted from Furtak et al. and Matthee et al., The Astrophysical Journal, 2023-2024; JWST-GLASS and UNCOVER Teams

The impossibly early galaxies

As well as extremely lively early galaxies, JWST has also found extremely dead corpses: galaxies in the early universe that are relics of intense star formation at cosmic dawn.

These corpses had been found by Hubble and ground-based telescopes, but only JWST had the power to dissect their light to reveal how long they’ve been dead.

It has uncovered some extremely massive galaxies (as massive as our Milky Way today and more) that formed in the first 700 million years of cosmic history. Our current galaxy formation models can’t explain these objects – they are too big and formed too early.

Cosmologists are still debating whether the models can be bent to fit (for example, maybe early star formation was extremely efficient) or whether we have to reconsider the nature of dark matter and how it gives rise to early collapsing objects.

JWST will turn up many more of these objects in the next year and study the existing ones in greater detail. Either way, we will know soon.

What’s next for JWST?

Just within its first steps, the telescope has revealed many shortcomings of our current models of the universe. While we are refining our models to account for the updates JWST has brought us, we are most excited about the unknown unknowns.

The mysterious red dots were hiding from our view. What else is lingering in the depths of the cosmos? JWST will soon tell us. The Conversation

Themiya Nanayakkara, Scientist at the James Webb Australian Data Centre, Swinburne University of Technology; Ivo Labbe, ARC Future Fellow / Associate Professor, Swinburne University of Technology, and Karl Glazebrook, ARC Laureate Fellow & Distinguished Professor, Centre for Astrophysics & Supercomputing, Swinburne University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Christmas album that heralded the end of a folk musical era

For those looking to introduce some musical conflict into the holidays, Bob Dylan’s Christmas in the Heart remains a great choice on its 15th anniversary – like it or not.

Before Dylan really got started, an iconic group opened the door to mainstream folk success for Dylan and his contemporaries. And at the height of their popularity, they also released an unexpected Christmas album.

But instead of becoming a perennial classic, it seemed to foreshadow the approaching end for the group’s dominance at the peak of popular music.

That album was The Kingston Trio’s ill-fated The Last Month of the Year from 1960.

The ‘hottest act in show business’

The Kingston Trio are often remembered as a clean-cut, sanitised and goofy footnote in musical history. Their matching striped shirts may be a difficult fashion choice to rehabilitate today, but the trio’s impact on popular music was explosive.

Popular performances in 1957 San Francisco led quickly to their self-titled first album the following year. Reshaping folk music for a mainstream audience energised professional and amateur performers.

Critic Greil Marcus describes their breakthrough hit, 1958’s Tom Dooley, as having “the same effect on hearts and minds in 1958 that Nirvana’s Smells Like Teen Spirit and Nevermind did in 1991”.

By the time they released their Christmas album, they were the “hottest act in show business”.

In the previous two years, they’d had five number one albums on the Billboard charts. Four of their albums were in the top ten at the same time. They reportedly generated 15% of Capitol Records’ annual sales.

Following that phenomenal success, one early response to their Christmas album noted:

By now it’s fairly well established that the Kingston Trio could record Row, Row Your Boat in 12 languages, put it on wax, and the album would sell a half-million copies. As a consequence, there’s little doubt that The Last Month of the Year will be one of the big sellers this Christmas.

Instead, The Last Month of the Year became their first studio album not to reach number one.

Although still successful, their later albums never reached number one or Gold Album status again. Founding member Dave Guard left in 1961. A new lineup with replacement John Stewart had peaks of success, enduring in a changing folk scene – but never quite recapturing those initial years.

‘Perhaps the most unusual set of the year’

The Kingston Trio were lambasted, then and now, for their commercial focus. Nevertheless, The Last Month of the Year stands in contrast to many enduring commercial norms.

Contemporary responses to The Last Month of the Year noted “a number of almost unknown Christmas songs instead of the usual diet of standard carols” and “perhaps the most unusual set of the year”.

There are none of the 1940s and 1950s staples that have persisted through the decades. Nat King Cole opened his 1960 album The Magic of Christmas with a spirited Deck the Halls. Both Ella Fitzgerald (on Ella Wishes You a Swinging Christmas) and Peggy Lee (on Christmas Carousel) opened their Christmas albums of the same year with Jingle Bells.

In contrast, The Kingston Trio’s opening track is a subdued version of the 16th century Coventry Carol, a lullaby for the children Herod ordered to be killed. The restrained use of a celeste, or bell-piano, summons Christmas vibes but largely augments the sombre harmonies.

Opening with the biblical Massacre of the Innocents was certainly one way to set The Last Month of the Year apart from its jolly competitors.

Range, energy and appropriation

Other songs include delicate folk (All Through the Night), traditional rounds (A Round About Christmas), historical carols (Sing We Noel) and uncharacteristic original lyrics (The White Snows of Winter).

Spirituals Go Where I Send Thee and The Last Month of the Year (What Month Was Jesus Born In) allow the trio to focus on the kind of energy (and appropriation) that had defined much of their previous output.

Goodnight My Baby charms as a Christmas Eve lullaby that’s too excited to lull anyone to sleep.

Adding oddness, Mary Mild reshapes the strange apocryphal ballad The Bitter Withy, in which a child Jesus creates “a bridge of the beams of the sun” to encourage children to play with him. The Kingston Trio only hint at the song’s usual outcome, which leaves his playmates dead and Mary meting out some corporal punishment.

Perhaps more restrained than their usual performances, the album nevertheless guides listeners through some of the styles and sources that the Trio’s brand of popular folk could draw on.

A Christmas album that still has something to offer

The Last Month of the Year wasn’t the cause, but it marks a turning point at which The Kingston Trio’s cultural dominance began to slip.

The Kingston Trio in 1957. The Kingston Trio/Wikimedia Commons, CC BY-SA

Soon after, Bob Dylan’s song Blowin’ in the Wind (published in 1962) and album The Freewheelin’ Bob Dylan (1963) marked a new era of folk that revived its political energy (for a while).

As folk music further solidified its place in the civil rights movement, the Kingston Trio’s collegiate party vibes and perceived apoliticism seemed out of step.

When Dylan released his Christmas album in 2009, one critic asked “Is he sincere? Does he mean it?”

That’s also a question that defined and dogged The Kingston Trio from the outset of the folk revival they ushered in. Are these goofy guys serious?

The Last Month of the Year is an intriguing and ambitious album by a group that, for a short but influential time, reshaped popular music.

It’s a forgotten Christmas album that might still have something new to offer a Christmas-weary listener.The Conversation

Kit MacFarlane, Lecturer, Creative Writing and Literature, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Reel resistance and Netflix’s removal of Palestinian films

Netflix faces calls for a boycott after it removed its “Palestinian Stories” collection this October. The collection included approximately 24 films.

Netflix cited the expiration of three-year licences as the reason for pulling the films from the collection.

Nonetheless, some viewers were outraged and almost 12,000 people signed a CodePink petition calling on Netflix to reinstate the films.

At a time when Palestinians are facing what scholars, United Nations experts and Amnesty International are calling a genocide, Netflix’s move could be seen as a silencing of Palestinian narratives.

The disappearance of these films from Netflix in this moment has deeper implications. The removal of almost all films in this category represents a significant act of cultural erasure and anti-Palestinian racism.

There is a long history of the erasure of Palestine.

Cultural erasure

Book cover: ‘The Ethnic Cleansing of Palestine’ by Ilan Pappe, professor of history at the College of Social Sciences and International Studies at the University of Exeter. (Simon & Schuster)

Since the Nakba of 1948, Zionist militias have systematically ethnically cleansed Palestinians and destroyed hundreds of cities, towns and villages, while also targeting Palestinian culture.

Palestinian visual archives and books were looted, stolen and hidden away in Israeli-controlled state archives, classified and often kept under restricted access. This targeting of visual culture is not incidental. It is a calculated act of cultural erasure aimed at severing the connection between a people, their land and history.

Another notable instance of cultural erasure is the theft of the Palestinian Liberation Organization’s (PLO) visual archives and cinematic materials. In 1982, the PLO Arts and Culture Section, Research Centre and other PLO offices were looted during the Israeli invasion of Lebanon. The Palestinian Cinema Institution’s film archives were moved during the invasion and later disappeared. Theft and looting also occurred during the Second Intifada in the early 2000s and recurrent bombardments of Gaza.

This plundering of Palestinian cultural institutions, archives and libraries resulted in the loss of invaluable cultural materials, including visual archives.

To maintain Zionist colonial mythologies about the establishment of Israel, the state systematically stole, destroyed and holds captive Palestinian films and other historical and cultural materials.

Palestinian liberation cinema

By the mid-20th century, Palestinian cinema emerged as a vital component of global Third Worldism, a unifying global ideology and philosophy of anticolonial solidarity and liberation.

Palestinian cinema aligned with revolutionary filmmakers and cinema groups in Asia, Africa and Latin America, all seeking to reclaim their histories, culture and identity in the face of imperial domination.

This photo was taken by Hani Jawharieh, a Palestinian filmmaker who was killed in 1976 while filming in the Aintoura Mountains of Lebanon. CC BY

The PLO’s revolutionary films of the 1960s and 1970s were driven by the national liberation struggle and the desire to document the Palestinian revolution. Created as part of a broader campaign against colonialism and imperialism, PLO filmmakers aimed to rally international solidarity for the Palestinian cause through Afro-Asian, Tricontinental and socialist cultural networks.

Censorship

Censorship became one of the primary mechanisms for repressing cultural production in the Third World. Colonial and imperial powers, as well as allied governments, banned films, books, periodicals, newspapers and art that conveyed anti-colonial and anti-imperialist sentiments. Their films and cultural works were denied distribution in western and local markets.

Settler colonial states such as Israel rely on the destruction and suppression of colonized peoples’ narratives to erase historical and cultural connections to land. By doing so, they undermine Indigenous Palestinian claims to sovereignty and self-determination.

Many Palestinian cultural workers including writers, poets and filmmakers were persecuted, imprisoned, exiled, assassinated and killed.

In an essay about the 1982 Israeli invasion of Lebanon and the Sabra and Shatila massacres, the late Palestinian American literature professor, Edward Said, explained how the West systematically denies Palestinians the agency to tell their own stories. He said the West’s biased coverage and suppression of Palestinian narratives distorts the region’s history and justifies Israeli aggression. For a more truthful understanding of history, Palestinians needed the right “to narrate,” he said.

Resistance

Despite the denial to narrate, generations of Palestinian filmmakers, including Elia Suleiman, Michel Khleifi, Mai Masri, Annemarie Jacir and many others, have contributed to and evolved this cinematic tradition of resistance.

Their films centre the lived experiences of Palestinians under settler colonialism, occupation, apartheid and exile.

By capturing the Palestinian struggle, freedom dreams, joy, hopes and humour, they help to humanize a population.

A scene from the TIFF selection Farha, about a girl trying to pursue her education in 1948 Palestine, just before the Nakba.

After Netflix first launched the Palestinian Stories collection in 2021, the company was criticized by the Zionist organization Im Tirtzu, which pressured Netflix to purge Palestinian films.

A year later, Netflix faced more pushback — this time from Israeli officials — when it released Farha, a film set against the backdrop of the 1948 Nakba. Israeli Finance Minister Avigdor Lieberman even took steps to revoke state funding from theatres that screened the film.

The Israeli television series Fauda, produced by former IDF soldiers Lior Raz and Avi Issacharoff, remains on the platform. Fauda portrays an undercover Israeli military unit operating in the West Bank. The series has faced significant criticism for perpetuating racist stereotypes, glorifying Israeli military actions, and whitewashing the Israeli occupation and systemic oppression of Palestinians.

Such media helps to legitimize and normalize violent actions committed against Palestinians.

Suppression in the time of genocide

In a time of genocide, Palestinian stories, films, cultural production, media and visual culture transcend being mere cultural artifacts. They are tools of defiance, sumud (steadfastness), historical memory, documentation and preservation against erasure. They assert the fundamental right to Palestinian liberation and the right to narrate and exist even while being annihilated.

Indeed, over the past 400-plus days, Israel has intensified its systematic silencing and erasure of Palestinian narratives.

One hundred thirty-seven journalists and media workers have been killed across the occupied Palestinian Territories and Lebanon since Israel declared war on Hamas following its Al-Aqsa Flood Operation on Oct. 7, 2023. According to the Committee to Protect Journalists, there are almost no professional journalists left in northern Gaza to document Israel’s ethnic cleansing. It has been the deadliest period for journalists in the world since CPJ began collecting data in 1992.

Israel has also targeted, detained, tortured, raped and killed academics, students, health-care workers and cultural workers, many of whom have shared eyewitness accounts and narrated their stories of genocide on social media platforms.

Israel has censored and silenced Palestinian narratives through media manipulation, digital censorship and the destruction of journalistic infrastructure. Palestinian cultural and academic institutions, cultural heritage and archives have also been bombed and destroyed in Gaza, in what has been termed scholasticide. The aim of this destruction is to obliterate historical memory and suppress documentation of atrocities.

The genocide and scholasticide will undermine the Palestinian people’s ability to fully preserve centuries of history, knowledge, culture and archives.

Netflix’s decision to remove the Palestinian Stories collection and not renew the licences of the films during this time makes it complicit in the erasure of Palestinian culture.The Conversation

Chandni Desai, Assistant professor, Education, University of Toronto

This article is republished from The Conversation under a Creative Commons license. Read the original article.
