Science Archive

Science is firstly a methodology. It is not firstly a body of proven knowledge, as rather pompous, self-important Covid-19 'ex-spurts' and Little Greta would have us believe. Scientists should proceed from observation to hypothesis, prediction and experiment, and thus to cautious conclusion.

An honest scientist sets out to disprove the hypothesis, not to prove it. But in these days of chasing grants and accolades, this golden rule is too frequently ignored, often with serious harm done.

I suspect that is the case with the highly politicised Covid-19 pronouncements of 'The Science.' In the process, unscrupulous nonentities are advanced up the science career ladder, where in many cases they feed off the work of brighter PhD students and then claim the credit.

In the case of Covid, the whole outpouring of advice on how to deal with, and exaggerate, this problem brings such so-called experts to the fore, with guesswork passed off as science and no account taken of the wider harm done to health and mass survival. R.J Cook

‘I see dead people’: why so many of us believe in ghosts

October 30, 2020 11.22am GMT

Author: Anna Stone, Senior Lecturer in Psychology, University of East London



Halloween seems an appropriate time of year to share the story of the Chaffin family and how a ghost helped decide a dispute over an inheritance. James L Chaffin of Monksville, North Carolina, died after an accident in 1921, leaving his estate in full to his favourite son Marshall and nothing to his wife and three other children. A year later Marshall died, so the house and 120 acres of land went to Marshall’s widow and son.

But four years later, his youngest son James “Pink” Chaffin started having extraordinary dreams in which his father visited him and directed him to the location of a second, later will in which Chaffin senior left the property divided between his widow and the surviving children. The case went to court and, as you’d expect, the newspapers of the time went mad for the story.

The court found in Pink’s favour and, thanks to the publicity, the Society for Psychical Research (SPR) investigated, finally coming to the conclusion that Pink had indeed been visited by his father’s ghost. Pink himself never wavered from this explanation, stating: “I was fully convinced that my father’s spirit had visited me for the purpose of explaining some mistake.”

Unlikely as it might seem in the cold light of day, ghosts and hauntings are a mainstream area of belief. Recent studies by YouGov in the UK and the USA show that between 30% and 50% of the population says they believe in ghosts. Belief in ghosts also appears to be global, with most (if not all) cultures around the world having some widely accepted kind of ghosts.

The existence of a ghost as an incorporeal (bodyless) soul or spirit of a dead person or animal is contrary to the laws of nature as we understand them, so it seems there is something here that calls for explanation. We can look at the worlds of literature, philosophy and anthropology for some of the reasons why people are so keen to believe.

Blithe (and vengeful) spirits

The desire for justice and the belief in some form of supernatural protection (which we see in most major religions) address basic human needs. Ghosts have long been thought of as vehicles for justice. Shakespeare’s Hamlet is visited by the ghost of his murdered father seeking revenge on his murderer. In Macbeth, meanwhile, the murdered Banquo points an accusing finger at the man responsible for his death.

Unwelcome guest: Banquo’s ghost from Shakespeare’s Macbeth. Painting by Théodore Chassériau.

This idea has its equivalents today in various countries. In Kenya, a murdered person may become an ngoma, a spirit who pursues their murderer, sometimes causing them to give themself up to the police. Or in Russia the rusalka is the spirit of a dead woman who died by drowning and now lures men to their death. She may be released when her death is avenged.

Ghosts can also be friends and protectors. In Charles Dickens’s A Christmas Carol, Ebenezer Scrooge is helped by the ghosts of Christmas Past, Present and Future to mend his hard-hearted ways before it’s too late. In The Sixth Sense (spoiler alert), the ghost character played by Bruce Willis helps a young boy to come to terms with his ability to see ghosts and to help them find peace. Many people are comforted by thinking that their deceased loved ones are watching over them and perhaps guiding them.

But many people also like to believe that death is not the end of existence – it’s a comfort when we lose people we love or when we face the idea of our own mortality. Many cultures around the world have had beliefs that the dead can communicate with the living, and the phenomenon of spiritualism supposes that we can communicate with the spirits of the dead, often through the services of specially talented spirit mediums.

And we love to be scared, as long as we know we aren’t actually in danger. Halloween TV schedules are full of films where a group of (usually young) volunteers spends a night in a haunted house (with gory results). We seem to enjoy the illusion of danger and ghost stories can offer this kind of thrill.

Body and soul

Belief in ghosts finds support in the longstanding philosophical idea that humans are naïve dualists, naturally believing that our physical being is separate from our consciousness. This view of ourselves makes it easy for us to entertain the idea that our mind could have an existence separate from our body – opening the door to believing that our mind or consciousness could survive death, and so perhaps become a ghost.

Looking at how the brain works, the experience of hallucinations is a lot more common than many people realise. The SPR, founded in 1882, collected thousands of verified first-hand reports of visual or auditory hallucinations of a recently deceased person. More recent research suggests that a majority of elderly bereaved people may experience visual or auditory hallucinations of their departed loved ones that persist for a few months.

Another source of hallucinations is the phenomenon of sleep paralysis, which may be experienced when falling asleep or waking up. This temporary paralysis is sometimes accompanied by the hallucination of a figure in the room that could be interpreted as a supernatural being. The idea that this could be a supernatural visitation is easier to understand when you think that when we believe in a phenomenon, we are more likely to experience it.

Consider what might happen if you were in a reputedly haunted house at night and you saw something moving in the corner of your eye. If you believe in ghosts, you might interpret what you saw as a ghost. This is an example of top-down perception in which what we see is influenced by what we expect to see. And, in the dark, where it might be difficult to see properly, our brain makes the best inference it can, which will depend on what we think is likely – and that could be a ghost.

According to the Dutch philosopher Baruch Spinoza, belief comes quickly and naturally, whereas scepticism is slow and unnatural. In a study of neural activity, Harris and colleagues discovered that believing a statement requires less effort than disbelieving it.

Given these multiple reasons for us to believe in ghosts, it seems that the belief is likely to be with us for many years to come.

Comment There is a bit of a bourgeois sneer about this article. How is it that she and so many others can ridicule people’s belief in ghosts, yet we have to listen to Islam intruding self-righteously into our culture but must not dismiss its beliefs or offend its followers in the way this writer dismisses believers in ghosts? That’s liberals for you. A true scientist, as opposed to the sort of SAGE technicians about to do even more social and economic damage, keeps an open mind. R.J Cook

REVEALED: The scientific PROOF that shows reincarnation is REAL

WHILE many scientists will dismiss the notion of reincarnation as a myth, there are some credible experts out there who believe that it is a genuine phenomenon.

Dr Ian Stevenson, former Professor of Psychiatry at the University of Virginia School of Medicine and former chair of the Department of Psychiatry and Neurology, dedicated the majority of his career to finding evidence of reincarnation, until his death in 2007.

Dr Stevenson claimed to have found over 3,000 examples of reincarnation during his career, which he shared with the scientific community.

In a study titled ‘Birthmarks and Birth Defects Corresponding to Wounds on Deceased Persons’, Dr Stevenson used facial recognition to analyse similarities between the claimant and their alleged prior incarnation, while also studying birthmarks.

He wrote in his study: “About 35 per cent of children who claim to remember previous lives have birthmarks and/or birth defects that they (or adult informants) attribute to wounds on a person whose life the child remembers. The cases of 210 such children have been investigated. 

Are people ‘reborn’ once they die? (Image: GETTY)

“The birthmarks were usually areas of hairless, puckered skin; some were areas of little or no pigmentation (hypopigmented macules); others were areas of increased pigmentation (hyperpigmented nevi).

“The birth defects were nearly always of rare types. In cases in which a deceased person was identified the details of whose life unmistakably matched the child’s statements, a close correspondence was nearly always found between the birthmarks and/or birth defects on the child and the wounds on the deceased person. 

Have scientists found proof of reincarnation? (Image: GETTY)

“In 43 of 49 cases in which a medical document (usually a postmortem report) was obtained, it confirmed the correspondence between wounds and birthmarks (or birth defects).”

In a separate study, Dr Stevenson interviewed three children who claimed to remember aspects of their previous lives.

Many believe that the soul leaves the body after death (Image: GETTY)

The children made 30-40 statements each regarding memories that they themselves had not experienced, and through verification, he found that up to 92 per cent of the statements were correct.

In the article, published in Scientific Exploration, Dr Stevenson wrote: “It was possible in each case to find a family that had lost a member whose life corresponded to the subject’s statements.”

What happens after you die? That used to be just a religious question, but science is starting to weigh in. Sam Littlefair looks at the evidence that you’ve lived before.

James Huston (left) was the only pilot on the aircraft carrier Natoma Bay to die in the battle of Iwo Jima. More than five decades later, a two-year-old named James Leninger (right) talked about flying off a boat named “Natoma” near Iwo Jima and made drawings of fighter planes getting shot down.

On March 3, 1945, James Huston, a twenty-one-year-old U.S. Navy pilot, flew his final flight. He took off from the USS Natoma Bay, an aircraft carrier engaged in the battle of Iwo Jima. Huston was flying with a squadron of eight pilots, including his friend Jack Larsen, to strike a nearby Japanese transport vessel. Huston’s plane was shot in the nose and crashed in the ocean.

Fifty-three years later, in April of 1998, a couple from Louisiana named Bruce and Andrea Leninger gave birth to a boy. They named him James.

When he was twenty-two months old, James and his father visited a flight museum, and James discovered a fascination with planes—especially World War II aircraft, which he would stare at in awe. James got a video about a Navy flight squad, which he watched repeatedly for weeks.

One of James Leninger’s drawings of fighter planes.

Within two months, James started saying the phrase, “Airplane crash on fire,” including when he saw his father off on trips at the airport. He would slam his toy planes nose-first into the coffee table, ruining the surface with dozens of scratches.

James started having nightmares, first with screaming, and then with words like, “Airplane crash on fire! Little man can’t get out!”, while thrashing and kicking his legs.

Eventually, James talked to his parents about the crash. James said, “Before I was born, I was a pilot and my airplane got shot in the engine, and it crashed in the water, and that’s how I died.” James said that he flew off of a boat and his plane was shot by the Japanese. When his parents asked the name of the boat, he said “Natoma.”

When his parents asked James who “little man” was, he would say “James” or “me.” When his parents asked if he could remember anyone else, he offered the name “Jack Larsen.” When James was two and a half, he saw a photo of Iwo Jima in a book, and said “My plane got shot down there, Daddy.”

When James Leninger was eleven years old, Jim Tucker came to visit him and his family. Tucker, a psychiatrist from the University of Virginia, is one of the world’s leading researchers on the scientific study of reincarnation or rebirth. He spent two days interviewing the Leninger family, and says that James represents one of the strongest cases of seeming reincarnation that he has ever investigated.

“You’ve got this child with nightmares focusing on plane crashes, who says he was shot down by the Japanese, flew off a ship called ‘Natoma,’ had a friend there named Jack Larsen, his plane got hit in the engine, crashed in the water, quickly sank, and said he was killed at Iwo Jima. We have documentation for all of this,” says Tucker in an interview.

“It turns out there was one guy from the ship Natoma Bay who was killed during the Iwo Jima operations, and everything we have documented from James’ statements fits for this guy’s life.”

As a toddler, James Leninger was fascinated with airplanes and knew obscure details about WWII aircraft.

Jim Tucker grew up in North Carolina. He was a Southern Baptist, but when he started training in psychiatry, he left behind any religious or spiritual worldview. Years later, he read about the work of a psychiatrist named Ian Stevenson in the local paper.

Stevenson was a well-respected academic who left his position as chair of psychiatry at the University of Virginia in the 1960s to undertake a full-time study of reincarnation. Though his papers never got published in any mainstream scientific journals, he received appreciative reviews in respected publications like The Journal of the American Medical Association, The American Journal of Psychology, and The Lancet. Before his death in 2007, Stevenson handed over much of his work to Tucker at the University of Virginia’s Division of Perceptual Studies.

The first step in researching the possibility of rebirth is the collection of reports of past life memories. Individually, any one report, like James Leninger’s, proves little. But when thousands of the cases are analyzed collectively, they can yield compelling evidence.

After decades of research, the Division of Perceptual Studies now houses 2,500 detailed records of children who have reported memories of past lives. Tucker has written two books summarizing the research, Life Before Life and Return to Life. In Life Before Life, Tucker writes, “The best explanation for the strongest cases is that memories, emotions, and even physical injuries can sometimes carry over from one life to the next.”

Children in rebirth cases generally start making statements about past lives between the ages of two and four and stop by the age of six or seven, the age when most children lose early childhood memories.

A typical case of Tucker’s starts with a communication from a parent whose child has described a past life. Parents often have no prior interest in reincarnation, and they get in touch with Tucker out of distress—their child is describing things there is no logical way they could have experienced. Tucker corresponds with the parents to find out more. If it sounds like a strong case, with the possibility of identifying a previous life, he proceeds.

‘This is not like what you might see on TV, where someone says they were Cleopatra. These kids are typically talking about being an ordinary person.’

When Tucker meets with a family, he interviews the parents, the child, and other potential informants. He fills out an eight-page registration form, and collects records, photographs, and evidence. Eventually, he codes more than two hundred variables for each case into a database.

In the best cases, the researcher meets the family before they’ve identified a suspected previous identity. If the researchers can identify the previous personality (PP) first, they have the opportunity to perform controlled tests.

In a recent case, Tucker met a family whose son remembered fighting in the jungle in the Vietnam War and getting killed in action. The boy gave a name for the PP. When the parents looked up the name, they found that it was a real person. Before doing any further research, they contacted Tucker.

Tucker did a controlled test with the boy, who was five. He showed him eight pairs of photos. In each pair, one photo was related to the soldier’s life and one was not—such as a photo of his high school and a photo of a high school he didn’t go to. For two of the pairs, the boy made no choice. In the remaining six, he chose correctly.

In another case, a seven-year-old girl named Nicole remembered living in a small town, on “C Street,” in the early 1900s. She remembered much of the town having been destroyed by a fire and often talked of wanting to go home. Through research, Tucker hypothesized Nicole was describing Virginia City, Nevada, a small mining town that was destroyed by fire in 1875, where the main road was “C Street.” Tucker traveled to Virginia City with Nicole and her mother. As they drove down the road into the town, Nicole remarked, “They didn’t have these black roads when I lived here before.”

Nicole had described strange memories of her previous life. She said there were trees floating in the water. She said horses walked down the streets. And she talked about a “hooley dance.” In the town, they discovered that there had once been a massive network of river flumes used to transport logs to the town to construct nearby mineshafts. They discovered that wild horses wandered through the streets of the town. And that a “hooley” is a type of Irish dance that was popular there.

“We weren’t able to identify a specific individual,” says Tucker. “But there are parts of the case that are hard to dismiss.”

As her plane was lifting off from Nevada, Nicole burst into tears. “I don’t want to leave here,” she said.

Her mother asked if she really believed Virginia City was her home. “No,” said Nicole. “I know it was.”

Researcher Jim Tucker, author of “Life Before Life,” who has collected hundreds of cases of children claiming past-life memories.

Tucker is trying to investigate scientifically a question that has traditionally been the province of religion: what happens after we die? Two of the world’s largest religions, Hinduism and Buddhism, argue that we are reborn.

Certain schools of Buddhism don’t particularly concern themselves with the idea of rebirth, and some modern analysts argue that the Buddha taught it simply as a matter of convenience because it was the accepted belief in the India of his time. Most Buddhists, however, see it as central to the teachings on the suffering of samsara—the wheel of cyclic existence—and nirvana, the state of enlightenment in which one is free from the karma that drives rebirth (although one may still choose to be reborn in order to follow the bodhisattva path of compassion).

Buddhists generally prefer the term “rebirth” to “reincarnation” to differentiate between the Hindu and Buddhist views. The concept of reincarnation generally refers to the transmigration of an atman, or soul, from lifetime to lifetime. This is the Hindu view, and it is how reincarnation is generally understood in the West.

Instead, Buddhism teaches the doctrine of anatman, or non-self, which says there is no permanent, unchanging entity such as a soul. In reality, we are an ever-changing collection of consciousnesses, feelings, perceptions, and impulses that we struggle to hold together to maintain the illusion of a self.

See also: The Buddhist Teachings on Rebirth

In the Buddhist view, the momentum, or “karma,” of this illusory self is carried forward from moment to moment—and from lifetime to lifetime. But it’s not really “you” that is reborn. It’s just the illusion of “you.” When asked what gets reborn, Buddhist teacher Chögyam Trungpa Rinpoche reportedly said, “Your bad habits.”

For Jim Tucker, though, the spiritual connections—Buddhist or otherwise—are incidental. “I’m purely investigating what the facts show,” he says, “as opposed to how much they may agree or disagree with particular belief systems.”

Rebirth is just one component of a theory of consciousness that Tucker is working on. “The mainstream materialist position is that consciousness is produced by the brain, this meat,” he says. “So consciousness is what some people would call an epiphenomenon, a byproduct.”

He sees it the other way around: our minds don’t exist in the world; the world exists in our minds. Tucker describes waking reality as like a “shared dream,” and when we die, we don’t go to another place. We go into another dream.

Tucker’s dream model parallels some key Buddhist concepts. In Buddhism, reality is described as illusion, often compared to a sleeping dream. In Siddhartha Gautama’s final realization, he reportedly saw the truth of rebirth and recalled all of his past lives. Later that night, he attained enlightenment, exited the cycle of death and rebirth, and earned the title of “Buddha”—which literally means “one who is awake.”

In fact, according to scripture, the Buddha met most—if not all—of Jim Tucker’s six criteria for a proven case of rebirth. Maybe he would have made an interesting case.

Memory is only one phenomenon associated with past lives, and memories alone are not enough to make a case.

In order to proceed with an investigation, Tucker’s team requires that a case meet at least two out of six criteria:

  1. a specific prediction of rebirth, as in the Tibetan Buddhist tulku system
  2. a family member (usually the mother) dreaming about the previous personality (PP) coming
  3. birthmarks or birth defects that seem to correspond to the previous life
  4. corroborated statements about a previous life
  5. recognitions by the child of persons or objects connected to the PP
  6. unusual behaviors connected to the PP

Tucker’s research suggests that, if rebirth is real, much more than memories pass from one life to the next. Many children have behaviors and emotions that seem closely related to their previous life.

Emotionality is a signal of a strong case. The more emotion a child shows when recalling a past life, the stronger their case tends to be. When children start talking about past-life memories, they’re often impassioned. Sometimes they demand to be taken to their “other” family. When talking about their past life, the child might talk in the first person, confuse past and present, and get upset. Sometimes they try to run away.

In one case, a boy named Joey talked about his “other mother” dying in a car accident. Tucker recounts the following scene in Life Before Life: “One night at dinner when he was almost four years old, he stood up in his chair and appeared pale as he looked intently at his mother and said, ‘You are not my family—my family is dead.’ He cried quietly for a minute as a tear rolled down his cheek, then he sat back down and continued with his meal.”

In another unsettling case, a British boy recalled the life of a German WWII pilot. At age two, he started talking about crashing his plane while on a bombing mission over England. When he learned to draw, he drew swastikas and eagles. He goose-stepped and did the Nazi salute. He wanted to live in Germany, and had an unusual taste for sausages and thick soups.

In some cases, these emotions manifest in symptoms that look like post-traumatic stress disorder, but without any obvious trauma in this life. Some of these children engage in “post-traumatic play” in which they act out their trauma—often the way the PP died—with toys. One boy repeatedly acted out his PP’s suicide, pretending a stick was a rifle and putting it under his chin. In the cases where the PP died unnaturally, more than a third of the children had phobias related to the mode of death. Among the children whose PP died by drowning, a majority were afraid of water.

Positive attributes can also carry over, seemingly. In almost one in ten cases, parents report that their child has an unusual skill related to the previous life. Some of those cases involve “xenoglossy,” the ability to speak or write a language one couldn’t have learned by normal means.

A two-year-old boy named Hunter remembered verifiable details from a past life as a famous pro golfer. Hunter took toy golf clubs everywhere he went. He started golf lessons three years ahead of the minimum age, and his instructors called him a prodigy. By age seven, he had competed in fifty junior tournaments, winning forty-one.

In many cases, the child is born with marks or birth defects that seem connected to wounds on the PP’s body. Ian Stevenson reported two hundred such cases in his monograph Reincarnation and Biology, including several in which a child who remembered having been shot had a small, round birthmark (matching a bullet entrance wound) and, on the other side of their body, a larger, irregularly-shaped birthmark (matching a bullet exit wound).

Ryan, a boy from Oklahoma (left, with his mother and Jim Tucker) remembered many details about a life as a Hollywood movie extra and agent. After investigation, the details matched the life of a man named Marty Martyn (right), who died in 1964.

Patrick, born in the Midwest in 1992, had several notable birthmarks. His older half-brother, Kevin, had died of cancer twelve years before Patrick was born. Kevin was in good health until he was sixteen months old, when he developed a limp, caused by cancer. He was admitted to the hospital, and doctors saw that he had a swelling above his right ear, also caused by the disease. His left eye was protruding and bleeding slightly, and he eventually went blind in that eye. The doctors gave him fluid through an IV inserted in the right side of his neck. He died within a few months.

Soon after Patrick’s birth, his mother noticed a dark line on his neck, exactly where Kevin’s IV had been; an opacity in his left eye, where Kevin’s eye had protruded; and a bump above his right ear, where Kevin had swelling. When he began to walk, Patrick had a limp, like his brother. At age four, he began recalling memories from Kevin’s life and saying he had been Kevin.

See also: A comparison of modern reincarnation research and the traditional Buddhist views

Tucker says there is no perfect case, but, collectively, the research becomes hard to rationalize with normal explanations. Research has demonstrated that children in past-life memory cases are no more prone to fantasies, suggestions, or dissociation than other children. Regarding coincidence, statisticians have declined to do statistical analyses because the cases involve too many complex factors, but one statistician commented, “phrases like ‘highly improbable’ and ‘extremely rare’ come to mind.” For the stronger cases, the most feasible explanation is elaborate fraud, but it’s hard to see any reason why a family would make up such a story. Often, reincarnation actually violates their belief system, and many families remain anonymous, anyway.

“This is not like what you might see on TV, where someone says they were Cleopatra,” says Tucker. “These kids are typically talking about being an ordinary person. The child has numerous memories of a nondescript life.”

Tucker quotes the late astronomer and science writer Carl Sagan, whose last book was The Demon-Haunted World, a repudiation of pseudoscience and a classic work on skepticism. In it, Sagan wrote that there were three paranormal phenomena he believed deserve serious study, the third being “that young children sometimes report details of a previous life, which upon checking turn out to be accurate and which they could not have known about in any other way than reincarnation.”

Sagan didn’t say he believed in reincarnation, but he felt the research had yielded enough evidence that it deserved further study. Until an idea is disproven, Sagan said, it’s critical that we engage with the idea with openness and ruthless scrutiny. “This,” he wrote, “is how deep truths are winnowed from deep nonsense.”

About Sam Littlefair

Sam Littlefair is the former editor of LionsRoar.com. He has also written for The Coast, Mindful, and Atlantic Books Today. Find him on Twitter, @samlfair, and Facebook, @samlfair.

What Is Multi-Dimensional Space? October 26th 2020

By Alan Rankin | Last Modified Date: October 23, 2020

Humans experience day-to-day reality in four dimensions: the three physical dimensions and time. According to Albert Einstein’s theory of relativity, time is actually the fourth physical dimension, with measurable characteristics similar to the other three. An ongoing field of study in physics is the attempt to explain both relativity and quantum theory, which governs reality at very small scales. Several proposals in this field suggest the existence of multi-dimensional space. In other words, there may be additional physical dimensions that humans cannot perceive.

Tesseracts visually represent the four dimensions, including time.

The science surrounding multi-dimensional space is so mind-boggling that even the physicists who study it do not fully understand it. It may be helpful to start with the three observable dimensions, which correspond to the height, width, and length of a physical object. Einstein, in his work on general relativity in the early 20th century, demonstrated that time is also a physical dimension. This is observable only in extreme conditions; for example, the immense gravity of a planetary body can actually slow down time in its near vicinity. The new model of the universe created by this theory is known as space-time.

In theory, gravity from a massive object bends space-time around it.

Since Einstein’s era, scientists have discovered many of the universe’s secrets, but not nearly all. A major field of study, quantum mechanics, is devoted to learning about the smallest particles of matter and how they interact. These particles behave in a very different manner than the matter of observable reality. Physicist John Wheeler is reported to have said, “If you are not completely confused by quantum mechanics, you do not understand it.” It has been suggested that multi-dimensional space can explain the strange behavior of these elementary particles.

For much of the 20th and 21st centuries, physicists have tried to reconcile the discoveries of Einstein with those of quantum physics. It is believed that such a theory would explain much that is still unknown about the universe, including poorly understood forces such as gravity. One of the leading contenders for this theory is known variously as superstring theory, supersymmetry, or M-theory. This theory, while explaining many aspects of quantum mechanics, can only be correct if reality has 10, 11, or as many as 26 dimensions. Thus, many physicists believe multi-dimensional space is likely.

The extra dimensions of this multi-dimensional space would exist beyond the ability of humans to observe them. Some scientists suggest they are folded or curled into the observable three dimensions in such a way that they cannot be seen by ordinary methods. Scientists hope their effects can be documented by watching how elementary particles behave when they collide. Many experiments in the world’s particle accelerator laboratories, such as CERN in Europe, are conducted to search for this evidence. Other theories claim to reconcile relativity and quantum mechanics without requiring the existence of multi-dimensional space; which theory is correct remains to be seen.

Dreams Are The REAL World

~ admin

Arno Pienaar – Dreams are reality just as much as the real world is classified as reality. Dreams are your actual own reality and the real world is the creator’s reality. 

Dreams are by far the most intriguing aspect of existence for a human-being. Within them we behold experiences that the conscious mind may recollect, but for the most part, cannot make sense of. The only sense we can gain from them is the way they make us feel intuitively.


Subconscious Guiding Mechanism

The feeling is known to be the message carried over from the guiding mechanism of the sub-conscious mind.

The guidance we receive in our dreams comes, in fact, from our very selves, although the access we have to everything is only tapped into briefly, when the conscious mind is completely shut down in the sleeping state.

The subconscious tends to show us the things that dominate our consciousness whenever it has the chance and the onus is on us to sort out the way we live our lives in the primary waking state, which is where we embody programming that is keeping us out of our own paradise, fully conscious in the now.

Labels such as the astral plane, dream-scape or the fourth dimension, have served to make people believe that this dimension of reality is somehow not as real as the “real” world, or that the dream state is not as valid as the waking state.

This is one of the biggest lies ever as the dream state is in fact the only reality where you can tap into the unconscious side of yourself, which you otherwise cannot perceive, except during transcendental states under psychedelics or during disciplined meditational practices.

Dreams offer a vital glimpse into your dark side, the unconscious embedded programming which corrupts absolutely until light has shone on it.

The dream state shows us what we are unconsciously projecting as a reality and must be used to face the truth of what you have mistaken for reality.

A person with an eating disorder will, for sure, have plenty of dreams involving gluttony, a nympho will have many lustful encounters in the dream state, a narcissist will have audiences worshiping him, or himself worshiping himself, and someone filled with hatred will encounter scenes I wish not to elaborate on.

The patterns of your dreams, and especially recurring themes, are projections within your unconscious mind that govern the ultimate experience of your “waking state.”

I believe the new heaven and earth is the merging of heaven (dreams) and earth (matrix) into one conclusive experience.

Besides showing us what needs attention, dreams also transcend the rules and laws of matter, time and space.

The successful lucid dreamer gains an entire new heaven and earth, where even the absolutely impossible is possible.

For the one who gains access to everything through the dream state, the constraints of the so-called real world in the waking state become but a monkey on the back.

When you can fly, see and talk to anybody, go anywhere you choose, then returning to the world of matter, time and space, is arguably a nightmare.

Anybody with a sound mind would choose to exist beyond the limitations of the matrix-construct. There are many that already do.

The Real World vs. the Dream World

The greatest of sages have enlightened us that the REAL WORLD is indeed the illusion, maya, or manyan.

If what we have thought to be real is, in fact, the veil to fool us that we are weak, small and limited, then our dreams must be the real world and this experience, here, is just an aspect of ourselves that is in dire need of deprogramming from the jaws of hypnotic spell-casting.

There is actually no such thing as reality. There is also no such thing as the real world. What makes the “waking state” the real world and the “dream state” the unreal world?

People would argue that the matrix is a world in which our physical bodies are housed, and that we always return after sleep to continue our existence in the real world.

Morpheus exclaimed that the body cannot survive without the mind. What he meant was that the body is but a projection of the mind.

Have you ever had a dream that was interrupted unexpectedly, only to continue from where you had left off when you go back to sleep?

Do you have a sanctuary which you visit regularly in the dream state? A safe haven in your sub-conscious mind?

When we have the intent to return to any reality, we do so, as has been proven by fellow lucid dreamers.

What if I told you that this matrix-hive is a dream just like any other dream you have, and that billions of souls share this dream together?

Do you think these souls consciously chose to share this dream together? The answer is no, they were merely incepted by an idea that “this is the real world” from the very beings that summoned them into this plane through the sexual act.

Every night we have to re-energize ourselves by accessing the dream world (i.e. the actual world)/True Source of infinite Potential, which is the reservoir that refills us, only to return to give that energy to the dreamworld which we believe to be the real world. This “real world” only seems like the REAL WORLD because most of its inhabitants believe just that.

Pause and Continue

Just like we can pause a dream when interrupted and return to it, so can we pause the “real world”. Whether you believe it or not, we only return to the “waking reality” because we have forsaken ourselves for it and we expect to return to it on a daily basis.

We intend to always come back because we have such a large investment in an illusion, and this is our chain to the physical world. We are so attached to this dream that we even reincarnate to continue from where we left off, because this dream is able to trap us in limbo forever.

We have capitulated to it and, in so doing, gave it absolute power over us. We are in fact in a reality of another, not in our own. That is why we cannot manifest what we want in it, because it has claimed ownership over us here and while we are in it, we are subject to its rules, laws and limitations.

When one enters the dimension of another, one falls subject to its construct.

In the case of the Real World, the Real World has been hacked by a virus that affects all the beings that embrace the code of that matrix. It is like a spiderweb that traps souls.

As soon as we wake up in the morning, we start dreaming of the world we share together. As long as the mind machine is in power, it will always kick in again after waking up.

Whatever it is we believe, becomes activated by this dream once more, so to validate our contribution to this illusion, to which we have agreed.

The time is now to turn all of it back to the five elements, so that we can have our own reality again!

Hyperdimensionality is a Reality Identity Crisis

We are only hyper-dimensional beings because we are not in our own reality yet — we are in the middle of two realities fighting over us. It is time to come to terms with this identity crisis.

We cannot be forced to be in a dimension we choose not to partake in, a dimension that was made to fool you into believing it is the alpha & omega.

It is this very choice (rejecting the digital holographic program) that many are now making, which is breaking the mirror on the wall (destroying the illusion).

Deprogramming Souls of Matrix-Based Constructs is coming in 2016. The spiderweb will be disentangled and the laws of time, matter and space will be transcended in the Real World, once we will regain full consciousness in the NOW.

Original source Dreamcatcherreality.com

SF Source How To Exit The Matrix  May 2016

Physicists Say There’s a 90 Percent Chance Civilization Will Soon Collapse October 9th 2020

Final Countdown

If humanity continues down its current path, civilization as we know it is heading toward “irreversible collapse” in a matter of decades.

That’s according to research published in the journal Scientific Reports, which models out our future based on current rates of deforestation and other resource use. As Motherboard reports, even the rosiest projections in the research show a 90 percent chance of catastrophe.

Last Gasp

The paper, penned by physicists from the Alan Turing Institute and the University of Tarapacá, predicts that deforestation will claim the last forests on Earth in between 100 and 200 years. Coupled with global population changes and resource consumption, that’s bad news for humanity.

“Clearly it is unrealistic to imagine that the human society would start to be affected by the deforestation only when the last tree would be cut down,” reads the paper.

Coming Soon

In light of that, the duo predicts that society as we know it could end within 20 to 40 years.

In lighter news, Motherboard reports that the global rate of deforestation has actually decreased in recent years. But there’s still a net loss in forest overall — and newly-planted trees can’t protect the environment nearly as well as old-growth forest.

“Calculations show that, maintaining the actual rate of population growth and resource consumption, in particular forest consumption, we have a few decades left before an irreversible collapse of our civilization,” reads the paper.

READ MORE: Theoretical Physicists Say 90% Chance of Societal Collapse Within Several Decades [Motherboard]

More on societal collapse: Doomsday Report Author: Earth’s Leaders Have Failed

What is Binary, and Why Do Computers Use It?

Anthony Heddings (@anthonyheddings)
October 1, 2018, 6:40am EDT

Computers don’t understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits—1 and 0—which correspond to the on and off states your computer can understand. You’re probably familiar with base 10—the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

Counting in Binary

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on—doubling each time. Adding these all up gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Accounting for 0, this gives us 16 possible values for four binary bits. Move to 8 bits, and you have 256 possible values. This takes up a lot more space to represent, as four digits in decimal give us 10,000 possible values. It may seem like we’re going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but we’re held back by the hardware. And for some things, like logic processing, binary is better than decimal.
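To make the place-value arithmetic above concrete, here is a minimal Python sketch (the function name binary_to_decimal is my own, purely for illustration) that converts a binary string to decimal by summing doubling place values, and checks the result against Python's built-in conversion:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the place values: the rightmost digit is worth 1, and each
    digit to the left is worth twice as much as the one before it."""
    value = 0
    place = 1                # worth of the current (rightmost) digit
    for digit in reversed(bits):
        if digit == "1":
            value += place
        place *= 2           # next digit to the left is worth double
    return value

# The example from the text: 1111 in binary is 8 + 4 + 2 + 1 = 15.
assert binary_to_decimal("1111") == 15
# Four bits give 16 values (0-15); eight bits give 256 (0-255).
assert binary_to_decimal("11111111") == 255
# Python's built-in base-2 conversion agrees.
assert binary_to_decimal("1111") == int("1111", 2)
print(binary_to_decimal("1111"))  # 15
```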

There’s another base system that’s also used in programming: hexadecimal. Although computers don’t run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of hexadecimal can represent a whole byte, eight digits in binary. Hexadecimal uses 0-9 like decimal, and also the letters A through F to represent the additional six digits.
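As a small illustration of the point about hexadecimal, assuming nothing beyond standard Python string formatting, the snippet below prints the same one-byte value in binary, decimal, and hex, and confirms that two hex digits cover exactly eight binary digits:

```python
# One byte: 8 binary digits, 2 hexadecimal digits, values 0-255.
value = 0b10101111               # a binary literal

print(f"binary : {value:08b}")   # 10101111
print(f"decimal: {value}")       # 175
print(f"hex    : {value:02X}")   # AF  (A-F stand for 10-15)

# Every pair of hex digits maps to one byte, which is why programmers
# write addresses and raw memory in hex rather than long binary strings.
assert int("AF", 16) == int("10101111", 2) == 175
```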

So Why Do Computers Use Binary?

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control very precisely. It made more sense to only distinguish between an “on” state—represented by negative charge—and an “off” state—represented by a positive charge. For those unsure of why the “off” is represented by a positive charge, it’s because electrons have a negative charge—more electrons mean more current with a negative charge.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we’ve kept the same fundamental principles. Modern computers use what’s known as a transistor to perform calculations with binary; a common type is the field-effect transistor (FET).

Essentially, it only allows current to flow from the source to the drain if there is a voltage on the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small—all the way down to 5 nanometers, or about the size of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states (though that’s mostly due to their unreal molecular size, being subject to the weirdness of quantum mechanics).

But Why Only Base 2?

So you may be thinking, “why only 0 and 1? Couldn’t you just add another digit?” While some of it comes down to tradition in how computers are built, to add another digit would mean we’d have to distinguish between different levels of current—not just “off” and “on,” but also states like “on a little bit” and “on a lot.”

The problem here is if you wanted to use multiple levels of voltage, you’d need a way to easily perform calculations with them, and the hardware for that isn’t viable as a replacement for binary computing. It indeed does exist; it’s called a ternary computer, and it’s been around since the 1950s, but that’s pretty much where development on it stopped. Ternary logic is way more efficient than binary, but as of yet, nobody has an effective replacement for the binary transistor, or at the very least, no work’s been done on developing them at the same tiny scales as binary.

The reason we can’t use ternary logic comes down to the way transistors are stacked in a computer—something called “gates,” and how they’re used to perform math. Gates take two inputs, perform an operation on them, and return one output.

This brings us to the long answer: binary math is way easier for a computer than anything else. Boolean logic maps easily to binary systems, with True and False being represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them like AND, OR, XOR, and so on. Two inputs are easy to manage. If you were to graph the answers for each possible input, you would have what’s known as a truth table:

A binary truth table operating on boolean logic has four rows, one for each combination of the two inputs. But because each ternary input can take three values, a ternary truth table would have nine rows. While a binary system has 16 possible two-input operators (2^(2^2)), a ternary system would have 19,683 (3^(3^2)). Scaling becomes an issue because, while ternary is more efficient, it’s also exponentially more complex.
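A short sketch, using only the Python standard library, makes the counting concrete: it prints the four-row binary truth table for AND, OR and XOR, then counts how many distinct two-input operators a binary and a ternary system allow (the 16 and 19,683 figures above):

```python
from itertools import product

# Truth table for the fundamental two-input boolean operations.
print("A B | AND OR XOR")
for a, b in product([0, 1], repeat=2):
    print(f"{a} {b} |  {a & b}   {a | b}  {a ^ b}")

# An operator is any assignment of an output value to each row of its table.
binary_ops  = 2 ** (2 ** 2)   # 2 output choices for each of 4 rows  = 16
ternary_ops = 3 ** (3 ** 2)   # 3 output choices for each of 9 rows  = 19,683
print(binary_ops, ternary_ops)  # 16 19683
```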

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.

Image credits: spainter_vfx/Shutterstock, Wikipedia, Wikipedia, Wikipedia, WikipediaREAD NEXT

What is Binary, and Why Do Computers Use It?

Anthony Heddings@anthonyheddings
October 1, 2018, 6:40am EDT

Computers don’t understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits—1 and 0—which correspond to the on and off states your computer can understand. You’re probably familiar with base 10—the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

Counting in Binary

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on—doubling each time. Adding these all up gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Accounting for 0, this gives us 16 possible values for four binary bits. Move to 8 bits, and you have 256 possible values. Binary needs more digits to cover the same range: four decimal digits can represent 10,000 different values, while four bits can represent only 16. It may seem like we’re going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but that’s a constraint of the hardware. And for some things, like logic processing, binary is better than decimal.
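To make the doubling rule concrete, here is a minimal Python sketch (an illustration added for this archive, not part of the original article; the function name binary_to_decimal is just a placeholder):

    def binary_to_decimal(bits: str) -> int:
        # Sum place values that double from right to left:
        # the rightmost bit is worth 1, then 2, 4, 8, and so on.
        value = 0
        place = 1
        for bit in reversed(bits):
            if bit == "1":
                value += place
            place *= 2
        return value

    print(binary_to_decimal("1111"))      # 15, matching 8 + 4 + 2 + 1
    print(binary_to_decimal("11111111"))  # 255, the largest of the 256 values 8 bits can hold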

There’s another base system that’s also used in programming: hexadecimal. Although computers don’t run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of hexadecimal can represent a whole byte, eight digits in binary. Hexadecimal uses 0-9 like decimal, and also the letters A through F to represent the additional six digits.
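As a quick sketch of that byte-to-hex relationship (again an added illustration, relying only on Python’s built-in conversions):

    byte = 0b11011010             # one byte written as eight binary digits
    print(hex(byte))              # 0xda, the same byte as two hex digits
    print(format(byte, "08b"))    # 11011010, back to binary padded to 8 bits
    print(int("da", 16))          # 218, the same value read from its hex form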

So Why Do Computers Use Binary?

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control precisely. It made more sense to distinguish only between an “on” state (associated with negative charge) and an “off” state (associated with positive charge). The “on” state is the negative one because current is carried by electrons, which are negatively charged: more electrons flowing means more current.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we’ve kept the same fundamental principles. Modern computers use what’s known as a transistor to perform calculations with binary. Here’s a diagram of what a field-effect transistor (FET) looks like:

Essentially, it only allows current to flow from the source to the drain when a voltage is applied to the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small, all the way down to 5 nanometers, or about the width of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states, mostly because at such tiny scales they become subject to the weirdness of quantum mechanics.

But Why Only Base 2?

So you may be thinking, “why only 0 and 1? Couldn’t you just add another digit?” While some of it comes down to tradition in how computers are built, to add another digit would mean we’d have to distinguish between different levels of current—not just “off” and “on,” but also states like “on a little bit” and “on a lot.”

The problem is that if you wanted to use multiple levels of voltage, you’d need hardware that can easily perform calculations with them, and no such hardware has proved viable as a replacement for binary computing. Ternary computers do exist, and they have been around since the 1950s, but that’s pretty much where development stopped. Ternary logic packs more information into each digit than binary, but as of yet nobody has built an effective replacement for the binary transistor, or at the very least, no one has developed one at the same tiny scales as binary.

The reason we can’t use ternary logic comes down to the way transistors are combined in a computer into structures called “gates” and how those gates are used to perform math. Gates take two inputs, perform an operation on them, and return one output.

This brings us to the long answer: binary math is way easier for a computer than anything else. Boolean logic maps easily to binary systems, with True and False being represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them like AND, OR, XOR, and so on. Two inputs are easy to manage. If you were to graph the answers for each possible input, you would have what’s known as a truth table:
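The article’s original truth-table image is not reproduced here; as a stand-in, the table for the AND operation looks like this:

    Input A   Input B   A AND B
       0         0         0
       0         1         0
       1         0         0
       1         1         1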

A binary truth table operating on boolean logic has four rows, one for each possible combination of its two inputs, for every fundamental operation. Because each ternary input can take three values instead of two, a two-input ternary truth table has nine rows. And while a binary system has 16 possible two-input operators (2^(2^2)), a ternary system has 19,683 (3^(3^2)). Scaling becomes an issue: ternary may be more economical per digit, but it is also exponentially more complex.
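Those operator counts can be checked with a short Python sketch (an added illustration; count_two_input_gates is a hypothetical helper name, not anything from the article):

    from itertools import product

    def count_two_input_gates(base: int) -> int:
        # A two-input gate is one fully filled-in truth table: an output
        # digit (0..base-1) chosen for each of the base**2 input rows.
        rows = base ** 2
        return sum(1 for _ in product(range(base), repeat=rows))  # equals base ** (base ** 2)

    print(count_two_input_gates(2))  # 16 possible binary gates
    print(count_two_input_gates(3))  # 19683 possible ternary gates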

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.

Image credits: spainter_vfx/Shutterstock, Wikipedia, Wikipedia, Wikipedia, Wikipedia

Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon’s AWS platform. He’s written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.

If The Big Bang Wasn’t The Beginning, What Was It? Posted September 30th 2020

Ethan Siegel, Senior Contributor, Starts With A Bang. The Universe is out there, waiting for you to discover it.

The history of our expanding Universe in one illustrated image. Credit: Nicole Rager Fuller / National Science Foundation.

For more than 50 years, we’ve had definitive scientific evidence that our Universe, as we know it, began with the hot Big Bang. The Universe is expanding, cooling, and full of clumps (like planets, stars, and galaxies) today because it was smaller, hotter, denser, and more uniform in the past. If you extrapolate all the way back to the earliest moments possible, you can imagine that everything we see today was once concentrated into a single point: a singularity, which marks the birth of space and time itself.

At least, we thought that was the story: the Universe was born a finite amount of time ago, and started off with the Big Bang. Today, however, we know a whole lot more than we did back then, and the picture isn’t quite so clear. The Big Bang can no longer be described as the very beginning of the Universe that we know, and the hot Big Bang almost certainly doesn’t equate to the birth of space and time. So, if the Big Bang wasn’t truly the beginning, what was it? Here’s what the science tells us.

Looking back at the distant Universe with NASA’s Hubble Space Telescope. Credit: NASA, ESA, and A. Feild (STScI).

Our Universe, as we observe it today, almost certainly emerged from a hot, dense, almost-perfectly uniform state early on. In particular, there are four pieces of evidence that all point to this scenario:

  1. the Hubble expansion of the Universe, which shows that the amount that light from a distant object is redshifted is proportional to the distance to that object,
  2. the existence of a leftover glow — the Cosmic Microwave Background (CMB) — in all directions, with the same temperature everywhere just a few degrees above absolute zero,
  3. light elements — hydrogen, deuterium, helium-3, helium-4, and lithium-7 — that exist in a particular ratio of abundances back before any stars were formed,
  4. and a cosmic web of structure that gets denser and clumpier, with more space between larger and larger clumps, as time goes on.

These four facts (the Hubble expansion of the Universe, the existence and properties of the CMB, the abundance of the light elements from Big Bang nucleosynthesis, and the formation and growth of large-scale structure in the Universe) represent the four cornerstones of the Big Bang.

The cosmic microwave background and large-scale structure are two cosmological cornerstones. Credit: Chris Blake and Sam Moorfield.

Why are these the four cornerstones? In the 1920s, Edwin Hubble, using the largest, most powerful telescope in the world at the time, was able to measure how individual stars varied in brightness over time, even in galaxies beyond our own. That enabled us to know how far away the galaxies that housed those stars were. By combining that information with data about how significantly the atomic spectral lines from those galaxies were shifted, we could determine what the relationship was between distance and a spectral shift.

As it turned out, it was simple, straightforward, and linear: Hubble’s law. The farther away a galaxy was, the more significantly its light was redshifted, or shifted systematically towards longer wavelengths. In the context of General Relativity, that corresponds to a Universe whose very fabric is expanding with time. As time marches on, all points in the Universe that aren’t somehow bound together (either gravitationally or by some other force) will expand away from one another, causing any emitted light to be shifted towards longer wavelengths by the time the observer receives it.
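As a rough worked example (added here, not from the original article), Hubble’s law can be written v = H0 × d, and for nearby objects the redshift is approximately z ≈ v / c. The Python sketch below assumes a present-day expansion rate of roughly 70 km/s per megaparsec:

    H0 = 70.0         # assumed Hubble constant, km/s per megaparsec
    C = 299_792.458   # speed of light, km/s

    def approx_redshift(distance_mpc: float) -> float:
        # Hubble's law: v = H0 * d; for small velocities, z is roughly v / c.
        return (H0 * distance_mpc) / C

    print(approx_redshift(100))   # about 0.023 for a galaxy 100 megaparsecs away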

This simplified animation shows how light redshifts and how distances between unbound objects change over time in the expanding Universe. Credit: Rob Knop.

Although there are many possible explanations for the effect we observe as Hubble’s Law, the Big Bang is a unique idea among those possibilities. The idea is simple and straightforward, but also breathtaking in how powerful it is. It simply says this:

  • the Universe is expanding and stretching light to longer wavelengths (and lower energies and temperatures) today,
  • and that means, if we extrapolate backwards, the Universe was denser and hotter earlier on.
  • Because it’s been gravitating the whole time, the Universe gets clumpier and forms larger, more massive structures later on.
  • If we go back to early enough times, we’ll see that galaxies were smaller, more numerous, and made of intrinsically younger, bluer stars.
  • If we go back earlier still, we’ll find a time where no stars have had time to form.
  • Even earlier still, it was hot enough that light would have split even neutral atoms apart, creating an ionized plasma; that plasma finally “releases” its radiation when the Universe becomes neutral. (This is the origin of the CMB.)
  • And at even earlier times still, things were hot enough that even atomic nuclei would be blasted apart; transitioning to a cooler phase allows the first stable nuclear reactions, yielding the light elements, to proceed.
As the Universe cools, atomic nuclei form, followed by neutral atoms as it cools further. Credit: E. Siegel.

All of these claims, at some point during the 20th century, were validated and confirmed by observations. We’ve measured the clumpiness of the Universe, and found that it increases exactly as predicted as time goes on. We’ve measured how galaxies evolve with distance (and cosmic time), and found that the earlier, more distant ones are overall younger, bluer, more numerous, and smaller in size. We’ve discovered and measured the CMB, and not only does it spectacularly match the Big Bang’s predictions, but we’ve observed how its temperature changes (increases) at earlier times. And we’ve successfully measured the primordial abundances of the light elements, finding a spectacular agreement with the predictions of Big Bang nucleosynthesis.

We can extrapolate back even further if we like: beyond the limits of what our current technology has the capability to directly observe. We can imagine the Universe getting even denser, hotter, and more compact than it was when protons and neutrons were being blasted apart. If we stepped back even earlier, we’d see neutrinos and antineutrinos, which need about a light-year of solid lead to stop half of them, start to interact with electrons and other particles in the early Universe. Beginning in the mid-2010s, we were able to detect their imprint first on the photons of the CMB and, a few years later, on the large-scale structure that would later grow in the Universe.

The impact of neutrinos on large-scale structure features in the Universe. Credit: D. Baumann et al. (2019), Nature Physics.

That’s the earliest signal, thus far, we’ve ever detected from the hot Big Bang. But there’s nothing stopping us from running the clock back farther: all the way to the extremes. At some point:

  • it gets hot and dense enough that particle-antiparticle pairs get created out of pure energy, simply from quantum conservation laws and Einstein’s E = mc²,
  • the Universe gets denser than individual protons and neutrons, causing it to behave as a quark-gluon plasma rather than as individual nucleons,
  • the Universe gets even hotter, causing the electroweak force to unify, the Higgs symmetry to be restored, and for fundamental particles to lose their rest mass,

and then we go to energies that lie beyond the limits of known, tested physics, even from particle accelerators and cosmic rays. Some processes must occur under those conditions to reproduce the Universe we see. Something must have created dark matter. Something must have created more matter than antimatter in our Universe. And something must have happened, at some point, for the Universe to exist at all.

An illustration of the Big Bang, from an initially hot, dense state to our modern Universe. Credit: NASA / GSFC.

From the moment this extrapolation was first considered back in the 1920s — and then again in its more modern forms in the 1940s and 1960s — the thinking was that the Big Bang takes you all the way back to a singularity. In many ways, the big idea of the Big Bang was that if you have a Universe filled with matter and radiation, and it’s expanding today, then if you go far enough back in time, you’ll come to a state that’s so hot and so dense that the laws of physics themselves break down.

At some point, you achieve energies, densities, and temperatures that are so large that the quantum uncertainty inherent to nature leads to consequences that make no sense. Quantum fluctuations would routinely create black holes that encompass the entire Universe. Probabilities, if you try to compute them, give answers that are either negative or greater than 1: both physical impossibilities. We know that gravity and quantum physics don’t make sense at these extremes, and that’s what a singularity is: a place where the laws of physics are no longer useful. Under these extreme conditions, it’s possible that space and time themselves can emerge. This, originally, was the idea of the Big Bang: a birth to time and space themselves.

The Big Bang, from the earliest stages to modern-day galaxies. Credit: NASA / CXC / M. Weiss.

But all of that was based on the notion that we actually could extrapolate the Big Bang scenario as far back as we wanted: to arbitrarily high energies, temperatures, densities, and early times. As it turned out, that created a number of physical puzzles that defied explanation. Puzzles such as:

  • Why did causally disconnected regions of space — regions with insufficient time to exchange information, even at the speed of light — have identical temperatures to one another?
  • Why was the initial expansion rate of the Universe so perfectly balanced against the total amount of energy it contained (to more than 50 decimal places) that it delivers a “flat” Universe today?
  • And why, if we achieved these ultra-high temperatures and densities early on, don’t we see any leftover relic remnants from those times in our Universe today?

If you still want to invoke the Big Bang, the only answer you can give is, “well, the Universe must have been born that way, and there is no reason why.” But in physics, that’s akin to throwing up your hands in surrender. Instead, there’s another approach: to concoct a mechanism that could explain those observed properties, while reproducing all the successes of the Big Bang, and still making new predictions about phenomena we could observe that differ from the conventional Big Bang.

The three big puzzles that inflation solves: the horizon, flatness, and monopole problems. Credit: E. Siegel / Beyond The Galaxy.

About 40 years ago, that’s exactly the idea that was put forth: cosmic inflation. Instead of extrapolating the Big Bang all the way back to a singularity, inflation basically says that there’s a cutoff: you can go back to a certain high temperature and density, but no further. According to the big idea of cosmic inflation, this hot, dense, uniform state was preceded by a state where:

  • the Universe wasn’t filled with matter and radiation,
  • but instead possessed a large amount of energy intrinsic to the fabric of space itself,
  • which caused the Universe to expand exponentially (and at a constant, unchanging rate),
  • which drives the Universe to be flat, empty, and uniform (up to the scale of quantum fluctuations),
  • and then inflation ends, converting that intrinsic-to-space energy into matter and radiation,

and that’s where the hot Big Bang comes from. Not only did this solve the puzzles the Big Bang couldn’t explain, but it made multiple new predictions that have since been verified. There’s a lot we still don’t know about cosmic inflation, but the data that’s come in over the last three decades overwhelmingly supports the existence of this inflationary state, which preceded and set up the hot Big Bang.

How inflation and quantum fluctuations give rise to the Universe we observe today. Credit: E. Siegel, with images derived from ESA/Planck and the DOE/NASA/NSF Interagency Task Force on CMB Research.

All of this, taken together, is enough to tell us what the Big Bang is and what it isn’t. It is the notion that our Universe emerged from a hotter, denser, more uniform state in the distant past. It is not the idea that things got arbitrarily hot and dense until the laws of physics no longer applied.

It is the notion that, as the Universe expanded, cooled, and gravitated, we annihilated away our excess antimatter, formed protons and neutrons and light nuclei, atoms, and eventually, stars, galaxies, and the Universe we recognize today. It is no longer considered inevitable that space and time emerged from a singularity 13.8 billion years ago.

And it is a set of conditions that applies at very early times, but was preceded by a different set of conditions (inflation) that came before it. The Big Bang might not be the very beginning of the Universe itself, but it is the beginning of our Universe as we recognize it. It’s not “the” beginning, but it is “our” beginning. It may not be the entire story on its own, but it’s a vital part of the universal cosmic story that connects us all.

Ethan Siegel

I am a Ph.D. astrophysicist, author, and science communicator, who professes physics and astronomy at various colleges. I have won numerous awards for science writing…


Neuralink: 3 neuroscientists react to Elon Musk’s brain chip reveal September 17th 2020

With a pig-filled demonstration, Neuralink revealed its latest advancements in brain implants this week. But what do scientists think of Elon Musk’s company’s grand claims?
Mike Brown, 9.4.2020 8:00 AM. Image credit: Shutterstock.

What does the future look like for humans and machines? Elon Musk would argue that it involves wiring brains directly up to computers – but neuroscientists tell Inverse that’s easier said than done.

On August 28, Musk and his team unveiled the latest updates from secretive firm Neuralink with a demo featuring pigs implanted with their brain chip device. These chips are called Links, and they measure 0.9 inches wide by 0.3 inches tall. They connect to the brain via wires and provide a battery life of 12 hours per charge, after which the user would need to charge wirelessly again. During the demo, a screen showed the real-time spikes of neurons firing in the brain of one pig, Gertrude, as she snuffled around her pen during the event.


It was an event designed to show how far Neuralink has come in terms of making its science objectives reality. But how much of Musk’s ambitions for Links are still in the realm of science fiction?

Neuralink argues the chips will one day have medical applications, listing a whole manner of ailments that its chips could feasibly solve. Memory loss, depression, seizures, and brain damage were all suggested as conditions where a generalized brain device like the Link could help.

Ralph Adolphs, Bren Professor of Psychology, Neuroscience, and Biology at California Institute of Technology, tells Inverse Neuralink’s announcement was “tremendously exciting” and “a huge technical achievement.”

Neuralink is “a good example of technology outstripping our current ability to know how to use it,” Adolphs says. “The primary initial application will be for people who are ill and for clinical reasons it is justified to implant such a chip into their brain. It would be unethical to do so right now in a healthy person.”

“But who knows what the future holds?” he adds.

Adolphs says the chip is comparable to the natural processes that emerge through evolution. Currently, to interface between the brain and the world, humans use their hands and mouth. But to imagine just sitting and thinking about these actions is a lot harder, so a lot of the future work will need to focus on making this interface with the world feel more natural, Adolphs says.

Achieving that goal could be further out than the Neuralink demo suggested. John Krakauer, chief medical and scientific officer at MindMaze and professor of neurology at Johns Hopkins University, tells Inverse that, in his view, humanity is “still a long way away” from consumer-level linkups.

“Let me give a more specific concern: The device we saw was placed over a single sensorimotor area,” Krakauer says. “If we want to read thoughts rather than movements (assuming we knew their neural basis) where do we put it? How many will we need? How does one avoid having one’s scalp studded with them? No mention of any of this of course.”

While a brain linkup may get people “excited” because it “has echoes of Charles Xavier in the X-Men,” Krakauer argues that there’s plenty of potential non-invasive solutions to help people with the conditions Neuralink says its technology will treat.

These existing solutions don’t require invasive surgery, but Krakauer fears “the cool factor clouds critical thinking.”

But Elon Musk, Neuralink’s CEO, wants the Link to take humans far beyond new medical treatments.

The ultimate objective, according to Musk, is for Neuralink to help create a symbiotic relationship between humans and computers. Musk argues that Neuralink-like devices could help humanity keep up with super-fast machines. But Krakauer finds such an ambition troubling.

“I would like to see less unsubstantiated hype about a brain ‘Alexa’ and interfacing with A.I.,” Krakauer says. “The argument is if you can’t avoid the singularity, join it. I’m sorry but this angle is just ridiculous.”

Neuralink’s Link implant. Credit: Neuralink.

Even a general-purpose linkup could be much further away from development than it may seem. Musk told WaitButWhy in 2017 that a general-purpose linkup could be eight to 10 years away for people with no disability. That would place the timescale for roll-out somewhere around 2027 at the latest — seven years from now.

Kevin Tracey, a neurosurgery professor and president of the Feinstein Institutes for Medical Research, tells Inverse that he “can’t imagine” that any of the publicly suggested diseases could see a solution “sooner than 10 years.” Considering that Neuralink hopes to offer the device as a medical solution before it moves to more general-purpose implants, these notes of caution cast the company’s timeline into doubt.

But unlike Krakauer, Tracey argues that “we need more hype right now.” Not enough attention has been paid to this area of research, he says.

“In the United States for the last 20 years, the federal government’s investment supporting research hasn’t kept up with inflation,” Tracey says. “There’s been this idea that things are pretty good and we don’t have to spend so much money on research. That’s nonsense. COVID proved we need to raise enthusiasm and investment.”

Neuralink’s device is just one part of the brain linkup puzzle, Tracey explains. There are three fields at play: molecular medicine to make and find the targets, neuroscience to understand how the pathways control the target, and the devices themselves. Advances in each area can help the others. Neuralink may help map new pathways, for example, but it’s just one aspect of what needs to be done to make it work as planned.

Neuralink’s smaller chips may also help avoid issues with brain scarring seen with larger devices, Tracey says. And advancements in robots can also help with surgeries, an area Neuralink has detailed before.

But perhaps the biggest benefit from the announcement is making the field cool again.

“If and to the extent that a new, very cool device elevates the discussion on the neuroscience implications of new devices, and what do we need to get these things to the benefit of humanity through more science, that’s all good,” Tracey says.

How sleep helps us lose weight September 12th 2020

When it comes to weight loss, diet and exercise are usually thought of as the two key factors that will achieve results. However, sleep is an often-neglected lifestyle factor that also plays an important role.

The recommended sleep duration for adults is seven to nine hours a night, but many people often sleep for less than this. Research has shown that sleeping less than the recommended amount is linked to greater body fat and an increased risk of obesity, and it can also influence how easily you lose weight on a calorie-controlled diet.

Typically, the goal of weight loss is to decrease body fat while retaining as much muscle mass as possible. Not getting the right amount of sleep can affect how much fat is lost, as well as how much muscle mass you retain, while on a calorie-restricted diet.

One study found that sleeping 5.5 hours each night over a two-week period while on a calorie-restricted diet resulted in less fat loss when compared to sleeping 8.5 hours each night. But it also resulted in a greater loss of fat-free mass (including muscle).

Another study has shown similar results over an eight-week period when sleep was reduced by only one hour each night for five nights of the week. These results showed that even catch-up sleep at the weekend may not be enough to reverse the negative effects of sleep deprivation while on a calorie-controlled diet.

Metabolism, appetite, and sleep

There are several reasons why shorter sleep may be associated with higher body weight and affect weight loss. These include changes in metabolism, appetite and food selection.

Sleep influences two important appetite hormones in our body – leptin and ghrelin. Leptin is a hormone that decreases appetite, so when leptin levels are high we usually feel fuller. On the other hand, ghrelin is a hormone that can stimulate appetite, and is often referred to as the “hunger hormone” because it’s thought to be responsible for the feeling of hunger.

One study found that sleep restriction increases levels of ghrelin and decreases leptin. Another study, which included a sample of 1,024 adults, also found that short sleep was associated with higher levels of ghrelin and lower levels of leptin. This combination could increase a person’s appetite, making calorie-restriction more difficult to adhere to, and may make a person more likely to overeat.

Consequently, increased food intake due to changes in appetite hormones may result in weight gain. This means that, in the long term, sleep deprivation may lead to weight gain due to these changes in appetite. So getting a good night’s sleep should be prioritised.

Along with changes in appetite hormones, reduced sleep has also been shown to impact on food selection and the way the brain perceives food. Researchers have found that the areas of the brain responsible for reward are more active in response to food after sleep loss (six nights of only four hours’ sleep) when compared to people who had good sleep (six nights of nine hours’ sleep).

This could possibly explain why sleep-deprived people snack more often and tend to choose carbohydrate-rich foods and sweet-tasting snacks, compared to those who get enough sleep.

Sleep deprivation may make you eat more unhealthy food during the day. Credit: Flotsam/Shutterstock.

Sleep duration also influences metabolism, particularly glucose (sugar) metabolism. When food is eaten, our bodies release insulin, a hormone that helps to process the glucose in our blood. However, sleep loss can impair our bodies’ response to insulin, reducing its ability to uptake glucose. We may be able to recover from the occasional night of sleep loss, but in the long term this could lead to health conditions such as obesity and type 2 diabetes.

Our own research has shown that a single night of sleep restriction (only four hours’ sleep) is enough to impair the insulin response to glucose intake in healthy young men. Given that sleep-deprived people already tend to choose foods high in glucose due to increased appetite and reward-seeking behaviour, the impaired ability to process glucose can make things worse.

An excess of glucose (both from increased intake and a reduced ability to uptake into the tissues) could be converted to fatty acids and stored as fat. Collectively, this can accumulate over the long term, leading to weight gain.

However, physical activity may show promise as a countermeasure against the detrimental impact of poor sleep. Exercise has a positive impact on appetite, by reducing ghrelin levels and increasing levels of peptide YY, a hormone that is released from the gut, and is associated with the feeling of being satisfied and full.

After exercise, people tend to eat less, particularly when the energy expended by exercise is taken into account. However, it’s unknown whether this effect persists in the context of sleep restriction.

Research has also shown that exercise training may protect against the metabolic impairments that result from a lack of sleep, by improving the body’s response to insulin, leading to improved glucose control.

We have also shown the potential benefits of just a single session of exercise on glucose metabolism after sleep restriction. While this shows promise, studies are yet to determine the role of long-term physical activity in people with poor sleep.

It’s clear that sleep is important for losing weight. A lack of sleep can increase appetite by changing hormones, make us more likely to eat unhealthy foods, and influence how body fat is lost while counting our calories. Sleep should therefore be considered an essential part of a healthy lifestyle, alongside diet and physical activity.

Elon Musk Says Settlers Will Likely Die on Mars. He’s Right.

But is that such a bad thing?

Mars or Milton Keynes, what’s the difference?

By Caroline Delbert 

Sep 2, 2020

Earlier this week, Elon Musk said there’s a “good chance” settlers in the first Mars missions will die. And while that’s easy to imagine, he and others are working hard to plan and minimize the risk of death by hardship or accident. In fact, the goal is to have people comfortably die on Mars after a long life of work and play that, we hope, looks at least a little like life on Earth.

Let’s explore it together.

There are already major structural questions about how humans will settle on Mars. How will we aim Musk’s planned hundreds of Starships at Mars during the right times for the shortest, safest trips? How will a spaceship turn into something that safely lands on the planet’s surface? How will astronauts reasonably survive a yearlong trip in cramped, close quarters where maximum possible volume is allotted to supplies?

And all of that is before anyone even touches the surface.

Then there are logistical reasons to talk about potential Mars settlers in, well, actuarial terms. First, the trip itself will take a year based on current estimates, and applicants to settlement programs are told to expect this trip to be one way.

It follows, statistically, that there’s an almost certain “chance” these settlers will die on Mars, because their lives will continue there until they naturally end. Musk is referring to accidental death in tough conditions, but people are likely to stay on Mars regardless.

When Mars One opened applications in 2013, people flocked to audition to die on Mars after a one-way trip and a lifetime of settlement. As chemist and applicant Taylor Rose Nations said in a 2014 podcast episode:

“If I can go to Mars and be a human guinea pig, I’m willing to sort of donate my body to science. I feel like it’s worth it for me personally, and it’s kind of a selfish thing, but just to turn around and look and see Earth. That’s a lifelong total dream.”

Musk said in a conference Monday that building reusable rocket technology and robust, “complex life support” are his major priorities, based on his long-term goals of settling humans on Mars. Musk has successfully transported astronauts to the International Space Station (ISS), where NASA and global space administrations already have long-term life support technology in place. But that’s not the same as, for example, NASA’s advanced life support projects:

“Advanced life support (ALS) technologies required for future human missions include improved physico-chemical technologies for atmosphere revitalization, water recovery, and waste processing/resource recovery; biological processors for food production; and systems modeling, analysis, and controls associated with integrated subsystems operations.”

In other words, while the ISS already performs many of these functions, like water recovery, people on the moon (for NASA) or Mars (for Musk’s SpaceX) will require long-term life support for the same group of people, not a group that rotates every few months with frequent short trips from Earth.

And if the Mars colony plans to endure and put down roots, that means having food, shelter, medical care, and mental and emotional stimulation for the entire population.

There must be redundancies and ways to repair everything. Researchers favor 3D printers and chemical processes such as ligand bonding as they plan these hypothetical missions, because it’s more prudent to send raw materials that can be turned into 100 different things or 50 different medicines. The right chemical processes can recycle discarded items into fertilizer molecules.

“Good chance you’ll die, it’s going to be tough going,” Musk said, “but it will be pretty glorious if it works out.”

David Bohm, Quantum Mechanics and Enlightenment

The visionary physicist, whose ideas remain influential, sought spiritual as well as scientific illumination. September 8th 2020

Scientific American

  • John Horgan

Theoretical physicist Dr. David J. Bohm at a 1971 symposium in London. Photo by Keystone.

Some scientists seek to clarify reality, others to mystify it. David Bohm seemed driven by both impulses. He is renowned for promoting a sensible (according to Einstein and other experts) interpretation of quantum mechanics. But Bohm also asserted that science can never fully explain the world, and his 1980 book Wholeness and the Implicate Order delved into spirituality. Bohm’s interpretation of quantum mechanics has attracted increasing attention lately. He is a hero of Adam Becker’s 2018 book What Is Real? The Unfinished Quest for the Meaning of Quantum Mechanics (reviewed by James Gleick, David Albert and Peter Woit). In The End of Science I tried to make sense of this paradoxical truth-seeker, who died in 1992 at the age of 74. Below is an edited version of that profile. See also my post on another quantum visionary, John Wheeler. –John Horgan

In August 1992 I visited David Bohm at his home in a London suburb. His skin was alarmingly pale, especially in contrast to his purplish lips and dark, wiry hair. His frame, sinking into a large armchair, seemed limp, languorous, and at the same time suffused with nervous energy. One hand cupped the top of his head, the other gripped an armrest. His fingers, long and blue-veined, with tapered, yellow nails, were splayed. He was recovering, he said, from a heart attack.

Bohm’s wife brought us tea and biscuits and vanished. Bohm spoke haltingly at first, but gradually the words came faster, in a low, urgent monotone. His mouth was apparently dry, because he kept smacking his lips. Occasionally, after making an observation that amused him, he pulled his lips back from his teeth in a semblance of a smile. He also had the disconcerting habit of pausing every few sentences and saying, “Is that clear?” or simply, “Hmmm?” I was often so hopelessly befuddled that I just smiled and nodded. But Bohm could be bracingly clear, too. Like an exotic subatomic particle, he oscillated in and out of focus.

Born and raised in the U.S., Bohm left in 1951, at the height of anti-communist hysteria, after refusing to answer questions from a Congressional committee about whether he or anyone he knew was a communist. After stays in Brazil and Israel, he settled in England. Bohm was a scientific dissident too. He rebelled against the dominant interpretation of quantum mechanics, the so-called Copenhagen interpretation promulgated by Danish physicist Niels Bohr.

Bohm began questioning the Copenhagen interpretation in the late 1940s while writing a book on quantum mechanics. According to the Copenhagen interpretation, a quantum entity such as an electron has no definite existence apart from our observation of it. We cannot say with certainty whether it is either a wave or a particle. The interpretation also rejects the possibility that the seemingly probabilistic behavior of quantum systems stems from underlying, deterministic mechanisms.

Bohm found this view unacceptable. “The whole idea of science so far has been to say that underlying the phenomenon is some reality which explains things,” he explained. “It was not that Bohr denied reality, but he said quantum mechanics implied there was nothing more that could be said about it.” Such a view reduced quantum mechanics to “a system of formulas that we use to make predictions or to control things technologically. I said that’s not enough. I don’t think I would be very interested in science if that were all there was.”

In 1952 Bohm proposed that particles are indeed particles–and at all times, not just when they are observed in a certain way. Their behavior is determined by a force that Bohm called the “pilot wave.” Any effort to observe a particle alters its behavior by disturbing the pilot wave. Bohm thus gave the uncertainty principle a purely physical rather than metaphysical meaning. Niels Bohr had interpreted the uncertainty principle as meaning “not that there is uncertainty, but that there is an inherent ambiguity” in a quantum system, Bohm explained.

Bohm’s interpretation gets rid of one quantum paradox, wave/particle duality, but it preserves and even highlights another, nonlocality, the capacity of one particle to influence another instantaneously across vast distances. Einstein had drawn attention to nonlocality in 1935 in an effort to show that quantum mechanics must be flawed. Together with Boris Podolsky and Nathan Rosen, Einstein proposed a thought experiment involving two particles that spring from a common source and fly in opposite directions.

According to the standard model of quantum mechanics, neither particle has fixed properties, such as momentum, before it is measured. But by measuring one particle’s momentum, the physicist instantaneously forces the other particle, no matter how distant, to assume a fixed momentum. Deriding this effect as “spooky action at a distance,” Einstein argued that quantum mechanics must be flawed or incomplete. But in the early 1980s French physicists demonstrated spooky action in a laboratory. Bohm never had any doubts about the experiment’s outcome. “It would have been a terrific surprise to find out otherwise,” he said.

But here is the paradox of Bohm: Although he tried to make the world more sensible with his pilot-wave model, he also argued that complete clarity is impossible. He reached this conclusion after seeing an experiment on television, in which a drop of ink was squeezed onto a cylinder of glycerine. When the cylinder was rotated, the ink diffused through the glycerine in an apparently irreversible fashion. Its order seemed to have disintegrated. But when the direction of rotation was reversed, the ink gathered into a drop again.

The experiment inspired Bohm to write Wholeness and the Implicate Order, published in 1980. He proposed that underlying physical appearances, the “explicate order,” there is a deeper, hidden “implicate order.” Applying this concept to the quantum realm, Bohm proposed that the implicate order is a field consisting of an infinite number of fluctuating pilot waves. The overlapping of these waves generates what appears to us as particles, which constitute the explicate order. Even space and time might be manifestations of a deeper, implicate order, according to Bohm.

To plumb the implicate order, Bohm said, physicists might need to jettison basic assumptions about nature. During the Enlightenment, thinkers such as Newton and Descartes replaced the ancients’ organic concept of order with a mechanistic view. Even after the advent of relativity and quantum mechanics, “the basic idea is still the same,” Bohm told me, “a mechanical order described by coordinates.”

Bohm hoped scientists would eventually move beyond mechanistic and even mathematical paradigms. “We have an assumption now that’s getting stronger and stronger that mathematics is the only way to deal with reality,” Bohm said. “Because it’s worked so well for a while, we’ve assumed that it has to be that way.”

Someday, science and art will merge, Bohm predicted. “This division of art and science is temporary,” he observed. “It didn’t exist in the past, and there’s no reason why it should go on in the future.” Just as art consists not simply of works of art but of an “attitude, the artistic spirit,” so does science consist not in the accumulation of knowledge but in the creation of fresh modes of perception. “The ability to perceive or think differently is more important than the knowledge gained,” Bohm explained.

Bohm rejected the claim of physicists such as Hawking and Weinberg that physics can achieve a final “theory of everything” that explains the world. Science is an infinite, “inexhaustible process,” he said. “The form of knowledge is to have at any moment something essential, and the appearance can be explained. But then when we look deeper at these essential things they turn out to have some feature of appearances. We’re not ever going to get a final essence which isn’t also the appearance of something.”

Bohm feared that belief in a final theory might become self-fulfilling. “If you have fish in a tank and you put a glass barrier in there, the fish keep away from it,” he noted. “And then if you take away the glass barrier they never cross the barrier and they think the whole world is that.” He chuckled drily. “So your thought that this is the end could be the barrier to looking further.” Trying to convince me that final knowledge is unattainable, Bohm offered the following argument:

“Anything known has to be determined by its limits. And that’s not just quantitative but qualitative. The theory is this and not that. Now it’s consistent to propose that there is the unlimited. You have to notice that if you say there is the unlimited, it cannot be different, because then the unlimited will limit the limited, by saying that the limited is not the unlimited, right? The unlimited must include the limited. We have to say, from the unlimited the limited arises, in a creative process. That’s consistent. Therefore we say that no matter how far we go there is the unlimited. It seems that no matter how far you go, somebody will come up with another point you have to answer. And I don’t see how you could ever settle that.”

To my relief, Bohm’s wife entered the room and asked if we wanted more tea. As she refilled my cup, I pointed out a book on Buddhism on a shelf and asked Bohm if he was interested in spirituality. He nodded. He had been a friend of Krishnamurti, one of the first modern Indian sages to try to show Westerners how to achieve the state of spiritual serenity and grace called enlightenment. Was Krishnamurti enlightened? “In some ways, yes,” Bohm replied. “His basic thing was to go into thought, to get to the end of it, completely, and thought would become a different kind of consciousness.”

Of course, one could never truly plumb one’s own mind, Bohm said. Any attempt to examine one’s own thought changes it, just as the measurement of an electron alters its course. We cannot achieve final self-knowledge, Bohm seemed to imply, any more than we can achieve a final theory of physics.

Was Krishnamurti a happy person? Bohm seemed puzzled by my question. “That’s hard to say,” he replied. “He was unhappy at times, but I think he was pretty happy overall. The thing is not about happiness, really.” Bohm frowned, as if realizing the import of what he had just said.

I said goodbye to Bohm and his wife and departed. Outside, a light rain was falling. I walked up the path to the street and glanced back at Bohm’s house, a modest whitewashed cottage on a street of modest whitewashed cottages. He died of a heart attack two months later.

In Wholeness and the Implicate Order Bohm insisted on the importance of “playfulness” in science, and in life, but Bohm, in his writings and in person, was anything but playful. For him, truth-seeking was not a game, it was a dreadful, impossible, necessary task. Bohm was desperate to know, to discover the secret of everything, but he knew it wasn’t attainable, not for any mortal being. No one gets out of the fish tank alive.

John Horgan directs the Center for Science Writings at the Stevens Institute of Technology. His books include “The End of Science,” “The End of War” and “Mind-Body Problems,” available for free at mindbodyproblems.com.

The views expressed are those of the author(s) and are not necessarily those of Scientific American.


This post originally appeared on Scientific American and was published July 23, 2018. This article is republished here with permission.


A Frozen Graveyard: The Sad Tales of Antarctica’s Deaths

Beneath layers of snow and ice on the world’s coldest continent, there may be hundreds of people buried forever. Martha Henriques investigates their stories.

BBC Future

  • Martha Henriques

Crevasses can be deadly; this vehicle in the 1950s had a lucky escape. Credit: Getty Images.

In the bleak, almost pristine land at the edge of the world, there are the frozen remains of human bodies – and each one tells a story of humanity’s relationship with this inhospitable continent.

Even with all our technology and knowledge of the dangers of Antarctica, it can remain deadly for anyone who goes there. Inland, temperatures can plummet to nearly -90C (-130F). In some places, winds can reach 200mph (322km/h). And the weather is not the only risk.

Many bodies of scientists and explorers who perished in this harsh place are beyond reach of retrieval. Some are discovered decades or more than a century later. But many that were lost will never be found, buried so deep in ice sheets or crevasses that they will never emerge – or they are headed out towards the sea within creeping glaciers and calving ice.

The stories behind these deaths range from unsolved mysteries to freak accidents. In the second of the series Frozen Continent, BBC Future explored what these events reveal about life on the planet’s most inhospitable landmass.

1800s: Mystery of the Chilean Bones

At Livingston Island, among the South Shetlands off the Antarctic Peninsula, a human skull and femur have been lying near the shore for 175 years. They are the oldest human remains ever found in Antarctica.

The bones were discovered on the beach in the 1980s. Chilean researchers found that they belonged to a woman who died when she was about 21 years old. She was an indigenous person from southern Chile, 1,000km (620 miles) away.

Analysis of the bones suggested that she died between 1819 and 1825. The earlier end of that range would put her among the very first people to have been in Antarctica.

A Russian orthodox church sits on a small rise above Chile’s research base. Credit: Yadvinder Malhi.

The question is, how did she get there? The traditional canoes of the indigenous Chileans couldn’t have supported her on such a long voyage through what can be incredibly rough seas.

“There’s no evidence for an independent Amerindian presence in the South Shetlands,” says Michael Pearson, an Antarctic heritage consultant and independent researcher. “It’s not a journey you’d make in a bark canoe.”

The original interpretation by the Chilean researchers was that she was an indigenous guide to the sealers travelling from the northern hemisphere to the Antarctic islands that had been newly discovered by William Smith in 1819. But it was virtually unheard of for women to take part in expeditions to the far south in those early days.

Sealers did have a close relationship with the indigenous people of southern Chile, says Melisa Salerno, an archaeologist of the Argentinean Scientific and Technical Research Council (Conicet). Sometimes they would exchange seal skins with each other. It’s not out of the question that they traded expertise and knowledge, too. But the two cultures’ interactions weren’t always friendly.

“Sometimes it was a violent situation,” says Salerno. “The sealers could just take a woman from one beach and later leave her far away on another.”

Any scientist or explorer visiting Antarctica knows that they could be at risk. Credit: Getty Images.

A lack of surviving logs and journals from the early ships sailing south to Antarctica makes it even more difficult to trace this woman’s history.

Her story is unique among the early human presence in Antarctica. A woman who, by all the usual accounts, shouldn’t have been there – but somehow she was. Her bones mark the start of human activity on Antarctica, and the unavoidable loss of life that comes with trying to occupy this inhospitable continent.

29 March 1912: Scott’s South Pole Expedition Crew

Robert Falcon Scott’s team of British explorers reached the South Pole on 17 January 1912, just three weeks after the Norwegian team led by Roald Amundsen had departed from the same spot.

The British group’s morale was crushed when they discovered that they had not arrived first. Soon after, things would get much worse.

Attaining the pole was a feat to test human endurance, and Scott had been under huge pressure. As well as dealing with the immediate challenges of the harsh climate and lack of natural resources like wood for building, he had a crew of more than 60 men to lead. More pressure came from the high hopes of his colleagues back home.

Robert Falcon Scott writing his journal. Credit: Herbert Ponting/Wikipedia.

“They mean to do or die – that is the spirit in which they are going to the Antarctic,” Leonard Darwin, a president of the Royal Geographical Society and son of Charles Darwin, said in a speech at the time.

“Captain Scott is going to prove once again that the manhood of the nation is not dead … the self-respect of the whole nation is certainly increased by such adventures as this,” he said.

Scott was not impervious to the expectations. “He was a very rounded, human character,” says Max Jones, a historian of heroism and polar exploration at the University of Manchester. “In his journals, you find he’s racked with doubts and anxieties about whether he’s up to the task and that makes him more appealing. He had failings and weaknesses too.”

Despite his worries and doubts, the mindset of “do or die” drove the team to take risks that might seem alien to us now.

On the team’s return from the pole, Edgar Evans died first, in February. Then Lawrence Oates. He had considered himself a burden, thinking the team could not return home with him holding them back. “I am just going outside and may be some time,” he said on 17 March.

Members of the ill-fated British expedition to the pole. Credit: Getty Images.

Perhaps he had not realised how close the rest of the group were to death. The bodies of Oates and Evans were never found, but Scott, Edward Wilson and Henry Bowers were discovered by a search party several months after their deaths. They had died on 29 March 1912, according to the date in Scott’s diary entry. The search party covered them with snow and left them where they lay.

“I do not think human beings ever came through such a month as we have come through,” Scott wrote in his diary’s final pages. The team knew they were within 18km (11 miles) of the last food depot, with the supplies that could have saved them. But they were confined to a tent for days, growing weaker, trapped by a fierce blizzard.

“They were prepared to risk their lives and they saw that as legitimate. You can view that as part of a mindset of imperial masculinity, tied up with enduring hardship and hostile environments,” says Jones. “I’m not saying that they had a death wish, but I think that they were willing to die.”

14 October 1965: Jeremy Bailey, David Wild and John Wilson

Four men were riding a Muskeg tractor and its sledges near the Heimefront Mountains, to the east of their base at Halley Research Station in East Antarctica, close to the Weddell Sea. The Muskeg was a heavy-duty vehicle designed to haul people and supplies over long distances on the ice. A team of dogs ran behind.

Three of the men were in the cab. The fourth, John Ross, sat behind on the sledge at the back, close to the huskies. Jeremy (Jerry) Bailey, a scientist measuring the depth of the ice beneath the tractor, was driving. He and David (Dai) Wild, a surveyor, and John Wilson, a doctor, were scanning the ice ahead. Snow obscured much of the small, flat windscreen. The group had been travelling all day, taking turns to warm up in the cab or sit out back on the sledge.

Ross was staring out at the vast ice, snow and Stella Group mountains. At about 8:30, the dogs alongside the sledge stopped running. The sledge had ground to a halt.

Ross, muffled with a balaclava and two anoraks, had heard nothing. He turned to see that the Muskeg was gone. Ahead, the first sledge was leaning down into the ice. Ross ran up to it to find it had wedged in the top of a large crevasse running directly across their course. The Muskeg itself had fallen about 30m (100ft) into the crevasse. Down below, its tracks were wedged vertically against one ice wall, and the cab had been flattened hard against the other.

Ross shouted down. There was no reply from the three men in the cab. After about 20 minutes of shouting, Ross heard a reply. The exchange, as he recorded it from memory soon after the event, was brief:

Ross: Dai?

Bailey: Dai’s dead. It’s me.

Ross: Is that John or Jerry?

Bailey: Jerry.

Ross: How is John?

Bailey: He’s a goner, mate.

Ross: What about yourself?

Bailey: I’m all smashed up.

Ross: Can you move about at all or tie a rope round yourself?

Bailey: I’m all smashed up.

Ross tried climbing down into the crevasse, but the descent was difficult; Bailey told him not to risk it, though Ross tried anyway. After several attempts, he heard a scream from the crevasse. After that, Bailey no longer responded to his calls.

Crevasses – deep clefts in the ice stretching down hundreds of feet – are serious threats while travelling across the Antarctic. On 14 October 1965, there had been strong winds kicking up drifts and spreading snow far over the landscape, according to reports on the accident held at the British Antarctic Survey archives. This concealed the top of the chasms, and crucially, the thin blue line in the ice ahead of each drop that would have warned the men to stop.

“You can imagine – there’s a bit of drift about, and there’s bits of ice on the windscreen, your fingers are bloody cold, and you think it’s about time to stop anyway,” says Rod Rhys Jones, one of the expedition party who had not gone on that trip with the Muskeg. He points to the crevassed area the Muskeg had been driving over, on a map of the continent spread over his coffee table, littered with books on the Antarctic.

Many bodies are never recovered; others are buried on the continent. Credit: Getty Images.

“You’re driving along over the ice and thumping and bumping and banging. You don’t see the little blue line.”

Jones questions whether the team had been given adequate training for the hazards of travel in Antarctica. They were young men, mostly fresh out of university, and many of them had little experience of harsh physical conditions. Much of their time preparing for life in Antarctica was spent learning to use the scientific equipment they would need, not learning how to avoid accidents on the ice.

Each accident in Antarctica has slowly led to changes in the way people travel and are trained. Reports filed after the incident recommended several ways to make travel through crevassed regions safer, from adapting the vehicles to new ways of hitching them together.

August 1982: Ambrose Morgan, Kevin Ockleton and John Coll

The three men set out over the ice for an expedition to a nearby island in the depths of the Antarctic winter.

The sea ice was firm, and they made it easily to Petermann Island. The southern aurora was visible in the sky, unusually bright and strong enough to wipe out communications. The team reached the island safely and camped out at a hut near the shore.

Soon after the men reached the shore, a large storm blew in that, by the next day, had entirely destroyed the sea ice. The group was stranded, but concern among the party was low: there was enough food in the hut to last three people more than a month.

Over the next few days, the sea ice failed to re-form as storms swept through and broke up the ice in the channel.

Death is never far away in Antarctica. Credit: Richard Fisher.

There were no books or papers in the hut, and contact with the outside world was limited to scheduled radio transmissions to the base. Soon, it had been two weeks. The transmissions were kept brief, as the batteries in their radios were getting weaker and weaker. The team grew restless. Gentoo and Adelie penguins surrounded the hut. They might have looked endearing, but their smell soon began to bother the men.

Things got worse. The team got diarrhoea, as it turned out some of the food in the hut was much older than they had thought. The stench of the penguins didn’t make them feel any better. They killed and ate a few to boost their supplies.

The men waited with increasing frustration, complaining of boredom on their radio transmissions to base. On Friday 13 August 1982, they were seen through a telescope, waving back to the main base. Radio batteries were running low. The sea ice had reformed again, providing a tantalising hope for escape.

Two days later, on Sunday 15 August, the group didn’t check in on the radio at the scheduled time. Then another large storm blew in.

The men at the base climbed up to a high point where they could see the island. All the sea ice was gone again, taken out by the storm.

“These guys had done something which we all did – go out on a little trip to the island,” says Pete Salino, who had been on the main base at the time. The three men were never seen again.

There were very strong currents around the island. Reliable, thick ice formed relatively rarely, Salino recalls. The way they tested whether the ice would hold them was primitive – they would whack it with a wooden stick tipped with metal to see if it would smash.

Even after an extensive search, the bodies were never found. Salino suspects the men went out onto the ice when it reformed and either got stuck or weren’t able to turn back when the storm blew in.

“It does sound mad now, sitting in a cosy room in Surrey,” Salino says. “When we used to go out, there was always a risk of falling through, but you’d always go prepared. We’d always have spare clothing in a sealed bag. We all accepted the risk and felt that it could have been any of us.”

Legacy of Death

For those who experience the loss of colleagues and friends in Antarctica, grieving can be uniquely difficult. When a friend disappears or a body cannot be recovered, the typical human rituals of death – a burial, a last goodbye – elude those left behind.

Clifford Shelley, a British geophysicist based at Argentine Islands off the Antarctic Peninsula in the late 1970s, lost friends who were climbing the nearby peak Mount Peary in 1976. It was thought that those men – Geoffrey Hargreaves, Michael Walker and Graham Whitfield – were trapped in an avalanche. Signs of their camp were found by an air search, but their bodies were never recovered.

The graves of past explorers. Credit: Getty Images.

“You just wait and wait, but there’s nothing. Then you just sort of lose hope,” Shelley says.

Even when the body is recovered, the demanding nature of life and work in Antarctica can make it a hard place to grieve. Ron Pinder, a radio operator in the South Orkneys in the late 1950s and early 1960s, still mourns a friend, Roger Filer, who slipped from a cliff on Signy Island while tagging birds in 1961. Filer’s body was found at the foot of a 20ft (6m) cliff below the nests where he was thought to have been working, and was buried on the island.

“It is 57 years ago now. It is in the distant past. But it affects me more now than it did then. Life was such that you had to get on with it,” Pinder says.

The same rings true for Shelley. “I don’t think we did really process it,” he says. “It remains at the back of your mind. But it’s certainly a mixed feeling, because Antarctica is superbly beautiful, both during the winter and the summer. It’s the best place to be and we were doing the things we wanted to do.”

The monument to those who lost their lives at the Scott Polar Research Institute. Credit: swancharlotte/Wikipedia/CC BY-SA 4.0.

These deaths have led to changes in how people work in Antarctica. As a result, the people there today can live more safely on this hazardous, isolated continent. Although terrible incidents still happen, much has been learned from earlier fatalities.

For the friends and families of the dead, there is an ongoing effort to make sure their lost loved ones are not forgotten. Outside the Scott Polar Research Institute in Cambridge, UK, two high curved oak pillars lean towards one another, gently touching at the top. It is half of a monument to the dead, erected by the British Antarctic Monument Trust, set up by Rod Rhys Jones and Brian Dorsett-Bailey, Jeremy’s brother, to recognise and honour those who died in Antarctica. The other half of the monument is a long sliver of metal leaning slightly towards the sea at Port Stanley in the Falkland Islands, where many of the researchers set off for the last leg of their journey to Antarctica.

Viewed from one end so they align, the oak pillars curve away from each other, leaving a long tapering empty space between them. The shape of that void is perfectly filled by the tall steel shard mounted on a plinth on the other side of the world. It is a physical symbol that spans the hemispheres, connecting home with the vast and wild continent that drew these scientists away for the last time.

More from BBC Future

Water, Water, Every Where — And Now Scientists Know Where It Came From September 3rd 2020

Nell Greenfieldboyce

Water on Earth is omnipresent and essential for life as we know it, and yet scientists remain a bit baffled about where all of this water came from: Was it present when the planet formed, or did the planet form dry and only later get its water from impacts with water-rich objects such as comets?

A new study in the journal Science suggests that the Earth likely got a lot of its precious water from the original materials that built the planet, instead of having water arrive later from afar.

The researchers who did this study went looking for signs of water in a rare kind of meteorite. Only about 2% of the meteorites found on Earth are so-called enstatite chondrite meteorites. Their chemical makeup suggests they’re close to the kind of primordial stuff that glommed together and produced our planet 4.5 billion years ago.

You wouldn’t necessarily know how special these meteorites are at first glance. “It’s a bit like a gray rock,” says Laurette Piani, a researcher in France at the Centre de Recherches Pétrographiques et Géochimiques.

What she wanted to know about these rocks is how much hydrogen was in there — because that’s what could produce water.

Compared with planets such as Jupiter and Saturn, the Earth formed close to the sun. Scientists have long thought that the temperatures must have been hot enough to prevent any water from being in the form of ice. That means there would be no ice to join with the swirling bits of rock and dust that were smashing into each other and slowly building up the young Earth.

If this is all true, our home planet must have been watered later on, perhaps when it got hit by icy comets or meteorites with water-rich minerals coming from farther out in the solar system.

Even though that’s been the prevailing view, some planetary scientists don’t buy it. After all, the story of Earth’s water would be a lot simpler and more straightforward if the water was just present to begin with.

So Piani and her colleagues recently took a close look at 13 of those unusual meteorites, which are also thought to have formed close in to the sun.

“Before the study, there were almost no measurement of the hydrogen or water in this meteorite,” Piani says. Those measurements that did exist were inconsistent, she says, and were done on meteorites that could have undergone changes after falling to the Earth’s surface.

“We do not want to have meteorites that were altered and modified by the Earth processes,” Piani explains, saying that they deliberately selected the most pristine meteorites possible.

The researchers then analyzed the meteorites’ chemical makeup to see how much hydrogen was in there. Since hydrogen can react with oxygen to produce water, knowing how much hydrogen is in the rocks indicates how much water this material could have contributed to a growing Earth.

What they found was much less hydrogen than in more ordinary meteorites.

Still, what was there would be enough to explain plenty of Earth’s water — at least several times the amount of water in the Earth’s present-day oceans. “It’s a very big quantity of water in the initial material,” Piani says. “And this was never really considered before.”
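
To make the conversion the researchers describe concrete, here is a rough back-of-the-envelope sketch of how a hydrogen fraction in Earth’s building material translates into water. The hydrogen mass fraction used below is purely illustrative — it is not the value measured in the study — while the Earth and ocean masses are standard textbook figures.

```python
# Rough sketch: converting a hydrogen fraction in Earth's raw material into water.
# The hydrogen fraction is an illustrative placeholder, NOT the study's measurement.

EARTH_MASS_KG = 5.97e24      # mass of the Earth
OCEAN_MASS_KG = 1.4e21       # approximate mass of Earth's present-day oceans
H_MASS_FRACTION = 1e-4       # assumed 0.01% hydrogen by mass (illustrative only)

# In H2O, 2 g of hydrogen binds 16 g of oxygen, so 1 kg of hydrogen
# corresponds to 9 kg of water (18/2).
WATER_PER_KG_H = 18.0 / 2.0

hydrogen_kg = EARTH_MASS_KG * H_MASS_FRACTION
water_kg = hydrogen_kg * WATER_PER_KG_H

print(f"Equivalent water: {water_kg:.1e} kg")
print(f"Roughly {water_kg / OCEAN_MASS_KG:.1f} present-day oceans")
```

With that placeholder fraction the sketch lands in the “several oceans” range quoted above; the point is only to show how a hydrogen measurement turns into a water budget.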

What’s more, the team also measured the deuterium-to-hydrogen ratio in the meteorites and found that it’s similar to what’s known to exist in the interior of the Earth — which also contains a lot of water. This is additional evidence that there’s a link between our planet’s water and the basic building materials that were present when it formed.

The findings pleased Anne Peslier, a planetary scientist at NASA’s Johnson Space Center in Houston, who wasn’t part of the research team but has a special interest in water.

“I was happy because it makes it nice and simple,” Peslier says. “We don’t have to invoke complicated models where we have to bring material, water-rich material from the outer part of the solar system.”

She says the delivery of so much water from way out there would have required something unusual to disturb the orbits of this water-rich material, such as Jupiter making an excursion into the inner solar system.

“So here, we just don’t need Jupiter. We don’t need to do anything weird. We just grab the material that was there where the Earth formed, and that’s where the water comes from,” Peslier says.

Even if a lot of the water was there at the start, however, she thinks some must have arrived later on. “I think it’s both,” she says.

Despite these convincing results, she says, there are still plenty of watery mysteries to plumb. For example, researchers are still trying to determine exactly how much water is locked deep inside the Earth, but it’s surely substantial — several oceans’ worth.

“There is more water down beneath our feet,” Peslier says, “than there is that you see at the surface.”

A 16-Million-Year-Old Tree Tells a Deep Story of the Passage of Time

The sequoia tree slab is an invitation to begin thinking about a vast timescale that includes everything from fossils of armored amoebas to the great Tyrannosaurus rex.

Smithsonian Magazine

  • Riley Black

How many answers are hidden inside the giants? Photo by Kelly Cheng Travel Photography / Getty Images.

Paleobotanist Scott Wing hopes that he’s wrong. Even though he carefully counted each ring in an immense, ancient slab of sequoia, the scientist notes that there’s always a little bit of uncertainty in the count. Wing came up with about 260, but, he says, it’s likely a young visitor may one day write him saying: “You’re off by three.” And that would be a good thing, Wing says, because it’d be another moment in our ongoing conversation about time.

The shining slab, preserved and polished, is the keystone to consideration of time and our place in it in the “Hall of Fossils—Deep Time” exhibition at the Smithsonian’s National Museum of Natural History. The fossil greets visitors at one of the show’s entrances, and, just like the physical tree, what the sequoia represents has layers.

Each yearly delineation on the sequoia’s surface is a small part of a far grander story that ties together all of life on Earth. Scientists know this as Deep Time. It’s not just on the scale of centuries, millennia, epochs, or periods, but the ongoing flow that goes back to the origins of our universe, the formation of the Earth, and the evolution of all life, up through this present moment. It’s the backdrop for everything we see around us today, and it can be understood through techniques as different as absolute dating of radioactive minerals and counting the rings of a prehistoric tree. Each part informs the whole.

In decades past, the Smithsonian’s fossil halls were known for the ancient celebrities they contained. There was the dinosaur hall, and the fossil mammal hall, surrounded by the remains of other extinct organisms. But now all of those lost species have been brought together into an integrated story of dynamic and dramatic change. The sequoia is an invitation to begin thinking about how we fit into the vast timescale that includes everything from fossils of armored amoebas called forams to the great Tyrannosaurus rex.

Exactly how the sequoia fossil came to be at the Smithsonian is not entirely clear. The piece was gifted to the museum long ago, “before my time,” Wing says. Still, enough of the tree’s backstory is known to identify it as a massive tree that grew in what’s now central Oregon about 16 million years ago. This tree was once a long-lived part of a true forest primeval.

There are fossils both far older and more recent in the recesses of the Deep Time displays. But what makes the sequoia a fitting introduction to the story that unfolds behind it, Wing says, is that the rings offer different ways to think about time. Given that the sequoia grew seasonally, each ring marks the passage of another year, and visitors can look at the approximately 260 delineations and think about what such a time span represents.

Wing says people can play the classic game of comparing the tree’s life to a human lifespan. If a long human life is about 80 years, then people can count 80, 160, and 240 years, meaning the sequoia grew and thrived over the course of approximately three human lifespans—but during a time when our own ancestors resembled gibbon-like apes. Time is not something that life simply passes through. In everything—from the rings of an ancient tree to the very bones in your body—time is part of life.

The record of that life—and even afterlife—lies between the lines. “You can really see that this tree was growing like crazy in its initial one hundred years or so,” Wing says, with the growth slowing as the tree became larger. And despite the slab’s ancient age, some of the original organic material is still locked inside.

“This tree was alive, photosynthesizing, pulling carbon dioxide out of the atmosphere, turning it into sugars and into lignin and cellulose to make cell walls,” Wing says. After the tree perished, water carrying silica and other minerals coated the log to preserve the wood and protect some of those organic components inside. “The carbon atoms that came out of the atmosphere 16 million years ago are locked in this chunk of glass.”

And so visitors are drawn even further back, not only through the life of the tree itself but through a time span so great that it’s difficult to comprehend. A little back-of-the-envelope math indicates that the tree represents about three human lifetimes, but that the time between when the sequoia was alive and the present could contain about 200,000 human lifetimes. The numbers grow so large that they begin to become abstract. The sequoia is a way to touch that history and start to feel the pull of all those ages past, and what they mean to us. “Time is so vast,” Wing says, “that this giant slab of a tree is just scratching the surface.”
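
Wing’s comparison is simple enough to check with a few lines of arithmetic; the figures below (an 80-year lifespan, roughly 260 rings, an age of 16 million years) are the ones given in the article.

```python
# Back-of-the-envelope check of the timescale comparisons in the article.
HUMAN_LIFESPAN_YEARS = 80
TREE_RINGS = 260                # approximate ring count of the sequoia slab
SLAB_AGE_YEARS = 16_000_000     # the tree grew about 16 million years ago

lifetimes_tree_lived = TREE_RINGS / HUMAN_LIFESPAN_YEARS
lifetimes_since_tree = SLAB_AGE_YEARS / HUMAN_LIFESPAN_YEARS

print(f"The tree lived about {lifetimes_tree_lived:.1f} human lifetimes")     # ~3.3
print(f"About {lifetimes_since_tree:,.0f} lifetimes separate it from today")  # ~200,000
```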

Riley Black is a freelance science writer specializing in evolution, paleontology and natural history who blogs regularly for Scientific American. Smithsonian Magazine

More from Smithsonian Magazine

This post originally appeared on Smithsonian Magazine and was published June 10, 2019. This article is republished here with permission.

“Holy Grail” Metallic Hydrogen Is Going to Change Everything

The substance has the potential to revolutionize everything from space travel to the energy grid. August 26th 2020

Inverse

  • Kastalia Medrano

Photo from Stocktrek Images / Getty Images.

Two Harvard scientists have succeeded in creating an entirely new substance long believed to be the “holy grail” of physics — metallic hydrogen, a material of unparalleled power that could one day propel humans into deep space. The research was published in January 2017 in the journal Science.

Scientists created the metallic hydrogen by pressurizing a hydrogen sample to more pounds per square inch than exists at the center of the Earth. This broke the solid molecular hydrogen down, allowing it to dissociate into atomic hydrogen.

The best rocket fuel we currently have is liquid hydrogen and liquid oxygen, burned for propellant. The efficacy of such substances is characterized by “specific impulse,” a measure of how much impulse a given amount of fuel can give a rocket to propel it forward.

“People at NASA or the Air Force have told me that if they could get an increase from 450 seconds [of specific impulse] to 500 seconds, that would have a huge impact on rocketry,” Isaac Silvera, the Thomas D. Cabot Professor of the Natural Sciences at Harvard University, told Inverse by phone. “If you can trigger metallic hydrogen to recover to the molecular phase, [the energy release] calculated for that is 1700 seconds.”
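
To get a feel for what that jump in specific impulse would mean, here is a small sketch using two standard relations — effective exhaust velocity v = Isp × g0 and the Tsiolkovsky rocket equation Δv = v × ln(m0/mf). The 10:1 mass ratio is an arbitrary illustration, not a figure from the article.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity implied by a specific impulse."""
    return isp_seconds * G0

def delta_v(isp_seconds: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: delta-v for a given initial/final mass ratio."""
    return exhaust_velocity(isp_seconds) * math.log(mass_ratio)

MASS_RATIO = 10.0  # illustrative ratio of fuelled mass to dry mass

for isp in (450, 500, 1700):
    print(f"Isp {isp:>4} s -> exhaust velocity {exhaust_velocity(isp) / 1000:.1f} km/s, "
          f"delta-v {delta_v(isp, MASS_RATIO) / 1000:.1f} km/s at a 10:1 mass ratio")
```

At the same mass ratio, going from 450 to 1,700 seconds nearly quadruples the achievable delta-v, which is why the figure matters so much for single-stage-to-orbit rockets and outer-planet missions.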

Metallic hydrogen could potentially enable rockets to get into orbit in a single stage, even allowing humans to explore the outer planets. Metallic hydrogen is predicted to be “metastable” — meaning that if you make it at very high pressure and then release the pressure, it stays in its new form. A diamond, for example, is a metastable form of graphite. If you take graphite, pressurize it, then heat it, it becomes a diamond; if you take the pressure off, it’s still a diamond. But if you heat it again, it will revert back to graphite.

Scientists first theorized atomic metallic hydrogen a century ago. Silvera, who created the substance along with post-doctoral fellow Ranga Dias, has been chasing it since 1982, when he was working as a professor of physics at the University of Amsterdam.

Metallic hydrogen has also been predicted to be a high- or possibly room-temperature superconductor. There are currently no known room-temperature superconductors, meaning the applications are immense — particularly for the electric grid, which suffers from energy lost through heat dissipation. It could also facilitate magnetic levitation for futuristic high-speed trains; substantially improve performance of electric cars; and revolutionize the way energy is produced and stored.

But that’s all still likely a couple of decades off. The next step in terms of practical application is to determine if metallic hydrogen is indeed metastable. Right now Silvera has only a very small quantity. If the substance does turn out to be metastable, it might be used to create a room-temperature crystal and — by spraying atomic hydrogen onto the surface — used like a seed to grow more, the way synthetic diamonds are made. Inverse

More from Inverse

This Is How Your Brain Becomes Addicted to Caffeine August 23rd 2020

Regular ingestion of the drug alters your brain’s chemical makeup, leading to fatigue, headaches and nausea if you try to quit.

Smithsonian Magazine

Within 24 hours of quitting the drug, your withdrawal symptoms begin. Initially, they’re subtle: The first thing you notice is that you feel mentally foggy, and lack alertness. Your muscles are fatigued, even when you haven’t done anything strenuous, and you suspect that you’re more irritable than usual.

Over time, an unmistakable throbbing headache sets in, making it difficult to concentrate on anything. Eventually, as your body protests having the drug taken away, you might even feel dull muscle pains, nausea and other flu-like symptoms.

This isn’t heroin, tobacco or even alcohol withdrawal. We’re talking about quitting caffeine, a substance consumed so widely (the FDA reports that more than 80 percent of American adults drink it daily) and in such mundane settings (say, at an office meeting or in your car) that we often forget it’s a drug—and by far the world’s most popular psychoactive one.

Like many drugs, caffeine is chemically addictive, a fact that scientists established back in 1994. In May 2013, with the publication of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), caffeine withdrawal was finally included as a mental disorder for the first time—even though the symptoms that earned it inclusion are ones regular coffee drinkers have long known well from the times they’ve gone off the drug for a day or more.

Why, exactly, is caffeine addictive? The reason stems from the way the drug affects the human brain, producing the alert feeling that caffeine drinkers crave.

Soon after you drink (or eat) something containing caffeine, it’s absorbed through the small intestine and dissolved into the bloodstream. Because the chemical is both water- and fat-soluble (meaning that it can dissolve in water-based solutions—think blood—as well as fat-based substances, such as our cell membranes), it’s able to penetrate the blood-brain barrier and enter the brain.

Structurally, caffeine closely resembles a molecule that’s naturally present in our brain, called adenosine (which is a byproduct of many cellular processes, including cellular respiration)—so much so, in fact, that caffeine can fit neatly into our brain cells’ receptors for adenosine, effectively blocking them off. Normally, the adenosine produced over time locks into these receptors and produces a feeling of tiredness.

Caffeine structurally resembles adenosine enough for it to fit into the brain’s adenosine receptors.

When caffeine molecules are blocking those receptors, they prevent this from occurring, thereby generating a sense of alertness and energy for a few hours.

Additionally, some of the brain’s own natural stimulants (such as dopamine) work more effectively when the adenosine receptors are blocked, and all the surplus adenosine floating around in the brain cues the adrenal glands to secrete adrenaline, another stimulant.

For this reason, caffeine isn’t technically a stimulant on its own, says Stephen R. Braun, the author of Buzzed: The Science and Lore of Caffeine and Alcohol, but a stimulant enabler: a substance that lets our natural stimulants run wild. Ingesting caffeine, he writes, is akin to “putting a block of wood under one of the brain’s primary brake pedals.” This block stays in place for anywhere from four to six hours, depending on the person’s age, size and other factors, until the caffeine is eventually metabolized by the body.

In people who take advantage of this process on a daily basis (i.e. coffee/tea, soda or energy drink addicts), the brain’s chemistry and physical characteristics actually change over time as a result. The most notable change is that brain cells grow more adenosine receptors, which is the brain’s attempt to maintain equilibrium in the face of a constant onslaught of caffeine, with its adenosine receptors so regularly plugged (studies indicate that the brain also responds by decreasing the number of receptors for norepinephrine, a stimulant). This explains why regular coffee drinkers build up a tolerance over time—because you have more adenosine receptors, it takes more caffeine to block a significant proportion of them and achieve the desired effect.

This also explains why suddenly giving up caffeine entirely can trigger a range of withdrawal effects. The underlying chemistry is complex and not fully understood, but the principle is that your brain is used to operating in one set of conditions (with an artificially-inflated number of adenosine receptors, and a decreased number of norepinephrine receptors) that depend upon regular ingestion of caffeine. Suddenly, without the drug, the altered brain chemistry causes all sorts of problems, including the dreaded caffeine withdrawal headache.

The good news is that, compared to many drug addictions, the effects are relatively short-term. To kick the thing, you only need to get through about 7-12 days of symptoms without drinking any caffeine. During that period, your brain will naturally decrease the number of adenosine receptors on each cell, responding to the sudden lack of caffeine ingestion. If you can make it that long without a cup of joe or a spot of tea, the levels of adenosine receptors in your brain reset to their baseline levels, and your addiction will be broken.

Joseph Stromberg, Smithsonian Magazine

Is Consciousness an Illusion?

Philosopher Daniel Dennett holds a distinctive and openly paradoxical position on the question of consciousness. August 20th 2020

The New York Review of Books

  • Thomas Nagel

Daniel Dennett at the Centro Cultural de la Ciencia, Buenos Aires, Argentina, June 2016. Photo by Soledad Aznarez / AP Images.

For fifty years the philosopher Daniel Dennett has been engaged in a grand project of disenchantment of the human world, using science to free us from what he deems illusions—illusions that are difficult to dislodge because they are so natural. In From Bacteria to Bach and Back, his eighteenth book (thirteenth as sole author), Dennett presents a valuable and typically lucid synthesis of his worldview. Though it is supported by reams of scientific data, he acknowledges that much of what he says is conjectural rather than proven, either empirically or philosophically.

Dennett is always good company. He has a gargantuan appetite for scientific knowledge, and is one of the best people I know at transmitting it and explaining its significance, clearly and without superficiality. He writes with wit and elegance; and in this book especially, though it is frankly partisan, he tries hard to grasp and defuse the sources of resistance to his point of view. He recognizes that some of what he asks us to believe is strongly counterintuitive. I shall explain eventually why I think the overall project cannot succeed, but first let me set out the argument, which contains much that is true and insightful.

The book has a historical structure, taking us from the prebiotic world to human minds and human civilization. It relies on different forms of evolution by natural selection, both biological and cultural, as its most important method of explanation. Dennett holds fast to the assumption that we are just physical objects and that any appearance to the contrary must be accounted for in a way that is consistent with this truth. Bach’s or Picasso’s creative genius, and our conscious experience of hearing Bach’s Fourth Brandenburg Concerto or seeing Picasso’s Girl Before a Mirror, all arose by a sequence of physical events beginning with the chemical composition of the earth’s surface before the appearance of unicellular organisms. Dennett identifies two unsolved problems along this path: the origin of life at its beginning and the origin of human culture much more recently. But that is no reason not to speculate.

The task Dennett sets himself is framed by a famous distinction drawn by the philosopher Wilfrid Sellars between the “manifest image” and the “scientific image”—two ways of seeing the world we live in. According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?).

This, according to Dennett, is the world as it is in itself, not just for us, and the task is to explain scientifically how the world of molecules has come to include creatures like us, complex physical objects to whom everything, including they themselves, appears so different.

He greatly extends Sellars’s point by observing that the concept of the manifest image can be generalized to apply not only to humans but to all other living beings, all the way down to bacteria. All organisms have biological sensors and physical reactions that allow them to detect and respond appropriately only to certain features of their environment—“affordances,” Dennett calls them—that are nourishing, noxious, safe, dangerous, sources of energy or reproductive possibility, potential predators or prey.

For each type of organism, whether plant or animal, these are the things that define their world, that are salient and important for them; they can ignore the rest. Whatever the underlying physiological mechanisms, the content of the manifest image reveals itself in what the organisms do and how they react to their environment; it need not imply that the organisms are consciously aware of their surroundings. But in its earliest forms, it is the first step on the route to awareness.


The lengthy process of evolution that generates these results is first biological and then, in our case, cultural, and only at the very end is it guided partly by intelligent design, made possible by the unique capacities of the human mind and human civilization. But as Dennett says, the biosphere is saturated with design from the beginning—everything from the genetic code embodied in DNA to the metabolism of unicellular organisms to the operation of the human visual system—design that is not the product of intention and that does not depend on understanding.

One of Dennett’s most important claims is that most of what we and our fellow organisms do to stay alive, cope with the world and one another, and reproduce is not understood by us or them. It is competence without comprehension. This is obviously true of organisms like bacteria and trees that have no comprehension at all, but it is equally true of creatures like us who comprehend a good deal. Most of what we do, and what our bodies do—digest a meal, move certain muscles to grasp a doorknob, or convert the impact of sound waves on our eardrums into meaningful sentences—is done for reasons that are not our reasons. Rather, they are what Dennett calls free-floating reasons, grounded in the pressures of natural selection that caused these behaviors and processes to become part of our repertoire. There are reasons why these patterns have emerged and survived, but we don’t know those reasons, and we don’t have to know them to display the competencies that allow us to function.

Nor do we have to understand the mechanisms that underlie those competencies. In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology.


Our user-illusions were not, like the little icons on the desktop screen, created by an intelligent interface designer. Nearly all of them—such as our images of people, their faces, voices, and actions, the perception of some things as delicious or comfortable and others as disgusting or dangerous—are the products of “bottom-up” design, understandable through the theory of evolution by natural selection, rather than “top-down” design by an intelligent being. Darwin, in what Dennett calls a “strange inversion of reasoning,” showed us how to resist the intuitive tendency always to explain competence and design by intelligence, and how to replace it with explanation by natural selection, a mindless process of accidental variation, replication, and differential survival.

As for the underlying mechanisms, we now have a general idea of how they might work because of another strange inversion of reasoning, due to Alan Turing, the creator of the computer, who saw how a mindless machine could do arithmetic perfectly without knowing what it was doing. This can be applied to all kinds of calculation and procedural control, in natural as well as in artificial systems, so that their competence does not depend on comprehension. Dennett’s claim is that when we put these two insights together, we see that

all the brilliance and comprehension in the world arises ultimately out of uncomprehending competences compounded over time into ever more competent—and hence comprehending—systems. This is indeed a strange inversion, overthrowing the pre-Darwinian mind-first vision of Creation with a mind-last vision of the eventual evolution of us, intelligent designers at long last.

And he adds:

Turing himself is one of the twigs on the Tree of Life, and his artifacts, concrete and abstract, are indirectly products of the blind Darwinian processes in the same way spider webs and beaver dams are….

An essential, culminating stage of this process is cultural evolution, much of which, Dennett believes, is as uncomprehending as biological evolution. He quotes Peter Godfrey-Smith’s definition, from which it is clear that the concept of evolution can apply more widely:

Evolution by natural selection is change in a population due to (i) variation in the characteristics of members of the population, (ii) which causes different rates of reproduction, and (iii) which is heritable.

In the biological case, variation is caused by mutations in DNA, and it is heritable through reproduction, sexual or otherwise. But the same pattern applies to variation in behavior that is not genetically caused, and that is heritable only in the sense that other members of the population can copy it, whether it be a game, a word, a superstition, or a mode of dress.
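
Godfrey-Smith’s three conditions are concrete enough to run as a toy simulation. The sketch below is purely illustrative — the variant names, copy rates, and mutation chance are invented for the example — but it shows how heritable variation plus slightly different rates of copying will, on their own, shift which version of a behaviour dominates a population.

```python
import random

random.seed(1)

# Toy illustration of Godfrey-Smith's three conditions:
# (i) variation, (ii) differential reproduction, (iii) heritability.
# Variants, copy rates and mutation chance are invented for the example.
COPY_RATES = {"tomayto": 1.00, "tomahto": 1.05}  # one variant is copied slightly more often
MUTATION = 0.01                                  # chance a copy comes out as the other variant

population = ["tomayto"] * 500 + ["tomahto"] * 500

for generation in range(60):
    next_gen = []
    for meme in population:
        rate = COPY_RATES[meme]
        # differential reproduction: expected number of copies depends on the variant
        copies = int(rate) + (1 if random.random() < rate % 1 else 0)
        for _ in range(copies):
            # heritability, with occasional imperfect copying (fresh variation)
            child = meme
            if random.random() < MUTATION:
                child = "tomahto" if meme == "tomayto" else "tomayto"
            next_gen.append(child)
    # keep the pool of "speakers" the same size each generation
    population = random.sample(next_gen, 1000) if len(next_gen) >= 1000 else next_gen

print({variant: population.count(variant) for variant in COPY_RATES})
```

After a few dozen generations the slightly better-copied variant crowds out the other, with no individual in the population needing to understand why — competence without comprehension, in miniature.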


This is the territory of what Richard Dawkins memorably christened “memes,” and Dennett shows that the concept is genuinely useful in describing the formation and evolution of culture. He defines “memes” thus:

They are a kind of way of behaving (roughly) that can be copied, transmitted, remembered, taught, shunned, denounced, brandished, ridiculed, parodied, censored, hallowed.

They include such things as the meme for wearing your baseball cap backward or for building an arch of a certain shape; but the best examples of memes are words. A word, like a virus, needs a host to reproduce, and it will survive only if it is eventually transmitted to other hosts, people who learn it by imitation:

Like a virus, it is designed (by evolution, mainly) to provoke and enhance its own replication, and every token it generates is one of its offspring. The set of tokens descended from an ancestor token form a type, which is thus like a species.

Alan Turing; drawing by David Levine.

The distinction between type and token comes from the philosophy of language: the word “tomato” is a type, of which any individual utterance or inscription or occurrence in thought is a token. The different tokens may be physically very different—you say “tomayto,” I say “tomahto”—but what unites them is the perceptual capacity of different speakers to recognize them all as instances of the type. That is why people speaking the same language with different accents, or typing with different fonts, can understand each other.

A child picks up its native language without any comprehension of how it works. Dennett believes, plausibly, that language must have originated in an equally unplanned way, perhaps initially by the spontaneous attachment of sounds to prelinguistic thoughts. (And not only sounds but gestures: as Dennett observes, we find it very difficult to talk without moving our hands, an indication that the earliest language may have been partly nonvocal.) Eventually such memes coalesced to form languages as we know them, intricate structures with vast expressive capacity, shared by substantial populations.

Language permits us to transcend space and time by communicating about what is not present, to accumulate shared bodies of knowledge, and with writing to store them outside of individual minds, resulting in the vast body of collective knowledge and practice dispersed among many minds that constitutes civilization. Language also enables us to turn our attention to our own thoughts and develop them deliberately in the kind of top-down creativity characteristic of science, art, technology, and institutional design.

But such top-down research and development is possible only on a deep foundation of competence whose development was largely bottom-up, the result of cultural evolution by natural selection. Without denigrating the contributions of individual genius, Dennett urges us not to forget its indispensable precondition, the arms race over millennia of competing memes—exemplified by the essentially unplanned evolution, survival, and extinction of languages.

Of course the biological evolution of the human brain made all of this possible, together with some coevolution of brain and culture over the past 50,000 years, but at this point we can only speculate about what happened. Dennett cites recent research in support of the view that brain architecture is the product of bottom-up competition and coalition-formation among neurons—partly in response to the invasion of memes. But whatever the details, if Dennett is right that we are physical objects, it follows that all the capacities for understanding, all the values, perceptions, and thoughts that present us with the manifest image and allow us to form the scientific image, have their real existence as systems of representation in the central nervous system.


This brings us to the question of consciousness, on which Dennett holds a distinctive and openly paradoxical position. Our manifest image of the world and ourselves includes as a prominent part not only the physical body and central nervous system but our own consciousness with its elaborate features—sensory, emotional, and cognitive—as well as the consciousness of other humans and many nonhuman species. In keeping with his general view of the manifest image, Dennett holds that consciousness is not part of reality in the way the brain is. Rather, it is a particularly salient and convincing user-illusion, an illusion that is indispensable in our dealings with one another and in monitoring and managing ourselves, but an illusion nonetheless.

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about. The way Dennett avoids this apparent contradiction takes us to the heart of his position, which is to deny the authority of the first-person perspective with regard to consciousness and the mind generally.

The view is so unnatural that it is hard to convey, but it has something in common with the behaviorism that was prevalent in psychology at the middle of the last century. Dennett believes that our conception of conscious creatures with subjective inner lives—which are not describable merely in physical terms—is a useful fiction that allows us to predict how those creatures will behave and to interact with them. He has coined the term “heterophenomenology” to describe the (strictly false) attribution each of us makes to others of an inner mental theater—full of sensory experiences of colors, shapes, tastes, sounds, images of furniture, landscapes, and so forth—that contains their representation of the world.

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them):

Curiously, then, our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all.

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery. In other words, when I look at the American flag, it may seem to me that there are red stripes in my subjective visual field, but that is an illusion: the only reality, of which this is “an interpreted, digested version,” is that a physical process I can’t describe is going on in my visual cortex.

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your own eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

If I understand him, this requires us to interpret ourselves behavioristically: when it seems to me that I have a subjective conscious experience, that experience is just a belief, manifested in what I am inclined to say. According to Dennett, the red stripes that appear in my visual field when I look at the flag are just the “intentional object” of such a belief, as Santa Claus is the intentional object of a child’s belief in Santa Claus. Neither of them is real. Recall that even trees and bacteria have a manifest image, which is to be understood through their outward behavior. The same, it turns out, is true of us: the manifest image is not an image after all.


There is no reason to go through such mental contortions in the name of science. The spectacular progress of the physical sciences since the seventeenth century was made possible by the exclusion of the mental from their purview. To say that there is more to reality than physics can account for is not a piece of mysticism: it is an acknowledgment that we are nowhere near a theory of everything, and that science will have to expand to accommodate facts of a kind fundamentally different from those that physics is designed to explain. It should not disturb us that this may have radical consequences, especially for Dennett’s favorite natural science, biology: the theory of evolution, which in its current form is a purely physical theory, may have to incorporate nonphysical factors to account for consciousness, if consciousness is not, as he thinks, an illusion. Materialism remains a widespread view, but science does not progress by tailoring the data to fit a prevailing theory.

There is much in the book that I haven’t discussed, about education, information theory, prebiotic chemistry, the analysis of meaning, the psychological role of probability, the classification of types of minds, and artificial intelligence. Dennett’s reflections on the history and prospects of artificial intelligence and how we should manage its development and our relation to it are informative and wise. He concludes:

The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence….

We should hope that new cognitive prostheses will continue to be designed to be parasitic, to be tools, not collaborators. Their only “innate” goal, set up by their creators, should be to respond, constructively and transparently, to the demands of the user.

About the true nature of the human mind, Dennett is on one side of an old argument that goes back to Descartes. He pays tribute to Descartes, citing the power of what he calls “Cartesian gravity,” the pull of the first-person point of view; and he calls the allegedly illusory realm of consciousness the “Cartesian Theater.” The argument will no doubt go on for a long time, and the only way to advance understanding is for the participants to develop and defend their rival conceptions as fully as possible—as Dennett has done. Even those who find the overall view unbelievable will find much to interest them in this book.

Thomas Nagel is University Professor Emeritus at NYU. He is the author of “The View From Nowhere,” “Mortal Questions,” and “Mind and Cosmos,” among other books.
The New York Review of Books

More from The New York Review of Books

This post originally appeared on The New York Review of Books

Wireless Charging Risk August 16th 2020

Wireless charging is increasingly common in modern smartphones, and there’s even speculation that Apple might ditch charging via a cable entirely in the near future. But the slight convenience of juicing up your phone by plopping it onto a pad rather than plugging it in comes with a surprisingly robust environmental cost. According to new calculations from OneZero and iFixit, wireless charging is drastically less efficient than charging with a cord, so much so that the widespread adoption of this technology could necessitate the construction of dozens of new power plants around the world. (Unless manufacturers find other ways to make up for the energy drain, of course.)

On paper, wireless charging sounds appealing. Just drop a phone down on a charger and it will start charging. There’s no wear and tear on charging ports, and chargers can even be built into furniture. Not all of the energy that comes out of a wall outlet, however, ends up in a phone’s battery. Some of it gets lost in the process as heat.

While this is true of all forms of charging to a certain extent, wireless chargers lose a lot of energy compared to cables. They get even less efficient when the coils in the phone aren’t aligned properly with the coils in the charging pad, a surprisingly common problem.

To get a sense of how much extra power is lost when using wireless charging versus wired charging in the real world, I tested a Pixel 4 using multiple wireless chargers, as well as the standard charging cable that comes with the phone. I used a high-precision power meter that sits between the charging block and the power outlet to measure power consumption.

In my tests, I found that wireless charging used, on average, around 47% more power than a cable.

Charging the phone from completely dead to 100% using a cable took an average of 14.26 watt-hours (Wh). Using a wireless charger took, on average, 21.01 Wh. That comes out to slightly more than 47% more energy for the convenience of not plugging in a cable. In other words, the phone had to work harder, generate more heat, and suck up more energy when wirelessly charging to fill the same size battery.
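
That 47% figure follows directly from the two averages; a quick sketch of the arithmetic, using only the numbers quoted above:

```python
# Percentage overhead of wireless over wired charging, from the averages above.
WIRED_WH = 14.26      # average cable charge, 0-100%
WIRELESS_WH = 21.01   # average wireless charge, 0-100%

extra = (WIRELESS_WH - WIRED_WH) / WIRED_WH
print(f"Wireless used {extra:.0%} more energy than the cable")  # ~47%
```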

How the phone was positioned on the charger significantly affected charging efficiency. The flat Yootech charger I tested was difficult to line up properly. Initially I intended to measure power consumption with the coils aligned as well as possible, then intentionally misalign them to detect the difference.

Instead, during one test, I noticed that the phone wasn’t charging. It looked like it was aligned properly, but as I fiddled with it, I found that the difference between positions that charged properly and those that didn’t charge at all could be measured in millimeters. Without a visual indicator, it would be impossible to tell. Without careful alignment, the phone could take far more energy to charge than necessary or, more annoyingly, not charge at all.

The first test with the Yootech pad — before I figured out how to align the coils properly — took a whopping 25.62 Wh to charge, or 80% more energy than an average cable charge. Hearing about the hypothetical inefficiencies online was one thing, but here I could see how I’d nearly doubled the amount of power it took to charge my phone by setting it down slightly wrong instead of just plugging in a cable.

Google’s official Pixel Stand fared better, likely due to its propped-up design. Since the base of the phone sits flat, the coils can only be misaligned from left to right — circular pads like the Yootech allow for misalignment in any direction. Again, the threshold was a few millimeters of difference at most, but the Pixel Stand continued charging while misaligned, albeit slower and using more power. In general, the propped-up design helped align the coils without much fiddling, but it still used an average of 19.8 Wh, or 39% more power, to charge the phone than cables.

On top of this, both wireless chargers independently consumed a small amount of power when no phone was charging at all — around 0.25 watts, which might not sound like much, but over 24 hours it would consume around six watt-hours. A household with multiple wireless chargers left plugged in — say, a charger by the bed, one in the living room, and another in the office — could waste the same amount of power in a day as it would take to fully charge a phone. By contrast, in my testing the normal cable charger did not draw any measurable amount of power.
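
The standby numbers work the same way. The sketch below uses the 0.25-watt idle draw measured above and the example of three chargers left plugged in around a house; the “full charge” baseline is the 14.26 Wh cable figure from earlier.

```python
# Idle drain of wireless chargers left plugged in with no phone on them.
IDLE_WATTS = 0.25         # measured idle draw per charger
HOURS_PER_DAY = 24
CHARGERS = 3              # e.g. bedroom, living room, office
FULL_CHARGE_WH = 14.26    # energy for one full cable charge, from above

idle_wh_per_charger = IDLE_WATTS * HOURS_PER_DAY      # ~6 Wh per day
household_idle_wh = idle_wh_per_charger * CHARGERS    # ~18 Wh per day

print(f"One idle charger wastes about {idle_wh_per_charger:.0f} Wh per day")
print(f"Three idle chargers waste about {household_idle_wh:.0f} Wh per day, "
      f"roughly {household_idle_wh / FULL_CHARGE_WH:.1f} full phone charges")
```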

While wireless charging might use relatively more power than a cable, it’s often written off as negligible. The extra power consumed by charging one phone with wireless charging versus a cable is the equivalent of leaving one extra LED light bulb on for a few hours. It might not even register on your power bill. At scale, however, it can turn into an environmental problem.

“I think in terms of power consumption, for me worrying about how much I’m paying for electricity, I don’t think it’s a factor,” Kyle Wiens, CEO of iFixit, told OneZero. “If all of a sudden, the 3 billion[-plus] smartphones that are in use, if all of them take 50% more power to charge, that adds up to a big amount. So it’s a society-wide issue, not a personal issue.”

To get a frame of reference for scale, iFixit helped me calculate the impact that the kind of excess power drain I experienced could have if every smartphone user on the planet switched to wireless charging — not a likely scenario any time soon, but neither was 3.5 billion people carrying around smartphones, say, 30 years ago.

“We worked out that at 100% efficiency from wall socket to battery, it would take about 73 coal power plants running for a day to charge the 3.5 billion smartphone batteries once fully,” iFixit technical writer Arthur Shi told OneZero. But if people place their phones wrong and reduce the efficiency of their charging, the number grows: “If the wireless charging efficiency was only 50%, you would need to double the [73] power plants in order to charge all the batteries.”

This is rough math, of course. Measuring power consumption by the number of power plants devices require is a bit like measuring how many vehicles it takes to transport a couple dozen people. It could take a dozen two-seat convertibles, or one bus. Shi’s math assumed relatively small coal power plants outputting 50 MW, as many power plants in the United States are, but those same needs could also be met by a couple very large power plants outputting more than 2,000 MW (of which the United States has only 29).
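
The bus-versus-convertibles point can be made with the article’s own figures: 73 plants of 50 MW add up to roughly the same capacity as a couple of the very large 2,000 MW plants.

```python
# Expressing the same demand with different plant sizes, using the figures above.
SMALL_PLANT_MW = 50       # the assumed small coal plant
SMALL_PLANT_COUNT = 73    # plants needed for one full charge of 3.5 billion phones
LARGE_PLANT_MW = 2000     # a very large power plant

total_mw = SMALL_PLANT_MW * SMALL_PLANT_COUNT   # 3,650 MW
large_plants = total_mw / LARGE_PLANT_MW        # ~1.8

print(f"{SMALL_PLANT_COUNT} small plants ~ {total_mw:,} MW ~ {large_plants:.1f} very large plants")
```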

However, the broader point remains the same: If everyone in the world switched to wireless charging, it would have a measurable impact on the global power grid. While tech companies like Apple and Google tout how environmentally friendly their phones are, power consumption often goes overlooked. “They want to cover the carbon impact of the product over their entire life cycle?” Wiens said. “The entire life cycle includes all the power that these things ever consumed plugged into the wall.”

There are some things that companies can do to balance out the excess power wireless chargers use. Manufacturers can design phones to disable wireless charging if their coils aren’t aligned — instead of allowing excessively inefficient charging for the sake of user experience — or design chargers to hold phones so they align properly. They can also continue to offer wired charging, which might mean Apple’s rumored future port-less phone would have to wait.

Finally, tech companies can work to offset their excesses in one area with savings in another. Wireless charging is only one small piece of the environmental picture, and environmental reports for major phones from Google and Apple only loosely point to energy efficiency and make no mention of the impact of using wireless chargers. There are many ways tech companies could be more energy-efficient to put less strain on our power grids. Until wireless charging itself gets a more thorough examination, though, the world would probably be better off if we all stuck to good old-fashioned plugs.

Update: A previous version of this article misstated two units of measurement in reference to the Pixel Stand charger. It consumes 0.25 watts when plugged in without a phone attached, which over 24 hours would consume around six watt-hours.

Bill Gates on Covid: Most US Tests Are ‘Completely Garbage’

The techie-turned-philanthropist on vaccines, Trump, and why social media is “a poisoned chalice.”

For 20 years, Bill Gates has been easing out of the roles that made him rich and famous—CEO, chief software architect, and chair of Microsoft—and devoting his brainpower and passion to the Bill and Melinda Gates Foundation, abandoning earnings calls and antitrust hearings for the metrics of disease eradication and carbon reduction. This year, after he left the Microsoft board, one might have thought he would relish escaping the kind of spotlight recently directed at the four big-tech CEOs called before Congress.

But as with many of us, 2020 had different plans for Gates. An early Cassandra who warned of our lack of preparedness for a global pandemic, he became one of the most credible figures as his foundation made huge investments in vaccines, treatments, and testing. He also became a target of the plague of misinformation afoot in the land, as logorrheic critics accused him of planning to inject microchips in vaccine recipients. (Fact check: false. In case you were wondering.)

My first interview with Gates was in 1983, and I’ve long lost count of how many times I’ve spoken to him since. He’s yelled at me (more in the earlier years) and made me laugh (more in the latter years). But I’ve never looked forward to speaking to him more than in our year of Covid. We connected on Wednesday, remotely of course. In discussing our country’s failed responses, his issues with his friend Mark Zuckerberg’s social networks, and the innovations that might help us out of this mess, Gates did not disappoint. The interview has been edited for length and clarity.

WIRED: You have been warning us about a global pandemic for years. Now that it has happened just as you predicted, are you disappointed with the performance of the United States?

Bill Gates: Yeah. There’s three time periods, all of which have disappointments. There is 2015 until this particular pandemic hit. If we had built up the diagnostic, therapeutic, and vaccine platforms, and if we’d done the simulations to understand what the key steps were, we’d be dramatically better off. Then there’s the time period of the first few months of the pandemic, when the US actually made it harder for the commercial testing companies to get their tests approved, the CDC had this very low volume test that didn’t work at first, and they weren’t letting people test. The travel ban came too late, and it was too narrow to do anything. Then, after the first few months, eventually we figured out about masks, and that leadership is important.

So you’re disappointed, but are you surprised?

I’m surprised at the US situation because the smartest people on epidemiology in the world, by a lot, are at the CDC. I would have expected them to do better. You would expect the CDC to be the most visible, not the White House or even Anthony Fauci. But they haven’t been the face of the epidemic. They are trained to communicate and not try to panic people but get people to take things seriously. They have basically been muzzled since the beginning. We called the CDC, but they told us we had to talk to the White House a bunch of times. Now they say, “Look, we’re doing a great job on testing, we don’t want to talk to you.” Even the simplest things, which would greatly improve this system, they feel would be admitting there is some imperfection and so they are not interested.

Do you think it’s the agencies that fell down or just the leadership at the top, the White House?

We can do the postmortem at some point. We still have a pandemic going on, and we should focus on that. The White House didn’t allow the CDC to do its job after March. There was a window where they were engaged, but then the White House didn’t let them do that. So the variance between the US and other countries isn’t that first period, it’s the subsequent period where the messages—the opening up, the leadership on masks, those things—are not the CDC’s fault. They said not to open back up; they said that leadership has to be a model of face mask usage. I think they have done a good job since April, but we haven’t had the benefit of it.

At this point, are you optimistic?

Yes. You have to admit there’s been trillions of dollars of economic damage done and a lot of debts, but the innovation pipeline on scaling up diagnostics, on new therapeutics, on vaccines is actually quite impressive. And that makes me feel like, for the rich world, we should largely be able to end this thing by the end of 2021, and for the world at large by the end of 2022. That is only because of the scale of the innovation that’s taking place. Now whenever we get this done, we will have lost many years in malaria and polio and HIV and the indebtedness of countries of all sizes and instability. It’ll take you years beyond that before you’d even get back to where you were at the start of 2020. It’s not World War I or World War II, but it is in that order of magnitude as a negative shock to the system.

In March it was unimaginable that you’d be giving us that timeline and saying it’s great.

Well it’s because of innovation that you don’t have to contemplate an even sadder statement, which is this thing will be raging for five years until natural immunity is our only hope.

Let’s talk vaccines, which your foundation is investing in. Is there anything that’s shaping up relatively quickly that could be safe and effective?

Before the epidemic came, we saw huge potential in the RNA vaccines—Moderna, Pfizer/BioNTech, and CureVac. Right now, because of the way you manufacture them, and the difficulty of scaling up, they are more likely—if they are helpful—to help in the rich countries. They won’t be the low-cost, scalable solution for the world at large. There you’d look more at AstraZeneca or Johnson & Johnson. This disease, from both the animal data and the phase 1 data, seems to be very vaccine preventable. There are questions still. It will take us awhile to figure out the duration [of protection], and the efficacy in elderly, although we think that’s going to be quite good. Are there any side effects, which you really have to get out in those large phase 3 groups and even after that through lots of monitoring to see if there are any autoimmune diseases or conditions that the vaccine could interact with in a deleterious fashion.

Are you concerned that in our rush to get a vaccine we are going to approve something that isn’t safe and effective?

Yeah. In China and Russia they are moving full speed ahead. I bet there’ll be some vaccines that will get out to lots of patients without the full regulatory review somewhere in the world. We probably need three or four months, no matter what, of phase 3 data, just to look for side effects. The FDA, to their credit, at least so far, is sticking to requiring proof of efficacy. So far they have behaved very professionally despite the political pressure. There may be pressure, but people are saying no, make sure that that’s not allowed. The irony is that this is a president who is a vaccine skeptic. Every meeting I have with him he is like, “Hey, I don’t know about vaccines, and you have to meet with this guy Robert Kennedy Jr. who hates vaccines and spreads crazy stuff about them.”

Wasn’t Kennedy Jr. talking about you using vaccines to implant chips into people?

Yeah, you’re right. He, Roger Stone, Laura Ingraham. They do it in this kind of way: “I’ve heard lots of people say X, Y, Z.” That’s kind of Trumpish plausible deniability. Anyway, there was a meeting where Francis Collins, Tony Fauci, and I had to [attend], and they had no data about anything. When we would say, “But wait a minute, that’s not real data,” they’d say, “Look, Trump told you you have to sit and listen, so just shut up and listen anyway.” So it’s a bit ironic that the president is now trying to have some benefit from a vaccine.

What goes through your head when you’re in a meeting hearing misinformation, and the President of the United States wants you to keep your mouth shut?

That was a bit strange. I haven’t met directly with the president since March of 2018. I made it clear I’m glad to talk to him about the epidemic anytime. And I have talked to Debbie Birx, I’ve talked to Pence, I’ve talked to Mnuchin, Pompeo, particularly on the issue of, Is the US showing up in terms of providing money to procure the vaccine for the developing countries? There have been lots of meetings, but we haven’t been able to get the US to show up. It’s very important to be able to tell the vaccine companies to build extra factories for the billions of doses, that there is procurement money to buy those for the marginal cost. So in this supplemental bill, I’m calling everyone I can to get 4 billion through GAVI for vaccines and 4 billion through a global fund for therapeutics. That’s less than 1 percent of the bill, but in terms of saving lives and getting us back to normal, that under 1 percent is by far the most important thing if we can get it in there.

Speaking of therapeutics, if you were in the hospital and you have the disease and you’re looking over the doctor’s shoulder, what treatment are you going to ask for?

Remdesivir. Sadly the trials in the US have been so chaotic that the actual proven effect is kind of small. Potentially the effect is much larger than that. It’s insane how confused the trials here in the US have been. The supply of that is going up in the US; it will be quite available for the next few months. Also dexamethasone—it’s actually a fairly cheap drug—that’s for late-stage disease.

I’m assuming you’re not going to have trouble paying for it, Bill, so you could ask for anything.

Well, I don’t want special treatment, so that’s a tricky thing. Other antivirals are two to three months away. Antibodies are two to three months away. We’ve had about a factor-of-two improvement in hospital outcomes already, and that’s with just remdesivir and dexamethasone. These other things will be additive to that.

You helped fund a Covid diagnostic testing program in Seattle that got quicker results, and it wasn’t so intrusive. The FDA put it on pause. What happened?

There’s this thing where the health worker jams the deep turbinate, in the back of your nose, which actually hurts and makes you sneeze on the health worker. We showed that the quality of the results can be equivalent if you just put a self-test in the tip of your nose with a cotton swab. The FDA made us jump through some hoops to prove that you didn’t need to refrigerate the result, that it could go back in a dry plastic bag, and so on. So the delay there was just normal double checking, maybe overly careful but not based on some political angle. Because of what we have done at FDA, you can buy these cheaper swabs that are available by the billions. So anybody who’s using the deep turbinate now is just out of date. It’s a mistake, because it slows things down.

But people aren’t getting their tests back quickly enough.

Well, that’s just stupidity. The majority of all US tests are completely garbage, wasted. If you don’t care how late the date is and you reimburse at the same level, of course they’re going to take every customer. Because they are making ridiculous money, and it’s mostly rich people that are getting access to that. You have to have the reimbursement system pay a little bit extra for 24 hours, pay the normal fee for 48 hours, and pay nothing [if it isn’t done by then]. And they will fix it overnight.

Why don’t we just do that?

Because the federal government sets that reimbursement system. When we tell them to change it they say, “As far as we can tell, we’re just doing a great job, it’s amazing!” Here we are, this is August. We are the only country in the world where we waste the most money on tests. Fix the reimbursement. Set up the CDC website. But I have been on that kick, and people are tired of listening to me.

As someone who has built your life on science and logic, I’m curious what you think when you see so many people signing onto this anti-science view of the world.

Well, strangely, I’m involved in almost everything that anti-science is fighting. I’m involved with climate change, GMOs, and vaccines. The irony is that it’s digital social media that allows this kind of titillating, oversimplistic explanation of, “OK, there’s just an evil person, and that explains all of this.” And when you have [posts] encrypted, there is no way to know what it is. I personally believe government should not allow those types of lies or fraud or child pornography [to be hidden with encryption like WhatsApp or Facebook Messenger].

Well, you’re friends with Mark Zuckerberg. Have you talked to him about this?

After I said this publicly, he sent me mail. I like Mark, I think he’s got very good values, but he and I do disagree on the trade-offs involved there. The lies are so titillating you have to be able to see them and at least slow them down. Like that video where, what do they call her, the sperm woman? That got over 10 million views! [Note: It was more than 20 million.] Well how good are these guys at blocking things, where once something got the 10 million views and everybody was talking about it, they didn’t delete the link or the searchability? So it was meaningless. They claim, “Oh, now we don’t have it.” What effect did that have? Anybody can go watch that thing! So I am a little bit at odds with the way that these conspiracy theories spread, many of which are anti-vaccine things. We give literally tens of billions for vaccines to save lives, then people turn around saying, “No, we’re trying to make money and we’re trying to end lives.” That’s kind of a wild inversion of what our values are and what our track record is.

As you are the technology adviser to Microsoft, I think you can look forward in a few months to fighting this battle yourself when the company owns TikTok.

Yeah, my critique of dance moves will be fantastically value-added for them.

TikTok is more than just dance moves. There’s political content.

I know, I’m kidding. You’re right. Who knows what’s going to happen with that deal. But yes, it’s a poison chalice. Being big in the social media business is no simple game, like the encryption issue.

So are you wary of Microsoft getting into that game?

I mean, this may sound self-serving, but I think that the game being more competitive is probably a good thing. But having Trump kill off the only competitor, it’s pretty bizarre.

Do you understand what rule or regulation the president is invoking to demand that TikTok sell to an American company and then take a cut of the sales price?

I agree that the principle this is proceeding on is singly strange. The cut thing, that’s doubly strange. Anyway, Microsoft will have to deal with all of that.

You have been very cautious in staying away from the political arena. But the issues you care most about—public health and climate change—have had huge setbacks because of who leads the country. Are you reconsidering spending on political change?

The foundation needs to be bipartisan. Whoever gets elected in the US, we are going to want to work with them. We do care a lot about competence, and hopefully voters will take into account how this administration has done at picking competent people and should that weigh into their vote. But there’s going to be plenty of money on both sides of this election, and I don’t like diverting money to political things. Even though the pandemic has made it pretty clear we should expect better, there’s other people who will put their time into the campaigning piece.

Did you have deja vu last week when those tech CEOs testified remotely before Congress?

Yeah. I had a whole committee attacking me, and they had four at a time. I mean, Jesus Christ, what’s the Congress coming to? If you want to give a guy a hard time, give him at least a whole day that he has to sit there on the hot seat by himself! And they didn’t even have to get on a plane!

Do you think the antitrust concerns are the same as when Microsoft was under the gun, or has the landscape changed?

Even without antitrust rules, tech does tend to be quite competitive. And even though in the short run you don’t think it’s going to dislodge people, there will be changes that will keep bringing prices down. But there are a lot of valid issues, and if you’re super-successful, the pleasure of going in front of the Congress comes with the territory.

How has your life changed living under the pandemic?

I used to travel a lot. If I wanted to see President Macron and say, “Hey, give money for the coronavirus vaccine,” to really show I’m serious I’d go there. Now, we had a GAVI replenishment summit where I just sat at home and got up a little early. I am able to get a lot done. My kids are home more than I thought they would be, which at least for me is a nice thing. I’m microwaving more food. I’m getting fairly good at it. The pandemic sadly is less painful for those who were better off before the pandemic.

Do you have a go-to mask you use?

No, I use a pretty ugly normal mask. I change it every day. Maybe I should get a designer mask or something creative, but I just use this surgical-looking mask.

Comment: Gates calls social media a poisoned chalice because it was intended to be a disinformation highway. Covid 19 is very useful to Gates’s class. Philanthropist he is not. His money-grabbing organisation has exploited Chinese slave labour for years. Cheap manufactured computers have been crucial to the development of social media, making Gates super rich. He speaks for very profound and wealthy vested interests. As for the masks, there is no evidence that they or lockdowns work.

The impact of Covid 19 has been on the old, the already sick and, most importantly, BAME people – remember the mantra ‘Black Lives Matter.’ All white men are equally privileged and have no right to an opinion unless they are part of the devious manipulative controlling elite. As for herd immunity or a vaccine, for that elite these dreams must be beyond the horizon. That is why they immediately rubbish the Russian vaccine. The elite have us right where they want us. Our fears and preoccupations must be BAME, domestic violence, sex crimes, feminist demands and fighting racists – our fears focused on Russia and China. That elite faked the figures for the first wave and are determined to find or fake evidence of a second one. Robert Cook

Forget Everything You Think You Know About Time

Is a linear representation of time accurate? This physicist says no.

Nautilus

  • Brian Gallagher

In April 2018, in the famous Faraday Theatre at the Royal Institution in London, Carlo Rovelli gave an hour-long lecture on the nature of time. A red thread spanned the stage, a metaphor for the Italian theoretical physicist’s subject. “Time is a long line,” he said. To the left lies the past—the dinosaurs, the big bang—and to the right, the future—the unknown. “We’re sort of here,” he said, hanging a carabiner on it, as a marker for the present.

Then he flipped the script. “I’m going to tell you that time is not like that,” he explained.

Rovelli went on to challenge our common-sense notion of time, starting with the idea that it ticks everywhere at a uniform rate. In fact, clocks tick slower when they are in a stronger gravitational field. When you move nearby clocks showing the same time into different fields—one in space, the other on Earth, say—and then bring them back together again, they will show different times. “It’s a fact,” Rovelli said, and it means “your head is older than your feet.” Also a non-starter is any shared sense of “now.” We don’t really share the present moment with anyone. “If I look at you, I see you now—well, but not really, because light takes time to come from you to me,” he said. “So I see you sort of a little bit in the past.” As a result, “now” means nothing beyond the temporal bubble “in which we can disregard the time it takes light to go back and forth.”
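
To put a number on “your head is older than your feet,” here is a back-of-envelope estimate using the standard weak-field time-dilation rate, roughly gh/c² per unit of elapsed time. The height difference and lifetime are illustrative assumptions, not figures from Rovelli’s lecture.

```python
# Rough estimate of how much more a person's head ages than their feet over a lifetime.
g = 9.81        # m/s^2, surface gravity
h = 1.7         # m, assumed head-to-feet height difference (illustrative)
c = 2.998e8     # m/s, speed of light
years = 80      # assumed lifetime (illustrative)

lifetime_s = years * 365.25 * 24 * 3600
extra_aging_s = (g * h / c**2) * lifetime_s   # weak-field approximation: delta_t ~ (g*h/c^2) * t

print(f"Over {years} years the head ages about {extra_aging_s * 1e9:.0f} nanoseconds more than the feet")
```

A few hundred nanoseconds is imperceptible to a person, but differences of this kind have been measured directly with atomic clocks raised by well under a metre.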

Rovelli turned next to the idea that time flows in only one direction, from past to future. Unlike general relativity, quantum mechanics, and particle physics, thermodynamics embeds a direction of time. Its second law states that the total entropy, or disorder, in an isolated system never decreases over time. Yet this doesn’t mean that our conventional notion of time is on any firmer grounding, Rovelli said. Entropy, or disorder, is subjective: “Order is in the eye of the person who looks.” In other words the distinction between past and future, the growth of entropy over time, depends on a macroscopic effect—“the way we have described the system, which in turn depends on how we interact with the system,” he said.

Getting to the last common notion of time, Rovelli became a little more cautious. His scientific argument that time is discrete—that it is not seamless, but has quanta—is less solid. “Why? Because I’m still doing it! It’s not yet in the textbook.” The equations for quantum gravity he’s written down suggest three things, he said, about what “clocks measure.” First, there’s a minimal amount of time—its units are not infinitely small. Second, since a clock, like every object, is quantum, it can be in a superposition of time readings. “You cannot say between this event and this event is a certain amount of time, because, as always in quantum mechanics, there could be a probability distribution of time passing.” Which means that, third, in quantum gravity, you can have “a local notion of a sequence of events, which is a minimal notion of time, and that’s the only thing that remains,” Rovelli said. Events aren’t ordered in a line “but are confused and connected” to each other without “a preferred time variable—anything can work as a variable.”

Even the notion that the present is fleeting doesn’t hold up to scrutiny. It is certainly true that the present is “horrendously short” in classical, Newtonian physics. “But that’s not the way the world is designed,” Rovelli explained. Light traces a cone, or consecutively larger circles, in four-dimensional spacetime like ripples on a pond that grow larger as they travel. No information can cross the bounds of the light cone because that would require information to travel faster than the speed of light.

“In spacetime, the past is whatever is inside our past light-cone,” Rovelli said, gesturing with his hands the shape of an upside down cone. “So it’s whatever can affect us. The future is this opposite thing,” he went on, now gesturing an upright cone. “So in between the past and the future, there isn’t just a single line—there’s a huge amount of time.” Rovelli asked an audience member to imagine that he lived in Andromeda, which is two and a half million light years away. “A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”
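
In special-relativistic terms, the “thick present” Rovelli describes is just the set of events at a distance d from us that lie outside both our past and our future light cones. Its duration follows from a one-line calculation, shown here as a worked version of the Andromeda example using the 2.5-million-light-year distance quoted above:

\[
\Delta t_{\text{present}}(d) = \frac{2d}{c}, \qquad
\Delta t_{\text{present}}(2.5\ \text{million light-years}) = \frac{2 \times 2.5\ \text{million light-years}}{c} = 5\ \text{million years}.
\]

So any event in Andromeda within a five-million-year window around our “now” is neither in our past nor in our future, which is the sense in which a million years of a life there can be “neither past nor future.”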

Listening to Rovelli’s description, I was reminded of a phrase from his book, The Order of Time: Studying time “is like holding a snowflake in your hands: gradually, as you study it, it melts between your fingers and vanishes.” 

Brian Gallagher is the editor of Facts So Romantic, the Nautilus blog. Follow him on Twitter @BSGallagher.

More from Nautilus

Big Bounce Simulations Challenge the Big Bang

Detailed computer simulations have found that a cosmic contraction can generate features of the universe that we observe today.

In a cyclic universe, periods of expansion alternate with periods of contraction. The universe has no beginning and no end.

Samuel Velasco/Quanta Magazine

Charlie Wood

Contributing Writer

August 4, 2020

Cyclic Universe

The standard story of the birth of the cosmos goes something like this: Nearly 14 billion years ago, a tremendous amount of energy materialized as if from nowhere.

In a brief moment of rapid expansion, that burst of energy inflated the cosmos like a balloon. The expansion straightened out any large-scale curvature, leading to a geometry that we now describe as flat. Matter also thoroughly mixed together, so that now the cosmos appears largely (though not perfectly) featureless. Here and there, clumps of particles have created galaxies and stars, but these are just minuscule specks on an otherwise unblemished cosmic canvas.

That theory, which textbooks call inflation, matches all observations to date and is preferred by most cosmologists. But it has conceptual implications that some find disturbing. In most regions of space-time, the rapid expansion would never stop. As a consequence, inflation can’t help but produce a multiverse — a technicolor existence with an infinite variety of pocket universes, one of which we call home. To critics, inflation predicts everything, which means it ultimately predicts nothing. “Inflation doesn’t work as it was intended to work,” said Paul Steinhardt, an architect of inflation who has become one of its most prominent critics.

In recent years, Steinhardt and others have been developing a different story of how our universe came to be. They have revived the idea of a cyclical universe: one that periodically grows and contracts. They hope to replicate the universe that we see — flat and smooth — without the baggage that comes with a bang.

In ‘A Brief History of Time’ Stephen Hawking suggests that all the matter in the universe originated from a pinhead-sized store of infinitely dense matter. That seemed unlikely to me. The idea of an ever-expanding universe is based on that concept. Robert Cook.

Abstractions navigates promising ideas in science and mathematics. Journey with us and join the conversation.

To that end, Steinhardt and his collaborators recently teamed up with researchers who specialize in computational models of gravity. They analyzed how a collapsing universe would change its own structure, and they ultimately discovered that contraction can beat inflation at its own game. No matter how bizarre and twisted the universe looked before it contracted, the collapse would efficiently erase a wide range of primordial wrinkles.

“It’s very important, what they claim they’ve done,” said Leonardo Senatore, a cosmologist at Stanford University who has analyzed inflation using a similar approach. There are aspects of the work he hasn’t yet had a chance to investigate, he said, but at first glance “it looks like they’ve done it.”

Squeezing the View

Over the last year and a half, a fresh view of the cyclic, or “ekpyrotic,” universe has emerged from a collaboration between Steinhardt, Anna Ijjas, a cosmologist at the Max Planck Institute for Gravitational Physics in Germany, and others — one that achieves renewal without collapse.

When it comes to visualizing expansion and contraction, people often focus on a balloonlike universe whose change in size is described by a “scale factor.” But a second measure — the Hubble radius, which is the greatest distance we can see — gets short shrift. The equations of general relativity let them evolve independently, and, crucially, you can flatten the universe by changing either.
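
Why either knob can do the flattening follows from a standard relation in Friedmann cosmology (not spelled out in the article, but underlying it): the deviation from flatness tracks the comoving Hubble radius,

\[
\left| \Omega(t) - 1 \right| = \frac{|k|\,c^{2}}{a^{2}H^{2}} = \frac{|k|\,c^{2}}{\dot{a}^{2}},
\]

where a is the scale factor, H is the Hubble parameter and k encodes spatial curvature. The universe is driven toward flatness whenever the comoving Hubble radius 1/(aH) shrinks. Inflation shrinks it by making a grow enormously; in slow contraction, a falls only modestly while H grows in magnitude far faster, so 1/(aH) shrinks all the same.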

Picture an ant on a balloon. Inflation is like blowing up the balloon. It puts the onus of smoothing and flattening primarily on the swelling cosmos. In the cyclic universe, however, the smoothing happens during a period of contraction. During this epoch, the balloon deflates modestly, but the real work is done by a drastically shrinking horizon. It’s as if the ant views everything through an increasingly powerful magnifying glass. The distance it can see shrinks, and thus its world grows more and more featureless.

Lucy Reading-Ikkanda/Quanta Magazine

Steinhardt and company imagine a universe that expands for perhaps a trillion years, driven by the energy of an omnipresent (and hypothetical) field, whose behavior we currently attribute to dark energy. When this energy field eventually grows sparse, the cosmos starts to gently deflate. Over billions of years a contracting scale factor brings everything a bit closer, but not all the way down to a point. The dramatic change comes from the Hubble radius, which rushes in and eventually becomes microscopic. The universe’s contraction recharges the energy field, which heats up the cosmos and vaporizes its atoms. A bounce ensues, and the cycle starts anew.

In the bounce model, the microscopic Hubble radius ensures smoothness and flatness. And whereas inflation blows up many initial imperfections into giant plots of multiverse real estate, slow contraction squeezes them essentially out of existence. We are left with a cosmos that has no beginning, no end, no singularity at the Big Bang, and no multiverse.

From Any Cosmos to Ours

One challenge for both inflation and bounce cosmologies is to show that their respective energy fields create the right universe no matter how they get started. “Our philosophy is that there should be no philosophy,” Ijjas said. “You know it works when you don’t have to ask under what condition it works.”

She and Steinhardt criticize inflation for doing its job only in special cases, such as when its energy field forms without notable features and with little motion. Theorists have explored these situations most thoroughly, in part because they are the only examples tractable with chalkboard mathematics. In recent computer simulations, which Ijjas and Steinhardt describe in a pair of preprints posted online in June, the team stress-tested their slow-contraction model with a range of baby universes too wild for pen-and-paper analysis.

Adapting code developed by Frans Pretorius, a theoretical physicist at Princeton University who specializes in computational models of general relativity, the collaboration explored twisted and lumpy fields, fields moving in the wrong direction, even fields born with halves racing in opposing directions. In nearly every case, contraction swiftly produced a universe as boring as ours.

“You let it go and — bam! In a few cosmic moments of slow contraction it looks as smooth as silk,” Steinhardt said.

Katy Clough, a cosmologist at the University of Oxford who also specializes in numerical solutions of general relativity, called the new simulations “very comprehensive.” But she also noted that computational advances have only recently made this kind of analysis possible, so the full range of conditions that inflation can handle remains uncharted.

“It’s been semi-covered, but it needs a lot more work,” she said.

While interest in Ijjas and Steinhardt’s model varies, most cosmologists agree that inflation remains the paradigm to beat. “[Slow contraction] is not an equal contender at this point,” said Gregory Gabadadze, a cosmologist at New York University.

The collaboration will next flesh out the bounce itself — a more complex stage that requires novel interactions to push everything apart again. Ijjas already has one bounce theory that upgrades general relativity with a new interaction between matter and space-time, and she suspects that other mechanisms exist too. She plans to put her model on the computer soon to understand its behavior in detail.


The group hopes that after gluing the contraction and expansion stages together, they’ll identify unique features of a bouncing universe that astronomers might spot.

The collaboration has not worked out every detail of a cyclic cosmos with no bang and no crunch, much less shown that we live in one. But Steinhardt now feels optimistic that the model will soon offer a viable alternative to the multiverse. “The roadblocks I was most worried about have been surpassed,” he said. “I’m not kept up at night anymore.”

Editor’s note: Some of this research was funded in part by the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions play no role in our coverage.

This Scientist Believes Ageing Is Optional August 10th 2020

In his book, “Lifespan,” celebrated scientist David Sinclair lays out exactly why we age—and why he thinks we don’t have to.

Outside

  • Graham Averill

If scientist David Sinclair is correct about aging, we might not have to age as quickly as we do. Photo by tomazl / iStock.

The oldest-known living person is Kane Tanaka, a Japanese woman who is a mind-boggling 116 years old. But if you ask David Sinclair, he’d argue that 116 is just middle age. At least, he thinks it should be. Sinclair is one of the leading scientists in the field of aging, and he believes that growing old isn’t a natural part of life—it’s a disease that needs a cure.

Sounds crazy, right? Sinclair, a Harvard professor who made Time’s list of the 100 most influential people in the world in 2014, will acquiesce that everyone has to die at some point, but he argues that we can double our life expectancy and live healthy, active lives right up until the end.

His 2019 book, Lifespan: Why We Age and Why We Don’t Have To ($28, Atria Books), out this fall, details the cutting-edge science that’s taking place in the field of longevity right now. The quick takeaway from this not-so-quick read: scientists are tossing out previous assumptions about aging, and they’ve discovered several tools that you can employ right now to slow down, and in some cases, reverse the clock.

In the nineties, as a postdoc in an MIT lab, Sinclair caused a stir in the field when he discovered the mechanism that leads to aging in yeast, which offered some insight into why humans age. Using his work with yeast as a launching point, Sinclair and his lab colleagues have focused on identifying the mechanism for aging in humans and published a study in 2013 asserting that the malfunction of a family of proteins called sirtuins is the single cause of aging. Sirtuins are responsible for repairing DNA damage and controlling overall cellular health by keeping cells on task. In other words, sirtuins tell kidney cells to act like kidney cells. If they get overwhelmed, cells start to misbehave, and we see the symptoms of aging, like organ failure or wrinkles. All of the genetic info in our cells is still there as we get older, but our body loses the ability to interpret it. This is because our body starts to run low on NAD, a molecule that activates the sirtuins: we have half as much NAD in our body when we’re 50 as we do at 20. Without it, the sirtuins can’t do their job, and the cells in our body forget what they’re supposed to be doing.

Sinclair splits his time between the U.S. and Australia, running labs at Harvard Medical School and at the University of New South Wales. All of his research seeks to prove that aging is a problem we can solve—and figure out how to stop. He argues that we can slow down the aging process, and in some cases even reverse it, by putting our body through “healthy stressors” that increase NAD levels and promote sirtuin activity. The role of sirtuins in aging is now fairly well accepted, but the idea that we can reactivate them (and how best to do so) is still being worked out.

Getting cold, working out hard, and going hungry every once in a while all engage what Sinclair calls our body’s survival circuit, wherein sirtuins tell cells to boost their defenses in order to keep the organism (you) alive. While Sinclair’s survival-circuit theory has yet to be proven in a trial setting, there’s plenty of research to suggest that exercise, cold exposure, and calorie reduction all help slow down the side effects of aging and stave off diseases associated with getting older. Fasting, in particular, has been well supported by other research: in various studies, both mice and yeast that were fed restricted diets live much longer than their well-fed cohorts. A two-year-long human experiment in the 1990s found that participants who had a restricted diet that left them hungry often had decreased blood pressure, blood-sugar levels, and cholesterol levels. Subsequent human studies found that decreasing calories by 12 percent slowed down biological aging based on changes in blood biomarkers.

Longevity science is a bit like the Wild West: the rules aren’t quite established. The research is exciting, but human clinical trials haven’t found anything definitive just yet. Throughout the field, there’s an uncomfortable relationship between privately owned companies, researchers, and even research institutes like Harvard: Sinclair points to a biomarker test by a company called InsideTracker as proof of his own reduced “biological age,” but he is also an investor in that company. He is also listed as an inventor on a patent for a NAD booster that’s on the market right now.

While the dust settles, the best advice for the curious to take from Lifespan is to experiment with habits that are easy, free, and harmless—like taking a brisk, cold walk and eating a lighter diet. With cold exposure, Sinclair explains, moderation is the key. He believes that you can reap benefits by simply taking a walk in the winter without a jacket. He doesn’t prescribe an exact fasting regimen that works best, but he doesn’t recommend anything extreme—simply missing a meal here and there, like skipping breakfast and having a late lunch.

How the Pandemic Defeated America

A virus has brought the world’s most powerful country to its knees.


Updated at 1:12 p.m. ET on August 4, 2020.

How did it come to this? A virus a thousand times smaller than a dust mote has humbled and humiliated the planet’s most powerful nation. America has failed to protect its people, leaving them with illness and financial ruin. It has lost its status as a global leader. It has careened between inaction and ineptitude. The breadth and magnitude of its errors are difficult, in the moment, to truly fathom.

In the first half of 2020, SARSCoV2—the new coronavirus behind the disease COVID19—infected 10 million people around the world and killed about half a million. But few countries have been as severely hit as the United States, which has just 4 percent of the world’s population but a quarter of its confirmed COVID19 cases and deaths. These numbers are estimates. The actual toll, though undoubtedly higher, is unknown, because the richest country in the world still lacks sufficient testing to accurately count its sick citizens.

Despite ample warning, the U.S. squandered every possible opportunity to control the coronavirus. And despite its considerable advantages—immense resources, biomedical might, scientific expertise—it floundered. While countries as different as South Korea, Thailand, Iceland, Slovakia, and Australia acted decisively to bend the curve of infections downward, the U.S. achieved merely a plateau in the spring, which changed to an appalling upward slope in the summer. “The U.S. fundamentally failed in ways that were worse than I ever could have imagined,” Julia Marcus, an infectious-disease epidemiologist at Harvard Medical School, told me.

Since the pandemic began, I have spoken with more than 100 experts in a variety of fields. I’ve learned that almost everything that went wrong with America’s response to the pandemic was predictable and preventable. A sluggish response by a government denuded of expertise allowed the coronavirus to gain a foothold. Chronic underfunding of public health neutered the nation’s ability to prevent the pathogen’s spread. A bloated, inefficient health-care system left hospitals ill-prepared for the ensuing wave of sickness. Racist policies that have endured since the days of colonization and slavery left Indigenous and Black Americans especially vulnerable to COVID19. The decades-long process of shredding the nation’s social safety net forced millions of essential workers in low-paying jobs to risk their life for their livelihood. The same social-media platforms that sowed partisanship and misinformation during the 2014 Ebola outbreak in Africa and the 2016 U.S. election became vectors for conspiracy theories during the 2020 pandemic.

The U.S. has little excuse for its inattention. In recent decades, epidemics of SARS, MERS, Ebola, H1N1 flu, Zika, and monkeypox showed the havoc that new and reemergent pathogens could wreak. Health experts, business leaders, and even middle schoolers ran simulated exercises to game out the spread of new diseases. In 2018, I wrote an article for The Atlantic arguing that the U.S. was not ready for a pandemic, and sounded warnings about the fragility of the nation’s health-care system and the slow process of creating a vaccine. But the COVID19 debacle has also touched—and implicated—nearly every other facet of American society: its shortsighted leadership, its disregard for expertise, its racial inequities, its social-media culture, and its fealty to a dangerous strain of individualism.

SARSCoV2 is something of an anti-Goldilocks virus: just bad enough in every way. Its symptoms can be severe enough to kill millions but are often mild enough to allow infections to move undetected through a population. It spreads quickly enough to overload hospitals, but slowly enough that statistics don’t spike until too late. These traits made the virus harder to control, but they also softened the pandemic’s punch. SARSCoV2 is neither as lethal as some other coronaviruses, such as SARS and MERS, nor as contagious as measles. Deadlier pathogens almost certainly exist. Wild animals harbor an estimated 40,000 unknown viruses, a quarter of which could potentially jump into humans. How will the U.S. fare when “we can’t even deal with a starter pandemic?,” Zeynep Tufekci, a sociologist at the University of North Carolina and an Atlantic contributing writer, asked me.

Despite its epochal effects, COVID19 is merely a harbinger of worse plagues to come. The U.S. cannot prepare for these inevitable crises if it returns to normal, as many of its people ache to do. Normal led to this. Normal was a world ever more prone to a pandemic but ever less ready for one. To avert another catastrophe, the U.S. needs to grapple with all the ways normal failed us. It needs a full accounting of every recent misstep and foundational sin, every unattended weakness and unheeded warning, every festering wound and reopened scar.

A pandemic can be prevented in two ways: Stop an infection from ever arising, or stop an infection from becoming thousands more. The first way is likely impossible. There are simply too many viruses and too many animals that harbor them. Bats alone could host thousands of unknown coronaviruses; in some Chinese caves, one out of every 20 bats is infected. Many people live near these caves, shelter in them, or collect guano from them for fertilizer. Thousands of bats also fly over these people’s villages and roost in their homes, creating opportunities for the bats’ viral stowaways to spill over into human hosts. Based on antibody testing in rural parts of China, Peter Daszak of EcoHealth Alliance, a nonprofit that studies emerging diseases, estimates that such viruses infect a substantial number of people every year. “Most infected people don’t know about it, and most of the viruses aren’t transmissible,” Daszak says. But it takes just one transmissible virus to start a pandemic.

Sometime in late 2019, the wrong virus left a bat and ended up, perhaps via an intermediate host, in a human—and another, and another. Eventually it found its way to the Huanan seafood market, and jumped into dozens of new hosts in an explosive super-spreading event. The COVID19 pandemic had begun.

“There is no way to get spillover of everything to zero,” Colin Carlson, an ecologist at Georgetown University, told me. Many conservationists jump on epidemics as opportunities to ban the wildlife trade or the eating of “bush meat,” an exoticized term for “game,” but few diseases have emerged through either route. Carlson said the biggest factors behind spillovers are land-use change and climate change, both of which are hard to control. Our species has relentlessly expanded into previously wild spaces. Through intensive agriculture, habitat destruction, and rising temperatures, we have uprooted the planet’s animals, forcing them into new and narrower ranges that are on our own doorsteps. Humanity has squeezed the world’s wildlife in a crushing grip—and viruses have come bursting out.

Curtailing those viruses after they spill over is more feasible, but requires knowledge, transparency, and decisiveness that were lacking in 2020. Much about coronaviruses is still unknown. There are no surveillance networks for detecting them as there are for influenza. There are no approved treatments or vaccines. Coronaviruses were formerly a niche family, of mainly veterinary importance. Four decades ago, just 60 or so scientists attended the first international meeting on coronaviruses. Their ranks swelled after SARS swept the world in 2003, but quickly dwindled as a spike in funding vanished. The same thing happened after MERS emerged in 2012. This year, the world’s coronavirus experts—and there still aren’t many—had to postpone their triennial conference in the Netherlands because SARSCoV2 made flying too risky.

In the age of cheap air travel, an outbreak that begins on one continent can easily reach the others. SARS already demonstrated that in 2003, and more than twice as many people now travel by plane every year. To avert a pandemic, affected nations must alert their neighbors quickly. In 2003, China covered up the early spread of SARS, allowing the new disease to gain a foothold, and in 2020, history repeated itself. The Chinese government downplayed the possibility that SARSCoV2 was spreading among humans, and only confirmed as much on January 20, after millions had traveled around the country for the lunar new year. Doctors who tried to raise the alarm were censured and threatened. One, Li Wenliang, later died of COVID19. The World Health Organization initially parroted China’s line and did not declare a public-health emergency of international concern until January 30. By then, an estimated 10,000 people in 20 countries had been infected, and the virus was spreading fast.

The United States has correctly castigated China for its duplicity and the WHO for its laxity—but the U.S. has also failed the international community. Under President Donald Trump, the U.S. has withdrawn from several international partnerships and antagonized its allies. It has a seat on the WHO’s executive board, but left that position empty for more than two years, only filling it this May, when the pandemic was in full swing. Since 2017, Trump has pulled more than 30 staffers out of the Centers for Disease Control and Prevention’s office in China, who could have warned about the spreading coronavirus. Last July, he defunded an American epidemiologist embedded within China’s CDC. America First was America oblivious.

Even after warnings reached the U.S., they fell on the wrong ears. Since before his election, Trump has cavalierly dismissed expertise and evidence. He filled his administration with inexperienced newcomers, while depicting career civil servants as part of a “deep state.” In 2018, he dismantled an office that had been assembled specifically to prepare for nascent pandemics. American intelligence agencies warned about the coronavirus threat in January, but Trump habitually disregards intelligence briefings. The secretary of health and human services, Alex Azar, offered similar counsel, and was twice ignored.

Being prepared means being ready to spring into action, “so that when something like this happens, you’re moving quickly,” Ronald Klain, who coordinated the U.S. response to the West African Ebola outbreak in 2014, told me. “By early February, we should have triggered a series of actions, precisely zero of which were taken.” Trump could have spent those crucial early weeks mass-producing tests to detect the virus, asking companies to manufacture protective equipment and ventilators, and otherwise steeling the nation for the worst. Instead, he focused on the border. On January 31, Trump announced that the U.S. would bar entry to foreigners who had recently been in China, and urged Americans to avoid going there.

Travel bans make intuitive sense, because travel obviously enables the spread of a virus. But in practice, travel bans are woefully inefficient at restricting either travel or viruses. They prompt people to seek indirect routes via third-party countries, or to deliberately hide their symptoms. They are often porous: Trump’s included numerous exceptions, and allowed tens of thousands of people to enter from China. Ironically, they create travel: When Trump later announced a ban on flights from continental Europe, a surge of travelers packed America’s airports in a rush to beat the incoming restrictions. Travel bans may sometimes work for remote island nations, but in general they can only delay the spread of an epidemic—not stop it. And they can create a harmful false confidence, so countries “rely on bans to the exclusion of the things they actually need to do—testing, tracing, building up the health system,” says Thomas Bollyky, a global-health expert at the Council on Foreign Relations. “That sounds an awful lot like what happened in the U.S.”

This was predictable. A president who is fixated on an ineffectual border wall, and has portrayed asylum seekers as vectors of disease, was always going to reach for travel bans as a first resort. And Americans who bought into his rhetoric of xenophobia and isolationism were going to be especially susceptible to thinking that simple entry controls were a panacea.

And so the U.S. wasted its best chance of restraining COVID19. Although the disease first arrived in the U.S. in mid-January, genetic evidence shows that the specific viruses that triggered the first big outbreaks, in Washington State, didn’t land until mid-February. The country could have used that time to prepare. Instead, Trump, who had spent his entire presidency learning that he could say whatever he wanted without consequence, assured Americans that “the coronavirus is very much under control,” and “like a miracle, it will disappear.” With impunity, Trump lied. With impunity, the virus spread.

On February 26, Trump asserted that cases were “going to be down to close to zero.” Over the next two months, at least 1 million Americans were infected.

As the coronavirus established itself in the U.S., it found a nation through which it could spread easily, without being detected. For years, Pardis Sabeti, a virologist at the Broad Institute of Harvard and MIT, has been trying to create a surveillance network that would allow hospitals in every major U.S. city to quickly track new viruses through genetic sequencing. Had that network existed, once Chinese scientists published SARSCoV2’s genome on January 11, every American hospital would have been able to develop its own diagnostic test in preparation for the virus’s arrival. “I spent a lot of time trying to convince many funders to fund it,” Sabeti told me. “I never got anywhere.”

The CDC developed and distributed its own diagnostic tests in late January. These proved useless because of a faulty chemical component. Tests were in such short supply, and the criteria for getting them were so laughably stringent, that by the end of February, tens of thousands of Americans had likely been infected but only hundreds had been tested. The official data were so clearly wrong that The Atlantic developed its own volunteer-led initiative—the COVID Tracking Project—to count cases.

Diagnostic tests are easy to make, so the U.S. failing to create one seemed inconceivable. Worse, it had no Plan B. Private labs were strangled by FDA bureaucracy. Meanwhile, Sabeti’s lab developed a diagnostic test in mid-January and sent it to colleagues in Nigeria, Sierra Leone, and Senegal. “We had working diagnostics in those countries well before we did in any U.S. states,” she told me.

It’s hard to overstate how thoroughly the testing debacle incapacitated the U.S. People with debilitating symptoms couldn’t find out what was wrong with them. Health officials couldn’t cut off chains of transmission by identifying people who were sick and asking them to isolate themselves.

Water running along a pavement will readily seep into every crack; so, too, did the unchecked coronavirus seep into every fault line in the modern world. Consider our buildings. In response to the global energy crisis of the 1970s, architects made structures more energy-efficient by sealing them off from outdoor air, reducing ventilation rates. Pollutants and pathogens built up indoors, “ushering in the era of ‘sick buildings,’ ” says Joseph Allen, who studies environmental health at Harvard’s T. H. Chan School of Public Health. Energy efficiency is a pillar of modern climate policy, but there are ways to achieve it without sacrificing well-being. “We lost our way over the years and stopped designing buildings for people,” Allen says.

The indoor spaces in which Americans spend 87 percent of their time became staging grounds for super-spreading events. One study showed that the odds of catching the virus from an infected person are roughly 19 times higher indoors than in open air. Shielded from the elements and among crowds clustered in prolonged proximity, the coronavirus ran rampant in the conference rooms of a Boston hotel, the cabins of the Diamond Princess cruise ship, and a church hall in Washington State where a choir practiced for just a few hours.

The hardest-hit buildings were those that had been jammed with people for decades: prisons. Between harsher punishments doled out in the War on Drugs and a tough-on-crime mindset that prizes retribution over rehabilitation, America’s incarcerated population has swelled sevenfold since the 1970s, to about 2.3 million. The U.S. imprisons five to 18 times more people per capita than other Western democracies. Many American prisons are packed beyond capacity, making social distancing impossible. Soap is often scarce. Inevitably, the coronavirus ran amok. By June, two American prisons each accounted for more cases than all of New Zealand. One, Marion Correctional Institution, in Ohio, had more than 2,000 cases among inmates despite having a capacity of 1,500. 


Other densely packed facilities were also besieged. America’s nursing homes and long-term-care facilities house less than 1 percent of its people, but as of mid-June, they accounted for 40 percent of its coronavirus deaths. More than 50,000 residents and staff have died. At least 250,000 more have been infected. These grim figures are a reflection not just of the greater harms that COVID19 inflicts upon elderly physiology, but also of the care the elderly receive. Before the pandemic, three in four nursing homes were understaffed, and four in five had recently been cited for failures in infection control. The Trump administration’s policies have exacerbated the problem by reducing the influx of immigrants, who make up a quarter of long-term caregivers.

Read: Another coronavirus nursing-home disaster is coming

Even though a Seattle nursing home was one of the first COVID19 hot spots in the U.S., similar facilities weren’t provided with tests and protective equipment. Rather than girding these facilities against the pandemic, the Department of Health and Human Services paused nursing-home inspections in March, passing the buck to the states. Some nursing homes avoided the virus because their owners immediately stopped visitations, or paid caregivers to live on-site. But in others, staff stopped working, scared about infecting their charges or becoming infected themselves. In some cases, residents had to be evacuated because no one showed up to care for them.

America’s neglect of nursing homes and prisons, its sick buildings, and its botched deployment of tests are all indicative of its problematic attitude toward health: “Get hospitals ready and wait for sick people to show,” as Sheila Davis, the CEO of the nonprofit Partners in Health, puts it. “Especially in the beginning, we catered our entire [COVID19] response to the 20 percent of people who required hospitalization, rather than preventing transmission in the community.” The latter is the job of the public-health system, which prevents sickness in populations instead of merely treating it in individuals. That system pairs uneasily with a national temperament that views health as a matter of personal responsibility rather than a collective good.

At the end of the 20th century, public-health improvements meant that Americans were living an average of 30 years longer than they were at the start of it. Maternal mortality had fallen by 99 percent; infant mortality by 90 percent. Fortified foods all but eliminated rickets and goiters. Vaccines eradicated smallpox and polio, and brought measles, diphtheria, and rubella to heel. These measures, coupled with antibiotics and better sanitation, curbed infectious diseases to such a degree that some scientists predicted they would soon pass into history. But instead, these achievements brought complacency. “As public health did its job, it became a target” of budget cuts, says Lori Freeman, the CEO of the National Association of County and City Health Officials.

Today, the U.S. spends just 2.5 percent of its gigantic health-care budget on public health. Underfunded health departments were already struggling to deal with opioid addiction, climbing obesity rates, contaminated water, and easily preventable diseases. Last year saw the most measles cases since 1992. In 2018, the U.S. had 115,000 cases of syphilis and 580,000 cases of gonorrhea—numbers not seen in almost three decades. It also had 1.7 million cases of chlamydia, the highest number ever recorded.

Since the last recession, in 2009, chronically strapped local health departments have lost 55,000 jobs—a quarter of their workforce. When COVID19 arrived, the economic downturn forced overstretched departments to furlough more employees. When states needed battalions of public-health workers to find infected people and trace their contacts, they had to hire and train people from scratch. In May, Maryland Governor Larry Hogan asserted that his state would soon have enough people to trace 10,000 contacts every day. Last year, as Ebola tore through the Democratic Republic of Congo—a country with a quarter of Maryland’s wealth and an active war—health workers were already tracing that many contacts every day.

Ripping unimpeded through American communities, the coronavirus created thousands of sickly hosts that it then rode into America’s hospitals. It should have found facilities armed with state-of-the-art medical technologies, detailed pandemic plans, and ample supplies of protective equipment and life-saving medicines. Instead, it found a brittle system in danger of collapse.

Compared with the average wealthy nation, America spends nearly twice as much of its national wealth on health care, about a quarter of which is wasted on inefficient care, unnecessary treatments, and administrative chicanery. The U.S. gets little bang for its exorbitant buck. It has the lowest life expectancy among comparable countries, the highest rates of chronic disease, and the fewest doctors per person. This profit-driven system has scant incentive to invest in spare beds, stockpiled supplies, peacetime drills, and layered contingency plans—the essence of pandemic preparedness. America’s hospitals have been pruned and stretched by market forces to run close to full capacity, with little ability to adapt in a crisis.

When hospitals do create pandemic plans, they tend to fight the last war. After 2014, several centers created specialized treatment units designed for Ebola—a highly lethal but not very contagious disease. These units were all but useless against a highly transmissible airborne virus like SARSCoV2. Nor were hospitals ready for an outbreak to drag on for months. Emergency plans assumed that staff could endure a few days of exhausting conditions, that supplies would hold, and that hard-hit centers could be supported by unaffected neighbors. “We’re designed for discrete disasters” like mass shootings, traffic pileups, and hurricanes, says Esther Choo, an emergency physician at Oregon Health and Science University. The COVID19 pandemic is not a discrete disaster. It is a 50-state catastrophe that will likely continue at least until a vaccine is ready.

Wherever the coronavirus arrived, hospitals reeled. Several states asked medical students to graduate early, reenlisted retired doctors, and deployed dermatologists to emergency departments. Doctors and nurses endured grueling shifts, their faces chapped and bloody when they finally doffed their protective equipment. Soon, that equipment—masks, respirators, gowns, gloves—started running out.

American hospitals operate on a just-in-time economy. They acquire the goods they need in the moment through labyrinthine supply chains that wrap around the world in tangled lines, from countries with cheap labor to richer nations like the U.S. The lines are invisible until they snap. About half of the world’s face masks, for example, are made in China, some of them in Hubei province. When that region became the pandemic epicenter, the mask supply shriveled just as global demand spiked. The Trump administration turned to a larder of medical supplies called the Strategic National Stockpile, only to find that the 100 million respirators and masks that had been dispersed during the 2009 flu pandemic were never replaced. Just 13 million respirators were left.

In April, four in five frontline nurses said they didn’t have enough protective equipment. Some solicited donations from the public, or navigated a morass of back-alley deals and internet scams. Others fashioned their own surgical masks from bandannas and gowns from garbage bags. The supply of nasopharyngeal swabs that are used in every diagnostic test also ran low, because one of the largest manufacturers is based in Lombardy, Italy—initially the COVID19 capital of Europe. About 40 percent of critical-care drugs, including antibiotics and painkillers, became scarce because they depend on manufacturing lines that begin in China and India. Once a vaccine is ready, there might not be enough vials to put it in, because of the long-running global shortage of medical-grade glass—literally, a bottle-neck bottleneck.

The federal government could have mitigated those problems by buying supplies at economies of scale and distributing them according to need. Instead, in March, Trump told America’s governors to “try getting it yourselves.” As usual, health care was a matter of capitalism and connections. In New York, rich hospitals bought their way out of their protective-equipment shortfall, while neighbors in poorer, more diverse parts of the city rationed their supplies.

While the president prevaricated, Americans acted. Businesses sent their employees home. People practiced social distancing, even before Trump finally declared a national emergency on March 13, and before governors and mayors subsequently issued formal stay-at-home orders, or closed schools, shops, and restaurants. A study showed that the U.S. could have averted 36,000 COVID19 deaths if leaders had enacted social-distancing measures just a week earlier. But better late than never: By collectively reducing the spread of the virus, America flattened the curve. Ventilators didn’t run out, as they had in parts of Italy. Hospitals had time to add extra beds.
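To get a feel for why a single week mattered so much, here is a toy back-of-the-envelope sketch in Python. It is only an illustration of early exponential growth, not the cited study's model, and the 3.5-day doubling time is an assumed round figure for March 2020.

# Toy illustration, not the cited study's model: during roughly exponential growth,
# acting `days_earlier` sooner shrinks the number of infections present when
# distancing begins by a factor of 2 ** (days_earlier / doubling_time_days).
def infections_ratio(days_earlier=7, doubling_time_days=3.5):
    return 2 ** (days_earlier / doubling_time_days)
print(f"acting 7 days earlier: roughly {infections_ratio():.0f}x fewer infections at lockdown")

With a 3.5-day doubling time that factor is about four; the study's estimate of 36,000 avoidable deaths comes from far more detailed epidemiological modeling.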

Social distancing worked. But the indiscriminate lockdown was necessary only because America’s leaders wasted months of prep time. Deploying this blunt policy instrument came at enormous cost. Unemployment rose to 14.7 percent, the highest level since record-keeping began, in 1948. More than 26 million people lost their jobs, a catastrophe in a country that—uniquely and absurdly—ties health care to employment. Some COVID19 survivors have been hit with seven-figure medical bills. In the middle of the greatest health and economic crises in generations, millions of Americans have found themselves disconnected from medical care and impoverished. They join the millions who have always lived that way.

The coronavirus found, exploited, and widened every inequity that the U.S. had to offer. Elderly people, already pushed to the fringes of society, were treated as acceptable losses. Women were more likely to lose jobs than men, and also shouldered extra burdens of child care and domestic work, while facing rising rates of domestic violence. In half of the states, people with dementia and intellectual disabilities faced policies that threatened to deny them access to lifesaving ventilators. Thousands of people endured months of COVID19 symptoms that resembled those of chronic postviral illnesses, only to be told that their devastating symptoms were in their head. Latinos were three times as likely to be infected as white people. Asian Americans faced racist abuse. Far from being a “great equalizer,” the pandemic fell unevenly upon the U.S., taking advantage of injustices that had been brewing throughout the nation’s history.

Read: COVID-19 can last for several months

Of the 3.1 million Americans who still cannot afford health insurance in states where Medicaid has not been expanded, more than half are people of color, and 30 percent are Black.* This is no accident. In the decades after the Civil War, the white leaders of former slave states deliberately withheld health care from Black Americans, apportioning medicine more according to the logic of Jim Crow than Hippocrates. They built hospitals away from Black communities, segregated Black patients into separate wings, and blocked Black students from medical school. In the 20th century, they helped construct America’s system of private, employer-based insurance, which has kept many Black people from receiving adequate medical treatment. They fought every attempt to improve Black people’s access to health care, from the creation of Medicare and Medicaid in the ’60s to the passage of the Affordable Care Act in 2010.

A number of former slave states also have among the lowest investments in public health, the lowest quality of medical care, the highest proportions of Black citizens, and the greatest racial divides in health outcomes. As the COVID19 pandemic wore on, they were among the quickest to lift social-distancing restrictions and reexpose their citizens to the coronavirus. The harms of these moves were unduly foisted upon the poor and the Black.

As of early July, one in every 1,450 Black Americans had died from COVID19—a rate more than twice that of white Americans. That figure is both tragic and wholly expected given the mountain of medical disadvantages that Black people face. Compared with white people, they die three years younger. Three times as many Black mothers die during pregnancy. Black people have higher rates of chronic illnesses that predispose them to fatal cases of COVID19. When they go to hospitals, they’re less likely to be treated. The care they do receive tends to be poorer. Aware of these biases, Black people are hesitant to seek aid for COVID19 symptoms and then show up at hospitals in sicker states. “One of my patients said, ‘I don’t want to go to the hospital, because they’re not going to treat me well,’ ” says Uché Blackstock, an emergency physician and the founder of Advancing Health Equity, a nonprofit that fights bias and racism in health care. “Another whispered to me, ‘I’m so relieved you’re Black. I just want to make sure I’m listened to.’ ”

Black people were both more worried about the pandemic and more likely to be infected by it. The dismantling of America’s social safety net left Black people with less income and higher unemployment. They make up a disproportionate share of the low-paid “essential workers” who were expected to staff grocery stores and warehouses, clean buildings, and deliver mail while the pandemic raged around them. Earning hourly wages without paid sick leave, they couldn’t afford to miss shifts even when symptomatic. They faced risky commutes on crowded public transportation while more privileged people teleworked from the safety of isolation. “There’s nothing about Blackness that makes you more prone to COVID,” says Nicolette Louissaint, the executive director of Healthcare Ready, a nonprofit that works to strengthen medical supply chains. Instead, existing inequities stack the odds in favor of the virus.

Native Americans were similarly vulnerable. A third of the people in the Navajo Nation can’t easily wash their hands, because they’ve been embroiled in long-running negotiations over the rights to the water on their own lands. Those with water must contend with runoff from uranium mines. Most live in cramped multigenerational homes, far from the few hospitals that service a 17-million-acre reservation. As of mid-May, the Navajo Nation had higher rates of COVID19 infections than any U.S. state.

Americans often misperceive historical inequities as personal failures. Stephen Huffman, a Republican state senator and doctor in Ohio, suggested that Black Americans might be more prone to COVID19 because they don’t wash their hands enough, a remark for which he later apologized. Republican Senator Bill Cassidy of Louisiana, also a physician, noted that Black people have higher rates of chronic disease, as if this were an answer in itself, and not a pattern that demanded further explanation.

Clear distribution of accurate information is among the most important defenses against an epidemic’s spread. And yet the largely unregulated, social-media-based communications infrastructure of the 21st century almost ensures that misinformation will proliferate fast. “In every outbreak throughout the existence of social media, from Zika to Ebola, conspiratorial communities immediately spread their content about how it’s all caused by some government or pharmaceutical company or Bill Gates,” says Renée DiResta of the Stanford Internet Observatory, who studies the flow of online information. When COVID19 arrived, “there was no doubt in my mind that it was coming.”

Read: The great 5G conspiracy

Sure enough, existing conspiracy theories—George Soros! 5G! Bioweapons!—were repurposed for the pandemic. An infodemic of falsehoods spread alongside the actual virus. Rumors coursed through online platforms that are designed to keep users engaged, even if that means feeding them content that is polarizing or untrue. In a national crisis, when people need to act in concert, this is calamitous. “The social internet as a system is broken,” DiResta told me, and its faults are readily abused.

Beginning on April 16, DiResta’s team noticed growing online chatter about Judy Mikovits, a discredited researcher turned anti-vaccination champion. Posts and videos cast Mikovits as a whistleblower who claimed that the new coronavirus was made in a lab and described Anthony Fauci of the White House’s coronavirus task force as her nemesis. Ironically, this conspiracy theory was nested inside a larger conspiracy—part of an orchestrated PR campaign by an anti-vaxxer and QAnon fan with the explicit goal to “take down Anthony Fauci.” It culminated in a slickly produced video called Plandemic, which was released on May 4. More than 8 million people watched it in a week.

Doctors and journalists tried to debunk Plandemic’s many misleading claims, but these efforts spread less successfully than the video itself. Like pandemics, infodemics quickly become uncontrollable unless caught early. But while health organizations recognize the need to surveil for emerging diseases, they are woefully unprepared to do the same for emerging conspiracies. In 2016, when DiResta spoke with a CDC team about the threat of misinformation, “their response was: ‘ That’s interesting, but that’s just stuff that happens on the internet.’ ”

From the June 2020 issue: Adrienne LaFrance on how QAnon is more important than you think

Rather than countering misinformation during the pandemic’s early stages, trusted sources often made things worse. Many health experts and government officials downplayed the threat of the virus in January and February, assuring the public that it posed a low risk to the U.S. and drawing comparisons to the ostensibly greater threat of the flu. The WHO, the CDC, and the U.S. surgeon general urged people not to wear masks, hoping to preserve the limited stocks for health-care workers. These messages were offered without nuance or acknowledgement of uncertainty, so when they were reversed—the virus is worse than the flu; wear masks—the changes seemed like befuddling flip-flops.

The media added to the confusion. Drawn to novelty, journalists gave oxygen to fringe anti-lockdown protests while most Americans quietly stayed home. They wrote up every incremental scientific claim, even those that hadn’t been verified or peer-reviewed.

There were many such claims to choose from. By tying career advancement to the publishing of papers, academia already creates incentives for scientists to do attention-grabbing but irreproducible work. The pandemic strengthened those incentives by prompting a rush of panicked research and promising ambitious scientists global attention.

In March, a small and severely flawed French study suggested that the antimalarial drug hydroxychloroquine could treat COVID19. Published in a minor journal, it likely would have been ignored a decade ago. But in 2020, it wended its way to Donald Trump via a chain of credulity that included Fox News, Elon Musk, and Dr. Oz. Trump spent months touting the drug as a miracle cure despite mounting evidence to the contrary, causing shortages for people who actually needed it to treat lupus and rheumatoid arthritis. The hydroxychloroquine story was muddied even further by a study published in a top medical journal, The Lancet, that claimed the drug was not effective and was potentially harmful. The paper relied on suspect data from a small analytics company called Surgisphere, and was retracted in June.**

Science famously self-corrects. But during the pandemic, the same urgent pace that has produced valuable knowledge at record speed has also sent sloppy claims around the world before anyone could even raise a skeptical eyebrow. The ensuing confusion, and the many genuine unknowns about the virus, have created a vortex of fear and uncertainty, which grifters have sought to exploit. Snake-oil merchants have peddled ineffectual silver bullets (including actual silver). Armchair experts with scant or absent qualifications have found regular slots on the nightly news. And at the center of that confusion is Donald Trump.

During a pandemic, leaders must rally the public, tell the truth, and speak clearly and consistently. Instead, Trump repeatedly contradicted public-health experts, his scientific advisers, and himself. He said that “nobody ever thought a thing like [the pandemic] could happen” and also that he “felt it was a pandemic long before it was called a pandemic.” Both statements cannot be true at the same time, and in fact neither is true.

A month before his inauguration, I wrote that “the question isn’t whether [Trump will] face a deadly outbreak during his presidency, but when.” Based on his actions as a media personality during the 2014 Ebola outbreak and as a candidate in the 2016 election, I suggested that he would fail at diplomacy, close borders, tweet rashly, spread conspiracy theories, ignore experts, and exhibit reckless self-confidence. And so he did.

No one should be shocked that a liar who has made almost 20,000 false or misleading claims during his presidency would lie about whether the U.S. had the pandemic under control; that a racist who gave birth to birtherism would do little to stop a virus that was disproportionately killing Black people; that a xenophobe who presided over the creation of new immigrant-detention centers would order meatpacking plants with a substantial immigrant workforce to remain open; that a cruel man devoid of empathy would fail to calm fearful citizens; that a narcissist who cannot stand to be upstaged would refuse to tap the deep well of experts at his disposal; that a scion of nepotism would hand control of a shadow coronavirus task force to his unqualified son-in-law; that an armchair polymath would claim to have a “natural ability” at medicine and display it by wondering out loud about the curative potential of injecting disinfectant; that an egotist incapable of admitting failure would try to distract from his greatest one by blaming China, defunding the WHO, and promoting miracle drugs; or that a president who has been shielded by his party from any shred of accountability would say, when asked about the lack of testing, “I don’t take any responsibility at all.”

Left: A woman hugs her grandmother through a plastic sheet in Wantagh, New York. Right: An elderly woman has her oxygen levels tested in Yonkers, New York. (Al Bello / Getty; Andrew Renneisen / The New York Times / Redux)

Trump is a comorbidity of the COVID19 pandemic. He isn’t solely responsible for America’s fiasco, but he is central to it. A pandemic demands the coordinated efforts of dozens of agencies. “In the best circumstances, it’s hard to make the bureaucracy move quickly,” Ron Klain said. “It moves if the president stands on a table and says, ‘Move quickly.’ But it really doesn’t move if he’s sitting at his desk saying it’s not a big deal.”

In the early days of Trump’s presidency, many believed that America’s institutions would check his excesses. They have, in part, but Trump has also corrupted them. The CDC is but his latest victim. On February 25, the agency’s respiratory-disease chief, Nancy Messonnier, shocked people by raising the possibility of school closures and saying that “disruption to everyday life might be severe.” Trump was reportedly enraged. In response, he seems to have benched the entire agency. The CDC led the way in every recent domestic disease outbreak and has been the inspiration and template for public-health agencies around the world. But during the three months when some 2 million Americans contracted COVID19 and the death toll topped 100,000, the agency didn’t hold a single press conference. Its detailed guidelines on reopening the country were shelved for a month while the White House released its own uselessly vague plan.

Again, everyday Americans did more than the White House. By voluntarily agreeing to months of social distancing, they bought the country time, at substantial cost to their financial and mental well-being. Their sacrifice came with an implicit social contract—that the government would use the valuable time to mobilize an extraordinary, energetic effort to suppress the virus, as did the likes of Germany and Singapore. But the government did not, to the bafflement of health experts. “There are instances in history where humanity has really moved mountains to defeat infectious diseases,” says Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security. “It’s appalling that we in the U.S. have not summoned that energy around COVID19.”

Instead, the U.S. sleepwalked into the worst possible scenario: People suffered all the debilitating effects of a lockdown with few of the benefits. Most states felt compelled to reopen without accruing enough tests or contact tracers. In April and May, the nation was stuck on a terrible plateau, averaging 20,000 to 30,000 new cases every day. In June, the plateau again became an upward slope, soaring to record-breaking heights.

Read: Ed Yong on living in a patchwork pandemic

Trump never rallied the country. Despite declaring himself a “wartime president,” he merely presided over a culture war, turning public health into yet another politicized cage match. Abetted by supporters in the conservative media, he framed measures that protect against the virus, from masks to social distancing, as liberal and anti-American. Armed anti-lockdown protesters demonstrated at government buildings while Trump egged them on, urging them to “LIBERATE” Minnesota, Michigan, and Virginia. Several public-health officials left their jobs over harassment and threats.

It is no coincidence that other powerful nations that elected populist leaders—Brazil, Russia, India, and the United Kingdom—also fumbled their response to COVID19. “When you have people elected based on undermining trust in the government, what happens when trust is what you need the most?” says Sarah Dalglish of the Johns Hopkins Bloomberg School of Public Health, who studies the political determinants of health.

“Trump is president,” she says. “How could it go well?”

The countries that fared better against COVID19 didn’t follow a universal playbook. Many used masks widely; New Zealand didn’t. Many tested extensively; Japan didn’t. Many had science-minded leaders who acted early; Hong Kong didn’t—instead, a grassroots movement compensated for a lax government. Many were small islands; large, continental Germany wasn’t. Each nation succeeded because it did enough things right.

Read: What really doomed America’s coronavirus response

Meanwhile, the United States underperformed across the board, and its errors compounded. The dearth of tests allowed unconfirmed cases to create still more cases, which flooded the hospitals, which ran out of masks, which are necessary to limit the virus’s spread. Twitter amplified Trump’s misleading messages, which raised fear and anxiety among people, which led them to spend more time scouring for information on Twitter. Even seasoned health experts underestimated these compounded risks. Yes, having Trump at the helm during a pandemic was worrying, but it was tempting to think that national wealth and technological superiority would save America. “We are a rich country, and we think we can stop any infectious disease because of that,” says Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota. “But dollar bills alone are no match against a virus.”

Public-health experts talk wearily about the panic-neglect cycle, in which outbreaks trigger waves of attention and funding that quickly dissipate once the diseases recede. This time around, the U.S. is already flirting with neglect, before the panic phase is over. The virus was never beaten in the spring, but many people, including Trump, pretended that it was. Every state reopened to varying degrees, and many subsequently saw record numbers of cases. After Arizona’s cases started climbing sharply at the end of May, Cara Christ, the director of the state’s health-services department, said, “We are not going to be able to stop the spread. And so we can’t stop living as well.” The virus may beg to differ.

At times, Americans have seemed to collectively surrender to COVID19. The White House’s coronavirus task force wound down. Trump resumed holding rallies, and called for less testing, so that official numbers would be rosier. The country behaved like a horror-movie character who believes the danger is over, even though the monster is still at large. The long wait for a vaccine will likely culminate in a predictable way: Many Americans will refuse to get it, and among those who want it, the most vulnerable will be last in line.

Still, there is some reason for hope. Many of the people I interviewed tentatively suggested that the upheaval wrought by COVID19 might be so large as to permanently change the nation’s disposition. Experience, after all, sharpens the mind. East Asian states that had lived through the SARS and MERS epidemics reacted quickly when threatened by SARSCoV2, spurred by a cultural memory of what a fast-moving coronavirus can do. But the U.S. had barely been touched by the major epidemics of past decades (with the exception of the H1N1 flu). In 2019, more Americans were concerned about terrorists and cyberattacks than about outbreaks of exotic diseases. Perhaps they will emerge from this pandemic with immunity both cellular and cultural.

There are also a few signs that Americans are learning important lessons. A June survey showed that 60 to 75 percent of Americans were still practicing social distancing. A partisan gap exists, but it has narrowed. “In public-opinion polling in the U.S., high-60s agreement on anything is an amazing accomplishment,” says Beth Redbird, a sociologist at Northwestern University, who led the survey. Polls in May also showed that most Democrats and Republicans supported mask wearing, and felt it should be mandatory in at least some indoor spaces. It is almost unheard-of for a public-health measure to go from zero to majority acceptance in less than half a year. But pandemics are rare situations when “people are desperate for guidelines and rules,” says Zoë McLaren, a health-policy professor at the University of Maryland at Baltimore County. The closest analogy is pregnancy, she says, which is “a time when women’s lives are changing, and they can absorb a ton of information. A pandemic is similar: People are actually paying attention, and learning.”

Redbird’s survey suggests that Americans indeed sought out new sources of information—and that consumers of news from conservative outlets, in particular, expanded their media diet. People of all political bents became more dissatisfied with the Trump administration. As the economy nose-dived, the health-care system ailed, and the government fumbled, belief in American exceptionalism declined. “Times of big social disruption call into question things we thought were normal and standard,” Redbird told me. “If our institutions fail us here, in what ways are they failing elsewhere?” And whom are they failing the most?

Americans were in the mood for systemic change. Then, on May 25, George Floyd, who had survived COVID19’s assault on his airway, asphyxiated under the crushing pressure of a police officer’s knee. The excruciating video of his killing circulated through communities that were still reeling from the deaths of Breonna Taylor and Ahmaud Arbery, and disproportionate casualties from COVID19. America’s simmering outrage came to a boil and spilled into its streets.

Defiant and largely cloaked in masks, protesters turned out in more than 2,000 cities and towns. Support for Black Lives Matter soared: For the first time since its founding in 2013, the movement had majority approval across racial groups. These protests were not about the pandemic, but individual protesters had been primed by months of shocking governmental missteps. Even people who might once have ignored evidence of police brutality recognized yet another broken institution. They could no longer look away.

It is hard to stare directly at the biggest problems of our age. Pandemics, climate change, the sixth extinction of wildlife, food and water shortages—their scope is planetary, and their stakes are overwhelming. We have no choice, though, but to grapple with them. It is now abundantly clear what happens when global disasters collide with historical negligence.

COVID19 is an assault on America’s body, and a referendum on the ideas that animate its culture. Recovery is possible, but it demands radical introspection. America would be wise to help reverse the ruination of the natural world, a process that continues to shunt animal diseases into human bodies. It should strive to prevent sickness instead of profiting from it. It should build a health-care system that prizes resilience over brittle efficiency, and an information system that favors light over heat. It should rebuild its international alliances, its social safety net, and its trust in empiricism. It should address the health inequities that flow from its history. Not least, it should elect leaders with sound judgment, high character, and respect for science, logic, and reason.

The pandemic has been both tragedy and teacher. Its very etymology offers a clue about what is at stake in the greatest challenges of the future, and what is needed to address them. Pandemic. Pan and demos. All people.

* This article has been updated to clarify why 3.1 million Americans still cannot afford health insurance.

** This article originally mischaracterized similarities between two studies that were retracted in June, one in The Lancet and one in the New England Journal of Medicine. It has been updated to reflect that the latter study was not specifically about hydroxychloroquine. It appears in the September 2020 print edition with the headline “Anatomy of an American Failure.”

Ed Yong is a staff writer at The Atlantic, where he covers science.


Why a Traffic Flow Suddenly Turns Into a Traffic Jam

Those aggravating slowdowns aren’t one driver’s fault. They’re everybody’s fault. August 3rd 2020

Nautilus

  • Benjamin Seibold

Photo by Raymond Depardon / Magnum Photos.

Few experiences on the road are more perplexing than phantom traffic jams. Most of us have experienced one: The vehicle ahead of you suddenly brakes, forcing you to brake, and making the driver behind you brake. But, soon afterward, you and the cars around you accelerate back to the original speed—and it becomes clear that there were no obstacles on the road, and apparently no cause for the slowdown.

Because traffic quickly resumes its original speed, phantom traffic jams usually don’t cause major delays. But neither are they just minor nuisances. They are hot spots for accidents because they force unexpected braking. And the unsteady driving they cause is not good for your car, causing wear and tear and poor gas mileage.

So what is going on, exactly? To answer this question, mathematicians, physicists, and traffic engineers have devised many types of traffic models. For instance, microscopic models resolve the paths of the individual vehicles, and are good at describing vehicle–vehicle interactions. In contrast, macroscopic models describe traffic as a fluid, in which cars are interpreted as fluid particles. They are effective at capturing large-scale phenomena that involve many vehicles. Finally, cellular models divide the road into segments and prescribe rules by which cars move from cell to cell, providing a framework for capturing the uncertainty that is inherent in real traffic.
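To make the cellular flavor of model concrete, here is a minimal example in Python. It implements the classic Nagel–Schreckenberg rules, a standard textbook cellular traffic model rather than any specific model from the author's group; the random-slowdown probability is what captures the "inherent uncertainty" mentioned above.

# Minimal Nagel-Schreckenberg cellular traffic model on a ring road.
import random
ROAD_CELLS, N_CARS, V_MAX, P_SLOW = 100, 30, 5, 0.3
# -1 marks an empty cell; any other value is the speed of the car occupying it.
cells = [-1] * ROAD_CELLS
for pos in random.sample(range(ROAD_CELLS), N_CARS):
    cells[pos] = 0
def step(cells):
    new = [-1] * len(cells)
    for i, v in enumerate(cells):
        if v < 0:
            continue
        # Distance to the next occupied cell ahead (the road wraps around).
        gap = next(d for d in range(1, len(cells) + 1)
                   if cells[(i + d) % len(cells)] >= 0) - 1
        v = min(v + 1, V_MAX, gap)              # accelerate, but never into the car ahead
        if v > 0 and random.random() < P_SLOW:  # random slowdown: the model's uncertainty
            v -= 1
        new[(i + v) % len(cells)] = v
    return new
for _ in range(200):
    cells = step(cells)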

In setting out to understand how a phantom traffic jam forms, we first have to be aware of the many effects present in real traffic that could conceivably contribute to a jam: different types of vehicles and drivers, unpredictable behavior, on- and off-ramps, and lane switching, to name just a few. We might expect that some combination of these effects is necessary to cause a phantom jam. One of the great advantages of studying mathematical models is that these various effects can be turned off in theoretical analysis or computer simulations. This creates a host of identical, predictable drivers on a single-lane highway without any ramps. In other words, your perfect commute home.

Surprisingly, when all these effects are turned off, phantom traffic jams still occur! This observation tells us that phantom jams are not the fault of individual drivers, but result instead from the collective behavior of all drivers on the road. It works like this. Envision a uniform traffic flow: All vehicles are evenly distributed along the highway, and all drive with the same velocity. Under perfect conditions, this ideal traffic flow could persist forever. However, in reality, the flow is constantly exposed to small perturbations: imperfections on the asphalt, tiny hiccups of the engines, half-seconds of driver inattention, and so on. To predict the evolution of this traffic flow, the big question is to decide whether these small perturbations decay, or are amplified.

If they decay, the traffic flow is stable and there are no jams. But if they are amplified, the uniform flow becomes unstable, with small perturbations growing into backwards-traveling waves called “jamitons.” These jamitons can be observed in reality, are visible in various types of models and computer simulations, and have also been reproduced in tightly controlled experiments.
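That amplification can be reproduced in a few lines of code. The sketch below uses the Bando "optimal velocity" car-following model on a ring road, a standard demonstration rather than the authors' own setup, and all parameter values are illustrative: starting from perfectly uniform flow, a half-metre nudge to a single car is enough, at this density, to grow into a backward-travelling stop-and-go wave.

# Optimal-velocity car-following model on a ring road (illustrative parameters).
import numpy as np
N = 50              # identical cars
L = 1250.0          # ring-road length in metres, so the mean headway is 25 m
a = 1.0             # sensitivity: how quickly drivers relax toward their target speed (1/s)
dt = 0.1            # time step in seconds
def optimal_velocity(headway):
    # Target speed as a function of the gap to the car ahead (Bando et al. 1995 form).
    return 16.8 * (np.tanh(0.086 * (headway - 25.0)) + 0.913)
x = np.linspace(0.0, L, N, endpoint=False)      # evenly spaced cars...
v = np.full(N, optimal_velocity(L / N))         # ...all moving at the uniform-flow speed
x[0] += 0.5                                     # a half-metre perturbation to one car
for _ in range(12000):                          # simulate twenty minutes
    headway = (np.roll(x, -1) - x) % L          # gap to the car ahead, around the ring
    v += a * (optimal_velocity(headway) - v) * dt
    v = np.maximum(v, 0.0)                      # cars do not reverse
    x = (x + v * dt) % L
# In stable uniform flow this spread stays near zero; here it grows large.
print(f"speed spread after twenty minutes: {v.max() - v.min():.1f} m/s")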

In macroscopic, or “fluid-dynamical,” models, each driver—interpreted as a traffic-fluid particle—observes the local density of traffic around her at any instant in time and accordingly decides on a target velocity: fast, when few cars are nearby, or slow, when the congestion level is high. Then she accelerates or decelerates towards this target velocity. In addition, she anticipates what the traffic will do next. This predictive driving effect is modeled by a “traffic pressure,” which acts in many ways like the pressure in a real fluid.

The mathematical analysis of traffic models reveals that these two are competing effects. The delay before drivers reach their target velocity causes the growth of perturbations, while traffic pressure makes perturbations decay. A uniform flow profile is stable if the anticipation effect dominates, which it does when traffic density is low. The delay effect dominates when traffic densities are high, causing instabilities and, ultimately, phantom jams.
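For the optimal-velocity sketch above, that competition can be written down exactly: uniform flow is linearly stable when the drivers' sensitivity a exceeds twice the slope of the optimal-velocity curve at the equilibrium headway, i.e. when a > 2V'(h*). That is the standard result for the Bando model used in the sketch, offered as an illustration rather than as the criterion of the author's own models. Scanning over headways then brackets the band of densities in which phantom jams can grow:

# Locate the unstable band of densities for the optimal-velocity model above.
import numpy as np
def optimal_velocity(h):
    return 16.8 * (np.tanh(0.086 * (h - 25.0)) + 0.913)
def v_prime(h, eps=1e-4):
    # Numerical slope of the optimal-velocity curve.
    return (optimal_velocity(h + eps) - optimal_velocity(h - eps)) / (2 * eps)
a = 1.0                                          # same driver sensitivity as before
headways = np.linspace(5.0, 60.0, 500)
unstable = headways[2 * v_prime(headways) > a]   # instability condition: 2 V'(h*) > a
print(f"uniform flow is unstable for mean headways of roughly "
      f"{unstable.min():.0f} to {unstable.max():.0f} m")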

The transition from uniform traffic flow to jamiton-dominated flow is similar to water turning from a liquid state into a gas state. In traffic, this phase transition occurs once traffic density reaches a particular, critical threshold at which the drivers’ anticipation exactly balances the delay effect in their velocity adjustment. The most fascinating aspect of this phase transition is that the character of the traffic changes dramatically while individual drivers do not change their driving behavior at all.

The occurrence of jamiton traffic waves, then, can be explained by phase transition behavior. To think about how to prevent phantom jams, though, we also need to understand the details of the structure of a fully established jamiton. In macroscopic traffic models, jamitons are the mathematical analog of detonation waves, which naturally occur in explosions. All jamitons have a localized region of high traffic density and low vehicle velocity. The transition from high to low speed is extremely abrupt—like a shock wave in a fluid. Vehicles that run into the shock front are forced to brake heavily. After the shock is a “reaction zone,” in which drivers attempt to accelerate back to their original velocity. Finally, at the end of the phantom jam, from the drivers’ perspective, is the “sonic point.”

The name “sonic point” comes from the analogy with detonation waves. In an explosion, it is at this point that the flow turns from supersonic to subsonic. This has crucial implications for the information flow within a detonation wave, as well as in a jamiton. The sonic point provides an information boundary, similar to the event horizon in a black hole: no information from further downstream can affect the jamiton through the sonic point. This makes dispersing jamitons rather difficult—a vehicle can’t affect the jamiton through its driving behavior after passing through.

Instead, the driving behavior of a vehicle must be affected before it runs into a jamiton. Wireless communication between vehicles provides one possibility to achieve this goal, and today’s mathematical models allow us to develop appropriate ways to use tomorrow’s technology. For example, once a vehicle detects a sudden braking event followed by an immediate acceleration, it can broadcast a “jamiton warning” to the vehicles following it within a mile distance. The drivers of those vehicles can then, at the least, prepare for unexpected braking; or, better still, increase their headway so that they can eventually contribute to the dissipation of the traffic wave.
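As a rough sketch of what such a scheme might look like in code, the snippet below detects the "hard braking followed by prompt re-acceleration" signature and builds a warning message for vehicles within about a mile behind. The thresholds, message format, and function names are illustrative assumptions, not part of the article or of any real vehicle-to-vehicle standard.

# Illustrative jamiton detection and warning message (assumed thresholds and format).
from dataclasses import dataclass
from typing import List, Optional
@dataclass
class Sample:
    t: float       # time in seconds
    accel: float   # longitudinal acceleration in m/s^2
def detect_jamiton(history: List[Sample],
                   brake_thresh: float = -3.0,   # assumed "hard braking" threshold
                   accel_thresh: float = 1.0,    # assumed "prompt re-acceleration" threshold
                   window_s: float = 5.0) -> bool:
    """True if hard braking is followed by re-acceleration within window_s seconds."""
    brake_time: Optional[float] = None
    for s in history:
        if s.accel <= brake_thresh:
            brake_time = s.t
        elif brake_time is not None and s.accel >= accel_thresh and s.t - brake_time <= window_s:
            return True
    return False
def jamiton_warning(position_m: float) -> dict:
    # Message for following vehicles within roughly a mile (~1,600 m), so they can
    # increase their headway and help dissipate the wave before reaching it.
    return {"type": "jamiton_warning", "position_m": position_m, "radius_m": 1600}
# Example: a braking spike at t = 1 s followed by acceleration at t = 3 s triggers a warning.
trace = [Sample(0.0, 0.2), Sample(1.0, -4.5), Sample(2.0, -0.5), Sample(3.0, 1.4)]
if detect_jamiton(trace):
    message = jamiton_warning(position_m=12_345.0)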

The insights we glean from fluid-dynamical traffic models can help with many other real-world problems. For example, supply chains exhibit a queuing behavior reminiscent of traffic jams. Jamming, queuing, and wave phenomena can also be observed in gas pipelines, information webs, and flows in biological networks—all of which can be understood as fluid-like flows.

Besides being an important mathematical case study, the phantom traffic jam is, perhaps, also an interesting and instructive social system. Whenever jamitons arise, they are caused by the collective behavior of all drivers—not a few bad apples on the road. Those who drive preventively can dissipate jamitons, and benefit all of the drivers behind them. It is a classic example of the effectiveness of the Golden Rule.

So the next time you are caught in an unwarranted, pointless, and spontaneous traffic jam, remember just how much more it is than it seems.

Benjamin Seibold is an Assistant Professor of Mathematics at Temple University.

Could Air-Conditioning Fix Climate Change?

Researchers proposed a carbon-neutral “synthetic oil well” on every rooftop. August 2nd 2020

Scientific American

  • Richard Conniff

Photo from 4FR / Getty Images.

It is one of the great dilemmas of climate change: We take such comfort from air conditioning that worldwide energy consumption for that purpose has already tripled since 1990. It is on track to grow even faster through mid-century—and assuming fossil-fuel–fired power plants provide the electricity, that could cause enough carbon dioxide emissions to warm the planet by another deadly half-degree Celsius.

A paper published in Nature Communications proposes a partial remedy: Heating, ventilation and air conditioning (or HVAC) systems move a lot of air. They can replace the entire air volume in an office building five or 10 times an hour. Machines that capture carbon dioxide from the atmosphere—a developing fix for climate change—also depend on moving large volumes of air. So why not save energy by tacking the carbon capture machine onto the air conditioner?
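A back-of-envelope calculation shows why that airflow is attractive for capture. Every number below is an illustrative assumption (a mid-size office building, seven air changes per hour, ambient CO2 near 410 ppm) rather than a figure from the paper; the point is simply that tonnes of carbon dioxide already pass through a single building's HVAC system each year.

# Back-of-envelope CO2 throughput of one building's HVAC (all values assumed).
BUILDING_VOLUME_M3 = 50_000      # assumed interior volume of a mid-size office building
AIR_CHANGES_PER_HOUR = 7         # within the 5-10 range quoted above
CO2_PPM = 410                    # approximate ambient concentration, by volume
CO2_DENSITY_KG_PER_M3 = 1.8      # pure CO2 near room temperature
airflow_m3_per_h = BUILDING_VOLUME_M3 * AIR_CHANGES_PER_HOUR
co2_kg_per_h = airflow_m3_per_h * CO2_PPM * 1e-6 * CO2_DENSITY_KG_PER_M3
co2_tonnes_per_year = co2_kg_per_h * 24 * 365 / 1000
print(f"~{co2_kg_per_h:.0f} kg of CO2 per hour, ~{co2_tonnes_per_year:.0f} tonnes per year")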

This futuristic proposal, from a team led by chemical engineer Roland Dittmeyer at Germany’s Karlsruhe Institute of Technology, goes even further. The researchers imagine a system of modular components, powered by renewable energy, that would not just extract carbon dioxide and water from the air. It would also convert them into hydrogen, and then use a multistep chemical process to transform that hydrogen into liquid hydrocarbon fuels. The result: “Personalized, localized and distributed, synthetic oil wells” in buildings or neighborhoods, the authors write. “The envisioned model of ‘crowd oil’ from solar refineries, akin to ‘crowd electricity’ from solar panels,” would enable people “to take control and collectively manage global warming and climate change, rather than depending on the fossil power industrial behemoths.”

The research group has already developed an experimental model that can complete several key steps of the process, Dittmeyer says, adding, “The plan in two or three years is to have the first experimental showcase where I can show you a bottle of hydrocarbon fuel from carbon dioxide captured in an air-conditioning unit.”

Neither Dittmeyer nor co-author Geoffrey Ozin, a chemical engineer at the University of Toronto, would predict how long it might take before building owners could purchase and install such units. But Ozin claims much of the necessary technology is already commercially available. He says the carbon capture equipment could come from a Swiss “direct air capture” company called Climeworks, and the electrolyzers to convert carbon dioxide and water into hydrogen are available from Siemens, Hydrogenics or other companies. “And you use Roland’s amazing microstructure catalytic reactors, which convert the hydrogen and carbon dioxide into a synthetic fuel,” he adds. Those reactors are being brought to market by the German company Ineratec, a spinoff from Dittmeyer’s research. Because the system would rely on advanced forms of solar energy, Ozin thinks of the result as “photosynthetic buildings.”

The authors calculate that applying this system to the HVAC in one of Europe’s tallest skyscrapers, the MesseTurm, or Trade Fair Tower, in Frankfurt, would extract and convert enough carbon dioxide to yield at least 2,000 metric tons (660,000 U.S. gallons) of fuel a year. The office space in the entire city of Frankfurt could yield more than 370,000 tons (122 million gallons) annually, they say.
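Those mass-to-volume figures check out under a typical density for liquid hydrocarbon fuels; the 0.8 kg per litre used below is an assumption, not a number taken from the paper.

# Sanity check of the tons-to-gallons conversions quoted above (assumed fuel density).
LITERS_PER_US_GALLON = 3.785
FUEL_DENSITY_KG_PER_L = 0.8      # assumed density, typical of liquid hydrocarbon fuels
def metric_tons_to_us_gallons(tons: float) -> float:
    liters = tons * 1000.0 / FUEL_DENSITY_KG_PER_L
    return liters / LITERS_PER_US_GALLON
print(f"{metric_tons_to_us_gallons(2_000):,.0f} gallons")     # ~660,000 (MesseTurm)
print(f"{metric_tons_to_us_gallons(370_000):,.0f} gallons")   # ~122 million (all Frankfurt offices)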

“This is a wonderful concept—it made my day,” says David Keith, a Harvard professor of applied physics and public policy, who was not involved in the new paper. He suggests that the best use for the resulting fuels would be to “help solve two of our biggest energy challenges”: providing a carbon-neutral fuel to fill the gaps left by intermittent renewables such as wind and solar power, and providing fuel for “the hard-to-electrify parts of transportation and industry,” such as airplanes, large trucks and steel- or cement-making. Keith is already targeting some of these markets through Carbon Engineering, a company he founded focused on direct air capture of carbon dioxide for large-scale liquid fuel production. But he says he is “deeply skeptical” about doing it on a distributed building or neighborhood basis. “Economies of scale can’t be wished away. There’s a reason we have huge wind turbines,” he says—and a reason we do not have backyard all-in-one pulp-and-paper mills for disposing of our yard wastes. He believes it is simply “faster and cheaper” to take carbon dioxide from the air and turn it into fuel “by doing it at an appropriate scale.”

Other scientists who were not involved in the new paper note two other potential problems. “The idea that Roland has presented is an interesting one,” says Jennifer Wilcox, a chemical engineer at Worcester Polytechnic Institute, “but more vetting needs to be done in order to determine the true potential of the approach.” While it seems to make sense to take advantage of the air movement already being generated by HVAC systems, Wilcox says, building and operating the necessary fans is not what makes direct air capture systems so expensive. “The dominant capital cost,” she says, “is the solid adsorbent materials”—that is, substances to which the carbon dioxide adheres—and the main energy cost is the heat needed to recover the carbon dioxide from these materials afterward. Moreover, she contends that any available solar or other carbon-free power source would be put to better use in replacing fossil-fuel-fired power plants, to reduce the amount of carbon dioxide getting into the air in the first place.

“The idea of converting captured carbon into liquid fuel is persuasive,” says Matthew J. Realff, a chemical engineer at Georgia Institute of Technology. “We have an enormous investment in our liquid fuel infrastructure, and using that has tremendous value. You wouldn’t have to build a whole new infrastructure. But this concept of doing it at the household level is a little bit fantastical”—partly because the gases involved (carbon monoxide and hydrogen) are toxic and explosive. The process to convert them to a liquid fuel is well understood, Realff says, but it produces a range of products that now typically get separated out in massive refineries—requiring huge amounts of energy. “It’s possible that it could be worked out at the scale that is being proposed,” he adds. “But we haven’t done it at this point, and it may not turn out to be the most effective way from an economic perspective.” There is, however, an unexpected benefit of direct air capture of carbon dioxide, says Realff, and it could help stimulate market acceptance of the technology: One reason office buildings replace their air so frequently is simply to protect workers from elevated levels of carbon dioxide. His research suggests that capturing the carbon dioxide from the air stream may be one way to cut energy costs, by reducing the frequency of air changes.

Dittmeyer disputes the argument that thinking big is always better. He notes that small, modular plants are a trend in some areas of chemical engineering, “because they are more flexible and don’t involve such a financial risk.” He also anticipates that cost will become less of a barrier as governments face up to the urgency of achieving a climate solution, and as jurisdictions increasingly impose carbon taxes or mandate strict energy efficiency standards for buildings.

“Of course, it’s a visionary perspective,” he says. “It relies on this idea of a decentralized product empowering people, not leaving it to industry. Industrial players observe the situation, but as long as there is no profit in the short term, they won’t do anything. If we have the technology that is safe and affordable, though maybe not as cheap, we can generate some momentum” among individuals, much as happened in the early stages of the solar industry. “And then I would expect the industrial parties to act, too.”

Richard Conniff is an award-winning science writer. His books include The Species Seekers: Heroes, Fools, and the Mad Pursuit of Life on Earth (W. W. Norton, 2011).


Could Consciousness All Come Down to the Way Things Vibrate?

A resonance theory of consciousness suggests that the way all matter vibrates, and the tendency for those vibrations to sync up, might be a way to answer the so-called ‘hard problem’ of consciousness.

The Conversation

  • Tam Hunt

What do synchronized vibrations add to the mind/body question? Photo by agsandrew / Shutterstock.com.

Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium?

These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years.

The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.”

Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter.

Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler, and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and physical reality more generally. It sounds like something the hippies might have dreamed up – it’s all vibrations, man! – but stick with me.

How do things in nature – like flashing fireflies – spontaneously synchronize? Photo by Suzanne Tucker /Shutterstock.com.

All About the Vibrations

All things in our universe are constantly in motion, vibrating. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at various frequencies. Resonance is a type of motion, characterized by oscillation between two states. And ultimately all matter is just vibrations of various underlying fields. As such, at every scale, all of nature vibrates.

Something interesting happens when different vibrating things come together: They will often start, after a little while, to vibrate together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious. This is described as the phenomenon of spontaneous self-organization.
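A standard mathematical picture of this spontaneous syncing is the Kuramoto model of coupled oscillators, which underlies several of the examples Strogatz describes. The sketch below is only an illustration of that general idea, in Python with arbitrary parameter values; it is not drawn from the resonance theory itself:

import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 100, 1.5, 0.01, 5000
omega = rng.normal(1.0, 0.1, N)        # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, N)   # initial phases

def coherence(phases):
    """Order parameter r: 1 = perfect sync, 0 = completely incoherent."""
    return abs(np.mean(np.exp(1j * phases)))

print(f"coherence before: {coherence(theta):.2f}")
for _ in range(steps):
    # each oscillator is nudged toward the phases of all the others
    coupling = K * np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += (omega + coupling) * dt
print(f"coherence after:  {coherence(theta):.2f}")

With the coupling strong enough, the oscillators lock together even though their natural frequencies differ, which is the essence of “sync.”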

Mathematician Steven Strogatz provides various examples from physics, biology, chemistry and neuroscience to illustrate “sync” – his term for resonance – in his 2003 book “Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life,” including:

  • When fireflies of certain species come together in large gatherings, they start flashing in sync, in ways that can still seem a little mystifying.
  • Lasers are produced when photons of the same power and frequency sync up.
  • The moon’s rotation is exactly synced with its orbit around the Earth such that we always see the same face.

Examining resonance leads to potentially deep insights about the nature of consciousness and about the universe more generally.

External electrodes can record a brain’s activity. Photo by vasara / Shutterstock.com.

Sync Inside Your Skull

Neuroscientists have identified sync in their research, too. Large-scale neuron firing occurs in human brains at measurable frequencies, with mammalian consciousness thought to be commonly associated with various kinds of neuronal sync.

For example, German neurophysiologist Pascal Fries has explored the ways in which various electrical patterns sync in the brain to produce different types of human consciousness.

Fries focuses on gamma, beta and theta waves. These labels refer to the speed of electrical oscillations in the brain, measured by electrodes placed on the outside of the skull. Groups of neurons produce these oscillations as they use electrochemical impulses to communicate with each other. It’s the speed and voltage of these signals that, when averaged, produce EEG waves that can be measured at signature cycles per second.

Each type of synchronized activity is associated with certain types of brain function. Image from artellia / Shutterstock.com.

Gamma waves are associated with large-scale coordinated activities like perception, meditation or focused consciousness; beta with maximum brain activity or arousal; and theta with relaxation or daydreaming. These three wave types work together to produce, or at least facilitate, various types of human consciousness, according to Fries. But the exact relationship between electrical brain waves and consciousness is still very much up for debate.

Fries calls his concept “communication through coherence.” For him, it’s all about neuronal synchronization. Synchronization, in terms of shared electrical oscillation rates, allows for smooth communication between neurons and groups of neurons. Without this kind of synchronized coherence, inputs arrive at random phases of the neuron excitability cycle and are ineffective, or at least much less effective, in communication.
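One common way to quantify this sort of phase synchronization between two recorded signals is the phase-locking value. The toy example below is my own illustration of that measure on synthetic 40 Hz signals; it is not Fries’s analysis pipeline:

import numpy as np
from scipy.signal import hilbert

fs = 1000                        # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)      # two seconds of signal
rng = np.random.default_rng(1)

# two noisy 40 Hz ("gamma band") signals with a fixed phase offset
a = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 40 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

def plv(x, y):
    """Phase-locking value: 1 = phases perfectly locked, 0 = no locking."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return abs(np.mean(np.exp(1j * phase_diff)))

print(f"PLV, two synced signals: {plv(a, b):.2f}")                            # high
print(f"PLV, signal vs. noise:   {plv(a, rng.standard_normal(t.size)):.2f}")  # low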

A Resonance Theory of Consciousness

Our resonance theory builds upon the work of Fries and many others, with a broader approach that can help to explain not only human and mammalian consciousness, but also consciousness more broadly.

Based on the observed behavior of the entities that surround us, from electrons to atoms to molecules, to bacteria to mice, bats, rats, and on, we suggest that all things may be viewed as at least a little conscious. This sounds strange at first blush, but “panpsychism” – the view that all matter has some associated consciousness – is an increasingly accepted position with respect to the nature of consciousness.

The panpsychist argues that consciousness did not emerge at some point during evolution. Rather, it’s always associated with matter and vice versa – they’re two sides of the same coin. But the large majority of the mind associated with the various types of matter in our universe is extremely rudimentary. An electron or an atom, for example, enjoys just a tiny amount of consciousness. But as matter becomes more interconnected and rich, so does the mind, and vice versa, according to this way of thinking.

Biological organisms can quickly exchange information through various biophysical pathways, both electrical and electrochemical. Non-biological structures can only exchange information internally using heat/thermal pathways – much slower and far less rich in information in comparison. Living things leverage their speedier information flows into larger-scale consciousness than what would occur in similar-size things like boulders or piles of sand, for example. There’s much greater internal connection and thus far more “going on” in biological structures than in a boulder or a pile of sand.

Under our approach, boulders and piles of sand are “mere aggregates,” just collections of highly rudimentary conscious entities at the atomic or molecular level only. That’s in contrast to what happens in biological life forms where the combinations of these micro-conscious entities together create a higher level macro-conscious entity. For us, this combination process is the hallmark of biological life.

The central thesis of our approach is this: the particular linkages that allow for large-scale consciousness – like those humans and other mammals enjoy – result from a shared resonance among many smaller constituents. The speed of the resonant waves that are present is the limiting factor that determines the size of each conscious entity in each moment.

As a particular shared resonance expands to more and more constituents, the new conscious entity that results from this resonance and combination grows larger and more complex. So the shared resonance in a human brain that achieves gamma synchrony, for example, includes a far larger number of neurons and neuronal connections than is the case for beta or theta rhythms alone.

What about larger inter-organism resonance like the cloud of fireflies with their little lights flashing in sync? Researchers think their bioluminescent resonance arises due to internal biological oscillators that automatically result in each firefly syncing up with its neighbors.

Is this group of fireflies enjoying a higher level of group consciousness? Probably not, since we can explain the phenomenon without recourse to any intelligence or consciousness. But in biological structures with the right kind of information pathways and processing power, these tendencies toward self-organization can and often do produce larger-scale conscious entities.

Our resonance theory of consciousness attempts to provide a unified framework that includes neuroscience, as well as more fundamental questions of neurobiology and biophysics, and also the philosophy of mind. It gets to the heart of the differences that matter when it comes to consciousness and the evolution of physical systems.

It is all about vibrations, but it’s also about the type of vibrations and, most importantly, about shared vibrations.

Tam Hunt is an Affiliate Guest in Psychology at the University of California, Santa Barbara.


Science or Compliance ?

Getting There Social Theory July 25th 2020

There is a lot going on in the world, much of it quite bad. As a trained social scientist from the early 1970s, I was taught sociological and economic theories no longer popular with the global monstrously rich ruling elite.  By the way, in case you don’t know, you do not have to hold political office to be a part of that elite.  I was also taught philosophy and economic history of Britain and the United States.

So, firstly let’s get economics dealt with.  I was taught about its founders, men like Jeremy Bentham, also a philosopher, Malthus, Jevons and Marshall – the latter bringing order, principles and so called ‘rational economic man’ into the discipline.

Society was pretty rigid, wars were the way countries got richer and class systems became more objectified. All went well until the post-World War One ‘Great Depression’ when, in spite of rapidly falling interest rates, the rich decided to let the poor sink while they retrenched and had a good time – stirring up Nazism.

Economics revolutionary John Maynard Keynes concluded that governments needed to tax the rich and borrow to spend their way out of depression. Britain’s elite would have none of it, but the U.S.A., Italy and Germany took it up. I admit to being a nerd: I was reading Keynes’ ‘General Theory of Employment, Interest and Money’ in my teens, which was much more interesting to me than following football.

Meanwhile Russia was locked out as a pariah. Britain had done its best to discredit and destroy Russia because the revolutionaries had killed the British Royal family’s treacherous cousins – terrible and corrupt rulers of Russia – and had terrified the British rich with the fear of the lower orders rising up in communist revolution.

Only World War Two saved them, offering a wonderful opportunity to slaughter more of the lower orders. In the process, their empire was exposed and fell apart in the post-war age – a ghost of it surviving as the Commonwealth (sic).

So we come to sociology. Along the way, through this Industrial Revolution, empire building, oppression and decline, a so-called ‘Science of Society’ had been developing, with substantial data collected and poured into theories. Marxism was the most famous, with Karl Marx’s forgotten friend, the industrialist Friedrich Engels, well placed to collect the data.

The essence of Marxist theory, which was primarily based on the Hegelian dialectic and Marx’s historical studies, was that capitalism contained the seeds of its own destruction due to an inherent conflict between those who owned the means of production and the slaves being exploited for profit and greed. Taking the opportunity provided by incompetent Russian elite rule in 1917, Germany helped smuggle Lenin back into Russia to foment the Russian revolution.

That revolution and Russia have terrified the rich western elites ever since, with all manner of methods and episodes used to undermine it. It is no wonder, leaving the vile Stalin to one side, that Russia developed police state methods, leading to what windbag Churchill called an Iron Curtain descending and dividing Europe.

By 1991, the West – dominated by the British and U.S. elites who had the most to lose from what the U.S. had called ‘The Domino Theory’ of one country after another falling to communism, because the masses might realise how they were being exploited – thought its day had come. Gorbachev got rid of the Berlin Wall; the U.S. undermined him to get their friend Yeltsin into power.

But it didn’t last once Putin stepped up. Oligarchs, allowed by Yeltsin to rip off state assets, rushed to Britain, even donating to Tory Party funds. Ever since, the Western elite have been in overdrive to discredit Putin in spite of the progress he has inspired and directed.

Anglo-US sanctions aren’t working fast enough and Germany wants to buy Russian gas – Nord Stream 2. So now we have a fake socialist, former head of Britain’s corrupt CPS and now Labour’s top man (sic), wanting RT (Russia Today) closed down. There is a lot of worry about people not watching BBC TV in spite of being forced to pay for an expensive licence, even though they do not want to watch the BBC’s smug upper-middle-class drivel and biased news. This is where sociology comes back into the picture.

The discipline (sic) of actual sociology derived from French thinkers like Auguste Comte, who predicted that sociologists would become the priesthood of modern society – see where our government gets its ‘the science’ mantra from.

As with economics, sociology was about understanding the increasingly complex way of life in an industrialising world. Early schools of sociological thought drew comparisons with Darwin’s idea of organisms evolving. So society’s head was its government, the transport system its veins and arteries, and so on, with every part working towards functional integration.

Herbert Spencer, whose girlfriend Mary Ann Evans wrote social science orientated novels under the name George Eliot, and Frenchman Emile Durkheim founded this ‘functionalist’ school of sociology. Durkheim, inspired by thinkers from the French Revolutionary era, took that school a stage further. His theory considered dysfunctions, which he called ‘pathological’ factors, like suicide. Robert K. Merton went on, after 1945, to write about dysfunctional aspects of society, building on Durkheim’s work. Both men had a concept of ‘anomie’: Durkheim talked of normlessness, Merton of people and societies never satisfied, having ever-receding horizons.

To an old-school person like myself, these ideas are still useful, as is Keynes on economics. One just has to look behind today’s self-interested pseudo-scientific jargon about ‘experts say’, ‘the science’ and ‘studies reveal’. The important thing to remember about any social science – and epidemiology is among them – is that what you get or predict depends on what you put in. As far as Covid 19 is concerned, there are too many vested interests now to take anything they say seriously. It is quite clear that there is no evidence that lockdown works. There is clear evidence that certain groups make themselves vulnerable or are deluded that death does not come with old age.

I am old. I am one of the ‘I’ and ‘Me’ generation whose interests should not come first, and nor should the BAME. The same goes for Africa, the Indian subcontinent and the Middle East, where overpopulation, foreign aid, corruption, Oxfam, ignorance, dictators and religious bigotry are not solutions to Covid 19 or anything else. If our pathetic fake-caring politicians carry on like this, grovelling to the likes of the WHO, then we are all doomed.

As for little Greta, she is a rather noisy, poorly educated, opinionated stooge. She has no idea what she is talking about. As for modern sociology, it is pure feminist, narrow-minded dogma, popular on police training courses for morons to use for profiling and fitting up innocent men. They go on toilet-paper degree courses, getting rather impressive letters, BSc, to make them look and sound like experts.

Robert Cook

New Alien Theory July 18th 2020

After decades of searching, we still haven’t discovered a single sign of extraterrestrial intelligence. Probability tells us life should be out there, so why haven’t we found it yet?

The problem is often referred to as Fermi’s paradox, after the Nobel Prize–winning physicist Enrico Fermi, who once asked his colleagues this question at lunch. Many theories have been proposed over the years. It could be that we are simply alone in the universe or that there is some great filter that prevents intelligent life progressing beyond a certain stage. Maybe alien life is out there, but we are too primitive to communicate with it, or we are placed inside some cosmic zoo, observed but left alone to develop without external interference. Now, three researchers think they may have another potential answer to Fermi’s question: Aliens do exist; they’re just all asleep.

According to a research paper accepted for publication in the Journal of the British Interplanetary Society, extraterrestrials are sleeping while they wait. In the paper, authors from Oxford’s Future of Humanity Institute and the Astronomical Observatory of Belgrade – Anders Sandberg, Stuart Armstrong, and Milan Cirkovic – argue that the universe is too hot right now for advanced, digital civilizations to make the most efficient use of their resources. The solution: Sleep and wait for the universe to cool down, a process known as aestivating (like hibernation but sleeping until it’s colder).

Understanding the new hypothesis first requires wrapping your head around the idea that the universe’s most sophisticated life may elect to leave biology behind and live digitally. Having essentially uploaded their minds onto powerful computers, the civilizations choosing to do this could enhance their intellectual capacities or inhabit some of the harshest environments in the universe with ease.

The idea that life might transition toward a post-biological form of existence is gaining ground among experts. “It’s not something that is necessarily unavoidable, but it is highly likely,” Cirkovic told me in an interview.

Once you’re living digitally, Cirkovic explained, it’s important to process information efficiently. Each computation has a certain cost attached to it, and this cost is tightly coupled with temperature. The colder it gets, the lower the cost is, meaning you can do more with the same amount of resources. This is one of the reasons why we cool powerful computers. Though humans may find the universe to be a pretty frigid place (the background radiation hovers about 3 kelvins above absolute zero, the lowest point on the temperature scale), digital minds may find it far too hot.
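A physical basis for this cost-temperature coupling is Landauer’s principle: erasing one bit of information dissipates at least k_B * T * ln(2) joules, so a fixed energy budget buys more computation in a colder universe. The temperatures below are purely illustrative, and the paper’s much larger overall gain estimate rests on further assumptions about the far future:

import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def min_joules_per_bit_erased(temperature_kelvin: float) -> float:
    """Landauer bound: minimum energy dissipated to erase one bit."""
    return K_B * temperature_kelvin * math.log(2)

today = min_joules_per_bit_erased(2.7)      # roughly today's background temperature
colder = min_joules_per_bit_erased(1e-10)   # a hypothetical far-future temperature

print(f"cost per bit today:      {today:.2e} J")
print(f"cost per bit far future: {colder:.2e} J")
print(f"work per joule improves by a factor of {today / colder:.1e}")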

But why aestivate? Surely any aliens wanting more efficient processing could cool down their systems manually, just as we do with computers. In the paper, the authors concede this is a possibility. “While it is possible for a civilization to cool down parts of itself to any low temperature,” the authors write, that, too, requires work. So it wouldn’t make sense for a civilization looking to maximize its computational capacity to waste energy on the process. As Sandberg and Cirkovic elaborate in a blog post, it’s more likely that such artificial life would be in a protected sleep mode today, ready to wake up in colder futures.

If such aliens exist, they’re in luck. The universe appears to be cooling down on its own. Over the next trillions of years, as it continues to expand and the formation of new stars slows, the background radiation will reduce to practically zero. Under those conditions, Sandberg and Cirkovic explain, this kind of artificial life would get “tremendously more done.” Tremendous isn’t an understatement, either. The researchers calculate that by employing such a strategy, they could achieve up to 10^30 times more than if done today. That’s a 1 with 30 zeroes after it.

But just because the aliens are asleep doesn’t mean we can’t find signs of them. Any aestivating civilization has to preserve resources it intends to use in the future. Processes that waste or threaten these resources, then, should be conspicuously absent, thanks to interference from the aestivators. (If they are sufficiently advanced to upload their minds and aestivate, they should be able to manipulate space.) This includes galaxies colliding, galactic winds venting matter into intergalactic space, and stars converting into black holes, which can push resources beyond the reach of the sleeping civilization or change them into less-useful forms.

Another strategy to find the sleeping aliens, Cirkovic said, might be to try and meddle with the aestivators’ possessions and territory, which we may already reside within. One way of doing this would be to send out self-replicating probes into the universe that would steal the aestivators’ things. Any competent species ought to have measures in place to respond to these kinds of threats. “It could be an exceptionally dangerous test,” he cautioned, “but if there really are very old and very advanced civilizations out there, we can assume there is a potential for danger in anything we do.”

Interestingly, neither Sandberg nor Cirkovic said they have much faith in finding anything. Sandberg, writing on his blog, states that he does not believe the hypothesis to be a likely one: “I personally think the likeliest reason we are not seeing aliens is not that they are aestivating.” He writes that he feels it’s more likely that “they do not exist or are very far away.”

Cirkovic concurred. “I don’t find it very likely, either,” he said in our interview. “I much prefer hypotheses that do not rely on assuming intentional decisions made by extraterrestrial societies. Any assumption is extremely speculative.” There could be forms of energy that we can’t even conceive of using now, he said—producing antimatter in bulk, tapping evaporating black holes, using dark matter. Any of this could change what we might expect to see from an advanced technical civilization.

Yet, he said, the theory has a place. It’s important to cover as much ground as possible. You need to test a wide set of hypotheses one by one—falsifying them, pruning them—to get closer to the truth. “This is how science works. We need to have as many hypotheses and explanations for Fermi’s paradox as possible,” he said.

Plus, there’s a modest likelihood their aestivating aliens idea might be part of the answer, Cirkovic said. We shouldn’t expect a single hypothesis to account for Fermi’s paradox. It will be more of a “patchwork-quilt kind of solution,” he said.

And it’s important to keep exploring solutions. Fermi’s paradox is so much more than an intellectual exercise. It’s about trying to understand what might be out there and how this might explain our past and guide our future.

“I would say that 90-plus percent of hypotheses that were historically proposed to account for Fermi’s paradox have practical consequences,” Cirkovic said. They allow us to think proactively about some of the problems we as a species face, or may one day face, and prompt us to develop strategies to actively shape a more prosperous and secure future for humanity. “We can apply this reasoning to our past, to the emergence of life and complexity. We can also apply similar reasoning to thinking about our future. It can help us avoid catastrophes and help us understand the most likely fate of intelligent species in the universe.”

Stephen Hawking Left Us Bold Predictions on AI, Superhumans, and Aliens

The great physicist’s thoughts on the future of the human race and the fragility of planet Earth.

Quartz

  • Max de Haldevang

The late physicist Stephen Hawking’s last writings predict that a breed of superhumans will take over, having used genetic engineering to surpass their fellow beings.

In Brief Answers to the Big Questions, to be published in October 2018 and excerpted in the UK’s Sunday Times (paywall), Hawking pulls no punches on subjects like machines taking over, the biggest threat to Earth, and the possibilities of intelligent life in space.

Artificial Intelligence

Hawking delivers a grave warning on the importance of regulating AI, noting that “in the future AI could develop a will of its own, a will that is in conflict with ours.” A possible arms race over autonomous weapons should be stopped before it can start, he writes, asking what would happen if a crash similar to the 2010 stock market Flash Crash happened with weapons. He continues:

In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

Earth’s Bleak Future, Gene Editing, and Superhumans

The bad news: At some point in the next 1,000 years, nuclear war or environmental calamity will “cripple Earth.” However, by then, “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.” The Earth’s other species probably won’t make it, though.

The humans who do escape Earth will probably be new “superhumans” who have used gene editing technology like CRISPR to outpace others. They’ll do so by defying laws against genetic engineering, improving their memories, disease resistance, and life expectancy, he says.

Hawking seems curiously enthusiastic about this final point, writing, “There is no time to wait for Darwinian evolution to make us more intelligent and better natured.”

Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate. If the human race manages to redesign itself, it will probably spread out and colonise other planets and stars.

Intelligent Life in Space

Hawking acknowledges there are various explanations for why intelligent life hasn’t been found or has not visited Earth. His predictions here aren’t so bold, but his preferred explanation is that humans have “overlooked” forms of intelligent life that are out there.

Does God Exist?

No, Hawking says.

The question is, is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science “God”, but it wouldn’t be a personal God that you would meet and put questions to.

The Biggest Threats to Earth

Threat number one is an asteroid collision, like the one that killed the dinosaurs. However, “we have no defense” against that, Hawking writes. More immediately: climate change. “A rise in ocean temperature would melt the ice caps and cause the release of large amounts of carbon dioxide,” Hawking writes. “Both effects could make our climate like that of Venus with a temperature of 250°C.”

The Best Idea Humanity Could Implement

Nuclear fusion power. That would give us clean energy with no pollution or global warming.




Could Invisible Aliens Really Exist Among Us? An Astrobiologist Explains

The Earth may be crawling with undiscovered creatures with a biochemistry that differs from life as we know it. July 13th 2020

The Conversation

  • Samantha Rolfe

They probably won’t look anything like this. Credit: Martina Badini / Shutterstock.

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, But Not as We Know It

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – that is, life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-Based Life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90 percent of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Credit: Zita.

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, it is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

The chemistry of life on Earth is fundamentally different from the bulk composition of the Earth. Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks. In fact, the chemical composition of life on Earth has an approximate correlation with the chemical composition of the sun, with 98 percent of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.

So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Samantha Rolfe is a Lecturer in Astrobiology and Principal Technical Officer at the University of Hertfordshire’s Bayfordbury Observatory.

Memories Can Be Injected and Survive Amputation and Metamorphosis July 13th 2020

If a headless worm can regrow a memory, then where is the memory stored? And, if a memory can regenerate, could you transfer it?

Nautilus

  • Marco Altamirano

The study of memory has always been one of the stranger outposts of science. In the 1950s, an unknown psychology professor at the University of Michigan named James McConnell made headlines—and eventually became something of a celebrity—with a series of experiments on freshwater flatworms called planaria. These worms fascinated McConnell not only because they had, as he wrote, a “true synaptic type of nervous system” but also because they had “enormous powers of regeneration…under the best conditions one may cut [the worm] into as many as 50 pieces” with each section regenerating “into an intact, fully-functioning organism.” 

In an early experiment, McConnell trained the worms à la Pavlov by pairing an electric shock with flashing lights. Eventually, the worms recoiled to the light alone. Then something interesting happened when he cut the worms in half. The head of one half of the worm grew a tail and, understandably, retained the memory of its training. Surprisingly, however, the tail, which grew a head and a brain, also retained the memory of its training. If a headless worm can regrow a memory, then where is the memory stored, McConnell wondered. And, if a memory can regenerate, could he transfer it? 

Perhaps. Swedish neurobiologist Holger Hydén had suggested, in the 1960s, that memories were stored in neuron cells, specifically in RNA, the messenger molecule that takes instructions from DNA and links up with ribosomes to make proteins, the building blocks of life. McConnell, having become interested in Hydén’s work, scrambled to test for a speculative molecule that he called “memory RNA” by grafting portions of trained planaria onto the bodies of untrained planaria. His aim was to transfer RNA from one worm to another but, encountering difficulty getting the grafts to stick, he turned to a “more spectacular type of tissue transfer, that of ‘cannibalistic ingestion.’” Planaria, accommodatingly, are cannibals, so McConnell merely had to blend trained worms and feed them to their untrained peers. (Planaria lack the acids and enzymes that would completely break down food, so he hoped that some RNA might be integrated into the consuming worms.) 

Shockingly, McConnell reported that cannibalizing trained worms induced learning in untrained planaria. In other experiments, he trained planaria to run through mazes and even developed a technique for extracting RNA from trained worms in order to inject it into untrained worms in an effort to transmit memories from one animal to another. Eventually, after his retirement in 1988, McConnell faded from view, and his work was relegated to the sidebars of textbooks as a curious but cautionary tale. Many scientists simply assumed that invertebrates like planaria couldn’t be trained, making the dismissal of McConnell’s work easy. McConnell also published some of his studies in his own journal, The Worm Runner’s Digest, alongside sci-fi humor and cartoons. As a result, there wasn’t a lot of interest in attempting to replicate his findings.

Nonetheless, McConnell’s work has recently experienced a sort of renaissance, taken up by innovative scientists like Michael Levin, a biologist at Tufts University specializing in limb regeneration, who has reproduced modernized and automated versions of his planarian maze-training experiments. The planarian itself has enjoyed a newfound popularity, too, after Levin cut the tail off a worm and shot a bioelectric current through the incision, provoking the worm to regrow another head in place of its tail (garnering Levin the endearing moniker of “young Frankenstein”). Levin also sent 15 worm pieces into space, with one returning, strangely enough, with two heads (“remarkably,” Levin and his colleagues wrote, “amputating this double-headed worm again, in plain water, resulted again in the double-headed phenotype.”) 

David Glanzman, a neurobiologist at the University of California, Los Angeles, has another promising research program that recently struck a chord reminiscent of McConnell’s memory experiments—although, instead of planaria, Glanzman’s lab works mostly with aplysia, the darling mollusk of neuroscience on account of its relatively simple nervous system. (Also known as “sea hares,” aplysia are giant, inky sea slugs that swim with undulating, ruffled wings.)

In 2015, Glanzman was testing the textbook theory on memory, which holds that memories are stored in synapses, the connective junctions between neurons. His team, attempting to create and erase a memory in aplysia, periodically delivered mild electric shocks to train the mollusk to prolong a reflex, one where it withdraws, upon touch, its siphon, a little breathing tube between the gill and the tail. After training, his lab witnessed new synaptic growth between the sensory neuron that felt touch and the motor neuron that triggered the siphon withdrawal reflex. Developing after the training, the increased connectivity between those neurons seemed to corroborate the theory that memories are stored in synaptic connections. Glanzman’s team tried to erase the memory of the training by dismantling the synaptic connections between the neurons and, sure enough, the snails subsequently behaved as if they’d lost the memory, further corroborating the synaptic memory theory. After Glanzman’s team administered a “reminder” shock to the snails, the researchers were surprised to quickly notice different, newer synaptic connections growing between the neurons. The snails then behaved, once again, as if they remembered the sensitizing training they seemed to have previously forgotten. 

If the memory persisted through such major synaptic change, where the synaptic connections that emerged through training had disappeared and completely different, newer connections had taken their place, then maybe, Glanzman thought, memories are not really stored in synapses after all. The experiment seems like something out of Eternal Sunshine of the Spotless Mind, a movie in which ex-lovers trying to forget each other undergo a questionable procedure that deletes the memory of a person, but evidently not to the point beyond recall. The lovers both hide a plan deep within their minds to meet in Montauk in the end. The movie suggests, in a way, that memories are never completely lost, that it always remains possible to go back, even to people and places that seem long forgotten.

But if memories aren’t stored in synaptic connections, where are they stored instead? Glanzman’s unpopular hypothesis was that they might reside in the nucleus of the neuron cell, where DNA and RNA sequences compose instructions for life processes. DNA sequences are fixed and unchanging, so most of an organism’s adaptability comes from supple epigenetic mechanisms, processes that regulate gene expression in response to environmental cues or pressures, which sometimes involve RNA. If DNA is printed sheet music, RNA-induced epigenetic mechanisms are like improvisational cuts and arrangements that might conduct learning and memory.

Perhaps memories reside in epigenetic changes induced by RNA, that improv molecule that scores protein-based adaptations of life. Glanzman’s team went back to their aplysia and trained them over two days to prolong their siphon-withdrawal reflex. They then dissected their nervous systems, extracting RNA involved in forming the memory of their training, and injected it into untrained aplysia, which were tested for learning a day later. Glanzman’s team found that the RNA from trained donors induced learning, while the RNA from untrained donors had no effect. They had transferred a memory, vaguely but surely, from one animal to another, and they had strong evidence that RNA was the memory-transferring agent.

Glanzman now believes that synapses are necessary for the activation of a memory, but that the memory is encoded in the nucleus of the neuron through epigenetic changes. “It’s like a pianist without hands,” Glanzman says. “He may know how to play Chopin, but he’d need hands to exercise the memory.” 

The work of Douglas Blackiston, an Allen Discovery Center scientist at Tufts University, who has studied memory in insects, paints a similar picture. He wanted to know if a butterfly could remember something about its life as a caterpillar, so he exposed caterpillars to the scent of ethyl acetate followed by a mild electric shock. After acquiring an aversion to ethyl acetate, the caterpillars pupated and, after emerging as adult butterflies several weeks later, were tested for memory of their aversive training. Surprisingly, the adult butterflies remembered—but how? The entire caterpillar becomes a cytoplasmic soup before it metamorphosizes into a butterfly. “The remodeling is catastrophic,” Blackiston says. “After all, we’re moving from a crawling machine to a flying machine. Not only the body but the entire brain has to be rewired.”

It’s hard to study exactly what goes on during pupation in vivo, but there’s a subset of caterpillar neurons that may persist in what are called “mushroom bodies,” a pair of structures involved in olfaction that many insects have located near their antennae. In other words, some structure remains. “It’s not soup,” Blackiston says. “Well, maybe it’s soup, but it’s chunky.” There’s near complete pruning of neurons during pupation, and the few neurons that remain become disconnected from other neurons, dissolving the synaptic connections between them in the process, until they reconnect with other neurons during the remodeling into the butterfly brain. Like Glanzman, Blackiston employs a hand analogy: “It’s like a small group of neurons were holding hands, but then let go and moved around, finally reconnecting with different neurons in the new brain.” If the memory was stored anywhere, Blackiston suspects it was stored in the subset of neurons located in the mushroom bodies, the only known carryover material from the caterpillar to the butterfly. 

In the end, despite its whimsical caricature of the science of memory, Eternal Sunshine may have stumbled on a correct premise. Glanzman and Blackiston believe their experiments harbor hopeful news for Alzheimer’s patients: it might be possible to repair deteriorated neurons that could, at least theoretically, find their way back to lost memories, perhaps with the guidance of appropriate RNA.

Marco Altamirano is a writer based in New Orleans and the author of Time, Technology, and Environment: An Essay on the Philosophy of Nature.

Blindsight: a strange neurological condition that could help explain consciousness

July 2, 2020 11.31am BST

Author

  1. Henry Taylor Birmingham Fellow in Philosophy, University of Birmingham

Disclosure statement

Henry Taylor previously received funding from The Leverhulme Trust and Isaac Newton Trust, but they do not stand to benefit from publication of this article.

Partners

University of Birmingham

University of Birmingham provides funding as a founding partner of The Conversation UK.


Imagine being completely blind but still being able to see. Does that sound impossible? Well, it happens. A few years ago, a man (let’s call him Barry) suffered two strokes in quick succession. As a result, Barry was completely blind, and he walked with a stick.

One day, some psychologists placed Barry in a corridor full of obstacles like boxes and chairs. They took away his walking stick and told him to walk down the corridor. The result of this simple experiment would prove dramatic for our understanding of consciousness. Barry was able to navigate around the obstacles without tripping over a single one.

Barry has blindsight, an extremely rare condition that is as paradoxical as it sounds. People with blindsight consistently deny awareness of items in front of them, but they are capable of amazing feats, which demonstrate that, in some sense, they must be able to see them.

In another case, a man with blindsight (let’s call him Rick) was put in front of a screen and told to guess (from several options) what object was on the screen. Rick insisted that he didn’t know what was there and that he was just guessing, yet he was guessing with over 90% accuracy.

Into the brain

Blindsight results from damage to an area of the brain called the primary visual cortex. This is one of the areas, as you might have guessed, responsible for vision. Damage to primary visual cortex can result in blindness – sometimes total, sometimes partial.

So how does blindsight work? The eyes receive light and convert it into information that is then passed into the brain. This information then travels through a series of pathways through the brain to eventually end up at the primary visual cortex. For people with blindsight, this area is damaged and cannot properly process the information, so the information never makes it to conscious awareness. But the information is still processed by other areas of the visual system that are intact, enabling people with blindsight to carry out the kind of tasks that we see in the case of Barry and Rick.

Some blind people appear to be able to ‘see’. Akemaster/Shutterstock

Blindsight serves as a particularly striking example of a general phenomenon, which is just how much goes on in the brain below the surface of consciousness. This applies just as much to people without blindsight as people with it. Studies have shown that naked pictures of attractive people can draw our attention, even when we are completely unaware of them. Other studies have demonstrated that we can correctly judge the colour of an object without any conscious awareness of it.

Blindsight debunked?

Blindsight has generated a lot of controversy. Some philosophers and psychologists have argued that people with blindsight might be conscious of what is in front of them after all, albeit in a vague and hard-to-describe way.

This suggestion presents a difficulty, because ascertaining whether someone is conscious of a particular thing is a complicated and highly delicate task. There is no “test” for consciousness. You can’t put a probe or a monitor next to someone’s head to test whether they are conscious of something – it’s a totally private experience.

We can, of course, ask them. But interpreting what people say about their own experiences can be a thorny task. Their reports sometimes seem to indicate that they have no consciousness at all of the objects in front of them (Rick once insisted that he did not believe that there really were any objects there). Other individuals with blindsight report feeling “visual pin-pricks” or “dark shadows” indicating the tantalising possibility that they did have some conscious awareness left over.

The boundaries of consciousness

So, what does blindsight tell us about consciousness? Exactly how you answer this question will heavily depend on which interpretation you accept. Do you think that those who have blindsight are in some sense conscious of what is out there or not?

The visual cortex. Geyer S, Weiss M, Reimann K, Lohmann G and Turner R/wikipedia, CC BY-SA

If they’re not, then blindsight provides an exciting tool that we can use to work out exactly what consciousness is for. By looking at what the brain can do without consciousness, we can try to work out which tasks ultimately require consciousness. From that, we may be able to work out what the evolutionary function of consciousness is, which is something that we are still relatively in the dark about.

On the other hand, if we could prove that people with blindsight are conscious of what is in front of them, this raises no less interesting and exciting questions about the limits of consciousness. What is their consciousness actually like? How does it differ from more familiar kinds of consciousness? And precisely where in the brain does consciousness begin and end? If they are conscious, despite damage to their visual cortex, what does that tell us about the role of this brain area in generating consciousness?

In my research, I am interested in the way that blindsight reveals the fuzzy boundaries at the edges of vision and consciousness. In cases like blindsight, it becomes increasingly unclear whether our normal concepts such as “perception”, “consciousness” and “seeing” are up to the task of adequately describing and explaining what is really going on. My goal is to develop more nuanced views of perception and consciousness that can help us understand their distinctly fuzzy edges.

To ultimately understand these cases, we will need to employ careful philosophical reflection on the concepts we use and the assumptions we make, just as much as we will need a thorough scientific investigation of the mechanics of the mind.


Scientists say most likely number of contactable alien civilisations is 36

New calculations come up with an estimate for worlds capable of communicating with others.

The Guardian

  • Nicola Davis

We’re listening … but is anything out there? Photo by dszc / Getty Images.

They may not be little green men. They may not arrive in a vast spaceship. But according to new calculations there could be more than 30 intelligent civilisations in our galaxy today capable of communicating with others.

Experts say the work not only offers insights into the chances of life beyond Earth but could shed light on our own future and place in the cosmos.

“I think it is extremely important and exciting because for the first time we really have an estimate for this number of active intelligent, communicating civilisations that we potentially could contact and find out there is other life in the universe – something that has been a question for thousands of years and is still not answered,” said Christopher Conselice, a professor of astrophysics at the University of Nottingham and a co-author of the research.

In 1961 the astronomer Frank Drake proposed what became known as the Drake equation, setting out seven factors that would need to be known to come up with an estimate for the number of intelligent civilisations out there. These factors ranged from the average number of stars that form each year in the galaxy through to the timespan over which a civilisation would be expected to be sending out detectable signals.

But few of the factors are measurable. “Drake equation estimates have ranged from zero to a few billion [civilisations] – it is more like a tool for thinking about questions rather than something that has actually been solved,” said Conselice.
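For readers who have not seen it, the equation itself is just a product of seven factors. The sketch below uses placeholder values chosen only to show the arithmetic; these are not the values, nor the refinements, used by Conselice and colleagues:

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """
    R_star: average rate of star formation in the galaxy (stars per year)
    f_p:    fraction of stars with planetary systems
    n_e:    habitable planets per system that has planets
    f_l:    fraction of habitable planets on which life appears
    f_i:    fraction of those where intelligence develops
    f_c:    fraction of those releasing detectable signals
    L:      years over which such a civilisation keeps signalling
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# placeholder guesses only, to show how the product behaves
print(f"{drake(R_star=1.5, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.1, f_c=0.1, L=10_000):.1f}")
# -> 30.0 communicating civilisations for this particular set of guesses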

Now Conselice and colleagues report in the Astrophysical Journal how they refined the equation with new data and assumptions to come up with their estimates.

“Basically, we made the assumption that intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution,” said Conselice.

The assumption, known as the Astrobiological Copernican Principle, is fair as everything from chemical reactions to star formation is known to occur if the conditions are right, he said. “[If intelligent life forms] in a scientific way, not just a random way or just a very unique way, then you would expect at least this many civilisations within our galaxy,” he said.

He added that, while it is a speculative theory, he believes alien life would have similarities in appearance to life on Earth. “We wouldn’t be super shocked by seeing them,” he said.

Under the strictest set of assumptions – where, as on Earth, life forms between 4.5bn and 5.5bn years after star formation – there are likely between four and 211 civilisations in the Milky Way today capable of communicating with others, with 36 the most likely figure. But Conselice noted that this figure is conservative, not least as it is based on how long our own civilisation has been sending out signals into space – a period of just 100 years so far.

The team add that our civilisation would need to survive at least another 6,120 years for two-way communication. “They would be quite far away … 17,000 light years is our calculation for the closest one,” said Conselice. “If we do find things closer … then that would be a good indication that the lifespan of [communicating] civilisations is much longer than a hundred or a few hundred years, that an intelligent civilisation can last for thousands or millions of years. The more we find nearby, the better it looks for the long-term survival of our own civilisation.”

Dr Oliver Shorttle, an expert in extrasolar planets at the University of Cambridge who was not involved in the research, said several as yet poorly understood factors needed to be unpicked to make such estimates, including how life on Earth began and how many Earth-like planets considered habitable could truly support life.

Dr Patricia Sanchez-Baracaldo, an expert on how Earth became habitable, from the University of Bristol, was more upbeat, despite emphasising that many developments were needed on Earth for conditions for complex life to exist, including photosynthesis. “But, yes if we evolved in this planet, it is possible that intelligent life evolved in another part of the universe,” she said.

Prof Andrew Coates, of the Mullard Space Science Laboratory at University College London, said the assumptions made by Conselice and colleagues were reasonable, but the quest to find life was likely to take place closer to home for now.

“[The new estimate] is an interesting result, but one which it will be impossible to test using current techniques,” he said. “In the meantime, research on whether we are alone in the universe will include visiting likely objects within our own solar system, for example with our Rosalind Franklin Exomars 2022 rover to Mars, and future missions to Europa, Enceladus and Titan [moons of Jupiter and Saturn]. It’s a fascinating time in the search for life elsewhere.”

Nicola Davis writes about science, health and environment for the Guardian and Observer and was commissioning editor for Observer Tech Monthly. Previously she worked for the Times and other publications. She has an MChem and a DPhil in Organic Chemistry from the University of Oxford. Nicola also presents the Science Weekly podcast.


‘Miles-wide anomaly’ over Texas sparks concerns HAARP weather manipulation has BEGUN

BIZARRE footage has emerged that allegedly proves the US government is testing weather manipulation technology, according to wild claims online.

The clip, captured in Texas, US, shows the moment radar was completely blotted out by an unknown source.

Another video shows a green blob forming above Sugar Land, quickly growing in size in a circular formation.

According to Travis Herzog, a meteorologist at ABC News, the phenomenon was caused by a flock of birds filling the sky.

But conspiracy theorist Tyler Glockner, who runs YouTube channel secureteam10, disagrees.

He posted a video yesterday speculating it could be something more sinister which was accidentally exposed by the news channel.

He also pointed out the number of birds needed to cause such an event would have been seen or recorded by someone.

And his video has now racked up more than 350,000 hits in less than 48 hours.

“I am willing to bet there is a power station near the centre of that burst. Some kind of HAARP technology,” one viewer suggested.

Another added: “This is HAARP and some kind of weather modification/manipulation technology.”

And a third simply claimed: “Scary weather manipulation in progress.”

The High-Frequency Active Auroral Research Programme was initiated as a research project between the US Air Force, Navy, University of Alaska Fairbanks and the Defense Advanced Research Projects Agency (DARPA).

Many conspiracists believe the US government is already using the HAARP programme to control weather occurrences through the use of chemtrailing.

Over the years, HAARP has been blamed for generating natural catastrophes such as thunderstorms and power loss as well as strange cloud formations.

But it was actually designed and built by BAE Advanced Technologies to help analyse the ionosphere and investigate the potential for developing enhanced technologies.

Climate change expert Janos Pasztor previously revealed to Daily Star Online how this technology could lead to weaponisation.

The following extract is from U.S. research on weather modification dating back to 1957. Posted July 5th 2020

COMMENT. From: Andrea Psoras-QEDI [apsoras@qedinternational.com]. Sent: Monday, May 05, 2008 3:08 PM. To: secretary. Subject: CFTC Requests Public Input on Possible Regulation of “Event Contracts”. Commodity Futures Trading Commission, Three Lafayette Centre, 1155 21st Street, NW, Washington, DC 20581; 202-418-5000; 202-418-5521 fax; 202-418-5514 TTY; questions@cftc.gov.

Dear Commissioners and Secretary: Not everything is a commodity, nor should something that is typically covered by some sort of property and casualty insurance suddenly become exchange tradable. Insurance companies have for a number of years provided compensation of some sort for random but periodic events. Where the insurance industry wants to off-load its risk at the expense of other commodities-market participants, it contributes to the sort of moral hazard which I vigorously oppose. Where there is ‘interest’ in developing these sorts of risk event instruments, to me it seems an admission that the insurance sector is perhaps marginal or worse, incompetent or too greedy to determine how to offer insurance for events presumably produced by nature. Now where there are weather and earth-shaking technologies, or what some circles call weather and electro-magnetic weapons, used insidiously, unfortunately, by our military, our intelligence apparatus, and perhaps our military contractors for purposes contrary to the oath of office our public servants take to the Constitution,

I suggest prohibiting the use of that technology rather than leaving someone else holding the bag for destruction where so-called ‘natural’ events were in fact produced by military contractor technology in the guise of ‘mother nature’. Consider that Rep. Dennis Kucinich, as well as former Senator John Glenn, attempted to have our Congress prohibit the use of space-based weapons. That class of weapons includes the ‘weather weapons’. See http://www.globalresearch.ca/articles/CH0409F.html as well as other articles about this on the Global Research website. Respectfully, Andrea Psoras.

“CFTC Requests Public Input on Possible Regulation of ‘Event Contracts’. Washington, DC: The Commodity Futures Trading Commission (CFTC) is asking for public comment on the appropriate regulatory treatment of financial agreements offered by markets commonly referred to as event, prediction, or information markets.

During the past several years, the CFTC has received numerous requests for guidance involving the trading of event contracts. These contracts typically involve financial agreements that are linked to events or measurable outcomes and often serve as information collection vehicles. The contracts are based on a broad spectrum of events, such as the results of presidential elections, world population levels, or economic measures. “Event markets are rapidly evolving, and growing, presenting a host of difficult policy and legal questions including: What public purpose is served in the oversight of these markets and what differentiates these markets from pure gambling outside the CFTC’s jurisdiction?” said CFTC Acting Chairman Walt Lukken.

“The CFTC is evaluating how these markets should be regulated with the proper protections in place and I encourage members of the public to provide their views.” In response to requests for guidance, and to promote regulatory certainty, the CFTC has commenced a comprehensive review of the Commodity Exchange Act’s applicability to event contracts and markets.

The CFTC is issuing a Concept Release to solicit the expertise and opinions of all interested parties, including CFTC registrants, legal practitioners, economists, state and federal regulatory authorities, academics, and event market participants. The Concept Release will be published in the Federal Register shortly; comments will be accepted for 60 days after publication in the Federal Register.” Comments may also be submitted electronically to secretary@cftc.gov. All comments received will be posted on the CFTC’s website.

Weather as a Force Multiplier: Owning the Weather in 2025. A Research Paper Presented to Air Force 2025, August 1996. Below are highlights contained within the actual report. Please remember that this research report was issued in 1996, eight years before this commentary was written, and that much of what was discussed as being in preliminary stages back then is now a reality.

In the United States, weather-modification will likely become a part of national security policy with both domestic and international applications. Our government will pursue such a policy, depending on its interests, at various levels. In this paper we show that appropriate application of weather-modification can provide battlespace dominance to a degree never before imagined. In the future, such operations will enhance air and space superiority and provide new options for battlespace shaping and battlespace awareness. “The technology is there, waiting for us to pull it all together” [General Gordon R. Sullivan, “Moving into the 21st Century: America’s Army and Modernization,” Military Review (July 1993), quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 75]. A global, precise, real-time, robust, systematic weather-modification capability would provide war-fighting CINCs [Commanders in Chief of unified commands] with a powerful force multiplier to achieve military objectives.

Since weather will be common to all possible futures, a weather-modification capability would be universally applicable and have utility across the entire spectrum of conflict. The capability of influencing the weather even on a small scale could change it from a force degrader to a force multiplier.

In 1957, the president’s advisory committee on weather control explicitly recognized the military potential of weather-modification, warning in their report that it could become a more important weapon than the atom bomb [William B. Meyer, “The Life and Times of US Weather: What Can We Do About It?” American Heritage 37, no. 4 (June/July 1986), 48]. Today [since 1969], weather-modification is the alteration of weather phenomena over a limited area for a limited period of time [Herbert S. Appleman, An Introduction to Weather-modification (Scott AFB, Ill.: Air Weather Service/MAC, September 1969), 1]. In the broadest sense, weather-modification can be divided into two major categories: suppression and intensification of weather patterns. In extreme cases, it might involve the creation of completely new weather patterns, attenuation or control of severe storms, or even alteration of global climate on a far-reaching and/or long-lasting scale.

Extreme and controversial examples of weather modification (creation of made-to-order weather, large-scale climate modification, creation and/or control or “steering” of severe storms, etc.) were researched as part of this study … the weather-modification applications proposed in this report range from technically proven to potentially feasible.

Applying Weather-modification to Military Operations: How will the military, in general, and the USAF, in particular, manage and employ a weather-modification capability?

We envision this will be done by the weather force support element (WFSE), whose primary mission would be to support the war-fighting CINCs with weather-modification options, in addition to current forecasting support. Although the WFSE could operate anywhere as long as it has access to the GWN and the system components already discussed, it will more than likely be a component within the AOC or its 2025-equivalent. With the CINC’s intent as guidance, the WFSE formulates weather-modification options using information provided by the GWN, local weather data network, and weather-modification forecast model.

The options include range of effect, probability of success, resources to be expended, the enemy’s vulnerability, and risks involved. The CINC chooses an effect based on these inputs, and the WFSE then implements the chosen course, selecting the right modification tools and employing them to achieve the desired effect. Sensors detect the change and feed data on the new weather pattern to the modeling system which updates its forecast accordingly. The WFSE checks the effectiveness of its efforts by pulling down the updated current conditions and new forecast(s) from the GWN and local weather data network, and plans follow-on missions as needed. This concept is illustrated in figure 3-2.

Two key technologies are necessary to meld an integrated, comprehensive, responsive, precise, and effective weather-modification system. Advances in the science of chaos are critical to this endeavor. Also key to the feasibility of such a system is the ability to model the extremely complex nonlinear system of global weather in ways that can accurately predict the outcome of changes in the influencing variables. Researchers have already successfully controlled single variable nonlinear systems in the lab and hypothesize that current mathematical techniques and computer capacity could handle systems with up to five variables.

Advances in these two areas would make it feasible to affect regional weather patterns by making small, continuous nudges to one or more influencing factors. Conceivably, with enough lead time and the right conditions, you could get “made-to-order” weather [William Brown, “Mathematicians Learn How to Tame Chaos,” New Scientist (30 May 1992): 16]. The total weather-modification process would be a real-time loop of continuous, appropriate, measured interventions, and feedback capable of producing desired weather behavior. The essential ingredient of the weather-modification system is the set of intervention techniques used to modify the weather.

The number of specific intervention methodologies is limited only by the imagination, but with few exceptions they involve infusing either energy or chemicals into the meteorological process in the right way, at the right place and time. The intervention could be designed to modify the weather in a number of ways, such as influencing clouds and precipitation, storm intensity, climate, space, or fog.

PRECIPITATION: “… significant beneficial influences can be derived through judicious exploitation of the solar absorption potential of carbon black dust” [William M. Gray et al., “Weather-modification by Carbon Dust Absorption of Solar Energy,” Journal of Applied Meteorology 15 (April 1976): 355]. The study ultimately found that this technology could be used to enhance rainfall on the mesoscale, generate cirrus clouds, and enhance cumulonimbus (thunderstorm) clouds in otherwise dry areas … if we are fortunate enough to have a fairly large body of water available upwind from the targeted battlefield, carbon dust could be placed in the atmosphere over that water. Assuming the dynamics are supportive in the atmosphere, the rising saturated air will eventually form clouds and rain showers downwind over the land. Numerous dispersal techniques [of carbon dust] have already been studied, but the most convenient, safe, and cost-effective method discussed is the use of afterburner-type jet engines to generate carbon particles while flying through the targeted air.

This method is based on injection of liquid hydrocarbon fuel into the afterburner’s combustion gases [this explains why contrails have now become chemtrails]. To date, much work has been done on UAVs [Unmanned Aerial Vehicles] which can closely (if not completely) match the capabilities of piloted aircraft. If this UAV technology were combined with stealth and carbon dust technologies, the result could be a UAV aircraft invisible to radar while en route to the targeted area, which could spontaneously create carbon dust in any location. If clouds were seeded (using chemical nuclei similar to those used today or perhaps a more effective agent discovered through continued research) before their downwind arrival to a desired location, the result could be a suppression of precipitation. In other words, precipitation could be “forced” to fall before its arrival in the desired territory, thereby making the desired territory “dry.”

FOG: Field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility. Smart materials based on nanotechnology are currently being developed with gigaops computer capability at their core.

They could adjust their size to optimal dimensions for a given fog seeding situation and even make adjustments throughout the process. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. They will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and can also change their temperature and polarity to improve their seeding effects [J. Storrs Hall, “Overview of Nanotechnology,” adapted from papers by Ralph C. Merkle and K. Eric Drexler, Rutgers University, November 1995]. As mentioned above, UAVs could be used to deliver and distribute these smart materials. Recent army research lab experiments have demonstrated the feasibility of generating fog.

They used commercial equipment to generate thick fog in an area 100 meters long. Further study has shown fogs to be effective at blocking much of the UV/IR/visible spectrum, effectively masking emitters of such radiation from IR weapons [Robert A. Sutherland, “Results of Man-Made Fog Experiment,” Proceedings of the 1991 Battlefield Atmospherics Conference (Fort Bliss, Tex.: Hinman Hall, 3-6 December 1991)].

STORMS: The damage caused by storms is indeed horrendous. For instance, a tropical storm has an energy equal to 10,000 one-megaton hydrogen bombs [Louis J. Battan, Harvesting the Clouds (Garden City, N.Y.: Doubleday & Co., 1960), 120]. At any instant there are approximately 2,000 thunderstorms taking place. In fact 45,000 thunderstorms, which contain heavy rain, hail, microbursts, wind shear, and lightning, form daily [Gene S. Stuart, “Whirlwinds and Thunderbolts,” Nature on the Rampage (Washington, D.C.: National Geographic Society, 1986), 130]. Weather-modification technologies might involve techniques that would increase latent heat release in the atmosphere, provide additional water vapor for cloud cell development, and provide additional surface and lower atmospheric heating to increase atmospheric instability.

The focus of the weather-modification effort would be to provide additional “conditions” that would make the atmosphere unstable enough to generate cloud and eventually storm cell development. One area of storm research that would significantly benefit military operations is lightning modification … but some offensive military benefit could be obtained by doing research on increasing the potential and intensity of lightning. Possible mechanisms to investigate would be ways to modify the electropotential characteristics over certain targets to induce lightning strikes on the desired targets as the storm passes over their location. In summary, the ability to modify battlespace weather through storm cell triggering or enhancement would allow us to exploit the technological “weather” advances.

SPACE WEATHER-MODIFICATION: This section discusses opportunities for control and modification of the ionosphere and near-space environment for force enhancement. A number of methods have been explored or proposed to modify the ionosphere, including injection of chemical vapors and heating or charging via electromagnetic radiation or particle beams (such as ions, neutral particles, x-rays, MeV particles, and energetic electrons) [Peter M. Banks, “Overview of Ionospheric Modification from Space Platforms,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 19-1].

It is important to note that many techniques to modify the upper atmosphere have been successfully demonstrated experimentally. Ground-based modification techniques employed by the FSU include vertical HF heating, oblique HF heating, microwave heating, and magnetospheric modification [Capt Mike Johnson, Upper Atmospheric Research and Modification: Former Soviet Union (U), DST-18205-475-92 (Foreign Aerospace Science and Technology Center, AF Intelligence Command, 24 September 1992)]. Creation of an artificial uniform ionosphere was first proposed by Soviet researcher A. V. Gurevich in the mid-1970s. An artificial ionospheric mirror (AIM) would serve as a precise mirror for electromagnetic [EM] radiation of a selected frequency or a range of frequencies.

[Figure: Artificial Ionospheric Mirrors, showing a ground-based AIM generator and stations.]

While most weather-modification efforts rely on the existence of certain preexisting conditions, it may be possible to produce some weather effects artificially, regardless of preexisting conditions. For instance, virtual weather could be created by influencing the weather information received by an end user.

Nanotechnology also offers possibilities for creating simulated weather. A cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could provide tremendous capability. Interconnected, atmospherically buoyant, and having navigation capability in three dimensions, such clouds could be designed to have a wide range of properties … Even if power levels achieved were insufficient to be an effective strike weapon [if power levels WERE sufficient, they would be an effective strike weapon], the potential for psychological operations in many situations could be fantastic. One major advantage of using simulated weather to achieve a desired effect is that unlike other approaches, it makes what are otherwise the results of deliberate actions appear to be the consequences of natural weather phenomena. In addition, it is potentially relatively inexpensive to do. According to J. Storrs Hall, a …

The Electronic Frontier Foundation, an advocate for freedom of information on the Internet, has condemned Santorum’s bill. “It is a terrible precedent for information policy,” said staff member Ren Bucholz. “If the rule is, data provided by taxpayer money can’t be provided to the public but through a private entity, we won’t have a very useful public agency.”

Andrea Psoras, Senior Vice President, QED International Associates, Inc., US Agent for Rapid Ratings International, 708 Third Avenue, 23rd Fl, New York, NY 10017; (212) 953-4058 (o); (646) 709-9629 (c); apsoras@gmail.com; apsoras@qedinternational.com; http://www.qedinternational.com

  • 07-13-19

Apollo 11 really landed on the Moon—and here’s how you can be sure (sorry, conspiracy nuts)

We went to the Moon. Here’s all the proof you’ll ever need.

By Charles Fishman. 7 minute read

This is the 43rd in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing. You can check out 50 Days to the Moon here every day.

The United States sent astronauts to the Moon, they landed, they walked around, they drove around, they deployed lots of instruments, they packed up nearly half a ton of Moon rocks, and they flew home.

No silly conspiracy was involved.

There were no Hollywood movie sets.

Anybody who writes about Apollo and talks about Apollo is going to be asked how we actually know that we went to the Moon.

Not that the smart person asking the question has any doubts, mind you, but how do we know we went, anyway?

It’s a little like asking how we know there was a Revolutionary War. Where’s the evidence? Maybe it’s just made up by the current government to force us to think about America in a particular way.

How do we know there was a Titanic that sank?

And by the way, when I go to the battlefields at Gettysburg—or at Normandy, for that matter—they don’t look much like battlefields to me. Can you prove we fought a Civil War? World War II?

In the case of Apollo, in the case of the race to the Moon, there is a perfect reply.

The race to the Moon in the 1960s was, in fact, an actual race.

The success of the Soviet space program—from Sputnik to Strelka and Belka to Yuri Gagarin—was the reason for Apollo. John Kennedy launched America to the Moon precisely to beat the Russians to the Moon.

When Kennedy was frustrated with the fact that the Soviets were first to achieve every important milestone in space, he asked Vice President Lyndon Johnson to figure it out—fast. The opening question of JFK’s memo to LBJ:

“Do we have a chance of beating the Soviets by putting a laboratory in space, or by a trip around the Moon, or by a rocket to land on the Moon, or by a rocket to go to the Moon and back with a man. Is there any other space program which promises dramatic results in which we could win?”

Win. Kennedy wanted to know how to beat the Soviets—how to win in space.

That memo was written a month before Kennedy’s dramatic “go to the Moon” speech. The race to the Moon he launched would last right up to the moment, almost 100 months later, when Apollo 11 would land on the Moon.

The race would shape the American and Soviet space programs in subtle and also dramatic ways.

Apollo 8 was the first U.S. mission that went to the Moon: The Apollo capsule and the service module, with Frank Borman, Bill Anders, and Jim Lovell, flew to the Moon at Christmastime in 1968, but without a lunar module. The lunar modules were running behind, and there wasn’t one ready for the flight.

Apollo 8 represented a furious rejuggling of the NASA flight schedule to accommodate the lack of a lunar module. The idea was simple: Let’s get Americans to the Moon quick, even if they weren’t ready to land on the Moon. Let’s “lasso the Moon” before the Soviets do.

At the moment when the mission was conceived and the schedule redone to accommodate a different kind of Apollo 8, in late summer 1968, NASA officials were worried that the Russians might somehow mount exactly the same kind of mission: Put cosmonauts in a capsule and send them to orbit the Moon, without landing. Then the Soviets would have made it to the Moon first.

Apollo 8 was designed to confound that, and it did.

In early December 1968, in fact, the rivalry remained alive enough that Time magazine did a cover story on it. “Race for the Moon” was the headline, and the cover was an illustration of an American astronaut and a Soviet cosmonaut, in spacesuits, leaping for the surface of the Moon.

Seven months later, when Apollo 11, with Michael Collins, Neil Armstrong, and Buzz Aldrin aboard, entered orbit around the Moon on July 19, 1969, there was a Soviet spaceship there to meet them. It was Luna 15, and it had been launched a few days before Apollo 11. Its goal: Land on the Moon, scoop up Moon rocks and dirt, and then dash back to a landing in the Soviet Union before Collins, Aldrin, and Armstrong could return with their own Moon rocks.

If that had happened, the Soviets would at least have been able to claim that they had gotten Moon rocks back to Earth first (and hadn’t needed people to do it).

So put aside for a moment the pure ridiculousness of a Moon landing conspiracy that somehow doesn’t leak out. More than 410,000 Americans worked on Apollo, on behalf of 20,000 companies. Was their work fake? Were they all in on the conspiracy? And then, also, all their family members—more than 1 million people—not one of whom ever whispered a word of the conspiracy?

What of the reporters? Hundreds of reporters covering space, writing stories not just of the dramatic moments, but about all the local companies making space technology, from California to Delaware.

Put aside as well the thousands of hours of audio recordings—between spacecraft and mission control; in mission control, where dozens of controllers talked to each other; in the spacecraft themselves, where there were separate recordings of the astronauts just talking to each other in space. There were 2,502 hours of Apollo spaceflight, more than 100 days. It’s an astonishing undertaking not only to script all that conversation, but then to get people to enact it with authenticity, urgency, and emotion. You can now listen to all of it online, and it would take you many years to do so.

For those who believe the missions were fake, all that can, somehow, be waved off. A puzzling shadow in a picture from the Moon, a quirk in a single moment of audio recording, reveals that the whole thing was a vast fabrication. (With grace and straight-faced reporting, the Associated Press this week reviewed, and rebutted, the most popular sources of the conspiracy theories.)

Forget all that.

If the United States had been faking the Moon landings, one group would not have been in on the conspiracy: The Soviets.

The Soviet Union would have revealed any fraud in the blink of an eye, and not just without hesitation, but with joy and satisfaction.

In fact, the Russians did just the opposite. The Soviet Union was one of the few places on Earth (along with China and North Korea) where ordinary people couldn’t watch the landing of Apollo 11 and the Moon walk in real time. It was real enough for the Russians that they didn’t let their own people see it.

That’s all the proof you need. If the Moon landings had been faked—indeed, if any part of them had been made up, or even exaggerated—the Soviets would have told the world. They were watching. Right to the end, they had their own ambitions to be first to the Moon, in the only way they could muster at that point.

And that’s a kind of proof that the conspiracy-meisters cannot wriggle around.

But another thing is true about the Moon landings: You’ll never convince someone who wants to think they were faked that they weren’t. There is nothing in particular you could ever say, no particular moment or piece of evidence you could produce, that would cause someone like that to light up and say, “Oh! You’re right! We did go to the Moon.”

Anyone who wants to live in a world where we didn’t go to the Moon should be happy there. That’s a pinched and bizarre place, one that defies not just the laws of physics but also the laws of ordinary human relationships.

I prefer to live in the real world, the one in which we did go to the Moon, because the work that was necessary to get American astronauts to the Moon and back was extraordinary. It was done by ordinary people, right here on Earth, people who were called to do something they weren’t sure they could, and who then did it, who rose to the occasion in pursuit of a remarkable goal.

That’s not just the real world, of course. It’s the best of America.

We went to the Moon, and on the 50th anniversary of that first landing, it’s worth banishing forever the nutty idea that we didn’t, and also appreciating what the achievement itself required, and what it says about the people who were able to do it.


A Mysterious Anomaly Under Africa Is Radically Weakening Earth’s Magnetic Field Posted June 29th 2020

PETER DOCKRILL 6 MARCH 2018

Around Earth, an invisible magnetic field traps electrons and other charged particles.
(Image: © NASA’s Goddard Space Flight Center)

Above our heads, something is not right. Earth’s magnetic field is in a state of dramatic weakening – and according to mind-boggling new research, this phenomenal disruption is part of a pattern lasting for over 1,000 years.

The Earth’s magnetic field is weakening between Africa and South America, causing issues for satellites and spacecraft.

Scientists studying the phenomenon observed that an area known as the South Atlantic Anomaly has grown considerably in recent years, though the reason for it is not entirely clear.

Using data gathered by the European Space Agency’s (ESA) Swarm constellation of satellites, researchers noted that the field strength within the anomaly dropped by more than 8 per cent between 1970 and 2020.

“The new, eastern minimum of the South Atlantic Anomaly has appeared over the last decade and in recent years is developing vigorously,” said Jürgen Matzka, from the German Research Centre for Geosciences.

“We are very lucky to have the Swarm satellites in orbit to investigate the development of the South Atlantic Anomaly. The challenge now is to understand the processes in Earth’s core driving these changes.”

Earth’s magnetic field doesn’t just give us our north and south poles; it’s also what protects us from solar winds and cosmic radiation – but this invisible force field is rapidly weakening, to the point scientists think it could actually flip, with our magnetic poles reversing.

As crazy as that sounds, this actually does happen over vast stretches of time. The last time it occurred was about 780,000 years ago, although it got close again around 40,000 years back.

When it takes place, it’s not quick, with the polarity reversal slowly occurring over thousands of years.

Nobody knows for sure if another such flip is imminent, and one of the reasons for that is a lack of hard data.

The region that concerns scientists the most at the moment is called the South Atlantic Anomaly – a huge expanse of the field stretching from Chile to Zimbabwe. The field is so weak within the anomaly that it’s hazardous for Earth’s satellites to enter it, because the additional radiation it’s letting through could disrupt their electronics.

“We’ve known for quite some time that the magnetic field has been changing, but we didn’t really know if this was unusual for this region on a longer timescale, or whether it was normal,” says physicist Vincent Hare from the University of Rochester in New York.

One of the reasons scientists don’t know much about the magnetic history of this region of Earth is it lacks what’s called archeomagnetic data – physical evidence of magnetism in Earth’s past, preserved in archaeological relics from bygone ages.

One such bygone age belonged to a group of ancient Africans, who lived in the Limpopo River Valley – which borders Zimbabwe, South Africa, and Botswana: regions that fall within the South Atlantic Anomaly of today.

Approximately 1,000 years ago, these Bantu peoples observed an elaborate, superstitious ritual in times of environmental hardship.

During times of drought, they would burn down their clay huts and grain bins, in a sacred cleansing rite to make the rains come again – never knowing they were performing a kind of preparatory scientific fieldwork for researchers centuries later.

“When you burn clay at very high temperatures, you actually stabilise the magnetic minerals, and when they cool from these very high temperatures, they lock in a record of the earth’s magnetic field,” one of the team, geophysicist John Tarduno explains.

As such, an analysis of the ancient artefacts that survived these burnings reveals much more than just the cultural practices of the ancestors of today’s southern Africans.

“We were looking for recurrent behaviour of anomalies because we think that’s what is happening today and causing the South Atlantic Anomaly,” Tarduno says.

“We found evidence that these anomalies have happened in the past, and this helps us contextualise the current changes in the magnetic field.”

Like a “compass frozen in time immediately after [the] burning”, the artefacts revealed that the weakening in the South Atlantic Anomaly isn’t a standalone phenomenon of history.

Similar fluctuations occurred in the years 400-450 CE, 700-750 CE, and 1225-1550 CE – and the fact that there’s a pattern tells us that the position of the South Atlantic Anomaly isn’t a geographic fluke.

“We’re getting stronger evidence that there’s something unusual about the core-mantle boundary under Africa that could be having an important impact on the global magnetic field,” Tarduno says.

The current weakening in Earth’s magnetic field – which has been taking place for the last 160 years or so – is thought to be caused by a vast reservoir of dense rock called the African Large Low Shear Velocity Province, which sits about 2,900 kilometres (1,800 miles) below the African continent.

“It is a profound feature that must be tens of millions of years old,” the researchers explained in The Conversation last year.

“While thousands of kilometres across, its boundaries are sharp.”

This dense region, existing in between the hot liquid iron of Earth’s outer core and the stiffer, cooler mantle, is suggested to somehow be disturbing the iron that helps generate Earth’s magnetic field.

There’s a lot more research to do before we know more about what’s going on here.

As the researchers explain, the conventional idea of pole reversals is that they can start anywhere in the core – but the latest findings suggest what happens in the magnetic field above us is tied to phenomena at special places in the core-mantle boundary.

If they’re right, a big piece of the field weakening puzzle just fell in our lap – thanks to a clay-burning ritual a millennium ago. What this all means for the future, though, no-one is certain.

“We now know this unusual behaviour has occurred at least a couple of times before the past 160 years, and is part of a bigger long-term pattern,” Hare says.

“However, it’s simply too early to say for certain whether this behaviour will lead to a full pole reversal.”

The findings are reported in Geophysical Research Letters.

Extending from Earth like invisible spaghetti is the planet’s magnetic field. Created by the churn of Earth’s core, this field is important for everyday life: It shields the planet from solar particles, it provides a basis for navigation and it might have played an important role in the evolution of life on Earth. 

But what would happen if Earth’s magnetic field disappeared tomorrow? A larger number of charged solar particles would bombard the planet, putting power grids and satellites on the fritz and increasing human exposure to higher levels of cancer-causing ultraviolet radiation. In other words, a missing magnetic field would have consequences that would be problematic but not necessarily apocalyptic, at least in the short term.

And that’s good news, because for more than a century, it’s been weakening. Even now, there are especially flimsy spots, like the South Atlantic Anomaly in the Southern Hemisphere, which create technical problems for low-orbiting satellites. 


One possibility, according to the ESA, is that the weakening field is a sign that the Earth’s magnetic field is about to reverse, whereby the North Pole and South Pole switch places.

The last time a “geomagnetic reversal” took place was 780,000 years ago, with some scientists claiming that the next one is long overdue. Typically, such events take place every 250,000 years.

The repercussions of such an event could be significant, as the Earth’s magnetic field plays an important role in protecting the planet from solar winds and harmful cosmic radiation.

Telecommunication and satellite systems also rely on it to operate, suggesting that computers and mobile phones could experience difficulties.

The South Atlantic Anomaly has been captured by the Swarm satellite constellation (Division of Geomagnetism, DTU Space)

The South Atlantic Anomaly is already causing issues with satellites orbiting Earth, the ESA warned, while spacecraft flying in the area could also experience “technical malfunctions”.

A 2018 study published in the scientific journal Proceedings of the National Academy of Sciences found that despite the weakening field, “Earth’s magnetic field is probably not reversing”.

The study also explained that the process is not an instantaneous one and could take tens of thousands of years to take place.

ESA said it would continue to monitor the weakening magnetic field with its constellation of Swarm satellites.

“The mystery of the origin of the South Atlantic Anomaly has yet to be solved,” the space agency stated. “However, one thing is certain: magnetic field observations from Swarm are providing exciting new insights into the scarcely understood processes of Earth’s interior.”

Alien life is out there, but our theories are probably steering us away from it May 22nd 2020

If we discovered evidence of alien life, would we even realise it? Life on other planets could be so different from what we’re used to that we might not recognise any biological signatures that it produces.

Recent years have seen changes to our theories about what counts as a biosignature and which planets might be habitable, and further turnarounds are inevitable. But the best we can really do is interpret the data we have with our current best theory, not with some future idea we haven’t had yet.

This is a big issue for those involved in the search for extraterrestrial life. As Scott Gaudi of Nasa’s Advisory Council has said: “One thing I am quite sure of, now having spent more than 20 years in this field of exoplanets … expect the unexpected.”

But is it really possible to “expect the unexpected”? Plenty of breakthroughs happen by accident, from the discovery of penicillin to the discovery of the cosmic microwave background radiation left over from the Big Bang. These often reflect a degree of luck on behalf of the researchers involved. When it comes to alien life, is it enough for scientists to assume “we’ll know it when we see it”?

Many results seem to tell us that expecting the unexpected is extraordinarily difficult. “We often miss what we don’t expect to see,” according to cognitive psychologist Daniel Simons, famous for his work on inattentional blindness. His experiments have shown how people can miss a gorilla banging its chest in front of their eyes. Similar experiments also show how blind we are to non-standard playing cards such as a black four of hearts. In the former case, we miss the gorilla if our attention is sufficiently occupied. In the latter, we miss the anomaly because we have strong prior expectations.

There are also plenty of relevant examples in the history of science. Philosophers describe this sort of phenomenon as “theory-ladenness of observation”. What we notice depends, quite heavily sometimes, on our theories, concepts, background beliefs and prior expectations. Even more commonly, what we take to be significant can be biased in this way.

For example, when scientists first found evidence of low amounts of ozone in the atmosphere above Antarctica, they initially dismissed it as bad data. With no prior theoretical reason to expect a hole, the scientists ruled it out in advance. Thankfully, they were minded to double check, and the discovery was made.

More than 200,000 stars captured in one small section of the sky by Nasa’s TESS mission. Nasa

Could a similar thing happen in the search for extraterrestrial life? Scientists studying planets in other solar systems (exoplanets) are overwhelmed by the abundance of possible observation targets competing for their attention. In the last 10 years scientists have identified more than 3,650 planets – more than one a day. And with missions such as NASA’s TESS exoplanet hunter this trend will continue.

Each and every new exoplanet is rich in physical and chemical complexity. It is all too easy to imagine a case where scientists do not double check a target that is flagged as “lacking significance”, but whose great significance would be recognised on closer analysis or with a non-standard theoretical approach.

The Müller-Lyer optical illusion. Fibonacci/Wikipedia, CC BY-SA

However, we shouldn’t exaggerate the theory-ladenness of observation. In the Müller-Lyer illusion, a line ending in arrowheads pointing outwards appears shorter than an equally long line with arrowheads pointing inwards. Yet even when we know for sure that the two lines are the same length, our perception is unaffected and the illusion remains. Similarly, a sharp-eyed scientist might notice something in her data that her theory tells her she should not be seeing. And if just one scientist sees something important, pretty soon every scientist in the field will know about it.

History also shows that scientists are able to notice surprising phenomena, even biased scientists who have a pet theory that doesn’t fit the phenomena. The 19th-century physicist David Brewster incorrectly believed that light is made up of particles travelling in a straight line. But this didn’t affect his observations of numerous phenomena related to light, such as what’s known as birefringence in bodies under stress. Sometimes observation is definitely not theory-laden, at least not in a way that seriously affects scientific discovery.

We need to be open-minded

Certainly, scientists can’t proceed by just observing. Scientific observation needs to be directed somehow. But at the same time, if we are to “expect the unexpected”, we can’t allow theory to heavily influence what we observe, and what counts as significant. We need to remain open-minded, encouraging exploration of the phenomena in the style of Brewster and similar scholars of the past.

Studying the universe largely unshackled from theory is not only a legitimate scientific endeavour – it’s a crucial one. The tendency to describe exploratory science disparagingly as “fishing expeditions” is likely to harm scientific progress. Under-explored areas need exploring, and we can’t know in advance what we will find.

In the search for extraterrestrial life, scientists must be thoroughly open-minded. And this means a certain amount of encouragement for non-mainstream ideas and techniques. Examples from past science (including very recent ones) show that non-mainstream ideas can sometimes be strongly held back. Space agencies such as NASA must learn from such cases if they truly believe that, in the search for alien life, we should “expect the unexpected”.

Could invisible aliens really exist among us? An astrobiologist explains. May 22nd 2020

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, but not as we know it

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-based life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90% of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Zita

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, it is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

The chemistry of life on Earth is fundamentally different from the bulk composition of the Earth. Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks. In fact, the chemical composition of life on Earth has an approximate correlation with the chemical composition of the sun, with 98% of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.


So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Project HAARP: Is The US Controlling The Weather? – YouTube

www.youtube.com/watch?v=InoHOvYXJ0Q

23/07/2013 · Project HAARP: US Weather Control? A secretive government radio energy experiment in Alaska, with the potential to control the weather or a simple scientific experiment?

The Science of Corona Spread according to Neil Ferguson et al of Imperial College London Posted May 14th 2020

Note: this report is about spread, and guesswork as to the nature and structure of Corona, with particular regard to mutation and the effects of the coronavirus. It is about a maths model of predicted spread, and rate of spread, with R representing the reproduction rate (the average number of people each infected person goes on to infect). R at 1 means each person with Corona can be expected or predicted to infect one other person, who will go on to infect one other, and so on.
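As a minimal illustration of what that means in practice, the Python sketch below uses purely made-up numbers, not anything from the Imperial College report, to show how the expected number of new cases per generation of infection shrinks when R is below 1 and grows when R is above 1.

def cases_per_generation(r, initial_cases, generations):
    """Expected new cases in each generation of infection for a fixed R."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(round(cases[-1] * r))
    return cases

# Illustrative values only: 100 initial cases followed for six generations.
for r in (0.8, 1.0, 1.5):
    print(f"R = {r}: {cases_per_generation(r, 100, 6)}")
# R below 1: each generation is smaller, so the outbreak fades out.
# R above 1: each generation is larger, so cases grow exponentially.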

What Ferguson does know for certain, as a basis for his modelling, is that the virtually privatised, asset-stripped, debt-loaded, poorly equipped, run-down and management-top-heavy NHS will fail massively, especially in densely populated urban areas of high ethnic diversity, religious bigotry, poverty and squalor.

He also knows that privatised, very expensive, profit-based care homes will fail hideously, so those already close to natural death, especially if they have previous health conditions, will die sooner with Corona, which, given the squalor of the homes, they are sure to catch.

So operation smokescreen needs the Ferguson maths to justify putting key at-risk voters’ peace of mind above the wider national interest – to hell with the young, scare them to death, blind them with science like the following report, which they won’t understand, and upon which there will be further analysis and comment here soon.

On the wider scene, Britain has been a massively malign influence on Europe, the U.S. and beyond, so Ferguson must factor in no limit to borders, air traffic or illegal immigrants. Though he clearly did not believe his own advice, because he broke it at least twice for sexual contact with a married mother.

The maths of his assessment of his affair with a married woman was simple: M + F = S, where M represents male, F represents female and S represents sex. But we do not need algebra to explain the obvious any more than we need what is below, from Ferguson’s 14 page report.

We might also consider that M + F , because of other human factors/variables, could equal D where D reresents divorce, or MB where MB represents Male Bankruptcy or a number of other possibilities.

But for Ferguson, operation smokescreen, blinding people with science, has only one possibility, LOCKDOWN because that is what the government wanted, the media wanted it and now a lot of workers want it, especially teachers who do not want to go back to work. Britain is ridiculing and patronising European countries for doing the sensible thing and easing out of lockdown. People with brains should fear the British elite more than Europe’s.

Public sector workers are paid to stay at home. Furloughed private sector workers are going to be bankrolled by the taxpayer; the Chancellor said so. Lockdown is costing £14 billion a day. Imagine if all that money had been invested in an NHS fit to cope with the mass of illegal and legal third world immigrants and an ageing population. But moron politicians are always economical with the truth, out to feed their own egos and winging it.

As an ex maths teacher, I could convert all of this into algebra and probable outcomes. British people are more likely to believe what they can't understand, which is why so many still believe in God. So if God made everything, then God made 'the science', so it must be true.

It is not necessary to tell us that if someone catches a cold it is an airborne virus which will spread to anyone in its path, with the poorly and the old vulnerable to a cold turning fatal. That is the reality of Corona.

Ferguson made his report on the basis of probability, imposing some limits on the masses regardless of the long-term damage caused, because he got paid, would look good and would enhance his and pompous Imperial College's reputation.

Robert Cook

Report 4: Severity of 2019-novel coronavirus (nCoV)

Imperial College London COVID-19 Response Team, 10 February 2020. DOI: https://doi.org/10.25561/77154

Ilaria Dorigatti+, Lucy Okell+, Anne Cori, Natsuko Imai, Marc Baguelin, Sangeeta Bhatia, Adhiratha Boonyasiri, Zulma Cucunubá, Gina Cuomo-Dannenburg, Rich FitzJohn, Han Fu, Katy Gaythorpe, Arran Hamlet, Wes Hinsley, Nan Hong, Min Kwun, Daniel Laydon, Gemma Nedjati-Gilani, Steven Riley, Sabine van Elsland, Erik Volz, Haowei Wang, Raymond Wang, Caroline Walters, Xiaoyue Xi, Christl Donnelly, Azra Ghani, Neil Ferguson*. With support from other volunteers from the MRC Centre.1

WHO Collaborating Centre for Infectious Disease Modelling, MRC Centre for Global Infectious Disease Analysis, Abdul Latif Jameel Institute for Disease and Emergency Analytics (J-IDEA), Imperial College London. *Correspondence: neil.ferguson@imperial.ac.uk. 1 See full list at end of document. +These two authors contributed equally.

Summary

We present case fatality ratio (CFR) estimates for three strata of 2019-nCoV infections. For cases detected in Hubei, we estimate the CFR to be 18% (95% credible interval: 11%-81%). For cases detected in travellers outside mainland China, we obtain central estimates of the CFR in the range 1.2-5.6% depending on the statistical methods, with substantial uncertainty around these central values. Using estimates of underlying infection prevalence in Wuhan at the end of January derived from testing of passengers on repatriation flights to Japan and Germany, we adjusted the estimates of CFR from either the early epidemic in Hubei Province, or from cases reported outside mainland China, to obtain estimates of the overall CFR in all infections (asymptomatic or symptomatic) of approximately 1% (95% confidence interval 0.5%-4%). It is important to note that the differences in these estimates does not reflect underlying differences in disease severity between countries. CFRs seen in individual countries will vary depending on the sensitivity of different surveillance systems to detect cases of differing levels of severity and the clinical care offered to severely ill cases. All CFR estimates should be viewed cautiously at the current time as the sensitivity of surveillance of both deaths and cases in mainland China is unclear. Furthermore, all estimates rely on limited data on the typical time intervals from symptom onset to death or recovery which influences the CFR estimates.


SUGGESTED CITATION: Ilaria Dorigatti, Lucy Okell, Anne Cori et al. Severity of 2019-novel coronavirus (nCoV). Imperial College London (10-02-2020), doi: https://doi.org/10.25561/77154.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

1. Introduction: Challenges in assessing the spectrum of severity

There are two main challenges in assessing the severity of clinical outcomes during an epidemic of a newly emerging infection:

1. Surveillance is typically biased towards detecting clinically severe cases, particularly at the start of an epidemic when diagnostic capacity is limited (Figure 1). Estimates of the proportion of fatal cases (the case fatality ratio, CFR) may thus be biased upwards until the extent of clinically milder disease is determined [1].

2. There can be a period of two to three weeks between a case developing symptoms, subsequently being detected and reported, and observing the final clinical outcome. During a growing epidemic the final clinical outcome of the majority of the reported cases is typically unknown. Dividing the cumulative reported deaths by reported cases will underestimate the CFR among these cases early in an epidemic [1-3].

Figure 1 illustrates the first challenge. Published data from China suggest that the majority of detected and reported cases have moderate or severe illness, with atypical pneumonia and/or acute respiratory distress being used to define suspected cases eligible for testing. In these individuals, clinical outcomes are likely to be more severe, and hence any estimates of the CFR are likely to be high.

Outside mainland China, countries alert to the risk of infection being imported via international travel have instituted surveillance for 2019-nCoV infection with a broader set of clinical criteria for defining a suspected case, typically including a combination of symptoms (e.g. cough + fever) combined with recent travel history to the affected region (Wuhan and/or Hubei Province). Such surveillance is therefore likely to pick up clinically milder cases as well as the more severe cases also being detected in mainland China. However, by restricting testing to those with a travel history or link, it is also likely to miss other symptomatic cases (and possibly hospitalised cases with atypical pneumonia) that have occurred through local transmission or through travel to other affected areas of China.

Figure 1: Spectrum of cases for 2019-nCoV, illustrating imputed sensitivity of surveillance in mainland China and in travellers arriving in other countries or territories from mainland China.

Finally, the bottom of the pyramid represents the likely largest population of those infected with either mild, non-specific symptoms or who are asymptomatic. Quantifying the extent of infection overall in the population requires random population surveys of infection prevalence. The only such data at present for 2019-nCoV are the PCR infection prevalence surveys conducted in exposed expatriates who have recently been repatriated to Japan, Germany and the USA from Wuhan city (see below).

To obtain estimates of the severity of 2019-nCoV across the full severity range we examined aggregate data from Hubei Province, China (representing the top two levels – deaths and hospitalised cases – in Figure 1) and individual-level data from reports of cases outside mainland China (the top three levels and perhaps part of the fourth level in Figure 1). We also analysed data on infections in repatriated expatriates returning from Hubei Province (representing all levels in Figure 1).

2. Current estimates of the case fatality ratio

The CFR is defined as the proportion of cases of a disease who will ultimately die from the disease. For a given case definition, once all deaths and cases have been ascertained (for example at the end of an epidemic), this is simply calculated as deaths/cases. However, at the start of the epidemic this ratio underestimates the true CFR due to the time-lag between onset of symptoms and death [1-3]. We adopted several approaches to account for this time-lag and to adjust for the unknown final clinical outcome of the majority of cases reported both inside and outside China (cases reported in mainland China and those reported outside mainland China) (see Methods section below). We present the range of resulting CFR estimates in Table 1 for two parts of the case severity pyramid. Note that all estimates have high uncertainty and therefore point estimates represent a snapshot at the current time and may change as additional information becomes available. Furthermore, all data sources have inherent potential biases due to the limits in testing capacity as outlined earlier.
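A rough illustration of the time-lag bias described above (a toy sketch, not the report's statistical method; the true CFR, daily growth rate and onset-to-death delay below are assumed values):

```python
# Toy sketch: naive vs. lag-adjusted CFR during exponential case growth,
# assuming a fixed onset-to-death delay and an assumed "true" CFR.
true_cfr = 0.02          # assumed true case fatality ratio
growth_rate = 0.14       # exponential growth rate per day (rate used for Hubei)
onset_to_death = 14      # assumed mean delay from symptom onset to death, in days

cases = [100 * 2.718281828 ** (growth_rate * day) for day in range(60)]
# Deaths reported on a given day arise from cases with onset `onset_to_death` days earlier.
deaths = [0.0] * onset_to_death + [true_cfr * c for c in cases[:-onset_to_death]]

cum_cases = sum(cases)
cum_deaths = sum(deaths)
naive_cfr = cum_deaths / cum_cases                                   # biased low
lagged_cfr = cum_deaths / sum(cases[: len(cases) - onset_to_death])  # closer to the truth

print(f"naive CFR:  {naive_cfr:.3%}")
print(f"lagged CFR: {lagged_cfr:.3%}")   # recovers the assumed 2% in this toy setup
```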

Table 1: Estimates of CFR for two severity ranges: cases reported in mainland China, and those reported outside. All estimates quoted to two significant figures.

Severity range: China (epidemic currently in Hubei)
Method and data used: Parametric model fitted to publicly reported number of cases and deaths in Hubei as of 5th February, assuming exponential growth at rate 0.14/day.
Time to outcome distributions used: Onset-to-death estimated from 26 deaths in China; assume 5-day period from onset to report and 1-day period from death to report.
CFR: 18% [1] (95% credible interval: 11-81%)

Severity range: Outside mainland China (cases in travellers from mainland China to other countries or territories, showing a broader spectrum of symptoms than cases in Hubei, including milder disease)
Method and data used: Parametric model fitted to reported traveller cases up to 8th February using both death and recovery outcomes and inferring latest possible dates of onset in traveller cases [2].
Time to outcome distributions used: Onset-to-death estimated from 26 deaths in China; onset-to-recovery estimated from 36 cases detected outside mainland China [4].
CFR: 5.1% [3] (95% credible interval: 1.1%-38%)

Severity range: Outside mainland China (as above)
Method and data used: Parametric model fitted to reported traveller cases up to 8th February using only death outcome and inferring latest possible unreported dates of onset in traveller cases [2].
Time to outcome distributions used: Onset-to-death estimated from 26 deaths in China.
CFR: 5.6% [1]

[1] Mode quoted for Bayesian estimates, given uncertainty in the tail of the onset-to-death distribution. [2] Estimates made without imputing onset dates in traveller cases for whom onset dates are unknown are slightly higher than when onset dates are imputed. [3] Maximum likelihood estimate. [4] This estimate relies on information from just 2 deaths reported outside mainland China thus far and therefore has wide uncertainty. Both of these deaths occurred a relatively short time after onset compared with the typical pattern in China.

Use of data on those who have recovered among exported cases gives very similar point estimates to just relying on death data, but a rather narrower uncertainty range. This highlights the value of case follow-up data on both fatal and non-fatal cases.

Given that the estimates of CFR across all infections rely on a single point estimate of infection prevalence, they should be treated cautiously. In particular, the sensitivity of the diagnostics used to test repatriated passengers is not known, and it is unclear when infected people might test positive, or how representative those passengers were of the general population of Wuhan (their infection risk might have been higher or lower than the general population). Additional representative studies to assess the extent of mildly symptomatic or asymptomatic infection are therefore urgently needed.

Figure 2 shows projected expected numbers of deaths detected in cases detected up to 4th February outside mainland China over the next few weeks for different values of the CFR. If no further deaths are reported amongst this group (and indeed if many of those now in hospital recover and are …)

Tesseracts visually represent the four dimensions, including time.

The science surrounding multi-dimensional space is so mind-boggling that even the physicists who study it do not fully understand it. It may be helpful to start with the three observable dimensions, which correspond to the height, width, and length of a physical object. Einstein, in his work on general relativity in the early 20th century, demonstrated that time is also a physical dimension. This is observable only in extreme conditions; for example, the immense gravity of a planetary body can actually slow down time in its near vicinity. The new model of the universe created by this theory is known as space-time.

In theory, gravity from a massive object bends space-time around it.
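The slowing of clocks near a massive body mentioned above can be quantified: for a non-rotating mass, general relativity gives a time dilation factor of sqrt(1 - 2GM/(r c^2)). A minimal illustrative sketch, using standard values for Earth purely as an example:

```python
import math

# Minimal sketch: gravitational time dilation factor sqrt(1 - 2GM/(r c^2))
# for a clock at distance r from a non-rotating mass M (weak-field, illustrative only).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def dilation_factor(mass_kg, radius_m):
    """Ratio of proper time near the mass to time far away (< 1 means clocks run slow)."""
    return math.sqrt(1 - 2 * G * mass_kg / (radius_m * c ** 2))

# Example: a clock on Earth's surface (M ≈ 5.97e24 kg, r ≈ 6.371e6 m)
print(dilation_factor(5.97e24, 6.371e6))   # ≈ 0.9999999993, i.e. a tiny slowdown
```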

Since Einstein’s era, scientists have discovered many of the universe’s secrets, but not nearly all. A major field of study, quantum mechanics, is devoted to learning about the smallest particles of matter and how they interact. These particles behave in a very different manner than the matter of observable reality. Physicist John Wheeler is reported to have said, “If you are not completely confused by quantum mechanics, you do not understand it.” It has been suggested that multi-dimensional space can explain the strange behavior of these elementary particles.

For much of the 20th and 21st centuries, physicists have tried to reconcile the discoveries of Einstein with those of quantum physics. It is believed that such a theory would explain much that is still unknown about the universe, including poorly understood forces such as gravity. One of the leading contenders for this theory is known variously as superstring theory, supersymmetry, or M-theory. This theory, while explaining many aspects of quantum mechanics, can only be correct if reality has 10, 11, or as many as 26 dimensions. Thus, many physicists believe multi-dimensional space is likely.

The extra dimensions of this multi-dimensional space would exist beyond the ability of humans to observe them. Some scientists suggest they are folded or curled into the observable three dimensions in such a way that they cannot be seen by ordinary methods. Scientists hope their effects can be documented by watching how elementary particles behave when they collide. Many experiments in the world’s particle accelerator laboratories, such as CERN in Europe, are conducted to search for this evidence. Other theories claim to reconcile relativity and quantum mechanics without requiring the existence of multi-dimensional space; which theory is correct remains to be seen.

Dreams Are The REAL World

~ admin

Arno Pienaar – Dreams are reality just as much as the real world is classified as reality. Dreams are your actual own reality and the real world is the creator’s reality. 

Dreams are by far the most intriguing aspect of existence for a human-being. Within them we behold experiences that the conscious mind may recollect, but for the most part, cannot make sense of. The only sense we can gain from them is the way they make us feel intuitively.


Subconscious Guiding Mechanism

The feeling is known to be the message carried over from the guiding mechanism of the sub-conscious mind.

The guidance we receive in our dreams comes, in fact, from our very selves, although the access we have to everything is only tapped into briefly, when the conscious mind is completely shut down in the sleeping state.

The subconscious tends to show us the things that dominate our consciousness whenever it has the chance and the onus is on us to sort out the way we live our lives in the primary waking state, which is where we embody programming that is keeping us out of our own paradise, fully conscious in the now.

Labels such as the astral plane, dream-scape or the fourth dimension, have served to make people believe that this dimension of reality is somehow not as real as the “real” world, or that the dream state is not as valid as the waking state.

This is one of the biggest lies ever as the dream state is in fact the only reality where you can tap into the unconscious side of yourself, which you otherwise cannot perceive, except during transcendental states under psychedelics or during disciplined meditational practices.

Dreams offer a vital glimpse into your dark side, the unconscious embedded programming which corrupts absolutely until light has shone on it.

The dream state shows us what we are unconsciously projecting as a reality and must be used to face the truth of what you have mistaken for reality.

A person with an eating disorder will, for sure, have plenty of dreams involving gluttony, a nympho will have many lustful encounters in the dream state, a narcissist will have audiences worshipping him, or himself worshipping himself, and someone filled with hatred will encounter scenes I wish not to elaborate on.

The patterns of your dreams, and especially recurring themes, are projections within your unconscious mind that govern the ultimate experience of your “waking state.”

I believe the new heaven and earth is the merging of heaven (dreams) and earth (matrix) into one conclusive experience.

Besides showing us what needs attention, dreams also transcend the rules and laws of matter, time and space.

The successful lucid dreamer gains an entirely new heaven and earth, where even the absolutely impossible becomes possible.

For the one who gains access to everything through the dream state, the constraints of the so-called real world in the waking state become but a monkey on the back.

When you can fly, see and talk to anybody, go anywhere you choose, then returning to the world of matter, time and space, is arguably a nightmare.

Anybody with a sound mind would choose to exist beyond the limitations of the matrix-construct. There are many that already do.

The Real World vs. the Dream World

The greatest of sages have enlightened us that the REAL WORLD is indeed the illusion, maya, or manyan.

If what we have thought to be real is, in fact, the veil to fool us that we are weak, small and limited, then our dreams must be the real world and this experience, here, is just an aspect of ourselves that is in dire need of deprogramming from the jaws of hypnotic spell-casting.

There is actually no such thing as reality. There is also no such thing as the real world. What makes the “waking state” the real world and the “dream state” the unreal world?

People would argue that the matrix is a world in which our physical bodies are housed, and that we always return after sleep to continue our existence in the real world.

Morpheus exclaimed that the body cannot survive without the mind. What he meant was that the body is but a projection of the mind.

Have you ever had a dream that was interrupted unexpectedly, only to continue from where you had left off when you go back to sleep?

Do you have a sanctuary which you visit regularly in the dream state? A safe haven in your sub-conscious mind?

When we have the intent to return to any reality we do so, as fellow lucid dreamers have proven.

What if I told you that this matrix-hive is a dream just like any other dream you have, and that billions of souls share this dream together?

Do you think these souls consciously chose to share this dream together? The answer is no, they were merely incepted by an idea that “this is the real world” from the very beings that summoned them into this plane through the sexual act.

Every night we have to re-energize ourselves by accessing the dream world (i.e. the actual world)/True Source of infinite Potential, which is the reservoir that refills us, only to return to give that energy to the dreamworld which we believe to be the real world. This “real world” only seems like the REAL WORLD because most of its inhabitants believe just that.

Pause and Continue

Just like we can pause a dream when interrupted and return to it, so can we pause the “real world”. Whether you believe it or not, we only return to the “waking reality” because we have forsaken ourselves for it and we expect to return to it on a daily basis.

We intend to always come back because we have such a large investment in an illusion, and this is our chain to the physical world. We are so attached to this dream that we even reincarnate to continue from where we left off, because this dream is able to trap you in limbo forever.

We have capitulated to it and, in so doing, have given it absolute power over us. We are in fact in a reality of another, not in our own. That is why we cannot manifest what we want in it: it has claimed ownership over us here, and while we are in it, we are subject to its rules, laws and limitations.

When one enters the dimension of another, one falls subject to its construct.

In the case of the Real World, the Real World has been hacked by a virus that affects all the beings that embrace the code of that matrix. It is like a spiderweb that traps souls.

As soon as we wake up in the morning, we start dreaming of the world we share together. As long as the mind machine is in power, it will always kick in again after waking up.

Whatever it is we believe becomes activated by this dream once more, so as to validate our contribution to this illusion, to which we have agreed.

The time is now to turn all of it back to the five elements, so that we can have our own reality again!

Hyperdimensionality is a Reality Identity Crisis

We are only hyper-dimensional beings because we are not in our own reality yet — we are in the middle of two realities fighting over us. It is time to come to terms with this identity crisis.

We cannot be forced to be in a dimension we choose not to partake in, a dimension that was made to fool you into believing it is the alpha & omega.

It is this very choice (rejecting the digital holographic program) that many are now making, which is breaking the mirror on the wall (destroying the illusion).

Deprogramming Souls of Matrix-Based Constructs is coming in 2016. The spiderweb will be disentangled and the laws of time, matter and space will be transcended in the Real World, once we will regain full consciousness in the NOW.

Original source Dreamcatcherreality.com

SF Source How To Exit The Matrix  May 2016

Physicists Say There’s a 90 Percent Chance Civilization Will Soon Collapse October 9th 2020

Final Countdown

If humanity continues down its current path, civilization as we know it is heading toward “irreversible collapse” in a matter of decades.

That’s according to research published in the journal Scientific Reports, which models out our future based on current rates of deforestation and other resource use. As Motherboard reports, even the rosiest projections in the research show a 90 percent chance of catastrophe.

Last Gasp

The paper, penned by physicists from the Alan Turing Institute and the University of Tarapacá, predicts that deforestation will claim the last forests on Earth in between 100 and 200 years. Coupled with global population changes and resource consumption, that’s bad news for humanity.

“Clearly it is unrealistic to imagine that the human society would start to be affected by the deforestation only when the last tree would be cut down,” reads the paper.

Coming Soon

In light of that, the duo predicts that society as we know it could end within 20 to 40 years.

In lighter news, Motherboard reports that the global rate of deforestation has actually decreased in recent years. But there’s still a net loss in forest overall — and newly-planted trees can’t protect the environment nearly as well as old-growth forest.


“Calculations show that, maintaining the actual rate of population growth and resource consumption, in particular forest consumption, we have a few decades left before an irreversible collapse of our civilization,” reads the paper.

READ MORE: Theoretical Physicists Say 90% Chance of Societal Collapse Within Several Decades [Motherboard]

More on societal collapse: Doomsday Report Author: Earth’s Leaders Have Failed

What is Binary, and Why Do Computers Use It?

Anthony Heddings@anthonyheddings
October 1, 2018, 6:40am EDT

Computers don’t understand words or numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels of your computer, everything is represented by a binary electrical signal that registers in one of two states: on or off. To make sense of complicated data, your computer has to encode it in binary.

Binary is a base 2 number system. Base 2 means there are only two digits—1 and 0—which correspond to the on and off states your computer can understand. You’re probably familiar with base 10—the decimal system. Decimal makes use of ten digits that range from 0 to 9, and then wraps around to form two-digit numbers, with each digit being worth ten times more than the last (1, 10, 100, etc.). Binary is similar, with each digit being worth two times more than the last.

Counting in Binary

In binary, the first digit is worth 1 in decimal. The second digit is worth 2, the third worth 4, the fourth worth 8, and so on—doubling each time. Adding these all up gives you the number in decimal. So,

1111 (in binary)  =  8 + 4 + 2 + 1  =  15 (in decimal)

Accounting for 0, this gives us 16 possible values for four binary bits. Move to 8 bits, and you have 256 possible values. This takes up a lot more space to represent, as four digits in decimal give us 10,000 possible values. It may seem like we’re going through all this trouble of reinventing our counting system just to make it clunkier, but computers understand binary much better than they understand decimal. Sure, binary takes up more space, but we’re held back by the hardware. And for some things, like logic processing, binary is better than decimal.
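To make the place-value arithmetic above concrete, here is a minimal sketch (illustrative only; Python's built-in conversion is used to check the hand calculation):

```python
# Minimal sketch: convert a binary string to decimal by summing place values,
# doubling the weight of each digit from left to right.
def binary_to_decimal(bits):
    value = 0
    for digit in bits:          # leftmost digit first
        value = value * 2 + int(digit)
    return value

print(binary_to_decimal("1111"))   # 15, matching 8 + 4 + 2 + 1
print(int("1111", 2))              # Python's built-in conversion agrees: 15
print(2 ** 4, 2 ** 8)              # 16 values for 4 bits, 256 for 8 bits
```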

There’s another base system that’s also used in programming: hexadecimal. Although computers don’t run on hexadecimal, programmers use it to represent binary addresses in a human-readable format when writing code. This is because two digits of hexadecimal can represent a whole byte, eight digits in binary. Hexadecimal uses 0-9 like decimal, and also the letters A through F to represent the additional six digits.
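A short illustration of the byte-to-hexadecimal correspondence described above (again only a sketch; the example byte value is arbitrary):

```python
# Minimal sketch: one byte (8 binary digits) always fits in two hexadecimal digits.
byte = 0b10111101            # 189 in decimal
print(format(byte, "08b"))   # '10111101'  (eight binary digits)
print(format(byte, "02X"))   # 'BD'        (the same value in two hex digits)
print(int("BD", 16))         # 189         (and back to decimal)
```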

So Why Do Computers Use Binary?

The short answer: hardware and the laws of physics. Every number in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control very precisely. It made more sense to only distinguish between an “on” state—represented by negative charge—and an “off” state—represented by a positive charge. For those unsure of why the “off” is represented by a positive charge, it’s because electrons have a negative charge—more electrons mean more current with a negative charge.

So, the early room-sized computers used binary to build their systems, and even though they used much older, bulkier hardware, we’ve kept the same fundamental principles. Modern computers use what’s known as a transistor to perform calculations with binary. Here’s a diagram of what a field-effect transistor (FET) looks like:

Essentially, it only allows current to flow from the source to the drain if there is a current in the gate. This forms a binary switch. Manufacturers can build these transistors incredibly small—all the way down to 5 nanometers, or about the size of two strands of DNA. This is how modern CPUs operate, and even they can suffer from problems differentiating between on and off states (though that’s mostly due to their unreal molecular size, being subject to the weirdness of quantum mechanics).

But Why Only Base 2?

So you may be thinking, “why only 0 and 1? Couldn’t you just add another digit?” While some of it comes down to tradition in how computers are built, to add another digit would mean we’d have to distinguish between different levels of current—not just “off” and “on,” but also states like “on a little bit” and “on a lot.”

The problem here is if you wanted to use multiple levels of voltage, you’d need a way to easily perform calculations with them, and the hardware for that isn’t viable as a replacement for binary computing. It indeed does exist; it’s called a ternary computer, and it’s been around since the 1950s, but that’s pretty much where development on it stopped. Ternary logic is way more efficient than binary, but as of yet, nobody has an effective replacement for the binary transistor, or at the very least, no work’s been done on developing them at the same tiny scales as binary.

The reason we can’t use ternary logic comes down to the way transistors are stacked in a computer—something called “gates”—and how they’re used to perform math. Gates take two inputs, perform an operation on them, and return one output.

This brings us to the long answer: binary math is way easier for a computer than anything else. Boolean logic maps easily to binary systems, with True and False being represented by on and off. Gates in your computer operate on boolean logic: they take two inputs and perform an operation on them like AND, OR, XOR, and so on. Two inputs are easy to manage. If you were to graph the answers for each possible input, you would have what’s known as a truth table:

A binary truth table operating on boolean logic will have four possible outputs for each fundamental operation. But because ternary gates take three inputs, a ternary truth table would have 9 or more. While a binary system has 16 possible operators (2^2^2), a ternary system would have 19,683 (3^3^3). Scaling becomes an issue because while ternary is more efficient, it’s also exponentially more complex.
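As a quick illustration of those truth tables, the four input combinations of a two-input gate can be enumerated directly (a sketch, not tied to any particular hardware):

```python
# Minimal sketch: truth tables for two-input boolean gates.
# Two binary inputs give 2 * 2 = 4 rows.
from itertools import product

print("A B | AND OR XOR")
for a, b in product([0, 1], repeat=2):
    print(f"{a} {b} |  {a & b}   {a | b}  {a ^ b}")
```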

Who knows? In the future, we could begin to see ternary computers become a thing, as we push the limits of binary down to a molecular level. For now, though, the world will continue to run on binary.



Anthony Heddings
Anthony Heddings is the resident cloud engineer for LifeSavvy Media, a technical writer, programmer, and an expert at Amazon’s AWS platform. He’s written hundreds of articles for How-To Geek and CloudSavvy IT that have been read millions of times.

If The Big Bang Wasn’t The Beginning, What Was It? Posted September 30th 2020

Ethan Siegel, Senior Contributor, Starts With A Bang. The Universe is out there, waiting for you to discover it.

The history of our expanding Universe is one illustrated image. (Credit: Nicole Rager Fuller / National Science Foundation)

For more than 50 years, we’ve had definitive scientific evidence that our Universe, as we know it, began with the hot Big Bang. The Universe is expanding, cooling, and full of clumps (like planets, stars, and galaxies) today because it was smaller, hotter, denser, and more uniform in the past. If you extrapolate all the way back to the earliest moments possible, you can imagine that everything we see today was once concentrated into a single point: a singularity, which marks the birth of space and time itself.

At least, we thought that was the story: the Universe was born a finite amount of time ago, and started off with the Big Bang. Today, however, we know a whole lot more than we did back then, and the picture isn’t quite so clear. The Big Bang can no longer be described as the very beginning of the Universe that we know, and the hot Big Bang almost certainly doesn’t equate to the birth of space and time. So, if the Big Bang wasn’t truly the beginning, what was it? Here’s what the science tells us.

Looking back at the distant Universe with NASA's Hubble space telescope. (Credit: NASA, ESA, and A. Feild (STScI))

Our Universe, as we observe it today, almost certainly emerged from a hot, dense, almost-perfectly uniform state early on. In particular, there are four pieces of evidence that all point to this scenario:

  1. the Hubble expansion of the Universe, which shows that the amount that light from a distant object is redshifted is proportional to the distance to that object,
  2. the existence of a leftover glow — the Cosmic Microwave Background (CMB) — in all directions, with the same temperature everywhere just a few degrees above absolute zero,
  3. light elements — hydrogen, deuterium, helium-3, helium-4, and lithium-7 — that exist in a particular ratio of abundances back before any stars were formed,
  4. and a cosmic web of structure that gets denser and clumpier, with more space between larger and larger clumps, as time goes on.

These four facts: the Hubble expansion of the Universe, the existence and properties of the CMB, the abundance of the light elements from Big Bang nucleosynthesis, and the formation and growth of large-scale structure in the Universe, represent the four cornerstones of the Big Bang.

The cosmic microwave background & large-scale structure are two cosmological cornerstones. (Credit: Chris Blake and Sam Moorfield)

Why are these the four cornerstones? In the 1920s, Edwin Hubble, using the largest, most powerful telescope in the world at the time, was able to measure how individual stars varied in brightness over time, even in galaxies beyond our own. That enabled us to know how far away the galaxies that housed those stars were. By combining that information with data about how significantly the atomic spectral lines from those galaxies were shifted, we could determine what the relationship was between distance and a spectral shift.

As it turned out, it was simple, straightforward, and linear: Hubble’s law. The farther away a galaxy was, the more significantly its light was redshifted, or shifted systematically towards longer wavelengths. In the context of General Relativity, that corresponds to a Universe whose very fabric is expanding with time. As time marches on, all points in the Universe that aren’t somehow bound together (either gravitationally or by some other force) will expand away from one another, causing any emitted light to be shifted towards longer wavelengths by the time the observer receives it.

How light redshifts and distances change over time in the expanding Universe. (Credit: Rob Knop)
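Hubble's law is the linear relation v = H0 × d between a galaxy's recession velocity and its distance. As a rough numerical illustration (the value of H0 below is an assumption for the sketch; modern measurements cluster around 67-74 km/s per megaparsec):

```python
# Minimal sketch of Hubble's law: recession velocity = H0 * distance.
H0 = 70.0   # assumed Hubble constant, km/s per megaparsec (illustrative value only)

def recession_velocity_km_s(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in megaparsecs."""
    return H0 * distance_mpc

for d in (10, 100, 1000):
    print(f"{d:5d} Mpc -> {recession_velocity_km_s(d):8.0f} km/s")
# Doubling the distance doubles the inferred velocity: the linear Hubble relation.
```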

Although there are many possible explanations for the effect we observe as Hubble’s Law, the Big Bang is a unique idea among those possibilities. The idea is simple and straightforward, but also breathtaking in how powerful it is. It simply says this:

  • the Universe is expanding and stretching light to longer wavelengths (and lower energies and temperatures) today,
  • and that means, if we extrapolate backwards, the Universe was denser and hotter earlier on.
  • Because it’s been gravitating the whole time, the Universe gets clumpier and forms larger, more massive structures later on.
  • If we go back to early enough times, we’ll see that galaxies were smaller, more numerous, and made of intrinsically younger, bluer stars.
  • If we go back earlier still, we’ll find a time where no stars have had time to form.
  • Even earlier, and we’ll find that it’s hot enough that light, at some early time, would have split even neutral atoms apart, creating an ionized plasma which “releases” the radiation at last when the Universe does become neutral. (The origin of the CMB.)
  • And at even earlier times still, things were hot enough that even atomic nuclei would be blasted apart; transitioning to a cooler phase allows the first stable nuclear reactions, yielding the light elements, to proceed.
As the Universe cools, atomic nuclei form, followed by neutral atoms as it cools further. (Credit: E. Siegel)

All of these claims, at some point during the 20th century, were validated and confirmed by observations. We’ve measured the clumpiness of the Universe, and found that it increases exactly as predicted as time goes on. We’ve measured how galaxies evolve with distance (and cosmic time), and found that the earlier, more distant ones are overall younger, bluer, more numerous, and smaller in size. We’ve discovered and measured the CMB, and not only does it spectacularly match the Big Bang’s predictions, but we’ve observed how its temperature changes (increases) at earlier times. And we’ve successfully measured the primordial abundances of the light elements, finding a spectacular agreement with the predictions of Big Bang nucleosynthesis.
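One of those measured changes can be written down compactly: the CMB temperature scales with redshift as T(z) = T0 × (1 + z), with T0 ≈ 2.725 K today. A minimal sketch of that relation:

```python
# Minimal sketch: CMB temperature at redshift z, T(z) = T0 * (1 + z).
T0 = 2.725  # present-day CMB temperature in kelvin

def cmb_temperature(z):
    return T0 * (1 + z)

for z in (0, 1, 3, 1100):
    print(f"z = {z:5}: T ≈ {cmb_temperature(z):8.1f} K")
# At z ≈ 1100, when the CMB was released, the radiation temperature was about 3000 K.
```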

We can extrapolate back even further if we like: beyond the limits of what our current technology has the capability to directly observe. We can imagine the Universe getting even denser, hotter, and more compact than it was when protons and neutrons were being blasted apart. If we stepped back even earlier, we’d see neutrinos and antineutrinos, which need about a light-year of solid lead to stop half of them, start to interact with electrons and other particles in the early Universe. Beginning in the mid-2010s, we were able to detect their imprint on first the photons of the CMB and, a few years later, on the large-scale structure that would later grow in the Universe.

The impact of neutrinos on the large-scale structure features in the Universe. (Credit: D. Baumann et al. (2019), Nature Physics)

That’s the earliest signal, thus far, we’ve ever detected from the hot Big Bang. But there’s nothing stopping us from running the clock back farther: all the way to the extremes. At some point:

  • it gets hot and dense enough that particle-antiparticle pairs get created out of pure energy, simply from quantum conservation laws and Einstein’s E = mc²,
  • the Universe gets denser than individual protons and neutrons, causing it to behave as a quark-gluon plasma rather than as individual nucleons,
  • the Universe gets even hotter, causing the electroweak force to unify, the Higgs symmetry to be restored, and for fundamental particles to lose their rest mass,

and then we go to energies that lie beyond the limits of known, tested physics, even from particle accelerators and cosmic rays. Some processes must occur under those conditions to reproduce the Universe we see. Something must have created dark matter. Something must have created more matter than antimatter in our Universe. And something must have happened, at some point, for the Universe to exist at all.

An illustration of the Big Bang from an initially hot, dense state to our modern Universe. (Credit: NASA / GSFC)

From the moment this extrapolation was first considered back in the 1920s — and then again in its more modern forms in the 1940s and 1960s — the thinking was that the Big Bang takes you all the way back to a singularity. In many ways, the big idea of the Big Bang was that if you have a Universe filled with matter and radiation, and it’s expanding today, then if you go far enough back in time, you’ll come to a state that’s so hot and so dense that the laws of physics themselves break down.

At some point, you achieve energies, densities, and temperatures that are so large that the quantum uncertainty inherent to nature leads to consequences that make no sense. Quantum fluctuations would routinely create black holes that encompass the entire Universe. Probabilities, if you try to compute them, give answers that are either negative or greater than 1: both physical impossibilities. We know that gravity and quantum physics don’t make sense at these extremes, and that’s what a singularity is: a place where the laws of physics are no longer useful. Under these extreme conditions, it’s possible that space and time themselves can emerge. This, originally, was the idea of the Big Bang: a birth to time and space themselves.

The Big Bang, from the earliest stages, to modern-day galaxies. (Credit: NASA / CXC / M. Weiss)

But all of that was based on the notion that we actually could extrapolate the Big Bang scenario as far back as we wanted: to arbitrarily high energies, temperatures, densities, and early times. As it turned out, that created a number of physical puzzles that defied explanation. Puzzles such as:

  • Why did causally disconnected regions of space — regions with insufficient time to exchange information, even at the speed of light — have identical temperatures to one another?
  • Why was the initial expansion rate of the Universe in balance with the total amount of energy in the Universe so perfectly: to more than 50 decimal places, to deliver a “flat” Universe today?
  • And why, if we achieved these ultra-high temperatures and densities early on, don’t we see any leftover relic remnants from those times in our Universe today?

If you still want to invoke the Big Bang, the only answer you can give is, “well, the Universe must have been born that way, and there is no reason why.” But in physics, that’s akin to throwing up your hands in surrender. Instead, there’s another approach: to concoct a mechanism that could explain those observed properties, while reproducing all the successes of the Big Bang, and still making new predictions about phenomena we could observe that differ from the conventional Big Bang.

The 3 big puzzles, the horizon, flatness, and monopole problems, that inflation solves. (Credit: E. Siegel / Beyond the Galaxy)

About 40 years ago, that’s exactly the idea that was put forth: cosmic inflation. Instead of extrapolating the Big Bang all the way back to a singularity, inflation basically says that there’s a cutoff: you can go back to a certain high temperature and density, but no further. According to the big idea of cosmic inflation, this hot, dense, uniform state was preceded by a state where:

  • the Universe wasn’t filled with matter and radiation,
  • but instead possessed a large amount of energy intrinsic to the fabric of space itself,
  • which caused the Universe to expand exponentially (and at a constant, unchanging rate),
  • which drives the Universe to be flat, empty, and uniform (up to the scale of quantum fluctuations),
  • and then inflation ends, converting that intrinsic-to-space energy into matter and radiation,

and that’s where the hot Big Bang comes from. Not only did this solve the puzzles the Big Bang couldn’t explain, but it made multiple new predictions that have since been verified. There’s a lot we still don’t know about cosmic inflation, but the data that’s come in over the last 3 decades overwhelmingly supports the existence of this inflationary state that preceded and set up the hot Big Bang.

How inflation and quantum fluctuations give rise to the Universe we observe today. (Credit: E. Siegel, with images derived from ESA/Planck and the DOE/NASA/NSF interagency task force on CMB research)

All of this, taken together, is enough to tell us what the Big Bang is and what it isn’t. It is the notion that our Universe emerged from a hotter, denser, more uniform state in the distant past. It is not the idea that things got arbitrarily hot and dense until the laws of physics no longer applied.

It is the notion that, as the Universe expanded, cooled, and gravitated, we annihilated away our excess antimatter, formed protons and neutrons and light nuclei, atoms, and eventually, stars, galaxies, and the Universe we recognize today. It is no longer considered inevitable that space and time emerged from a singularity 13.8 billion years ago.

And it is a set of conditions that applies at very early times, but was preceded by a different set of conditions (inflation) that came before it. The Big Bang might not be the very beginning of the Universe itself, but it is the beginning of our Universe as we recognize it. It’s not “the” beginning, but it is “our” beginning. It may not be the entire story on its own, but it’s a vital part of the universal cosmic story that connects us all.

Follow me on Twitter. Check out my website or some of my other work here. Ethan Siegel

I am a Ph.D. astrophysicist, author, and science communicator, who professes physics and astronomy at various colleges. I have won numerous awards for science writing…


Neuralink: 3 neuroscientists react to Elon Musk’s brain chip reveal September 17th 2020

With a pig-filled demonstration, Neuralink revealed its latest advancements in brain implants this week. But what do scientists think of Elon Musk’s company’s grand claims? Mike Brown, 9.4.2020 8:00 AM

What does the future look like for humans and machines? Elon Musk would argue that it involves wiring brains directly up to computers – but neuroscientists tell Inverse that’s easier said than done.

On August 28, Musk and his team unveiled the latest updates from secretive firm Neuralink with a demo featuring pigs implanted with their brain chip device. These chips are called Links, and they measure 0.9 inches wide by 0.3 inches tall. They connect to the brain via wires, and provide a battery life of 12 hours per charge, after which the user would need to wirelessly charge again. During the demo, a screen showed the real-time spikes of neurons firing in the brain of one pig, Gertrude, as she snuffled around her pen during the event.


It was an event designed to show how far Neuralink has come in terms of making its science objectives reality. But how much of Musk’s ambitions for Links are still in the realm of science fiction?

Neuralink argues the chips will one day have medical applications, listing a whole manner of ailments that its chips could feasibly solve. Memory loss, depression, seizures, and brain damage were all suggested as conditions where a generalized brain device like the Link could help.

Ralph Adolphs, Bren Professor of Psychology, Neuroscience, and Biology at California Institute of Technology, tells Inverse Neuralink’s announcement was “tremendously exciting” and “a huge technical achievement.”

Neuralink is “a good example of technology outstripping our current ability to know how to use it,” Adolphs says. “The primary initial application will be for people who are ill and for clinical reasons it is justified to implant such a chip into their brain. It would be unethical to do so right now in a healthy person.”

“But who knows what the future holds?” he adds.

Adolphs says the chip is comparable to the natural processes that emerge through evolution. Currently, to interface between the brain and the world, humans use their hands and mouth. But to imagine just sitting and thinking about these actions is a lot harder, so a lot of the future work will need to focus on making this interface with the world feel more natural, Adolphs says.

Achieving that goal could be further out than the Neuralink demo suggested. John Krakauer, chief medical and scientific officer at MindMaze and professor of neurology at Johns Hopkins University, tells Inverse that his view is humanity is “still a long way away” from consumer-level linkups.

“Let me give a more specific concern: The device we saw was placed over a single sensorimotor area,” Krakauer says. “If we want to read thoughts rather than movements (assuming we knew their neural basis) where do we put it? How many will we need? How does one avoid having one’s scalp studded with them? No mention of any of this of course.”

While a brain linkup may get people “excited” because it “has echoes of Charles Xavier in the X-Men,” Krakauer argues that there are plenty of potential non-invasive solutions to help people with the conditions Neuralink says its technology will treat.

These existing solutions don’t require invasive surgery, but Krakauer fears “the cool factor clouds critical thinking.”

But Elon Musk, Neuralink’s CEO, wants the Link to take humans far beyond new medical treatments.

The ultimate objective, according to Musk, is for Neuralink to help create a symbiotic relationship between humans and computers. Musk argues that Neuralink-like devices could help humanity keep up with super-fast machines. But Krakauer finds such an ambition troubling.

“I would like to see less unsubstantiated hype about a brain ‘Alexa’ and interfacing with A.I.,” Krakauer says. “The argument is if you can’t avoid the singularity, join it. I’m sorry but this angle is just ridiculous.”

Neuralink’s Link implant. Credit: Neuralink.

Even a general-purpose linkup could be much further away from development than it may seem. Musk told WaitButWhy in 2017 that a general-purpose linkup could be eight to 10 years away for people with no disability. That would place the timescale for roll-out somewhere around 2027 at the latest — seven years from now.

Kevin Tracey, a neurosurgery professor and president of the Feinstein Institutes for Medical Research, tells Inverse that he “can’t imagine” that any of the publicly suggested diseases could see a solution “sooner than 10 years.” Considering that Neuralink hopes to offer the device as a medical solution before it moves to more general-purpose implants, these notes of caution cast the company’s timeline into doubt.

But unlike Krakauer, Tracey argues that “we need more hype right now.” Not enough attention has been paid to this area of research, he says.

“In the United States for the last 20 years, the federal government’s investment supporting research hasn’t kept up with inflation,” Tracey says. “There’s been this idea that things are pretty good and we don’t have to spend so much money on research. That’s nonsense. COVID proved we need to raise enthusiasm and investment.”

Neuralink’s device is just one part of the brain linkup puzzle, Tracey explains. There are three fields at play: molecular medicine to make and find the targets, neuroscience to understand how the pathways control the target, and the devices themselves. Advances in each area can help the others. Neuralink may help map new pathways, for example, but it’s just one aspect of what needs to be done to make it work as planned.

Neuralink’s smaller chips may also help avoid issues with brain scarring seen with larger devices, Tracey says. And advancements in robots can also help with surgeries, an area Neuralink has detailed before.

But perhaps the biggest benefit from the announcement is making the field cool again.

“If and to the extent that a new, very cool device elevates the discussion on the neuroscience implications of new devices, and what do we need to get these things to the benefit of humanity through more science, that’s all good,” Tracey says.

How sleep helps us lose weight September 12th 2020

When it comes to weight loss, diet and exercise are usually thought of as the two key factors that will achieve results. However, sleep is an often-neglected lifestyle factor that also plays an important role.

The recommended sleep duration for adults is seven to nine hours a night, but many people often sleep for less than this. Research has shown that sleeping less than the recommended amount is linked to having greater body fat, increased risk of obesity, and can also influence how easily you lose weight on a calorie-controlled diet.

Typically, the goal of weight loss is to decrease body fat while retaining as much muscle mass as possible. Not getting the right amount of sleep can affect how much fat is lost, as well as how much muscle mass you retain, while on a calorie-restricted diet.

One study found that sleeping 5.5 hours each night over a two-week period while on a calorie-restricted diet resulted in less fat loss when compared to sleeping 8.5 hours each night. It also resulted in a greater loss of fat-free mass (including muscle).

Another study has shown similar results over an eight-week period when sleep was reduced by only one hour each night for five nights of the week. These results showed that even catch-up sleep at the weekend may not be enough to reverse the negative effects of sleep deprivation while on a calorie-controlled diet.

Metabolism, appetite, and sleep

There are several reasons why shorter sleep may be associated with higher body weight and affect weight loss. These include changes in metabolism, appetite and food selection.

Sleep influences two important appetite hormones in our body – leptin and ghrelin. Leptin is a hormone that decreases appetite, so when leptin levels are high we usually feel fuller. On the other hand, ghrelin is a hormone that can stimulate appetite, and is often referred to as the “hunger hormone” because it’s thought to be responsible for the feeling of hunger.

One study found that sleep restriction increases levels of ghrelin and decreases leptin. Another study, which included a sample of 1,024 adults, also found that short sleep was associated with higher levels of ghrelin and lower levels of leptin. This combination could increase a person’s appetite, making calorie-restriction more difficult to adhere to, and may make a person more likely to overeat.

Consequently, increased food intake due to changes in appetite hormones may result in weight gain. This means that, in the long term, sleep deprivation may lead to weight gain due to these changes in appetite. So getting a good night’s sleep should be prioritised.

Along with changes in appetite hormones, reduced sleep has also been shown to affect food selection and the way the brain perceives food. Researchers have found that the areas of the brain responsible for reward are more active in response to food after sleep loss (six nights of only four hours’ sleep) when compared to people who had good sleep (six nights of nine hours’ sleep).

This could possibly explain why sleep-deprived people snack more often and tend to choose carbohydrate-rich foods and sweet-tasting snacks, compared to those who get enough sleep.

Person's hands typing on keyboard while eating unhealthy snacks.
Sleep deprivation may make you eat more unhealthy food during the day. Flotsam/ Shutterstock

Sleep duration also influences metabolism, particularly glucose (sugar) metabolism. When food is eaten, our bodies release insulin, a hormone that helps to process the glucose in our blood. However, sleep loss can impair our bodies’ response to insulin, reducing its ability to uptake glucose. We may be able to recover from the occasional night of sleep loss, but in the long term this could lead to health conditions such as obesity and type 2 diabetes.

Our own research has shown that a single night of sleep restriction (only four hours’ sleep) is enough to impair the insulin response to glucose intake in healthy young men. Given that sleep-deprived people already tend to choose foods high in glucose due to increased appetite and reward-seeking behaviour, the impaired ability to process glucose can make things worse.

An excess of glucose (both from increased intake and a reduced ability to take it up into the tissues) could be converted to fatty acids and stored as fat. Collectively, this can accumulate over the long term, leading to weight gain.

However, physical activity may show promise as a countermeasure against the detrimental impact of poor sleep. Exercise has a positive impact on appetite, by reducing ghrelin levels and increasing levels of peptide YY, a hormone that is released from the gut, and is associated with the feeling of being satisfied and full.

After exercise, people tend to eat less, particularly when the energy expended by exercise is taken into account. However, it’s unknown whether this effect remains in the context of sleep restriction.

Research has also shown that exercise training may protect against the metabolic impairments that result from a lack of sleep, by improving the body’s response to insulin, leading to improved glucose control.

We have also shown the potential benefits of just a single session of exercise on glucose metabolism after sleep restriction. While this shows promise, studies are yet to determine the role of long-term physical activity in people with poor sleep.

It’s clear that sleep is important for losing weight. A lack of sleep can increase appetite by changing hormones, makes us more likely to eat unhealthy foods, and influences how body fat is lost while counting our calories. Sleep should therefore be considered an essential part of a healthy lifestyle, alongside diet and physical activity.

Elon Musk Says Settlers Will Likely Die on Mars. He’s Right.

But is that such a bad thing?

Mars or Milton Keynes, what’s the difference?

By Caroline Delbert 

Sep 2, 2020

Earlier this week, Elon Musk said there’s a “good chance” settlers in the first Mars missions will die. And while that’s easy to imagine, he and others are working hard to plan and minimize the risk of death by hardship or accident. In fact, the goal is to have people comfortably die on Mars after a long life of work and play that, we hope, looks at least a little like life on Earth.

Let’s explore it together.

There are already major structural questions about how humans will settle on Mars. How will we aim Musk’s planned hundreds of Starships at Mars during the right times for the shortest, safest trips? How will a spaceship turn into something that safely lands on the planet’s surface? How will astronauts reasonably survive a yearlong trip in cramped, close quarters where maximum possible volume is allotted to supplies?

And all of that is before anyone even touches the surface.

Then there are logistical reasons to talk about potential Mars settlers in, well, actuarial terms. First, the trip itself will take a year based on current estimates, and applicants to settlement programs are told to expect this trip to be one way.

It follows, statistically, that there’s an almost certain “chance” these settlers will die on Mars, because their lives will continue there until they naturally end. Musk is referring to accidental death in tough conditions, but people are likely to stay on Mars for the rest of their lives.

When Mars One opened applications in 2013, people flocked to audition to die on Mars after a one-way trip and a lifetime of settlement. As chemist and applicant Taylor Rose Nations said in a 2014 podcast episode:

“If I can go to Mars and be a human guinea pig, I’m willing to sort of donate my body to science. I feel like it’s worth it for me personally, and it’s kind of a selfish thing, but just to turn around and look and see Earth. That’s a lifelong total dream.”

Musk said in a conference Monday that building reusable rocket technology and robust, “complex life support” are his major priorities, based on his long-term goals of settling humans on Mars. Musk has successfully transported astronauts to the International Space Station (ISS), where NASA and global space administrations already have long-term life support technology in place. But that’s not the same as, for example, NASA’s advanced life support projects:

“Advanced life support (ALS) technologies required for future human missions include improved physico-chemical technologies for atmosphere revitalization, water recovery, and waste processing/resource recovery; biological processors for food production; and systems modeling, analysis, and controls associated with integrated subsystems operations.”

In other words, while the ISS does many of these different functions like water recovery, people on the moon (for NASA) or Mars (for Musk’s SpaceX) will require long-term life support for the same group of people, not a group that rotates every few months with frequent short trips from Earth.

And if the Mars colony plans to endure and put down roots, that means having food, shelter, medical care, and mental and emotional stimulation for the entire population.

There must be redundancies and ways to repair everything. Researchers favor 3D printers and chemical processes such as ligand bonding as they plan these hypothetical missions, because it’s more prudent to send raw materials that can be turned into 100 different things or 50 different medicines. The right chemical processes can recycle discarded items into fertilizer molecules.

“Good chance you’ll die, it’s going to be tough going,” Musk said, “but it will be pretty glorious if it works out.”

David Bohm, Quantum Mechanics and Enlightenment

The visionary physicist, whose ideas remain influential, sought spiritual as well as scientific illumination. September 8th 2020

Scientific American

  • John Horgan

Theoretical physicist Dr. David J. Bohm at a 1971 symposium in London. Photo by Keystone.

Some scientists seek to clarify reality, others to mystify it. David Bohm seemed driven by both impulses. He is renowned for promoting a sensible (according to Einstein and other experts) interpretation of quantum mechanics. But Bohm also asserted that science can never fully explain the world, and his 1980 book Wholeness and the Implicate Order delved into spirituality. Bohm’s interpretation of quantum mechanics has attracted increasing attention lately. He is a hero of Adam Becker’s 2018 book What Is Real? The Unfinished Quest for the Meaning of Quantum Mechanics (reviewed by James Gleick, David Albert and Peter Woit). In The End of Science I tried to make sense of this paradoxical truth-seeker, who died in 1992 at the age of 74. Below is an edited version of that profile. See also my post on another quantum visionary, John Wheeler. –John Horgan

In August 1992 I visited David Bohm at his home in a London suburb. His skin was alarmingly pale, especially in contrast to his purplish lips and dark, wiry hair. His frame, sinking into a large armchair, seemed limp, languorous, and at the same time suffused with nervous energy. One hand cupped the top of his head, the other gripped an armrest. His fingers, long and blue-veined, with tapered, yellow nails, were splayed. He was recovering, he said, from a heart attack.

Bohm’s wife brought us tea and biscuits and vanished. Bohm spoke haltingly at first, but gradually the words came faster, in a low, urgent monotone. His mouth was apparently dry, because he kept smacking his lips. Occasionally, after making an observation that amused him, he pulled his lips back from his teeth in a semblance of a smile. He also had the disconcerting habit of pausing every few sentences and saying, “Is that clear?” or simply, “Hmmm?” I was often so hopelessly befuddled that I just smiled and nodded. But Bohm could be bracingly clear, too. Like an exotic subatomic particle, he oscillated in and out of focus.

Born and raised in the U.S., Bohm left the country in 1951, at the height of anti-communist hysteria, after refusing to answer questions from a Congressional committee about whether he or anyone he knew was a communist. After stays in Brazil and Israel, he settled in England. Bohm was a scientific dissident too. He rebelled against the dominant interpretation of quantum mechanics, the so-called Copenhagen interpretation promulgated by Danish physicist Niels Bohr.

Bohm began questioning the Copenhagen interpretation in the late 1940s while writing a book on quantum mechanics. According to the Copenhagen interpretation, a quantum entity such as an electron has no definite existence apart from our observation of it. We cannot say with certainty whether it is either a wave or a particle. The interpretation also rejects the possibility that the seemingly probabilistic behavior of quantum systems stems from underlying, deterministic mechanisms.

Bohm found this view unacceptable. “The whole idea of science so far has been to say that underlying the phenomenon is some reality which explains things,” he explained. “It was not that Bohr denied reality, but he said quantum mechanics implied there was nothing more that could be said about it.” Such a view reduced quantum mechanics to “a system of formulas that we use to make predictions or to control things technologically. I said that’s not enough. I don’t think I would be very interested in science if that were all there was.”

In 1952 Bohm proposed that particles are indeed particles–and at all times, not just when they are observed in a certain way. Their behavior is determined by a force that Bohm called the “pilot wave.” Any effort to observe a particle alters its behavior by disturbing the pilot wave. Bohm thus gave the uncertainty principle a purely physical rather than metaphysical meaning. Niels Bohr had interpreted the uncertainty principle as meaning “not that there is uncertainty, but that there is an inherent ambiguity” in a quantum system, Bohm explained.
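For readers who want to see what “the pilot wave determines the particle’s behavior” looks like on paper, here is the guidance equation of de Broglie-Bohm (pilot-wave) theory in its standard textbook form (a modern formulation, not a quotation from Bohm’s 1952 papers), in which the particle’s velocity is fixed by the phase of the wave function:

```latex
% Guidance equation of pilot-wave (de Broglie-Bohm) theory:
% the particle follows a definite trajectory x(t) whose velocity is
% set by the gradient of the wave function's phase.
\frac{d\mathbf{x}}{dt} \;=\; \frac{\hbar}{m}\,
\operatorname{Im}\!\left(\frac{\nabla \psi(\mathbf{x},t)}{\psi(\mathbf{x},t)}\right)
```

Because the wave function still evolves by the usual Schrödinger equation, any measurement disturbs it and hence the trajectory, which is how this picture recovers the uncertainty principle as a practical rather than metaphysical limit.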

Bohm’s interpretation gets rid of one quantum paradox, wave/particle duality, but it preserves and even highlights another, nonlocality, the capacity of one particle to influence another instantaneously across vast distances. Einstein had drawn attention to nonlocality in 1935 in an effort to show that quantum mechanics must be flawed. Together with Boris Podolsky and Nathan Rosen, Einstein proposed a thought experiment involving two particles that spring from a common source and fly in opposite directions.

According to the standard model of quantum mechanics, neither particle has fixed properties, such as momentum, before it is measured. But by measuring one particle’s momentum, the physicist instantaneously forces the other particle, no matter how distant, to assume a fixed momentum. Deriding this effect as “spooky action at a distance,” Einstein argued that quantum mechanics must be flawed or incomplete. But in the early 1980s French physicists demonstrated spooky action in a laboratory. Bohm never had any doubts about the experiment’s outcome. “It would have been a terrific surprise to find out otherwise,” he said.

But here is the paradox of Bohm: Although he tried to make the world more sensible with his pilot-wave model, he also argued that complete clarity is impossible. He reached this conclusion after seeing an experiment on television, in which a drop of ink was squeezed onto a cylinder of glycerine. When the cylinder was rotated, the ink diffused through the glycerine in an apparently irreversible fashion. Its order seemed to have disintegrated. But when the direction of rotation was reversed, the ink gathered into a drop again.

The experiment inspired Bohm to write Wholeness and the Implicate Order, published in 1980. He proposed that underlying physical appearances, the “explicate order,” there is a deeper, hidden “implicate order.” Applying this concept to the quantum realm, Bohm proposed that the implicate order is a field consisting of an infinite number of fluctuating pilot waves. The overlapping of these waves generates what appears to us as particles, which constitute the explicate order. Even space and time might be manifestations of a deeper, implicate order, according to Bohm.

To plumb the implicate order, Bohm said, physicists might need to jettison basic assumptions about nature. During the Enlightenment, thinkers such as Newton and Descartes replaced the ancients’ organic concept of order with a mechanistic view. Even after the advent of relativity and quantum mechanics, “the basic idea is still the same,” Bohm told me, “a mechanical order described by coordinates.”

Bohm hoped scientists would eventually move beyond mechanistic and even mathematical paradigms. “We have an assumption now that’s getting stronger and stronger that mathematics is the only way to deal with reality,” Bohm said. “Because it’s worked so well for a while, we’ve assumed that it has to be that way.”

Someday, science and art will merge, Bohm predicted. “This division of art and science is temporary,” he observed. “It didn’t exist in the past, and there’s no reason why it should go on in the future.” Just as art consists not simply of works of art but of an “attitude, the artistic spirit,” so does science consist not in the accumulation of knowledge but in the creation of fresh modes of perception. “The ability to perceive or think differently is more important than the knowledge gained,” Bohm explained.

Bohm rejected the claim of physicists such as Hawking and Weinberg that physics can achieve a final “theory of everything” that explains the world. Science is an infinite, “inexhaustible process,” he said. “The form of knowledge is to have at any moment something essential, and the appearance can be explained. But then when we look deeper at these essential things they turn out to have some feature of appearances. We’re not ever going to get a final essence which isn’t also the appearance of something.”

Bohm feared that belief in a final theory might become self-fulfilling. “If you have fish in a tank and you put a glass barrier in there, the fish keep away from it,” he noted. “And then if you take away the glass barrier they never cross the barrier and they think the whole world is that.” He chuckled drily. “So your thought that this is the end could be the barrier to looking further.” Trying to convince me that final knowledge is unattainable, Bohm offered the following argument:

“Anything known has to be determined by its limits. And that’s not just quantitative but qualitative. The theory is this and not that. Now it’s consistent to propose that there is the unlimited. You have to notice that if you say there is the unlimited, it cannot be different, because then the unlimited will limit the limited, by saying that the limited is not the unlimited, right? The unlimited must include the limited. We have to say, from the unlimited the limited arises, in a creative process. That’s consistent. Therefore we say that no matter how far we go there is the unlimited. It seems that no matter how far you go, somebody will come up with another point you have to answer. And I don’t see how you could ever settle that.”

To my relief, Bohm’s wife entered the room and asked if we wanted more tea. As she refilled my cup, I pointed out a book on Buddhism on a shelf and asked Bohm if he was interested in spirituality. He nodded. He had been a friend of Krishnamurti, one of the first modern Indian sages to try to show Westerners how to achieve the state of spiritual serenity and grace called enlightenment. Was Krishnamurti enlightened? “In some ways, yes,” Bohm replied. “His basic thing was to go into thought, to get to the end of it, completely, and thought would become a different kind of consciousness.”

Of course, one could never truly plumb one’s own mind, Bohm said. Any attempt to examine one’s own thought changes it–just as the measurement of an electron alters its course. We cannot achieve final self-knowledge, Bohm seemed to imply, any more than we can achieve a final theory of physics.

Was Krishnamurti a happy person? Bohm seemed puzzled by my question. “That’s hard to say,” he replied. “He was unhappy at times, but I think he was pretty happy overall. The thing is not about happiness, really.” Bohm frowned, as if realizing the import of what he had just said.

I said goodbye to Bohm and his wife and departed. Outside, a light rain was falling. I walked up the path to the street and glanced back at Bohm’s house, a modest whitewashed cottage on a street of modest whitewashed cottages. He died of a heart attack two months later.

In Wholeness and the Implicate Order Bohm insisted on the importance of “playfulness” in science, and in life, but Bohm, in his writings and in person, was anything but playful. For him, truth-seeking was not a game, it was a dreadful, impossible, necessary task. Bohm was desperate to know, to discover the secret of everything, but he knew it wasn’t attainable, not for any mortal being. No one gets out of the fish tank alive.

John Horgan directs the Center for Science Writings at the Stevens Institute of Technology. His books include “The End of Science,” “The End of War” and “Mind-Body Problems,” available for free at mindbodyproblems.com.

The views expressed are those of the author(s) and are not necessarily those of Scientific American.

More from Scientific American

This post originally appeared on Scientific American and was published July 23, 2018. This article is republished here with permission.

A Frozen Graveyard: The Sad Tales of Antarctica’s Deaths

Beneath layers of snow and ice on the world’s coldest continent, there may be hundreds of people buried forever. Martha Henriques investigates their stories.

BBC Future

  • Martha Henriques

Crevasses can be deadly; this vehicle in the 1950s had a lucky escape. Credit: Getty Images.

In the bleak, almost pristine land at the edge of the world, there are the frozen remains of human bodies – and each one tells a story of humanity’s relationship with this inhospitable continent.

Even with all our technology and knowledge of the dangers of Antarctica, it can remain deadly for anyone who goes there. Inland, temperatures can plummet to nearly -90C (-130F). In some places, winds can reach 200mph (322km/h). And the weather is not the only risk.

Many bodies of scientists and explorers who perished in this harsh place are beyond reach of retrieval. Some are discovered decades or more than a century later. But many that were lost will never be found, buried so deep in ice sheets or crevasses that they will never emerge – or they are headed out towards the sea within creeping glaciers and calving ice.

The stories behind these deaths range from unsolved mysteries to freak accidents. In the second of the series Frozen Continent, BBC Future explored what these events reveal about life on the planet’s most inhospitable landmass.

1800s: Mystery of the Chilean Bones

At Livingston Island, among the South Shetlands off the Antarctic Peninsula, a human skull and femur have been lying near the shore for 175 years. They are the oldest human remains ever found in Antarctica.

The bones were discovered on the beach in the 1980s. Chilean researchers found that they belonged to a woman who died when she was about 21 years old. She was an indigenous person from southern Chile, 1,000km (620 miles) away.

Analysis of the bones suggested that she died between 1819 and 1825. The earlier end of that range would put her among the very first people to have been in Antarctica.

A Russian orthodox church sits on a small rise above Chile’s research base. Credit: Yadvinder Malhi.

The question is, how did she get there? The traditional canoes of the indigenous Chileans couldn’t have supported her on such a long voyage through what can be incredibly rough seas.

“There’s no evidence for an independent Amerindian presence in the South Shetlands,” says Michael Pearson, an Antarctic heritage consultant and independent researcher. “It’s not a journey you’d make in a bark canoe.”

The original interpretation by the Chilean researchers was that she was an indigenous guide to the sealers travelling from the northern hemisphere to the Antarctic islands that had been newly discovered by William Smith in 1819. But women taking part in expeditions to the far south in those early days was virtually unheard of.

Sealers did have a close relationship with the indigenous people of southern Chile, says Melisa Salerno, an archaeologist of the Argentinean Scientific and Technical Research Council (Conicet). Sometimes they would exchange seal skins with each other. It’s not out of the question that they traded expertise and knowledge, too. But the two cultures’ interactions weren’t always friendly.

“Sometimes it was a violent situation,” says Salerno. “The sealers could just take a woman from one beach and later leave her far away on another.”

Any scientist or explorer visiting Antarctica knows that they could be at risk. Credit: Getty Images.

A lack of surviving logs and journals from the early ships sailing south to Antarctica makes it even more difficult to trace this woman’s history.

Her story is unique among the early human presence in Antarctica. A woman who, by all the usual accounts, shouldn’t have been there – but somehow she was. Her bones mark the start of human activity on Antarctica, and the unavoidable loss of life that comes with trying to occupy this inhospitable continent.

29 March 1912: Scott’s South Pole Expedition Crew

Robert Falcon Scott’s team of British explorers reached the South Pole on 17 January 1912, just three weeks after the Norwegian team led by Roald Amundsen had departed from the same spot.

The British group’s morale was crushed when they discovered that they had not arrived first. Soon after, things would get much worse.

Attaining the pole was a feat to test human endurance, and Scott had been under huge pressure. As well as dealing with the immediate challenges of the harsh climate and lack of natural resources like wood for building, he had a crew of more than 60 men to lead. More pressure came from the high hopes of his colleagues back home.

Robert Falcon Scott writing his journal. Credit: Herbert Ponting/Wikipedia.

“They mean to do or die – that is the spirit in which they are going to the Antarctic,” Leonard Darwin, a president of the Royal Geographical Society and son of Charles Darwin, said in a speech at the time.

“Captain Scott is going to prove once again that the manhood of the nation is not dead … the self-respect of the whole nation is certainly increased by such adventures as this,” he said.

Scott was not impervious to the expectations. “He was a very rounded, human character,” says Max Jones, a historian of heroism and polar exploration at the University of Manchester. “In his journals, you find he’s racked with doubts and anxieties about whether he’s up to the task and that makes him more appealing. He had failings and weaknesses too.”

Despite his worries and doubts, the mindset of “do or die” drove the team to take risks that might seem alien to us now.

On the team’s return from the pole, Edgar Evans died first, in February. Then Lawrence Oates. He had considered himself a burden, thinking the team could not return home with him holding them back. “I am just going outside and may be some time,” he said on 17 March.

Members of the ill-fated British expedition to the pole. Credit: Getty Images.

Perhaps he had not realised how close the rest of the group were to death. The bodies of Oates and Evans were never found, but Scott, Edward Wilson and Henry Bowers were discovered by a search party several months after their deaths. They had died on 29 March 1912, according to the date in Scott’s diary entry. The search party covered them with snow and left them where they lay.

“I do not think human beings ever came through such a month as we have come through,” Scott wrote in his diary’s final pages. The team knew they were within 18km (11 miles) of the last food depot, with the supplies that could have saved them. But they were confined to a tent for days, growing weaker, trapped by a fierce blizzard.

“They were prepared to risk their lives and they saw that as legitimate. You can view that as part of a mindset of imperial masculinity, tied up with enduring hardship and hostile environments,” says Jones. “I’m not saying that they had a death wish, but I think that they were willing to die.”

14 October 1965: Jeremy Bailey, David Wild and John Wilson

Four men were riding a Muskeg tractor and its sledges near the Heimefront Mountains, to the east of their base at Halley Research Station in East Antarctica, close to the Weddell Sea. The Muskeg was a heavy-duty vehicle designed to haul people and supplies over long distances on the ice. A team of dogs ran behind.

Three of the men were in the cab. The fourth, John Ross, sat behind on the sledge at the back, close to the huskies. Jeremy (Jerry) Bailey, a scientist measuring the depth of the ice beneath the tractor, was driving. He and David (Dai) Wild, a surveyor, and John Wilson, a doctor, were scanning the ice ahead. Snow obscured much of the small, flat windscreen. The group had been travelling all day, taking turns to warm up in the cab or sit out back on the sledge.

Ross was staring out at the vast ice, snow and Stella Group mountains. At about 8:30, the dogs alongside the sledge stopped running. The sledge had ground to a halt.

Ross, muffled with a balaclava and two anoraks, had heard nothing. He turned to see that the Muskeg was gone. Ahead, the first sledge was leaning down into the ice. Ross ran up to it to find it had wedged in the top of a large crevasse running directly across their course. The Muskeg itself had fallen about 30m (100ft) into the crevasse. Down below, its tracks were wedged vertically against one ice wall, and the cab had been flattened hard against the other.

Ross shouted down. There was no reply from the three men in the cab. After about 20 minutes of shouting, Ross heard a reply. The exchange, as he recorded it from memory soon after the event, was brief:

Ross: Dai?

Bailey: Dai’s dead. It’s me.

Ross: Is that John or Jerry?

Bailey: Jerry.

Ross: How is John?

Bailey: He’s a goner, mate.

Ross: What about yourself?

Bailey: I’m all smashed up.

Ross: Can you move about at all or tie a rope round yourself?

Bailey: I’m all smashed up.

Ross tried climbing down into the crevasse, but the descent was difficult. Bailey told him not to risk it, but Ross tried anyway. After several attempts, Ross heard a scream from the crevasse. After that, Bailey stopped responding to his calls.

Crevasses – deep clefts in the ice stretching down hundreds of feet – are serious threats while travelling across the Antarctic. On 14 October 1965, there had been strong winds kicking up drifts and spreading snow far over the landscape, according to reports on the accident held at the British Antarctic Survey archives. This concealed the top of the chasms, and crucially, the thin blue line in the ice ahead of each drop that would have warned the men to stop.

“You can imagine – there’s a bit of drift about, and there’s bits of ice on the windscreen, your fingers are bloody cold, and you think it’s about time to stop anyway,” says Rod Rhys Jones, one of the expedition party who had not gone on that trip with the Muskeg. He points to the crevassed area the Muskeg had been driving over, on a map of the continent spread over his coffee table, littered with books on the Antarctic.

Many bodies are never recovered; others are buried on the continent. Credit: Getty Images.

“You’re driving along over the ice and thumping and bumping and banging. You don’t see the little blue line.”

Jones questions whether the team had been given adequate training for the hazards of travel in Antarctica. They were young men, mostly fresh out of university. Many of them had little experience in harsh physical conditions. Much of their time preparing for life in Antarctica was spent learning to use the scientific equipment they would need, not training in how to avoid accidents on the ice.

Each accident in Antarctica has slowly led to changes in the way people travelled and were trained. Reports filed after the incident recommended several ways to make travel through crevassed regions safer, from adapting the vehicle, to new ways to hitch them together.

August 1982: Ambrose Morgan, Kevin Ockleton and John Coll

The three men set out over the ice for an expedition to a nearby island in the depths of the Antarctic winter.

The sea ice was firm, and they made it easily to Petermann Island. The southern aurora was visible in the sky, unusually bright and strong enough to wipe out communications. The team reached the island safely and camped out at a hut near the shore.

Soon after reaching the shore, a large storm blew in that, by the next day, entirely destroyed the sea ice. The group was stranded, but concern among the party was low. There was enough food in the hut to last three people more than a month.

In the next few days, the sea ice failed to reform as storms swept and disrupted the ice in the channel.

Death is never far away in Antarctica. Credit: Richard Fisher.

There were no books or papers in the hut, and contact with the outside world was limited to scheduled radio transmissions to the base. Soon, it had been two weeks. The transmissions were kept brief, as the batteries in their radios were getting weaker and weaker. The team grew restless. Gentoo and Adelie penguins surrounded the hut. They might have looked endearing, but their smell soon began to bother the men.

Things got worse. The team got diarrhoea, as it turned out some of the food in the hut was much older than they had thought. The stench of the penguins didn’t make them feel any better. They killed and ate a few to boost their supplies.

The men waited with increasing frustration, complaining of boredom on their radio transmissions to base. On Friday 13 August 1982, they were seen through a telescope, waving back to the main base. Radio batteries were running low. The sea ice had reformed again, providing a tantalising hope for escape.

Two days later, on Sunday 15 August, the group didn’t check in on the radio at the scheduled time. Then another large storm blew in.

The men at the base climbed up to a high point where they could see the island. All the sea ice was gone again, taken out by the storm.

“These guys had done something which we all did – go out on a little trip to the island,” says Pete Salino, who had been on the main base at the time. The three men were never seen again.

There were very strong currents around the island. Reliable, thick ice formed relatively rarely, Salino recalls. The way they tested whether the ice would hold them was primitive – they would whack it with a wooden stick tipped with metal to see if it would smash.

Even after an extensive search, the bodies were never found. Salino suspects the men went out onto the ice when it reformed and either got stuck or weren’t able to turn back when the storm blew in.

“It does sound mad now, sitting in a cosy room in Surrey,” Salino says. “When we used to go out, there was always a risk of falling through, but you’d always go prepared. We’d always have spare clothing in a sealed bag. We all accepted the risk and felt that it could have been any of us.”

Legacy of Death

For those who experience the loss of colleagues and friends in Antarctica, grieving can be uniquely difficult. When a friend disappears or a body cannot be recovered, the typical human rituals of death – a burial, a last goodbye – elude those left behind.

Clifford Shelley, a British geophysicist based at Argentine Islands off the Antarctic Peninsula in the late 1970s, lost friends who were climbing the nearby peak Mount Peary in 1976. It was thought that those men – Geoffrey Hargreaves, Michael Walker and Graham Whitfield – were trapped in an avalanche. Signs of their camp were found by an air search, but their bodies were never recovered.

The graves of past explorers. Credit: Getty Images.

“You just wait and wait, but there’s nothing. Then you just sort of lose hope,” Shelley says.

Even when the body is recovered, the demanding nature of life and work on Antarctica can make it a hard place to grieve. Ron Pinder, a radio operator in the South Orkneys in the late 1950s and early 1960s, still mourns someone who slipped from a cliff on Signy Island while tagging birds in 1961. The body of his friend, Roger Filer, was found at the foot of a 20ft (6m) cliff below the nests where he was thought to have been tagging birds. His body was buried on the island.

“It is 57 years ago now. It is in the distant past. But it affects me more now than it did then. Life was such that you had to get on with it,” Pinder says.

The same rings true for Shelley. “I don’t think we did really process it,” he says. “It remains at the back of your mind. But it’s certainly a mixed feeling, because Antarctica is superbly beautiful, both during the winter and the summer. It’s the best place to be and we were doing the things we wanted to do.”

The monument to those who lost their lives at the Scott Polar Research Institute. Credit: swancharlotte/Wikipedia/CC BY-SA 4.0.

These deaths have led to changes in how people work in Antarctica. As a result, the people there today can live more safely on this hazardous, isolated continent. Although terrible incidents still happen, much has been learned from earlier fatalities.

For the friends and families of the dead, there is an ongoing effort to make sure their lost loved ones are not forgotten. Outside the Scott Polar Research Institute in Cambridge, UK, two high curved oak pillars lean towards one another, gently touching at the top. It is half of a monument to the dead, erected by the British Antarctic Monument Trust, set up by Rod Rhys Jones and Brian Dorsett-Bailey, Jeremy’s brother, to recognise and honour those who died in Antarctica. The other half of the monument is a long sliver of metal leaning slightly towards the sea at Port Stanley in the Falkland Islands, where many of the researchers set off for the last leg of their journey to Antarctica.

Viewed from one end so they align, the oak pillars curve away from each other, leaving a long tapering empty space between them. The shape of that void is perfectly filled by the tall steel shard mounted on a plinth on the other side of the world. It is a physical symbol that spans the hemispheres, connecting home with the vast and wild continent that drew these scientists away for the last time.

More from BBC Future

Water, Water, Every Where — And Now Scientists Know Where It Came From September 3rd 2020

Nell Greenfieldboyce

Water on Earth is omnipresent and essential for life as we know it, and yet scientists remain a bit baffled about where all of this water came from: Was it present when the planet formed, or did the planet form dry and only later get its water from impacts with water-rich objects such as comets?

A new study in the journal Science suggests that the Earth likely got a lot of its precious water from the original materials that built the planet, instead of having water arrive later from afar.

The researchers who did this study went looking for signs of water in a rare kind of meteorite. Only about 2% of the meteorites found on Earth are so-called enstatite chondrite meteorites. Their chemical makeup suggests they’re close to the kind of primordial stuff that glommed together and produced our planet 4.5 billion years ago.

You wouldn’t necessarily know how special these meteorites are at first glance. “It’s a bit like a gray rock,” says Laurette Piani, a researcher in France at the Centre de Recherches Pétrographiques et Géochimiques.

What she wanted to know about these rocks is how much hydrogen was in there — because that’s what could produce water.

Compared with planets such as Jupiter and Saturn, the Earth formed close to the sun. Scientists have long thought that the temperatures must have been hot enough to prevent any water from being in the form of ice. That means there would be no ice to join with the swirling bits of rock and dust that were smashing into each other and slowly building up the young Earth.

If this is all true, our home planet must have been watered later on, perhaps when it got hit by icy comets or meteorites with water-rich minerals coming from farther out in the solar system.

Even though that’s been the prevailing view, some planetary scientists don’t buy it. After all, the story of Earth’s water would be a lot more simple and straightforward if the water was just present to begin with.

So Piani and her colleagues recently took a close look at 13 of those unusual meteorites, which are also thought to have formed close in to the sun.

“Before the study, there were almost no measurement of the hydrogen or water in this meteorite,” Piani says. Those measurements that did exist were inconsistent, she says, and were done on meteorites that could have undergone changes after falling to the Earth’s surface.

“We do not want to have meteorites that were altered and modified by the Earth processes,” Piani explains, saying that they deliberately selected the most pristine meteorites possible.

The researchers then analyzed the meteorites’ chemical makeup to see how much hydrogen was in there. Since hydrogen can react with oxygen to produce water, knowing how much hydrogen is in the rocks indicates how much water this material could have contributed to a growing Earth.

What they found was much less hydrogen than in more ordinary meteorites.

Still, what was there would be enough to explain plenty of Earth’s water — at least several times the amount of water in the Earth’s present-day oceans. “It’s a very big quantity of water in the initial material,” Piani says. “And this was never really considered before.”
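As a rough illustration of the arithmetic behind that claim, the sketch below converts an assumed hydrogen mass fraction of Earth’s building blocks into an equivalent number of present-day oceans. The hydrogen fraction used here is a placeholder for illustration, not the value reported in the Science paper.

```python
# Rough, illustrative arithmetic only. The hydrogen mass fraction below is an
# assumed placeholder, not the figure measured by Piani and colleagues.
EARTH_MASS_KG = 5.97e24
OCEAN_MASS_KG = 1.4e21           # approximate mass of water in Earth's surface oceans
H_MASS_FRACTION = 1e-4           # assumed: 0.01% hydrogen by mass in the building blocks
WATER_PER_HYDROGEN = 18.0 / 2.0  # 2 g of hydrogen can end up in ~18 g of H2O

hydrogen_kg = EARTH_MASS_KG * H_MASS_FRACTION
water_kg = hydrogen_kg * WATER_PER_HYDROGEN
print(f"Water equivalent: about {water_kg / OCEAN_MASS_KG:.1f} present-day oceans")
```

Even a fraction of a percent of hydrogen in the primordial material is enough, by this crude reckoning, to account for oceans of water without any later delivery.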

What’s more, the team also measured the deuterium-to-hydrogen ratio in the meteorites and found that it’s similar to what’s known to exist in the interior of the Earth — which also contains a lot of water. This is additional evidence that there’s a link between our planet’s water and the basic building materials that were present when it formed.

The findings pleased Anne Peslier, a planetary scientist at NASA’s Johnson Space Center in Houston, who wasn’t part of the research team but has a special interest in water.

“I was happy because it makes it nice and simple,” Peslier says. “We don’t have to invoke complicated models where we have to bring material, water-rich material from the outer part of the solar system.”

She says the delivery of so much water from way out there would have required something unusual to disturb the orbits of this water-rich material, such as Jupiter having a little trip inside the inner solar system.

“So here, we just don’t need Jupiter. We don’t need to do anything weird. We just grab the material that was there where the Earth formed, and that’s where the water comes from,” Peslier says.

Even if a lot of the water was there at the start, however, she thinks some must have arrived later on. “I think it’s both,” she says.

Despite these convincing results, she says, there are still plenty of watery mysteries to plumb. For example, researchers are still trying to determine exactly how much water is locked deep inside the Earth, but it’s surely substantial — several oceans’ worth.

“There is more water down beneath our feet,” Peslier says, “than there is that you see at the surface.”

A 16-Million-Year-Old Tree Tells a Deep Story of the Passage of Time

The sequoia tree slab is an invitation to begin thinking about a vast timescale that includes everything from fossils of armored amoebas to the great Tyrannosaurus rex.

Smithsonian Magazine

  • Riley Black

How many answers are hidden inside the giants? Photo by Kelly Cheng Travel Photography / Getty Images.

Paleobotanist Scott Wing hopes that he’s wrong. Even though he carefully counted each ring in an immense, ancient slab of sequoia, the scientist notes that there’s always a little bit of uncertainty in the count. Wing came up with about 260, but, he says, it’s likely a young visitor may one day write him saying: “You’re off by three.” And that would be a good thing, Wing says, because it’d be another moment in our ongoing conversation about time.

The shining slab, preserved and polished, is the keystone to consideration of time and our place in it in the “Hall of Fossils—Deep Time” exhibition at the Smithsonian’s National Museum of Natural History. The fossil greets visitors at one of the show’s entrances and just like the physical tree, what the sequoia represents has layers.

Each yearly delineation on the sequoia’s surface is a small part of a far grander story that ties together all of life on Earth. Scientists know this as Deep Time. It’s not just on the scale of centuries, millennia, epochs, or periods, but the ongoing flow that goes back to the origins of our universe, the formation of the Earth, and the evolution of all life, up through this present moment. It’s the backdrop for everything we see around us today, and it can be understood through techniques as different as absolute dating of radioactive minerals and counting the rings of a prehistoric tree. Each part informs the whole.

In decades past, the Smithsonian’s fossil halls were known for the ancient celebrities they contained. There was the dinosaur hall, and the fossil mammal hall, surrounded by the remains of other extinct organisms. But now all of those lost species have been brought together into an integrated story of dynamic and dramatic change. The sequoia is an invitation to begin thinking about how we fit into the vast timescale that includes everything from fossils of armored amoebas called forams to the great Tyrannosaurus rex.

Exactly how the sequoia fossil came to be at the Smithsonian is not entirely clear. The piece was gifted to the museum long ago, “before my time,” Wing says. Still, enough of the tree’s backstory is known to identify it as a massive tree that grew in what’s now central Oregon about 16 million years ago. This tree was once a long-lived part of a true forest primeval.

There are fossils both far older and more recent in the recesses of the Deep Time displays. But what makes the sequoia a fitting introduction to the story that unfolds behind it, Wing says, is that the rings offer different ways to think about time. Given that the sequoia grew seasonally, each ring marks the passage of another year, and visitors can look at the approximately 260 delineations and think about what such a time span represents.

Wing says people can play the classic game of comparing the tree’s life to a human lifespan. If a long human life is about 80 years, Wing says, then people can count 80, 160, and 240 years, meaning the sequoia grew and thrived over the course of approximately three human lifespans—but during a time when our own ancestors resembled gibbon-like apes. Time is not something that life simply passes through. In everything—from the rings of an ancient tree to the very bones in your body—time is part of life.

The record of that life—and even afterlife—lies between the lines. “You can really see that this tree was growing like crazy in its initial one hundred years or so,” Wing says, with the growth slowing as the tree became larger. And despite the slab’s ancient age, some of the original organic material is still locked inside.

“This tree was alive, photosynthesizing, pulling carbon dioxide out of the atmosphere, turning it into sugars and into lignin and cellulose to make cell walls,” Wing says. After the tree perished, water carrying silica and other minerals coated the log to preserve the wood and protect some of those organic components inside. “The carbon atoms that came out of the atmosphere 16 million years ago are locked in this chunk of glass.”

And so visitors are drawn even further back, not only through the life of the tree itself but through a time span so great that it’s difficult to comprehend. A little back of the envelope math indicates that the tree represents about three human lifetimes, but that the time between when the sequoia was alive and the present could contain about 200,000 human lifetimes. The numbers grow so large that they begin to become abstract. The sequoia is a way to touch that history and start to feel the pull of all those ages past, and what they mean to us. “Time is so vast,” Wing says, “that this giant slab of a tree is just scratching the surface.”
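That back-of-the-envelope comparison is easy to reproduce. The short sketch below uses the approximate figures quoted above to show where the “three lifetimes” and “about 200,000 lifetimes” numbers come from.

```python
# Reproduces the rough comparison in the text, using the approximate figures
# quoted there: ~260 rings, an 80-year human lifespan, and a 16-million-year
# gap between the tree's life and the present.
TREE_RINGS = 260
HUMAN_LIFESPAN_YEARS = 80
YEARS_SINCE_TREE_LIVED = 16_000_000

lifetimes_tree_lived = TREE_RINGS / HUMAN_LIFESPAN_YEARS
lifetimes_since_then = YEARS_SINCE_TREE_LIVED / HUMAN_LIFESPAN_YEARS

print(f"The tree grew for roughly {lifetimes_tree_lived:.1f} human lifetimes")
print(f"Time since it lived: roughly {lifetimes_since_then:,.0f} human lifetimes")
```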

Riley Black is a freelance science writer specializing in evolution, paleontology and natural history who blogs regularly for Scientific American.
Smithsonian Magazine

More from Smithsonian Magazine

This post originally appeared on Smithsonian Magazine and was published June 10, 2019. This article is republished here with permission.

“Holy Grail” Metallic Hydrogen Is Going to Change Everything

The substance has the potential to revolutionize everything from space travel to the energy grid. August 26th 2020

Inverse

  • Kastalia Medrano

Photo from Stocktrek Images / Getty Images.

Two Harvard scientists have succeeded in creating an entirely new substance long believed to be the “holy grail” of physics — metallic hydrogen, a material of unparalleled power that could one day propel humans into deep space. The research was published in January 2017 in the journal Science.

Scientists created the metallic hydrogen by pressurizing a hydrogen sample to more pounds per square inch than exists at the center of the Earth. This broke the molecular solid down and allowed the molecules to dissociate into atomic hydrogen.

The best rocket fuel we currently have is liquid hydrogen and liquid oxygen, burned for propellant. The efficacy of such fuels is characterized by “specific impulse,” the measure of the impulse a fuel can give a rocket to propel it forward.

“People at NASA or the Air Force have told me that if they could get an increase from 450 seconds [of specific impulse] to 500 seconds, that would have a huge impact on rocketry,” Isaac Silvera, the Thomas D. Cabot Professor of the Natural Sciences at Harvard University, told Inverse by phone. “If you can trigger metallic hydrogen to recover to the molecular phase, [the energy release] calculated for that is 1700 seconds.”
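To see why those specific-impulse numbers matter, here is a quick sketch using the standard Tsiolkovsky rocket equation. The mass ratio is an illustrative assumption of mine, not a figure from the article; the point is simply that the achievable change in velocity scales directly with specific impulse.

```python
import math

G0 = 9.81  # standard gravity in m/s^2, converts Isp in seconds to exhaust velocity

def delta_v(isp_seconds: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m0 / mf)."""
    return isp_seconds * G0 * math.log(mass_ratio)

MASS_RATIO = 8.0  # illustrative initial-to-final mass ratio, not from the article

for isp in (450, 500, 1700):
    print(f"Isp {isp:>4} s -> delta-v of about {delta_v(isp, MASS_RATIO) / 1000:.1f} km/s")
```

On these assumed numbers, jumping from 450 to 1700 seconds of specific impulse nearly quadruples the delta-v available from the same mass of propellant, which is why single-stage-to-orbit talk follows.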

Metallic hydrogen could potentially enable rockets to get into orbit in a single stage, even allowing humans to explore the outer planets. Metallic hydrogen is predicted to be “metastable” — meaning if you make it at very high pressure and then release the pressure, it will stay in its metallic state. A diamond, for example, is a metastable form of graphite. If you take graphite, pressurize it, then heat it, it becomes a diamond; if you take the pressure off, it’s still a diamond. But if you heat it again, it will revert back to graphite.

Scientists first theorized atomic metallic hydrogen a century ago. Silvera, who created the substance along with post-doctoral fellow Ranga Dias, has been chasing it since 1982, when he was working as a professor of physics at the University of Amsterdam.

Metallic hydrogen has also been predicted to be a high- or possibly room-temperature superconductor. There are no other known room-temperature superconductors in existence, meaning the applications are immense — particularly for the electric grid, which suffers from energy lost through heat dissipation. It could also facilitate magnetic levitation for futuristic high-speed trains; substantially improve performance of electric cars; and revolutionize the way energy is produced and stored.

But that’s all still likely a couple of decades off. The next step in terms of practical application is to determine if metallic hydrogen is indeed metastable. Right now Silvera has a very small quantity. If the substance does turn out to be metastable, it might be used to create a room-temperature crystal and — by spraying atomic hydrogen onto the surface — use it like a seed to grow more, the way synthetic diamonds are made. Inverse

More from Inverse

This Is How Your Brain Becomes Addicted to Caffeine August 23rd 2020

Regular ingestion of the drug alters your brain’s chemical make up, leading to fatigue, headaches and nausea if you try to quit.

Smithsonian Magazine

Within 24 hours of quitting the drug, your withdrawal symptoms begin. Initially, they’re subtle: The first thing you notice is that you feel mentally foggy, and lack alertness. Your muscles are fatigued, even when you haven’t done anything strenuous, and you suspect that you’re more irritable than usual.

Over time, an unmistakable throbbing headache sets in, making it difficult to concentrate on anything. Eventually, as your body protests having the drug taken away, you might even feel dull muscle pains, nausea and other flu-like symptoms.

This isn’t heroin, tobacco or even alcohol withdrawal. We’re talking about quitting caffeine, a substance consumed so widely (the FDA reports that more than 80 percent of American adults drink it daily) and in such mundane settings (say, at an office meeting or in your car) that we often forget it’s a drug—and by far the world’s most popular psychoactive one.

Like many drugs, caffeine is chemically addictive, a fact that scientists established back in 1994. In May 2013, with the publication of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), caffeine withdrawal was finally included as a mental disorder for the first time—even though the symptoms that earned it inclusion are ones that regular coffee drinkers have long known well from the times they’ve gone off the drug for a day or more.

Why, exactly, is caffeine addictive? The reason stems from the way the drug affects the human brain, producing the alert feeling that caffeine drinkers crave.

Soon after you drink (or eat) something containing caffeine, it’s absorbed through the small intestine and dissolved into the bloodstream. Because the chemical is both water- and fat-soluble (meaning that it can dissolve in water-based solutions—think blood—as well as fat-based substances, such as our cell membranes), it’s able to penetrate the blood-brain barrier and enter the brain.

Structurally, caffeine closely resembles a molecule that’s naturally present in our brain, called adenosine (which is a byproduct of many cellular processes, including cellular respiration)—so much so, in fact, that caffeine can fit neatly into our brain cells’ receptors for adenosine, effectively blocking them off. Normally, the adenosine produced over time locks into these receptors and produces a feeling of tiredness.


Caffeine structurally resembles adenosine enough for it to fit into the brain’s adenosine receptors.

When caffeine molecules are blocking those receptors, they prevent this from occurring, thereby generating a sense of alertness and energy for a few hours.

Additionally, some of the brain’s own natural stimulants (such as dopamine) work more effectively when the adenosine receptors are blocked, and all the surplus adenosine floating around in the brain cues the adrenal glands to secrete adrenaline, another stimulant.

For this reason, caffeine isn’t technically a stimulant on its own, says Stephen R. Braun, the author of Buzzed: The Science and Lore of Caffeine and Alcohol, but a stimulant enabler: a substance that lets our natural stimulants run wild. Ingesting caffeine, he writes, is akin to “putting a block of wood under one of the brain’s primary brake pedals.” This block stays in place for anywhere from four to six hours, depending on the person’s age, size and other factors, until the caffeine is eventually metabolized by the body.
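As a rough illustration of that four-to-six-hour window, caffeine clearance is often approximated as first-order (exponential) elimination. The sketch below is mine, not the article’s; it assumes a roughly five-hour half-life, a commonly cited average for healthy adults, and individual values vary widely.

```python
# Minimal sketch of first-order caffeine elimination (illustrative assumptions only).
def caffeine_remaining(dose_mg: float, hours: float, half_life_hours: float = 5.0) -> float:
    """Caffeine left in the body after `hours`, assuming simple exponential decay."""
    return dose_mg * 0.5 ** (hours / half_life_hours)

dose = 95.0  # roughly one cup of brewed coffee, an illustrative figure
for t in (0, 3, 6, 12, 24):
    print(f"{t:>2} h after drinking: about {caffeine_remaining(dose, t):5.1f} mg remaining")
```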

In people who take advantage of this process on a daily basis (i.e. coffee/tea, soda or energy drink addicts), the brain’s chemistry and physical characteristics actually change over time as a result. The most notable change is that brain cells grow more adenosine receptors, which is the brain’s attempt to maintain equilibrium in the face of a constant onslaught of caffeine, with its adenosine receptors so regularly plugged (studies indicate that the brain also responds by decreasing the number of receptors for norepinephrine, a stimulant). This explains why regular coffee drinkers build up a tolerance over time—because you have more adenosine receptors, it takes more caffeine to block a significant proportion of them and achieve the desired effect.

This also explains why suddenly giving up caffeine entirely can trigger a range of withdrawal effects. The underlying chemistry is complex and not fully understood, but the principle is that your brain is used to operating in one set of conditions (with an artificially-inflated number of adenosine receptors, and a decreased number of norepinephrine receptors) that depend upon regular ingestion of caffeine. Suddenly, without the drug, the altered brain chemistry causes all sorts of problems, including the dreaded caffeine withdrawal headache.

The good news is that, compared to many drug addictions, the effects are relatively short-term. To kick the thing, you only need to get through about 7-12 days of symptoms without drinking any caffeine. During that period, your brain will naturally decrease the number of adenosine receptors on each cell, responding to the sudden lack of caffeine ingestion. If you can make it that long without a cup of joe or a spot of tea, the number of adenosine receptors in your brain resets to its baseline level, and your addiction will be broken.

Joseph Stromberg, Smithsonian

Is Consciousness an Illusion?

Philosopher Daniel Dennett holds a distinctive and openly paradoxical position on the question of consciousness. August 20th 2020

The New York Review of Books

  • Thomas Nagel

Daniel Dennett at the Centro Cultural de la Ciencia, Buenos Aires, Argentina, June 2016. Photo by Soledad Aznarez / AP Images.

For fifty years the philosopher Daniel Dennett has been engaged in a grand project of disenchantment of the human world, using science to free us from what he deems illusions—illusions that are difficult to dislodge because they are so natural. In From Bacteria to Bach and Back, his eighteenth book (thirteenth as sole author), Dennett presents a valuable and typically lucid synthesis of his worldview. Though it is supported by reams of scientific data, he acknowledges that much of what he says is conjectural rather than proven, either empirically or philosophically.

Dennett is always good company. He has a gargantuan appetite for scientific knowledge, and is one of the best people I know at transmitting it and explaining its significance, clearly and without superficiality. He writes with wit and elegance; and in this book especially, though it is frankly partisan, he tries hard to grasp and defuse the sources of resistance to his point of view. He recognizes that some of what he asks us to believe is strongly counterintuitive. I shall explain eventually why I think the overall project cannot succeed, but first let me set out the argument, which contains much that is true and insightful.

The book has a historical structure, taking us from the prebiotic world to human minds and human civilization. It relies on different forms of evolution by natural selection, both biological and cultural, as its most important method of explanation. Dennett holds fast to the assumption that we are just physical objects and that any appearance to the contrary must be accounted for in a way that is consistent with this truth. Bach’s or Picasso’s creative genius, and our conscious experience of hearing Bach’s Fourth Brandenburg Concerto or seeing Picasso’s Girl Before a Mirror, all arose by a sequence of physical events beginning with the chemical composition of the earth’s surface before the appearance of unicellular organisms. Dennett identifies two unsolved problems along this path: the origin of life at its beginning and the origin of human culture much more recently. But that is no reason not to speculate.

The task Dennett sets himself is framed by a famous distinction drawn by the philosopher Wilfrid Sellars between the “manifest image” and the “scientific image”—two ways of seeing the world we live in. According to the manifest image, Dennett writes, the world is

full of other people, plants, and animals, furniture and houses and cars…and colors and rainbows and sunsets, and voices and haircuts, and home runs and dollars, and problems and opportunities and mistakes, among many other such things. These are the myriad “things” that are easy for us to recognize, point to, love or hate, and, in many cases, manipulate or even create…. It’s the world according to us.

According to the scientific image, on the other hand, the world

is populated with molecules, atoms, electrons, gravity, quarks, and who knows what else (dark energy, strings? branes?).

This, according to Dennett, is the world as it is in itself, not just for us, and the task is to explain scientifically how the world of molecules has come to include creatures like us, complex physical objects to whom everything, including they themselves, appears so different.

He greatly extends Sellars’s point by observing that the concept of the manifest image can be generalized to apply not only to humans but to all other living beings, all the way down to bacteria. All organisms have biological sensors and physical reactions that allow them to detect and respond appropriately only to certain features of their environment—“affordances,” Dennett calls them—that are nourishing, noxious, safe, dangerous, sources of energy or reproductive possibility, potential predators or prey.

For each type of organism, whether plant or animal, these are the things that define their world, that are salient and important for them; they can ignore the rest. Whatever the underlying physiological mechanisms, the content of the manifest image reveals itself in what the organisms do and how they react to their environment; it need not imply that the organisms are consciously aware of their surroundings. But in its earliest forms, it is the first step on the route to awareness.


The lengthy process of evolution that generates these results is first biological and then, in our case, cultural, and only at the very end is it guided partly by intelligent design, made possible by the unique capacities of the human mind and human civilization. But as Dennett says, the biosphere is saturated with design from the beginning—everything from the genetic code embodied in DNA to the metabolism of unicellular organisms to the operation of the human visual system—design that is not the product of intention and that does not depend on understanding.

One of Dennett’s most important claims is that most of what we and our fellow organisms do to stay alive, cope with the world and one another, and reproduce is not understood by us or them. It is competence without comprehension. This is obviously true of organisms like bacteria and trees that have no comprehension at all, but it is equally true of creatures like us who comprehend a good deal. Most of what we do, and what our bodies do—digest a meal, move certain muscles to grasp a doorknob, or convert the impact of sound waves on our eardrums into meaningful sentences—is done for reasons that are not our reasons. Rather, they are what Dennett calls free-floating reasons, grounded in the pressures of natural selection that caused these behaviors and processes to become part of our repertoire. There are reasons why these patterns have emerged and survived, but we don’t know those reasons, and we don’t have to know them to display the competencies that allow us to function.

Nor do we have to understand the mechanisms that underlie those competencies. In an illuminating metaphor, Dennett asserts that the manifest image that depicts the world in which we live our everyday lives is composed of a set of user-illusions,

like the ingenious user-illusion of click-and-drag icons, little tan folders into which files may be dropped, and the rest of the ever more familiar items on your computer’s desktop. What is actually going on behind the desktop is mind-numbingly complicated, but users don’t need to know about it, so intelligent interface designers have simplified the affordances, making them particularly salient for human eyes, and adding sound effects to help direct attention. Nothing compact and salient inside the computer corresponds to that little tan file-folder on the desktop screen.

He says that the manifest image of each species is “a user-illusion brilliantly designed by evolution to fit the needs of its users.” In spite of the word “illusion” he doesn’t wish simply to deny the reality of the things that compose the manifest image; the things we see and hear and interact with are “not mere fictions but different versions of what actually exists: real patterns.” The underlying reality, however, what exists in itself and not just for us or for other creatures, is accurately represented only by the scientific image—ultimately in the language of physics, chemistry, molecular biology, and neurophysiology.


Our user-illusions were not, like the little icons on the desktop screen, created by an intelligent interface designer. Nearly all of them—such as our images of people, their faces, voices, and actions, the perception of some things as delicious or comfortable and others as disgusting or dangerous—are the products of “bottom-up” design, understandable through the theory of evolution by natural selection, rather than “top-down” design by an intelligent being. Darwin, in what Dennett calls a “strange inversion of reasoning,” showed us how to resist the intuitive tendency always to explain competence and design by intelligence, and how to replace it with explanation by natural selection, a mindless process of accidental variation, replication, and differential survival.

As for the underlying mechanisms, we now have a general idea of how they might work because of another strange inversion of reasoning, due to Alan Turing, the creator of the computer, who saw how a mindless machine could do arithmetic perfectly without knowing what it was doing. This can be applied to all kinds of calculation and procedural control, in natural as well as in artificial systems, so that their competence does not depend on comprehension. Dennett’s claim is that when we put these two insights together, we see that

all the brilliance and comprehension in the world arises ultimately out of uncomprehending competences compounded over time into ever more competent—and hence comprehending—systems. This is indeed a strange inversion, overthrowing the pre-Darwinian mind-first vision of Creation with a mind-last vision of the eventual evolution of us, intelligent designers at long last.

And he adds:

Turing himself is one of the twigs on the Tree of Life, and his artifacts, concrete and abstract, are indirectly products of the blind Darwinian processes in the same way spider webs and beaver dams are….

An essential, culminating stage of this process is cultural evolution, much of which, Dennett believes, is as uncomprehending as biological evolution. He quotes Peter Godfrey-Smith’s definition, from which it is clear that the concept of evolution can apply more widely:

Evolution by natural selection is change in a population due to (i) variation in the characteristics of members of the population, (ii) which causes different rates of reproduction, and (iii) which is heritable.

In the biological case, variation is caused by mutations in DNA, and it is heritable through reproduction, sexual or otherwise. But the same pattern applies to variation in behavior that is not genetically caused, and that is heritable only in the sense that other members of the population can copy it, whether it be a game, a word, a superstition, or a mode of dress.
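Godfrey-Smith’s three conditions read almost like pseudocode. The toy simulation below is my illustration, not anything from the book: a population of numerical “traits” evolves purely through imperfect copying (variation), copying biased by the trait itself (differential reproduction), and inheritance of whatever was copied, with no comprehension anywhere in the loop.

```python
import random

def generation(population, mutation_rate=0.05):
    # (ii) differential reproduction: higher trait values are copied more often
    # (an arbitrary, hypothetical notion of "fitness", used only for illustration).
    offspring = random.choices(population, weights=population, k=len(population))
    # (i) variation and (iii) heritability: traits are inherited by copying,
    # occasionally with a small copying error; values are clamped to stay positive.
    return [max(0.01, t + random.uniform(-0.1, 0.1)) if random.random() < mutation_rate else t
            for t in offspring]

population = [random.uniform(0.1, 1.0) for _ in range(200)]
for _ in range(50):
    population = generation(population)
print(f"mean trait after 50 generations: {sum(population) / len(population):.2f}")
```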


This is the territory of what Richard Dawkins memorably christened “memes,” and Dennett shows that the concept is genuinely useful in describing the formation and evolution of culture. He defines “memes” thus:

They are a kind of way of behaving (roughly) that can be copied, transmitted, remembered, taught, shunned, denounced, brandished, ridiculed, parodied, censored, hallowed.

They include such things as the meme for wearing your baseball cap backward or for building an arch of a certain shape; but the best examples of memes are words. A word, like a virus, needs a host to reproduce, and it will survive only if it is eventually transmitted to other hosts, people who learn it by imitation:

Like a virus, it is designed (by evolution, mainly) to provoke and enhance its own replication, and every token it generates is one of its offspring. The set of tokens descended from an ancestor token form a type, which is thus like a species.


Alan Turing; drawing by David Levine.

The distinction between type and token comes from the philosophy of language: the word “tomato” is a type, of which any individual utterance or inscription or occurrence in thought is a token. The different tokens may be physically very different—you say “tomayto,” I say “tomahto”—but what unites them is the perceptual capacity of different speakers to recognize them all as instances of the type. That is why people speaking the same language with different accents, or typing with different fonts, can understand each other.

A child picks up its native language without any comprehension of how it works. Dennett believes, plausibly, that language must have originated in an equally unplanned way, perhaps initially by the spontaneous attachment of sounds to prelinguistic thoughts. (And not only sounds but gestures: as Dennett observes, we find it very difficult to talk without moving our hands, an indication that the earliest language may have been partly nonvocal.) Eventually such memes coalesced to form languages as we know them, intricate structures with vast expressive capacity, shared by substantial populations.

Language permits us to transcend space and time by communicating about what is not present, to accumulate shared bodies of knowledge, and with writing to store them outside of individual minds, resulting in the vast body of collective knowledge and practice dispersed among many minds that constitutes civilization. Language also enables us to turn our attention to our own thoughts and develop them deliberately in the kind of top-down creativity characteristic of science, art, technology, and institutional design.

But such top-down research and development is possible only on a deep foundation of competence whose development was largely bottom-up, the result of cultural evolution by natural selection. Without denigrating the contributions of individual genius, Dennett urges us not to forget its indispensable precondition, the arms race over millennia of competing memes—exemplified by the essentially unplanned evolution, survival, and extinction of languages.

Of course the biological evolution of the human brain made all of this possible, together with some coevolution of brain and culture over the past 50,000 years, but at this point we can only speculate about what happened. Dennett cites recent research in support of the view that brain architecture is the product of bottom-up competition and coalition-formation among neurons—partly in response to the invasion of memes. But whatever the details, if Dennett is right that we are physical objects, it follows that all the capacities for understanding, all the values, perceptions, and thoughts that present us with the manifest image and allow us to form the scientific image, have their real existence as systems of representation in the central nervous system.


This brings us to the question of consciousness, on which Dennett holds a distinctive and openly paradoxical position. Our manifest image of the world and ourselves includes as a prominent part not only the physical body and central nervous system but our own consciousness with its elaborate features—sensory, emotional, and cognitive—as well as the consciousness of other humans and many nonhuman species. In keeping with his general view of the manifest image, Dennett holds that consciousness is not part of reality in the way the brain is. Rather, it is a particularly salient and convincing user-illusion, an illusion that is indispensable in our dealings with one another and in monitoring and managing ourselves, but an illusion nonetheless.

You may well ask how consciousness can be an illusion, since every illusion is itself a conscious experience—an appearance that doesn’t correspond to reality. So it cannot appear to me that I am conscious though I am not: as Descartes famously observed, the reality of my own consciousness is the one thing I cannot be deluded about. The way Dennett avoids this apparent contradiction takes us to the heart of his position, which is to deny the authority of the first-person perspective with regard to consciousness and the mind generally.

The view is so unnatural that it is hard to convey, but it has something in common with the behaviorism that was prevalent in psychology at the middle of the last century. Dennett believes that our conception of conscious creatures with subjective inner lives—which are not describable merely in physical terms—is a useful fiction that allows us to predict how those creatures will behave and to interact with them. He has coined the term “heterophenomenology” to describe the (strictly false) attribution each of us makes to others of an inner mental theater—full of sensory experiences of colors, shapes, tastes, sounds, images of furniture, landscapes, and so forth—that contains their representation of the world.

According to Dennett, however, the reality is that the representations that underlie human behavior are found in neural structures of which we know very little. And the same is true of the similar conception we have of our own minds. That conception does not capture an inner reality, but has arisen as a consequence of our need to communicate to others in rough and graspable fashion our various competencies and dispositions (and also, sometimes, to conceal them):

Curiously, then, our first-person point of view of our own minds is not so different from our second-person point of view of others’ minds: we don’t see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all.

The trouble is that Dennett concludes not only that there is much more behind our behavioral competencies than is revealed to the first-person point of view—which is certainly true—but that nothing whatever is revealed to the first-person point of view but a “version” of the neural machinery. In other words, when I look at the American flag, it may seem to me that there are red stripes in my subjective visual field, but that is an illusion: the only reality, of which this is “an interpreted, digested version,” is that a physical process I can’t describe is going on in my visual cortex.

I am reminded of the Marx Brothers line: “Who are you going to believe, me or your own eyes?” Dennett asks us to turn our backs on what is glaringly obvious—that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle’s words, “maintaining a thesis at all costs.”

If I understand him, this requires us to interpret ourselves behavioristically: when it seems to me that I have a subjective conscious experience, that experience is just a belief, manifested in what I am inclined to say. According to Dennett, the red stripes that appear in my visual field when I look at the flag are just the “intentional object” of such a belief, as Santa Claus is the intentional object of a child’s belief in Santa Claus. Neither of them is real. Recall that even trees and bacteria have a manifest image, which is to be understood through their outward behavior. The same, it turns out, is true of us: the manifest image is not an image after all.


There is no reason to go through such mental contortions in the name of science. The spectacular progress of the physical sciences since the seventeenth century was made possible by the exclusion of the mental from their purview. To say that there is more to reality than physics can account for is not a piece of mysticism: it is an acknowledgment that we are nowhere near a theory of everything, and that science will have to expand to accommodate facts of a kind fundamentally different from those that physics is designed to explain. It should not disturb us that this may have radical consequences, especially for Dennett’s favorite natural science, biology: the theory of evolution, which in its current form is a purely physical theory, may have to incorporate nonphysical factors to account for consciousness, if consciousness is not, as he thinks, an illusion. Materialism remains a widespread view, but science does not progress by tailoring the data to fit a prevailing theory.

There is much in the book that I haven’t discussed, about education, information theory, prebiotic chemistry, the analysis of meaning, the psychological role of probability, the classification of types of minds, and artificial intelligence. Dennett’s reflections on the history and prospects of artificial intelligence and how we should manage its development and our relation to it are informative and wise. He concludes:

The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence….

We should hope that new cognitive prostheses will continue to be designed to be parasitic, to be tools, not collaborators. Their only “innate” goal, set up by their creators, should be to respond, constructively and transparently, to the demands of the user.

About the true nature of the human mind, Dennett is on one side of an old argument that goes back to Descartes. He pays tribute to Descartes, citing the power of what he calls “Cartesian gravity,” the pull of the first-person point of view; and he calls the allegedly illusory realm of consciousness the “Cartesian Theater.” The argument will no doubt go on for a long time, and the only way to advance understanding is for the participants to develop and defend their rival conceptions as fully as possible—as Dennett has done. Even those who find the overall view unbelievable will find much to interest them in this book.

Thomas Nagel is University Professor Emeritus at NYU. He is the author of “The View From Nowhere,” “Mortal Questions,” and “Mind and Cosmos,” among other books.
The New York Review of Books


Wireless Charging Risk August 16th 2020

Wireless charging is increasingly common in modern smartphones, and there’s even speculation that Apple might ditch charging via a cable entirely in the near future. But the slight convenience of juicing up your phone by plopping it onto a pad rather than plugging it in comes with a surprisingly robust environmental cost. According to new calculations from OneZero and iFixit, wireless charging is drastically less efficient than charging with a cord, so much so that the widespread adoption of this technology could necessitate the construction of dozens of new power plants around the world. (Unless manufacturers find other ways to make up for the energy drain, of course.)

On paper, wireless charging sounds appealing. Just drop a phone down on a charger and it will start charging. There’s no wear and tear on charging ports, and chargers can even be built into furniture. Not all of the energy that comes out of a wall outlet, however, ends up in a phone’s battery. Some of it gets lost in the process as heat.

While this is true of all forms of charging to a certain extent, wireless chargers lose a lot of energy compared to cables. They get even less efficient when the coils in the phone aren’t aligned properly with the coils in the charging pad, a surprisingly common problem.

To get a sense of how much extra power is lost when using wireless charging versus wired charging in the real world, I tested a Pixel 4 using multiple wireless chargers, as well as the standard charging cable that comes with the phone. I used a high-precision power meter that sits between the charging block and the power outlet to measure power consumption.

In my tests, I found that wireless charging used, on average, around 47% more power than a cable.

Charging the phone from completely dead to 100% using a cable took an average of 14.26 watt-hours (Wh). Using a wireless charger took, on average, 21.01 Wh. That comes out to slightly more than 47% more energy for the convenience of not plugging in a cable. In other words, the phone had to work harder, generate more heat, and suck up more energy when wirelessly charging to fill the same size battery.
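The percentage quoted follows directly from those watt-hour figures; here is a quick check of the arithmetic using only the numbers reported above.

```python
def extra_percent(wireless_wh: float, wired_wh: float) -> float:
    """Extra energy used by wireless charging, relative to the cable, in percent."""
    return (wireless_wh / wired_wh - 1) * 100

WIRED_WH = 14.26     # average cable charge reported above, in watt-hours
WIRELESS_WH = 21.01  # average wireless charge reported above, in watt-hours

print(f"Average wireless charge used about {extra_percent(WIRELESS_WH, WIRED_WH):.0f}% more energy")
```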

How the phone was positioned on the charger significantly affected charging efficiency. The flat Yootech charger I tested was difficult to line up properly. Initially I intended to measure power consumption with the coils aligned as well as possible, then intentionally misalign them to detect the difference.

Instead, during one test, I noticed that the phone wasn’t charging at all. It looked like it was aligned properly, but as I fiddled with it I found that the difference between positions that charged properly and those that didn’t charge at all could be measured in millimeters. Without a visual indicator, it would have been impossible to tell. Without careful alignment, the phone could take far more energy to charge than necessary or, more annoyingly, not charge at all.

The first test with the Yootech pad — before I figured out how to align the coils properly — took a whopping 25.62 Wh to charge, or 80% more energy than an average cable charge. Hearing about the hypothetical inefficiencies online was one thing, but here I could see how I’d nearly doubled the amount of power it took to charge my phone by setting it down slightly wrong instead of just plugging in a cable.

Google’s official Pixel Stand fared better, likely due to its propped-up design. Since the base of the phone sits flat, the coils can only be misaligned from left to right — circular pads like the Yootech allow for misalignment in any direction. Again, the threshold was a few millimeters of difference at most, but the Pixel Stand continued charging while misaligned, albeit more slowly and using more power. In general, the propped-up design helped align the coils without much fiddling, but it still used an average of 19.8 Wh, or 39% more power, to charge the phone than cables did.

On top of this, both wireless chargers independently consumed a small amount of power when no phone was charging at all — around 0.25 watts, which might not sound like much, but over 24 hours it would consume around six watt-hours. A household with multiple wireless chargers left plugged in — say, a charger by the bed, one in the living room, and another in the office — could waste the same amount of power in a day as it would take to fully charge a phone. By contrast, in my testing the normal cable charger did not draw any measurable amount of power.
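The standby figure is easy to reproduce: 0.25 watts held for 24 hours, multiplied up for a household with a few chargers (a hypothetical three in the sketch below).

```python
IDLE_WATTS = 0.25   # per wireless charger with no phone on it, as measured above
HOURS = 24
CHARGERS = 3        # hypothetical household: bedside, living room, office

per_charger_wh = IDLE_WATTS * HOURS        # about 6 Wh per charger per day
household_wh = CHARGERS * per_charger_wh   # about 18 Wh, on the order of one full phone charge
print(per_charger_wh, household_wh)
```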

While wireless charging might use relatively more power than a cable, it’s often written off as negligible. The extra power consumed by charging one phone with wireless charging versus a cable is the equivalent of leaving one extra LED light bulb on for a few hours. It might not even register on your power bill. At scale, however, it can turn into an environmental problem.

“I think in terms of power consumption, for me worrying about how much I’m paying for electricity, I don’t think it’s a factor,” Kyle Wiens, CEO of iFixit, told OneZero. “If all of a sudden, the 3 billion[-plus] smartphones that are in use, if all of them take 50% more power to charge, that adds up to a big amount. So it’s a society-wide issue, not a personal issue.”

To get a frame of reference for scale, iFixit helped me calculate the impact that the kind of excess power drain I experienced could have if every smartphone user on the planet switched to wireless charging — not a likely scenario any time soon, but neither was 3.5 billion people carrying around smartphones, say, 30 years ago.

“We worked out that at 100% efficiency from wall socket to battery, it would take about 73 coal power plants running for a day to charge the 3.5 billion smartphone batteries once fully,” iFixit technical writer Arthur Shi told OneZero. But if people place their phones wrong and reduce the efficiency of their charging, the number grows: “If the wireless charging efficiency was only 50%, you would need to double the [73] power plants in order to charge all the batteries.”
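To see how the plant count scales with charging efficiency, take iFixit’s 73-plant baseline at 100% wall-to-battery efficiency as given and divide by the efficiency. The sketch below is only that scaling step; the baseline figure and plant size rest entirely on iFixit’s own assumptions.

```python
BASELINE_PLANTS = 73   # iFixit's estimate: 50 MW plants running for a day, at 100% efficiency

def plants_needed(efficiency: float) -> float:
    """Scale the baseline plant count by wall-to-battery charging efficiency."""
    return BASELINE_PLANTS / efficiency

for eff in (1.0, 0.68, 0.5):   # 0.68 is roughly 1/1.47, i.e. the 47% overhead measured earlier
    print(f"efficiency {eff:.0%}: about {plants_needed(eff):.0f} plants")
```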


This is rough math, of course. Measuring power consumption by the number of power plants devices require is a bit like measuring how many vehicles it takes to transport a couple dozen people. It could take a dozen two-seat convertibles, or one bus. Shi’s math assumed relatively small coal power plants outputting 50 MW, as many power plants in the United States are, but those same needs could also be met by a couple very large power plants outputting more than 2,000 MW (of which the United States has only 29).

However, the broader point remains the same: If everyone in the world switched to wireless charging, it would have a measurable impact on the global power grid. While tech companies like Apple and Google tout how environmentally friendly their phones are, power consumption often goes overlooked. “They want to cover the carbon impact of the product over their entire life cycle?” Wiens said. “The entire life cycle includes all the power that these things ever consumed plugged into the wall.”

There are some things that companies can do to balance out the excess power wireless chargers use. Manufacturers can design phones to disable wireless charging if their coils aren’t aligned — instead of allowing excessively inefficient charging for the sake of user experience — or design chargers to hold phones so they align properly. They can also continue to offer wired charging, which might mean Apple’s rumored future port-less phone would have to wait.

Finally, tech companies can work to offset their excesses in one area with savings in another. Wireless charging is only one small piece of the environmental picture, and environmental reports for major phones from Google and Apple only loosely point to energy efficiency and make no mention of the impact of using wireless chargers. There are many ways tech companies could be more energy-efficient to put less strain on our power grids. Until wireless charging itself gets a more thorough examination, though, the world would probably be better off if we all stuck to good old-fashioned plugs.

Update: A previous version of this article misstated two units of measurement in reference to the Pixel Stand charger. It consumes 0.25 watts when plugged in without a phone attached, which over 24 hours would consume around six watt-hours.

Bill Gates on Covid: Most US Tests Are ‘Completely Garbage’

The techie-turned-philanthropist on vaccines, Trump, and why social media is “a poisoned chalice.”

For 20 years, Bill Gates has been easing out of the roles that made him rich and famous—CEO, chief software architect, and chair of Microsoft—and devoting his brainpower and passion to the Bill and Melinda Gates Foundation, abandoning earnings calls and antitrust hearings for the metrics of disease eradication and carbon reduction. This year, after he left the Microsoft board, one would have thought he would have relished shedding the spotlight directed at the four CEOs of big tech companies called before Congress.

But as with many of us, 2020 had different plans for Gates. An early Cassandra who warned of our lack of preparedness for a global pandemic, he became one of the most credible figures as his foundation made huge investments in vaccines, treatments, and testing. He also became a target of the plague of misinformation afoot in the land, as logorrheic critics accused him of planning to inject microchips in vaccine recipients. (Fact check: false. In case you were wondering.)

My first interview with Gates was in 1983, and I’ve long lost count of how many times I’ve spoken to him since. He’s yelled at me (more in the earlier years) and made me laugh (more in the latter years). But I’ve never looked forward to speaking to him more than in our year of Covid. We connected on Wednesday, remotely of course. In discussing our country’s failed responses, his issues with his friend Mark Zuckerberg’s social networks, and the innovations that might help us out of this mess, Gates did not disappoint. The interview has been edited for length and clarity.

WIRED: You have been warning us about a global pandemic for years. Now that it has happened just as you predicted, are you disappointed with the performance of the United States?

Bill Gates: Yeah. There’s three time periods, all of which have disappointments. There is 2015 until this particular pandemic hit. If we had built up the diagnostic, therapeutic, and vaccine platforms, and if we’d done the simulations to understand what the key steps were, we’d be dramatically better off. Then there’s the time period of the first few months of the pandemic, when the US actually made it harder for the commercial testing companies to get their tests approved, the CDC had this very low volume test that didn’t work at first, and they weren’t letting people test. The travel ban came too late, and it was too narrow to do anything. Then, after the first few months, eventually we figured out about masks, and that leadership is important.


So you’re disappointed, but are you surprised?

I’m surprised at the US situation because the smartest people on epidemiology in the world, by a lot, are at the CDC. I would have expected them to do better. You would expect the CDC to be the most visible, not the White House or even Anthony Fauci. But they haven’t been the face of the epidemic. They are trained to communicate and not try to panic people but get people to take things seriously. They have basically been muzzled since the beginning. We called the CDC, but they told us we had to talk to the White House a bunch of times. Now they say, “Look, we’re doing a great job on testing, we don’t want to talk to you.” Even the simplest things, which would greatly improve this system, they feel would be admitting there is some imperfection and so they are not interested.

Do you think it’s the agencies that fell down or just the leadership at the top, the White House?

We can do the postmortem at some point. We still have a pandemic going on, and we should focus on that. The White House didn’t allow the CDC to do its job after March. There was a window where they were engaged, but then the White House didn’t let them do that. So the variance between the US and other countries isn’t that first period, it’s the subsequent period where the messages—the opening up, the leadership on masks, those things—are not the CDC’s fault. They said not to open back up; they said that leadership has to be a model of face mask usage. I think they have done a good job since April, but we haven’t had the benefit of it.

At this point, are you optimistic?

Yes. You have to admit there’s been trillions of dollars of economic damage done and a lot of debts, but the innovation pipeline on scaling up diagnostics, on new therapeutics, on vaccines is actually quite impressive. And that makes me feel like, for the rich world, we should largely be able to end this thing by the end of 2021, and for the world at large by the end of 2022. That is only because of the scale of the innovation that’s taking place. Now whenever we get this done, we will have lost many years in malaria and polio and HIV and the indebtedness of countries of all sizes and instability. It’ll take you years beyond that before you’d even get back to where you were at the start of 2020. It’s not World War I or World War II, but it is in that order of magnitude as a negative shock to the system.

In March it was unimaginable that you’d be giving us that timeline and saying it’s great.

Well it’s because of innovation that you don’t have to contemplate an even sadder statement, which is this thing will be raging for five years until natural immunity is our only hope.

Let’s talk vaccines, which your foundation is investing in. Is there anything that’s shaping up relatively quickly that could be safe and effective?

Before the epidemic came, we saw huge potential in the RNA vaccines—Moderna, Pfizer/BioNTech, and CureVac. Right now, because of the way you manufacture them, and the difficulty of scaling up, they are more likely—if they are helpful—to help in the rich countries. They won’t be the low-cost, scalable solution for the world at large. There you’d look more at AstraZeneca or Johnson & Johnson. This disease, from both the animal data and the phase 1 data, seems to be very vaccine preventable. There are questions still. It will take us awhile to figure out the duration [of protection], and the efficacy in elderly, although we think that’s going to be quite good. Are there any side effects, which you really have to get out in those large phase 3 groups and even after that through lots of monitoring to see if there are any autoimmune diseases or conditions that the vaccine could interact with in a deleterious fashion.


Are you concerned that in our rush to get a vaccine we are going to approve something that isn’t safe and effective?

Yeah. In China and Russia they are moving full speed ahead. I bet there’ll be some vaccines that will get out to lots of patients without the full regulatory review somewhere in the world. We probably need three or four months, no matter what, of phase 3 data, just to look for side effects. The FDA, to their credit, at least so far, is sticking to requiring proof of efficacy. So far they have behaved very professionally despite the political pressure. There may be pressure, but people are saying no, make sure that that’s not allowed. The irony is that this is a president who is a vaccine skeptic. Every meeting I have with him he is like, “Hey, I don’t know about vaccines, and you have to meet with this guy Robert Kennedy Jr. who hates vaccines and spreads crazy stuff about them.”

Wasn’t Kennedy Jr. talking about you using vaccines to implant chips into people?

Yeah, you’re right. He, Roger Stone, Laura Ingraham. They do it in this kind of way: “I’ve heard lots of people say X, Y, Z.” That’s kind of Trumpish plausible deniability. Anyway, there was a meeting where Francis Collins, Tony Fauci, and I had to [attend], and they had no data about anything. When we would say, “But wait a minute, that’s not real data,” they’d say, “Look, Trump told you you have to sit and listen, so just shut up and listen anyway.” So it’s a bit ironic that the president is now trying to have some benefit from a vaccine.

What goes through your head when you’re in a meeting hearing misinformation, and the President of the United States wants you to keep your mouth shut?

That was a bit strange. I haven’t met directly with the president since March of 2018. I made it clear I’m glad to talk to him about the epidemic anytime. And I have talked to Debbie Birx, I’ve talked to Pence, I’ve talked to Mnuchin, Pompeo, particularly on the issue of, Is the US showing up in terms of providing money to procure the vaccine for the developing countries? There have been lots of meetings, but we haven’t been able to get the US to show up. It’s very important to be able to tell the vaccine companies to build extra factories for the billions of doses, that there is procurement money to buy those for the marginal cost. So in this supplemental bill, I’m calling everyone I can to get 4 billion through GAVI for vaccines and 4 billion through a global fund for therapeutics. That’s less than 1 percent of the bill, but in terms of saving lives and getting us back to normal, that under 1 percent is by far the most important thing if we can get it in there.

Speaking of therapeutics, if you were in the hospital and you have the disease and you’re looking over the doctor’s shoulder, what treatment are you going to ask for?

Remdesivir. Sadly the trials in the US have been so chaotic that the actual proven effect is kind of small. Potentially the effect is much larger than that. It’s insane how confused the trials here in the US have been. The supply of that is going up in the US; it will be quite available for the next few months. Also dexamethasone—it’s actually a fairly cheap drug—that’s for late-stage disease.


I’m assuming you’re not going to have trouble paying for it, Bill, so you could ask for anything.

Well, I don’t want special treatment, so that’s a tricky thing. Other antivirals are two to three months away. Antibodies are two to three months away. We’ve had about a factor-of-two improvement in hospital outcomes already, and that’s with just remdesivir and dexamethasone. These other things will be additive to that.

You helped fund a Covid diagnostic testing program in Seattle that got quicker results, and it wasn’t so intrusive. The FDA put it on pause. What happened?

There’s this thing where the health worker jams the deep turbinate, in the back of your nose, which actually hurts and makes you sneeze on the health worker. We showed that the quality of the results can be equivalent if you just put a self-test in the tip of your nose with a cotton swab. The FDA made us jump through some hoops to prove that you didn’t need to refrigerate the result, that it could go back in a dry plastic bag, and so on. So the delay there was just normal double checking, maybe overly careful but not based on some political angle. Because of what we have done at FDA, you can buy these cheaper swabs that are available by the billions. So anybody who’s using the deep turbinate now is just out of date. It’s a mistake, because it slows things down.

But people aren’t getting their tests back quickly enough.

Well, that’s just stupidity. The majority of all US tests are completely garbage, wasted. If you don’t care how late the date is and you reimburse at the same level, of course they’re going to take every customer. Because they are making ridiculous money, and it’s mostly rich people that are getting access to that. You have to have the reimbursement system pay a little bit extra for 24 hours, pay the normal fee for 48 hours, and pay nothing [if it isn’t done by then]. And they will fix it overnight.
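Expressed as a rule, the incentive Gates describes is simple. The function below is my own sketch with placeholder dollar amounts, not an actual reimbursement schedule.

```python
def reimbursement(turnaround_hours: float, base_fee: float = 100.0, fast_premium: float = 1.25) -> float:
    """Tiered payment: extra for results within 24 hours, the normal fee within 48, nothing after."""
    if turnaround_hours <= 24:
        return base_fee * fast_premium
    if turnaround_hours <= 48:
        return base_fee
    return 0.0  # a result this late is of little use for containing spread

for hours in (12, 36, 72):
    print(f"{hours:>2} h turnaround -> ${reimbursement(hours):.2f}")
```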

Why don’t we just do that?

Because the federal government sets that reimbursement system. When we tell them to change it they say, “As far as we can tell, we’re just doing a great job, it’s amazing!” Here we are, this is August. We are the only country in the world where we waste the most money on tests. Fix the reimbursement. Set up the CDC website. But I have been on that kick, and people are tired of listening to me.

As someone who has built your life on science and logic, I’m curious what you think when you see so many people signing onto this anti-science view of the world.

Well, strangely, I’m involved in almost everything that anti-science is fighting. I’m involved with climate change, GMOs, and vaccines. The irony is that it’s digital social media that allows this kind of titillating, oversimplistic explanation of, “OK, there’s just an evil person, and that explains all of this.” And when you have [posts] encrypted, there is no way to know what it is. I personally believe government should not allow those types of lies or fraud or child pornography [to be hidden with encryption like WhatsApp or Facebook Messenger].

Well, you’re friends with Mark Zuckerberg. Have you talked to him about this?

After I said this publicly, he sent me mail. I like Mark, I think he’s got very good values, but he and I do disagree on the trade-offs involved there. The lies are so titillating you have to be able to see them and at least slow them down. Like that video where, what do they call her, the sperm woman? That got over 10 million views! [Note: It was more than 20 million.] Well how good are these guys at blocking things, where once something got the 10 million views and everybody was talking about it, they didn’t delete the link or the searchability? So it was meaningless. They claim, “Oh, now we don’t have it.” What effect did that have? Anybody can go watch that thing! So I am a little bit at odds with the way that these conspiracy theories spread, many of which are anti-vaccine things. We give literally tens of billions for vaccines to save lives, then people turn around saying, “No, we’re trying to make money and we’re trying to end lives.” That’s kind of a wild inversion of what our values are and what our track record is.


As you are the technology adviser to Microsoft, I think you can look forward in a few months to fighting this battle yourself when the company owns TikTok.

Yeah, my critique of dance moves will be fantastically value-added for them.

TikTok is more than just dance moves. There’s political content.

I know, I’m kidding. You’re right. Who knows what’s going to happen with that deal. But yes, it’s a poison chalice. Being big in the social media business is no simple game, like the encryption issue.

So are you wary of Microsoft getting into that game?

I mean, this may sound self-serving, but I think that the game being more competitive is probably a good thing. But having Trump kill off the only competitor, it’s pretty bizarre.

Do you understand what rule or regulation the president is invoking to demand that TikTok sell to an American company and then take a cut of the sales price?

I agree that the principle this is proceeding on is singularly strange. The cut thing, that’s doubly strange. Anyway, Microsoft will have to deal with all of that.

You have been very cautious in staying away from the political arena. But the issues you care most about—public health and climate change—have had huge setbacks because of who leads the country. Are you reconsidering spending on political change?

The foundation needs to be bipartisan. Whoever gets elected in the US, we are going to want to work with them. We do care a lot about competence, and hopefully voters will take into account how this administration has done at picking competent people and should that weigh into their vote. But there’s going to be plenty of money on both sides of this election, and I don’t like diverting money to political things. Even though the pandemic has made it pretty clear we should expect better, there’s other people who will put their time into the campaigning piece.

Did you have deja vu last week when those tech CEOs testified remotely before Congress?

Yeah. I had a whole committee attacking me, and they had four at a time. I mean, Jesus Christ, what’s the Congress coming to? If you want to give a guy a hard time, give him at least a whole day that he has to sit there on the hot seat by himself! And they didn’t even have to get on a plane!

Do you think the antitrust concerns are the same as when Microsoft was under the gun, or has the landscape changed?

Even without antitrust rules, tech does tend to be quite competitive. And even though in the short run you don’t think it’s going to dislodge people, there will be changes that will keep bringing prices down. But there are a lot of valid issues, and if you’re super-successful, the pleasure of going in front of the Congress comes with the territory.

How has your life changed living under the pandemic?

I used to travel a lot. If I wanted to see President Macron and say, “Hey, give money for the coronavirus vaccine,” to really show I’m serious I’d go there. Now, we had a GAVI replenishment summit where I just sat at home and got up a little early. I am able to get a lot done. My kids are home more than I thought they would be, which at least for me is a nice thing. I’m microwaving more food. I’m getting fairly good at it. The pandemic sadly is less painful for those who were better off before the pandemic.

Do you have a go-to mask you use?

No, I use a pretty ugly normal mask. I change it every day. Maybe I should get a designer mask or something creative, but I just use this surgical-looking mask.

Comment: Gates calls social media a poisoned chalice because it was intended to be a disinformation highway. Covid 19 is very useful to Gates’ class. Philanthropist he is not. His money-grabbing organisation has exploited Chinese slave labour for years. Cheaply manufactured computers have been crucial to the development of social media, making Gates super rich. He speaks for powerful and wealthy vested interests. As for the masks, there is no evidence that they or lockdowns work.

The impact of Covid 19 has been on the old, the already sick and, most importantly, BAME people – remember the mantra ‘Black Lives Matter.’ All white men are equally privileged and have no right to an opinion unless they are part of the devious, manipulative controlling elite. As for herd immunity or a vaccine, for that elite these dreams must be beyond the horizon. That is why they immediately rubbish the Russian vaccine. The elite have us right where they want us. Our fears and preoccupations must be BAME, domestic violence, sex crimes, feminist demands and fighting racists – our fears focused on Russia and China. That elite faked the figures for the first wave and are determined to find or fake evidence of a second one. Robert Cook

Forget Everything You Think You Know About Time

Is a linear representation of time accurate? This physicist says no.

Nautilus

  • Brian Gallagher

In April 2018, in the famous Faraday Theatre at the Royal Institution in London, Carlo Rovelli gave an hour-long lecture on the nature of time. A red thread spanned the stage, a metaphor for the Italian theoretical physicist’s subject. “Time is a long line,” he said. To the left lies the past—the dinosaurs, the big bang—and to the right, the future—the unknown. “We’re sort of here,” he said, hanging a carabiner on it, as a marker for the present.

Then he flipped the script. “I’m going to tell you that time is not like that,” he explained.

Rovelli went on to challenge our common-sense notion of time, starting with the idea that it ticks everywhere at a uniform rate. In fact, clocks tick slower when they are in a stronger gravitational field. When you move nearby clocks showing the same time into different fields—one in space, the other on Earth, say—and then bring them back together again, they will show different times. “It’s a fact,” Rovelli said, and it means “your head is older than your feet.” Also a non-starter is any shared sense of “now.” We don’t really share the present moment with anyone. “If I look at you, I see you now—well, but not really, because light takes time to come from you to me,” he said. “So I see you sort of a little bit in the past.” As a result, “now” means nothing beyond the temporal bubble “in which we can disregard the time it takes light to go back and forth.”
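The first effect Rovelli mentions, clocks ticking at different rates in different gravitational fields, is gravitational time dilation. As a gloss (mine, not part of the lecture), the standard weak-field formula says a clock at distance r from a mass M runs slow relative to a distant clock by roughly

\[
\frac{d\tau}{dt} \;\approx\; 1 - \frac{GM}{r c^{2}},
\]

so a clock nearer the ground (smaller r, stronger field) ticks slightly slower than one held higher up, which is why your head is very slightly older than your feet.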

Rovelli turned next to the idea that time flows in only one direction, from past to future. Unlike general relativity, quantum mechanics, and particle physics, thermodynamics embeds a direction of time. Its second law states that the total entropy, or disorder, in an isolated system never decreases over time. Yet this doesn’t mean that our conventional notion of time is on any firmer grounding, Rovelli said. Entropy, or disorder, is subjective: “Order is in the eye of the person who looks.” In other words the distinction between past and future, the growth of entropy over time, depends on a macroscopic effect—“the way we have described the system, which in turn depends on how we interact with the system,” he said.

“A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”

Getting to the last common notion of time, Rovelli became a little more cautious. His scientific argument that time is discrete—that it is not seamless, but has quanta—is less solid. “Why? Because I’m still doing it! It’s not yet in the textbook.” The equations for quantum gravity he’s written down suggest three things, he said, about what “clocks measure.” First, there’s a minimal amount of time—its units are not infinitely small. Second, since a clock, like every object, is quantum, it can be in a superposition of time readings. “You cannot say between this event and this event is a certain amount of time, because, as always in quantum mechanics, there could be a probability distribution of time passing.” Which means that, third, in quantum gravity, you can have “a local notion of a sequence of events, which is a minimal notion of time, and that’s the only thing that remains,” Rovelli said. Events aren’t ordered in a line “but are confused and connected” to each other without “a preferred time variable—anything can work as a variable.”

Even the notion that the present is fleeting doesn’t hold up to scrutiny. It is certainly true that the present is “horrendously short” in classical, Newtonian physics. “But that’s not the way the world is designed,” Rovelli explained. Light traces a cone, or consecutively larger circles, in four-dimensional spacetime like ripples on a pond that grow larger as they travel. No information can cross the bounds of the light cone because that would require information to travel faster than the speed of light.

“In spacetime, the past is whatever is inside our past light-cone,” Rovelli said, gesturing with his hands the shape of an upside down cone. “So it’s whatever can affect us. The future is this opposite thing,” he went on, now gesturing an upright cone. “So in between the past and the future, there isn’t just a single line—there’s a huge amount of time.” Rovelli asked an audience member to imagine that he lived in Andromeda, which is two and a half million light years away. “A million years of your life would be neither past nor future for me. So the present is not thin; it’s horrendously thick.”
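Rovelli's "horrendously thick" present can be put in numbers. Relative to your here-and-now, any event at distance d whose time offset is smaller than d/c lies outside both your past and your future light cones, so the spacelike window at that distance spans about 2d/c. The sketch below simply applies that relation, taking the commonly quoted 2.5 million light-year distance to Andromeda as an assumed input rather than a figure from the lecture.

# Minimal sketch: width of the "neither past nor future" window for events
# at a given distance, in years (with c = 1 light-year per year).
d_light_years = 2.5e6                 # assumed distance to Andromeda
window_years = 2 * d_light_years      # events within +/- d/c of "now"

print(f"Spacelike window at Andromeda: {window_years:.1e} years")
# About 5 million years, so "a million years of your life" fits comfortably inside.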

Listening to Rovelli’s description, I was reminded of a phrase from his book, The Order of Time: Studying time “is like holding a snowflake in your hands: gradually, as you study it, it melts between your fingers and vanishes.” 

Brian Gallagher is the editor of Facts So Romantic, the Nautilus blog. Follow him on Twitter @BSGallagher.


Big Bounce Simulations Challenge the Big Bang

Detailed computer simulations have found that a cosmic contraction can generate features of the universe that we observe today.

In a cyclic universe, periods of expansion alternate with periods of contraction. The universe has no beginning and no end.


Charlie Wood

Contributing Writer

August 4, 2020

Cyclic Universe

The standard story of the birth of the cosmos goes something like this: Nearly 14 billion years ago, a tremendous amount of energy materialized as if from nowhere.

In a brief moment of rapid expansion, that burst of energy inflated the cosmos like a balloon. The expansion straightened out any large-scale curvature, leading to a geometry that we now describe as flat. Matter also thoroughly mixed together, so that now the cosmos appears largely (though not perfectly) featureless. Here and there, clumps of particles have created galaxies and stars, but these are just minuscule specks on an otherwise unblemished cosmic canvas.

That theory, which textbooks call inflation, matches all observations to date and is preferred by most cosmologists. But it has conceptual implications that some find disturbing. In most regions of space-time, the rapid expansion would never stop. As a consequence, inflation can’t help but produce a multiverse — a technicolor existence with an infinite variety of pocket universes, one of which we call home. To critics, inflation predicts everything, which means it ultimately predicts nothing. “Inflation doesn’t work as it was intended to work,” said Paul Steinhardt, an architect of inflation who has become one of its most prominent critics.

In recent years, Steinhardt and others have been developing a different story of how our universe came to be. They have revived the idea of a cyclical universe: one that periodically grows and contracts. They hope to replicate the universe that we see — flat and smooth — without the baggage that comes with a bang.

In ‘A Brief History of Time’ Stephen Hawking suggests that all the matter in the universe originated from a pinhead-sized store of infinitely dense matter. That seemed unlikely to me. The idea of an ever expanding universe is based on that concept. Robert Cook.

Abstractions​ navigates promising ideas in science and mathematics. Journey with us and join the conversation.

To that end, Steinhardt and his collaborators recently teamed up with researchers who specialize in computational models of gravity. They analyzed how a collapsing universe would change its own structure, and they ultimately discovered that contraction can beat inflation at its own game. No matter how bizarre and twisted the universe looked before it contracted, the collapse would efficiently erase a wide range of primordial wrinkles.

“It’s very important, what they claim they’ve done,” said Leonardo Senatore, a cosmologist at Stanford University who has analyzed inflation using a similar approach. There are aspects of the work he hasn’t yet had a chance to investigate, he said, but at first glance “it looks like they’ve done it.”

Squeezing the View

Over the last year and a half, a fresh view of the cyclic, or “ekpyrotic,” universe has emerged from a collaboration between Steinhardt, Anna Ijjas, a cosmologist at the Max Planck Institute for Gravitational Physics in Germany, and others — one that achieves renewal without collapse.

When it comes to visualizing expansion and contraction, people often focus on a balloonlike universe whose change in size is described by a “scale factor.” But a second measure — the Hubble radius, which is the greatest distance we can see — gets short shrift. The equations of general relativity let them evolve independently, and, crucially, you can flatten the universe by changing either.

Picture an ant on a balloon. Inflation is like blowing up the balloon. It puts the onus of smoothing and flattening primarily on the swelling cosmos. In the cyclic universe, however, the smoothing happens during a period of contraction. During this epoch, the balloon deflates modestly, but the real work is done by a drastically shrinking horizon. It’s as if the ant views everything through an increasingly powerful magnifying glass. The distance it can see shrinks, and thus its world grows more and more featureless.
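A toy calculation shows why a shrinking Hubble radius, rather than a deflating balloon, does the work. In a flat FRW background driven by a field with equation of state w, the scale factor behaves as a ~ |t|^p with p = 2/(3(1+w)), and the Hubble radius |1/H| equals |t|/p. For the large w of slow contraction, p is tiny, so a barely changes while the Hubble radius tracks |t| all the way down. The Python sketch below illustrates that textbook relation only; the chosen w and time range are assumptions, and it is not a stand-in for the collaboration's full numerical-relativity simulations.

# Minimal sketch: toy power-law FRW contraction, a ~ |t|^p, Hubble radius ~ |t|/p.
def contraction_factors(w, t_start=1.0, t_end=1e-6):
    p = 2.0 / (3.0 * (1.0 + w))            # power-law exponent
    scale_shrink = (t_end / t_start) ** p  # change in the scale factor a(t)
    hubble_shrink = t_end / t_start        # change in the Hubble radius |t|/p
    return scale_shrink, hubble_shrink

# Assumed stiff field, w = 15, to mimic "slow" (ekpyrotic-style) contraction.
a_shrink, rh_shrink = contraction_factors(w=15.0)
print(f"scale factor shrinks to {a_shrink:.2f} of its initial size")    # about 0.56
print(f"Hubble radius shrinks to {rh_shrink:.0e} of its initial size")  # 1e-06

Under these assumed numbers the scale factor deflates only modestly while the horizon collapses by six orders of magnitude, which is the "increasingly powerful magnifying glass" in the ant analogy.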


Steinhardt and company imagine a universe that expands for perhaps a trillion years, driven by the energy of an omnipresent (and hypothetical) field, whose behavior we currently attribute to dark energy. When this energy field eventually grows sparse, the cosmos starts to gently deflate. Over billions of years a contracting scale factor brings everything a bit closer, but not all the way down to a point. The dramatic change comes from the Hubble radius, which rushes in and eventually becomes microscopic. The universe’s contraction recharges the energy field, which heats up the cosmos and vaporizes its atoms. A bounce ensues, and the cycle starts anew.

In the bounce model, the microscopic Hubble radius ensures smoothness and flatness. And whereas inflation blows up many initial imperfections into giant plots of multiverse real estate, slow contraction squeezes them essentially out of existence. We are left with a cosmos that has no beginning, no end, no singularity at the Big Bang, and no multiverse.

From Any Cosmos to Ours

One challenge for both inflation and bounce cosmologies is to show that their respective energy fields create the right universe no matter how they get started. “Our philosophy is that there should be no philosophy,” Ijjas said. “You know it works when you don’t have to ask under what condition it works.”

She and Steinhardt criticize inflation for doing its job only in special cases, such as when its energy field forms without notable features and with little motion. Theorists have explored these situations most thoroughly, in part because they are the only examples tractable with chalkboard mathematics. In recent computer simulations, which Ijjas and Steinhardt describe in a pair of preprints posted online in June, the team stress-tested their slow-contraction model with a range of baby universes too wild for pen-and-paper analysis.

Adapting code developed by Frans Pretorius, a theoretical physicist at Princeton University who specializes in computational models of general relativity, the collaboration explored twisted and lumpy fields, fields moving in the wrong direction, even fields born with halves racing in opposing directions. In nearly every case, contraction swiftly produced a universe as boring as ours.

“You let it go and — bam! In a few cosmic moments of slow contraction it looks as smooth as silk,” Steinhardt said.

Katy Clough, a cosmologist at the University of Oxford who also specializes in numerical solutions of general relativity, called the new simulations “very comprehensive.” But she also noted that computational advances have only recently made this kind of analysis possible, so the full range of conditions that inflation can handle remains uncharted.

“It’s been semi-covered, but it needs a lot more work,” she said.

While interest in Ijjas and Steinhardt’s model varies, most cosmologists agree that inflation remains the paradigm to beat. “[Slow contraction] is not an equal contender at this point,” said Gregory Gabadadze, a cosmologist at New York University.

The collaboration will next flesh out the bounce itself — a more complex stage that requires novel interactions to push everything apart again. Ijjas already has one bounce theory that upgrades general relativity with a new interaction between matter and space-time, and she suspects that other mechanisms exist too. She plans to put her model on the computer soon to understand its behavior in detail.

Related:

Physicists Debate Hawking’s Idea That the Universe Had No Beginning
How the Universe Got Its Bounce Back
A Fight for the Soul of Science

The group hopes that after gluing the contraction and expansion stages together, they’ll identify unique features of a bouncing universe that astronomers might spot.

The collaboration has not worked out every detail of a cyclic cosmos with no bang and no crunch, much less shown that we live in one. But Steinhardt now feels optimistic that the model will soon offer a viable alternative to the multiverse. “The roadblocks I was most worried about have been surpassed,” he said. “I’m not kept up at night anymore.”

Editor’s note: Some of this research was funded in part by the Simons Foundation, which also funds this editorially independent magazine. Simons Foundation funding decisions play no role in our coverage.


This Scientist Believes Ageing Is Optional August 10th 2020

In his book, “Lifespan,” celebrated scientist David Sinclair lays out exactly why we age—and why he thinks we don’t have to.

Outside

  • Graham Averill

If scientist David Sinclair is correct about aging, we might not have to age as quickly as we do. Photo by tomazl / iStock.

The oldest-known living person is Kane Tanaka, a Japanese woman who is a mind-boggling 116 years old. But if you ask David Sinclair, he’d argue that 116 is just middle age. At least, he thinks it should be. Sinclair is one of the leading scientists in the field of aging, and he believes that growing old isn’t a natural part of life—it’s a disease that needs a cure.

Sounds crazy, right? Sinclair, a Harvard professor who made Time’s list of the 100 most influential people in the world in 2014, will acquiesce that everyone has to die at some point, but he argues that we can double our life expectancy and live healthy, active lives right up until the end.

His 2019 book, Lifespan: Why We Age and Why We Don’t Have To ($28, Atria Books), out this fall, details the cutting-edge science that’s taking place in the field of longevity right now. The quick takeaway from this not-so-quick read: scientists are tossing out previous assumptions about aging, and they’ve discovered several tools that you can employ right now to slow down, and in some cases, reverse the clock.

In the nineties, as a postdoc in an MIT lab, Sinclair caused a stir in the field when he discovered the mechanism that leads to aging in yeast, which offered some insight into why humans age. Using his work with yeast as a launching point, Sinclair and his lab colleagues have focused on identifying the mechanism for aging in humans and published a study in 2013 asserting that the malfunction of a family of proteins called sirtuins is the single cause of aging. Sirtuins are responsible for repairing DNA damage and controlling overall cellular health by keeping cells on task. In other words, sirtuins tell kidney cells to act like kidney cells. If they get overwhelmed, cells start to misbehave, and we see the symptoms of aging, like organ failure or wrinkles. All of the genetic info in our cells is still there as we get older, but our body loses the ability to interpret it. This is because our body starts to run low on NAD, a molecule that activates the sirtuins: we have half as much NAD in our body when we’re 50 as we do at 20. Without it, the sirtuins can’t do their job, and the cells in our body forget what they’re supposed to be doing.

Sinclair splits his time between the U.S. and Australia, running labs at Harvard Medical School and at the University of New South Wales. All of his research seeks to prove that aging is a problem we can solve—and figure out how to stop. He argues that we can slow down the aging process, and in some cases even reverse it, by putting our body through “healthy stressors” that increase NAD levels and promote sirtuin activity. The role of sirtuins in aging is now fairly well accepted, but the idea that we can reactivate them (and how best to do so) is still being worked out.

Getting cold, working out hard, and going hungry every once in a while all engage what Sinclair calls our body’s survival circuit, wherein sirtuins tell cells to boost their defenses in order to keep the organism (you) alive. While Sinclair’s survival-circuit theory has yet to be proven in a trial setting, there’s plenty of research to suggest that exercise, cold exposure, and calorie reduction all help slow down the side effects of aging and stave off diseases associated with getting older. Fasting, in particular, has been well supported by other research: in various studies, both mice and yeast that were fed restricted diets lived much longer than their well-fed cohorts. A two-year-long human experiment in the 1990s found that participants on a restricted diet that often left them hungry had decreased blood pressure, blood-sugar levels, and cholesterol levels. Subsequent human studies found that decreasing calories by 12 percent slowed down biological aging based on changes in blood biomarkers.

Longevity science is a bit like the Wild West: the rules aren’t quite established. The research is exciting, but human clinical trials haven’t found anything definitive just yet. Throughout the field, there’s an uncomfortable relationship between privately owned companies, researchers, and even research institutes like Harvard: Sinclair points to a biomarker test by a company called InsideTracker as proof of his own reduced “biological age,” but he is also an investor in that company. He is listed as an inventor on a patent for a NAD booster that’s on the market right now, too.

While the dust settles, the best advice for the curious to take from Lifespan is to experiment with habits that are easy, free, and harmless—like taking a brisk, cold walk and eating a lighter diet. With cold exposure, Sinclair explains, moderation is the key. He believes that you can reap benefits by simply taking a walk in the winter without a jacket. He doesn’t prescribe an exact fasting regimen that works best, but he doesn’t recommend anything extreme—simply missing a meal here and there, like skipping breakfast and having a late lunch.

How the Pandemic Defeated America

A virus has brought the world’s most powerful country to its knees.



Updated at 1:12 p.m. ET on August 4, 2020.

How did it come to this? A virus a thousand times smaller than a dust mote has humbled and humiliated the planet’s most powerful nation. America has failed to protect its people, leaving them with illness and financial ruin. It has lost its status as a global leader. It has careened between inaction and ineptitude. The breadth and magnitude of its errors are difficult, in the moment, to truly fathom.

In the first half of 2020, SARSCoV2—the new coronavirus behind the disease COVID19—infected 10 million people around the world and killed about half a million. But few countries have been as severely hit as the United States, which has just 4 percent of the world’s population but a quarter of its confirmed COVID19 cases and deaths. These numbers are estimates. The actual toll, though undoubtedly higher, is unknown, because the richest country in the world still lacks sufficient testing to accurately count its sick citizens.

Despite ample warning, the U.S. squandered every possible opportunity to control the coronavirus. And despite its considerable advantages—immense resources, biomedical might, scientific expertise—it floundered. While countries as different as South Korea, Thailand, Iceland, Slovakia, and Australia acted decisively to bend the curve of infections downward, the U.S. achieved merely a plateau in the spring, which changed to an appalling upward slope in the summer. “The U.S. fundamentally failed in ways that were worse than I ever could have imagined,” Julia Marcus, an infectious-disease epidemiologist at Harvard Medical School, told me.

Since the pandemic began, I have spoken with more than 100 experts in a variety of fields. I’ve learned that almost everything that went wrong with America’s response to the pandemic was predictable and preventable. A sluggish response by a government denuded of expertise allowed the coronavirus to gain a foothold. Chronic underfunding of public health neutered the nation’s ability to prevent the pathogen’s spread. A bloated, inefficient health-care system left hospitals ill-prepared for the ensuing wave of sickness. Racist policies that have endured since the days of colonization and slavery left Indigenous and Black Americans especially vulnerable to COVID19. The decades-long process of shredding the nation’s social safety net forced millions of essential workers in low-paying jobs to risk their life for their livelihood. The same social-media platforms that sowed partisanship and misinformation during the 2014 Ebola outbreak in Africa and the 2016 U.S. election became vectors for conspiracy theories during the 2020 pandemic.

The U.S. has little excuse for its inattention. In recent decades, epidemics of SARS, MERS, Ebola, H1N1 flu, Zika, and monkeypox showed the havoc that new and reemergent pathogens could wreak. Health experts, business leaders, and even middle schoolers ran simulated exercises to game out the spread of new diseases. In 2018, I wrote an article for The Atlantic arguing that the U.S. was not ready for a pandemic, and sounded warnings about the fragility of the nation’s health-care system and the slow process of creating a vaccine. But the COVID19 debacle has also touched—and implicated—nearly every other facet of American society: its shortsighted leadership, its disregard for expertise, its racial inequities, its social-media culture, and its fealty to a dangerous strain of individualism.

SARSCoV2 is something of an anti-Goldilocks virus: just bad enough in every way. Its symptoms can be severe enough to kill millions but are often mild enough to allow infections to move undetected through a population. It spreads quickly enough to overload hospitals, but slowly enough that statistics don’t spike until too late. These traits made the virus harder to control, but they also softened the pandemic’s punch. SARSCoV2 is neither as lethal as some other coronaviruses, such as SARS and MERS, nor as contagious as measles. Deadlier pathogens almost certainly exist. Wild animals harbor an estimated 40,000 unknown viruses, a quarter of which could potentially jump into humans. How will the U.S. fare when “we can’t even deal with a starter pandemic?,” Zeynep Tufekci, a sociologist at the University of North Carolina and an Atlantic contributing writer, asked me.

Despite its epochal effects, COVID19 is merely a harbinger of worse plagues to come. The U.S. cannot prepare for these inevitable crises if it returns to normal, as many of its people ache to do. Normal led to this. Normal was a world ever more prone to a pandemic but ever less ready for one. To avert another catastrophe, the U.S. needs to grapple with all the ways normal failed us. It needs a full accounting of every recent misstep and foundational sin, every unattended weakness and unheeded warning, every festering wound and reopened scar.

A pandemic can be prevented in two ways: Stop an infection from ever arising, or stop an infection from becoming thousands more. The first way is likely impossible. There are simply too many viruses and too many animals that harbor them. Bats alone could host thousands of unknown coronaviruses; in some Chinese caves, one out of every 20 bats is infected. Many people live near these caves, shelter in them, or collect guano from them for fertilizer. Thousands of bats also fly over these people’s villages and roost in their homes, creating opportunities for the bats’ viral stowaways to spill over into human hosts. Based on antibody testing in rural parts of China, Peter Daszak of EcoHealth Alliance, a nonprofit that studies emerging diseases, estimates that such viruses infect a substantial number of people every year. “Most infected people don’t know about it, and most of the viruses aren’t transmissible,” Daszak says. But it takes just one transmissible virus to start a pandemic.

Sometime in late 2019, the wrong virus left a bat and ended up, perhaps via an intermediate host, in a human—and another, and another. Eventually it found its way to the Huanan seafood market, and jumped into dozens of new hosts in an explosive super-spreading event. The COVID19 pandemic had begun.

“There is no way to get spillover of everything to zero,” Colin Carlson, an ecologist at Georgetown University, told me. Many conservationists jump on epidemics as opportunities to ban the wildlife trade or the eating of “bush meat,” an exoticized term for “game,” but few diseases have emerged through either route. Carlson said the biggest factors behind spillovers are land-use change and climate change, both of which are hard to control. Our species has relentlessly expanded into previously wild spaces. Through intensive agriculture, habitat destruction, and rising temperatures, we have uprooted the planet’s animals, forcing them into new and narrower ranges that are on our own doorsteps. Humanity has squeezed the world’s wildlife in a crushing grip—and viruses have come bursting out.

Curtailing those viruses after they spill over is more feasible, but requires knowledge, transparency, and decisiveness that were lacking in 2020. Much about coronaviruses is still unknown. There are no surveillance networks for detecting them as there are for influenza. There are no approved treatments or vaccines. Coronaviruses were formerly a niche family, of mainly veterinary importance. Four decades ago, just 60 or so scientists attended the first international meeting on coronaviruses. Their ranks swelled after SARS swept the world in 2003, but quickly dwindled as a spike in funding vanished. The same thing happened after MERS emerged in 2012. This year, the world’s coronavirus experts—and there still aren’t many—had to postpone their triennial conference in the Netherlands because SARSCoV2 made flying too risky.

In the age of cheap air travel, an outbreak that begins on one continent can easily reach the others. SARS already demonstrated that in 2003, and more than twice as many people now travel by plane every year. To avert a pandemic, affected nations must alert their neighbors quickly. In 2003, China covered up the early spread of SARS, allowing the new disease to gain a foothold, and in 2020, history repeated itself. The Chinese government downplayed the possibility that SARSCoV2 was spreading among humans, and only confirmed as much on January 20, after millions had traveled around the country for the lunar new year. Doctors who tried to raise the alarm were censured and threatened. One, Li Wenliang, later died of COVID19. The World Health Organization initially parroted China’s line and did not declare a public-health emergency of international concern until January 30. By then, an estimated 10,000 people in 20 countries had been infected, and the virus was spreading fast.

The United States has correctly castigated China for its duplicity and the WHO for its laxity—but the U.S. has also failed the international community. Under President Donald Trump, the U.S. has withdrawn from several international partnerships and antagonized its allies. It has a seat on the WHO’s executive board, but left that position empty for more than two years, only filling it this May, when the pandemic was in full swing. Since 2017, Trump has pulled more than 30 staffers out of the Centers for Disease Control and Prevention’s office in China, who could have warned about the spreading coronavirus. Last July, he defunded an American epidemiologist embedded within China’s CDC. America First was America oblivious.

Even after warnings reached the U.S., they fell on the wrong ears. Since before his election, Trump has cavalierly dismissed expertise and evidence. He filled his administration with inexperienced newcomers, while depicting career civil servants as part of a “deep state.” In 2018, he dismantled an office that had been assembled specifically to prepare for nascent pandemics. American intelligence agencies warned about the coronavirus threat in January, but Trump habitually disregards intelligence briefings. The secretary of health and human services, Alex Azar, offered similar counsel, and was twice ignored.

Being prepared means being ready to spring into action, “so that when something like this happens, you’re moving quickly,” Ronald Klain, who coordinated the U.S. response to the West African Ebola outbreak in 2014, told me. “By early February, we should have triggered a series of actions, precisely zero of which were taken.” Trump could have spent those crucial early weeks mass-producing tests to detect the virus, asking companies to manufacture protective equipment and ventilators, and otherwise steeling the nation for the worst. Instead, he focused on the border. On January 31, Trump announced that the U.S. would bar entry to foreigners who had recently been in China, and urged Americans to avoid going there.


Travel bans make intuitive sense, because travel obviously enables the spread of a virus. But in practice, travel bans are woefully inefficient at restricting either travel or viruses. They prompt people to seek indirect routes via third-party countries, or to deliberately hide their symptoms. They are often porous: Trump’s included numerous exceptions, and allowed tens of thousands of people to enter from China. Ironically, they create travel: When Trump later announced a ban on flights from continental Europe, a surge of travelers packed America’s airports in a rush to beat the incoming restrictions. Travel bans may sometimes work for remote island nations, but in general they can only delay the spread of an epidemic—not stop it. And they can create a harmful false confidence, so countries “rely on bans to the exclusion of the things they actually need to do—testing, tracing, building up the health system,” says Thomas Bollyky, a global-health expert at the Council on Foreign Relations. “That sounds an awful lot like what happened in the U.S.”

This was predictable. A president who is fixated on an ineffectual border wall, and has portrayed asylum seekers as vectors of disease, was always going to reach for travel bans as a first resort. And Americans who bought into his rhetoric of xenophobia and isolationism were going to be especially susceptible to thinking that simple entry controls were a panacea.

And so the U.S. wasted its best chance of restraining COVID19. Although the disease first arrived in the U.S. in mid-January, genetic evidence shows that the specific viruses that triggered the first big outbreaks, in Washington State, didn’t land until mid-February. The country could have used that time to prepare. Instead, Trump, who had spent his entire presidency learning that he could say whatever he wanted without consequence, assured Americans that “the coronavirus is very much under control,” and “like a miracle, it will disappear.” With impunity, Trump lied. With impunity, the virus spread.

On February 26, Trump asserted that cases were “going to be down to close to zero.” Over the next two months, at least 1 million Americans were infected.

As the coronavirus established itself in the U.S., it found a nation through which it could spread easily, without being detected. For years, Pardis Sabeti, a virologist at the Broad Institute of Harvard and MIT, has been trying to create a surveillance network that would allow hospitals in every major U.S. city to quickly track new viruses through genetic sequencing. Had that network existed, once Chinese scientists published SARSCoV2’s genome on January 11, every American hospital would have been able to develop its own diagnostic test in preparation for the virus’s arrival. “I spent a lot of time trying to convince many funders to fund it,” Sabeti told me. “I never got anywhere.”

The CDC developed and distributed its own diagnostic tests in late January. These proved useless because of a faulty chemical component. Tests were in such short supply, and the criteria for getting them were so laughably stringent, that by the end of February, tens of thousands of Americans had likely been infected but only hundreds had been tested. The official data were so clearly wrong that The Atlantic developed its own volunteer-led initiative—the COVID Tracking Project—to count cases.

Diagnostic tests are easy to make, so the U.S. failing to create one seemed inconceivable. Worse, it had no Plan B. Private labs were strangled by FDA bureaucracy. Meanwhile, Sabeti’s lab developed a diagnostic test in mid-January and sent it to colleagues in Nigeria, Sierra Leone, and Senegal. “We had working diagnostics in those countries well before we did in any U.S. states,” she told me.

It’s hard to overstate how thoroughly the testing debacle incapacitated the U.S. People with debilitating symptoms couldn’t find out what was wrong with them. Health officials couldn’t cut off chains of transmission by identifying people who were sick and asking them to isolate themselves.

Read: How the coronavirus became an American catastrophe

Water running along a pavement will readily seep into every crack; so, too, did the unchecked coronavirus seep into every fault line in the modern world. Consider our buildings. In response to the global energy crisis of the 1970s, architects made structures more energy-efficient by sealing them off from outdoor air, reducing ventilation rates. Pollutants and pathogens built up indoors, “ushering in the era of ‘sick buildings,’ ” says Joseph Allen, who studies environmental health at Harvard’s T. H. Chan School of Public Health. Energy efficiency is a pillar of modern climate policy, but there are ways to achieve it without sacrificing well-being. “We lost our way over the years and stopped designing buildings for people,” Allen says.

The indoor spaces in which Americans spend 87 percent of their time became staging grounds for super-spreading events. One study showed that the odds of catching the virus from an infected person are roughly 19 times higher indoors than in open air. Shielded from the elements and among crowds clustered in prolonged proximity, the coronavirus ran rampant in the conference rooms of a Boston hotel, the cabins of the Diamond Princess cruise ship, and a church hall in Washington State where a choir practiced for just a few hours.

The hardest-hit buildings were those that had been jammed with people for decades: prisons. Between harsher punishments doled out in the War on Drugs and a tough-on-crime mindset that prizes retribution over rehabilitation, America’s incarcerated population has swelled sevenfold since the 1970s, to about 2.3 million. The U.S. imprisons five to 18 times more people per capita than other Western democracies. Many American prisons are packed beyond capacity, making social distancing impossible. Soap is often scarce. Inevitably, the coronavirus ran amok. By June, two American prisons each accounted for more cases than all of New Zealand. One, Marion Correctional Institution, in Ohio, had more than 2,000 cases among inmates despite having a capacity of 1,500. 


Other densely packed facilities were also besieged. America’s nursing homes and long-term-care facilities house less than 1 percent of its people, but as of mid-June, they accounted for 40 percent of its coronavirus deaths. More than 50,000 residents and staff have died. At least 250,000 more have been infected. These grim figures are a reflection not just of the greater harms that COVID19 inflicts upon elderly physiology, but also of the care the elderly receive. Before the pandemic, three in four nursing homes were understaffed, and four in five had recently been cited for failures in infection control. The Trump administration’s policies have exacerbated the problem by reducing the influx of immigrants, who make up a quarter of long-term caregivers.

Read: Another coronavirus nursing-home disaster is coming

Even though a Seattle nursing home was one of the first COVID19 hot spots in the U.S., similar facilities weren’t provided with tests and protective equipment. Rather than girding these facilities against the pandemic, the Department of Health and Human Services paused nursing-home inspections in March, passing the buck to the states. Some nursing homes avoided the virus because their owners immediately stopped visitations, or paid caregivers to live on-site. But in others, staff stopped working, scared about infecting their charges or becoming infected themselves. In some cases, residents had to be evacuated because no one showed up to care for them.

America’s neglect of nursing homes and prisons, its sick buildings, and its botched deployment of tests are all indicative of its problematic attitude toward health: “Get hospitals ready and wait for sick people to show,” as Sheila Davis, the CEO of the nonprofit Partners in Health, puts it. “Especially in the beginning, we catered our entire [COVID19] response to the 20 percent of people who required hospitalization, rather than preventing transmission in the community.” The latter is the job of the public-health system, which prevents sickness in populations instead of merely treating it in individuals. That system pairs uneasily with a national temperament that views health as a matter of personal responsibility rather than a collective good.

At the end of the 20th century, public-health improvements meant that Americans were living an average of 30 years longer than they were at the start of it. Maternal mortality had fallen by 99 percent; infant mortality by 90 percent. Fortified foods all but eliminated rickets and goiters. Vaccines eradicated smallpox and polio, and brought measles, diphtheria, and rubella to heel. These measures, coupled with antibiotics and better sanitation, curbed infectious diseases to such a degree that some scientists predicted they would soon pass into history. But instead, these achievements brought complacency. “As public health did its job, it became a target” of budget cuts, says Lori Freeman, the CEO of the National Association of County and City Health Officials.

Today, the U.S. spends just 2.5 percent of its gigantic health-care budget on public health. Underfunded health departments were already struggling to deal with opioid addiction, climbing obesity rates, contaminated water, and easily preventable diseases. Last year saw the most measles cases since 1992. In 2018, the U.S. had 115,000 cases of syphilis and 580,000 cases of gonorrhea—numbers not seen in almost three decades. It has 1.7 million cases of chlamydia, the highest number ever recorded.

Since the last recession, in 2009, chronically strapped local health departments have lost 55,000 jobs—a quarter of their workforce. When COVID19 arrived, the economic downturn forced overstretched departments to furlough more employees. When states needed battalions of public-health workers to find infected people and trace their contacts, they had to hire and train people from scratch. In May, Maryland Governor Larry Hogan asserted that his state would soon have enough people to trace 10,000 contacts every day. Last year, as Ebola tore through the Democratic Republic of Congo—a country with a quarter of Maryland’s wealth and an active war.

Ripping unimpeded through American communities, the coronavirus created thousands of sickly hosts that it then rode into America’s hospitals. It should have found facilities armed with state-of-the-art medical technologies, detailed pandemic plans, and ample supplies of protective equipment and life-saving medicines. Instead, it found a brittle system in danger of collapse.

Compared with the average wealthy nation, America spends nearly twice as much of its national wealth on health care, about a quarter of which is wasted on inefficient care, unnecessary treatments, and administrative chicanery. The U.S. gets little bang for its exorbitant buck. It has the lowest life-expectancy rate of comparable countries, the highest rates of chronic disease, and the fewest doctors per person. This profit-driven system has scant incentive to invest in spare beds, stockpiled supplies, peacetime drills, and layered contingency plans—the essence of pandemic preparedness. America’s hospitals have been pruned and stretched by market forces to run close to full capacity, with little ability to adapt in a crisis.

When hospitals do create pandemic plans, they tend to fight the last war. After 2014, several centers created specialized treatment units designed for Ebola—a highly lethal but not very contagious disease. These units were all but useless against a highly transmissible airborne virus like SARSCoV2. Nor were hospitals ready for an outbreak to drag on for months. Emergency plans assumed that staff could endure a few days of exhausting conditions, that supplies would hold, and that hard-hit centers could be supported by unaffected neighbors. “We’re designed for discrete disasters” like mass shootings, traffic pileups, and hurricanes, says Esther Choo, an emergency physician at Oregon Health and Science University. The COVID19 pandemic is not a discrete disaster. It is a 50-state catastrophe that will likely continue at least until a vaccine is ready.

Wherever the coronavirus arrived, hospitals reeled. Several states asked medical students to graduate early, reenlisted retired doctors, and deployed dermatologists to emergency departments. Doctors and nurses endured grueling shifts, their faces chapped and bloody when they finally doffed their protective equipment. Soon, that equipment—masks, respirators, gowns, gloves—started running out.

Millions of Americans have found themselves impoverished and disconnected from medical care.

In the middle of the greatest health and economic crises in generations, American hospitals operate on a just-in-time economy. They acquire the goods they need in the moment through labyrinthine supply chains that wrap around the world in tangled lines, from countries with cheap labor to richer nations like the U.S. The lines are invisible until they snap. About half of the world’s face masks, for example, are made in China, some of them in Hubei province. When that region became the pandemic epicenter, the mask supply shriveled just as global demand spiked. The Trump administration turned to a larder of medical supplies called the Strategic National Stockpile, only to find that the 100 million respirators and masks that had been dispersed during the 2009 flu pandemic were never replaced. Just 13 million respirators were left.

In April, four in five frontline nurses said they didn’t have enough protective equipment. Some solicited donations from the public, or navigated a morass of back-alley deals and internet scams. Others fashioned their own surgical masks from bandannas and gowns from garbage bags. The supply of nasopharyngeal swabs that are used in every diagnostic test also ran low, because one of the largest manufacturers is based in Lombardy, Italy—initially the COVID19 capital of Europe. About 40 percent of critical-care drugs, including antibiotics and painkillers, became scarce because they depend on manufacturing lines that begin in China and India. Once a vaccine is ready, there might not be enough vials to put it in, because of the long-running global shortage of medical-grade glass—literally, a bottle-neck bottleneck.

The federal government could have mitigated those problems by buying supplies at economies of scale and distributing them according to need. Instead, in March, Trump told America’s governors to “try getting it yourselves.” As usual, health care was a matter of capitalism and connections. In New York, rich hospitals bought their way out of their protective-equipment shortfall, while neighbors in poorer, more diverse parts of the city rationed their supplies.

While the president prevaricated, Americans acted. Businesses sent their employees home. People practiced social distancing, even before Trump finally declared a national emergency on March 13, and before governors and mayors subsequently issued formal stay-at-home orders, or closed schools, shops, and restaurants. A study showed that the U.S. could have averted 36,000 COVID19 deaths if leaders had enacted social-distancing measures just a week earlier. But better late than never: By collectively reducing the spread of the virus, America flattened the curve. Ventilators didn’t run out, as they had in parts of Italy. Hospitals had time to add extra beds.

Social distancing worked. But the indiscriminate lockdown was necessary only because America’s leaders wasted months of prep time. Deploying this blunt policy instrument came at enormous cost. Unemployment rose to 14.7 percent, the highest level since record-keeping began, in 1948. More than 26 million people lost their jobs, a catastrophe in a country that—uniquely and absurdly—ties health care to employment. Some COVID19 survivors have been hit with seven-figure medical bills. In the middle of the greatest health and economic crises in generations, millions of Americans have found themselves disconnected from medical care and impoverished. They join the millions who have always lived that way.

The coronavirus found, exploited, and widened every inequity that the U.S. had to offer. Elderly people, already pushed to the fringes of society, were treated as acceptable losses. Women were more likely to lose jobs than men, and also shouldered extra burdens of child care and domestic work, while facing rising rates of domestic violence. In half of the states, people with dementia and intellectual disabilities faced policies that threatened to deny them access to lifesaving ventilators. Thousands of people endured months of COVID19 symptoms that resembled those of chronic postviral illnesses, only to be told that their devastating symptoms were in their head. Latinos were three times as likely to be infected as white people. Asian Americans faced racist abuse. Far from being a “great equalizer,” the pandemic fell unevenly upon the U.S., taking advantage of injustices that had been brewing throughout the nation’s history.

Read: COVID-19 can last for several months

Of the 3.1 million Americans who still cannot afford health insurance in states where Medicaid has not been expanded, more than half are people of color, and 30 percent are Black.* This is no accident. In the decades after the Civil War, the white leaders of former slave states deliberately withheld health care from Black Americans, apportioning medicine more according to the logic of Jim Crow than Hippocrates. They built hospitals away from Black communities, segregated Black patients into separate wings, and blocked Black students from medical school. In the 20th century, they helped construct America’s system of private, employer-based insurance, which has kept many Black people from receiving adequate medical treatment. They fought every attempt to improve Black people’s access to health care, from the creation of Medicare and Medicaid in the ’60s to the passage of the Affordable Care Act in 2010.

A number of former slave states also have among the lowest investments in public health, the lowest quality of medical care, the highest proportions of Black citizens, and the greatest racial divides in health outcomes. As the COVID19 pandemic wore on, they were among the quickest to lift social-distancing restrictions and reexpose their citizens to the coronavirus. The harms of these moves were unduly foisted upon the poor and the Black.

As of early July, one in every 1,450 Black Americans had died from COVID19—a rate more than twice that of white Americans. That figure is both tragic and wholly expected given the mountain of medical disadvantages that Black people face. Compared with white people, they die three years younger. Three times as many Black mothers die during pregnancy. Black people have higher rates of chronic illnesses that predispose them to fatal cases of COVID19. When they go to hospitals, they’re less likely to be treated. The care they do receive tends to be poorer. Aware of these biases, Black people are hesitant to seek aid for COVID19 symptoms and then show up at hospitals in sicker states. “One of my patients said, ‘I don’t want to go to the hospital, because they’re not going to treat me well,’ ” says Uché Blackstock, an emergency physician and the founder of Advancing Health Equity, a nonprofit that fights bias and racism in health care. “Another whispered to me, ‘I’m so relieved you’re Black. I just want to make sure I’m listened to.’ ”

Rather than countering misinformation during the pandemic, trusted sources often made things worse.

Black people were both more worried about the pandemic and more likely to be infected by it. The dismantling of America’s social safety net left Black people with less income and higher unemployment. They make up a disproportionate share of the low-paid “essential workers” who were expected to staff grocery stores and warehouses, clean buildings, and deliver mail while the pandemic raged around them. Earning hourly wages without paid sick leave, they couldn’t afford to miss shifts even when symptomatic. They faced risky commutes on crowded public transportation while more privileged people teleworked from the safety of isolation. “There’s nothing about Blackness that makes you more prone to COVID,” says Nicolette Louissaint, the executive director of Healthcare Ready, a nonprofit that works to strengthen medical supply chains. Instead, existing inequities stack the odds in favor of the virus.

Native Americans were similarly vulnerable. A third of the people in the Navajo Nation can’t easily wash their hands, because they’ve been embroiled in long-running negotiations over the rights to the water on their own lands. Those with water must contend with runoff from uranium mines. Most live in cramped multigenerational homes, far from the few hospitals that service a 17-million-acre reservation. As of mid-May, the Navajo Nation had higher rates of COVID19 infections than any U.S. state.

Americans often misperceive historical inequities as personal failures. Stephen Huffman, a Republican state senator and doctor in Ohio, suggested that Black Americans might be more prone to COVID19 because they don’t wash their hands enough, a remark for which he later apologized. Republican Senator Bill Cassidy of Louisiana, also a physician, noted that Black people have higher rates of chronic disease, as if this were an answer in itself, and not a pattern that demanded further explanation.

Clear distribution of accurate information is among the most important defenses against an epidemic’s spread. And yet the largely unregulated, social-media-based communications infrastructure of the 21st century almost ensures that misinformation will proliferate fast. “In every outbreak throughout the existence of social media, from Zika to Ebola, conspiratorial communities immediately spread their content about how it’s all caused by some government or pharmaceutical company or Bill Gates,” says Renée DiResta of the Stanford Internet Observatory, who studies the flow of online information. When COVID19 arrived, “there was no doubt in my mind that it was coming.”

Read: The great 5G conspiracy

Sure enough, existing conspiracy theories—George Soros! 5G! Bioweapons!—were repurposed for the pandemic. An infodemic of falsehoods spread alongside the actual virus. Rumors coursed through online platforms that are designed to keep users engaged, even if that means feeding them content that is polarizing or untrue. In a national crisis, when people need to act in concert, this is calamitous. “The social internet as a system is broken,” DiResta told me, and its faults are readily abused.

Beginning on April 16, DiResta’s team noticed growing online chatter about Judy Mikovits, a discredited researcher turned anti-vaccination champion. Posts and videos cast Mikovits as a whistleblower who claimed that the new coronavirus was made in a lab and described Anthony Fauci of the White House’s coronavirus task force as her nemesis. Ironically, this conspiracy theory was nested inside a larger conspiracy—part of an orchestrated PR campaign by an anti-vaxxer and QAnon fan with the explicit goal to “take down Anthony Fauci.” It culminated in a slickly produced video called Plandemic, which was released on May 4. More than 8 million people watched it in a week.

Doctors and journalists tried to debunk Plandemic’s many misleading claims, but these efforts spread less successfully than the video itself. Like pandemics, infodemics quickly become uncontrollable unless caught early. But while health organizations recognize the need to surveil for emerging diseases, they are woefully unprepared to do the same for emerging conspiracies. In 2016, when DiResta spoke with a CDC team about the threat of misinformation, “their response was: ‘ That’s interesting, but that’s just stuff that happens on the internet.’ ”

From the June 2020 issue: Adrienne LaFrance on how QAnon is more important than you think

Rather than countering misinformation during the pandemic’s early stages, trusted sources often made things worse. Many health experts and government officials downplayed the threat of the virus in January and February, assuring the public that it posed a low risk to the U.S. and drawing comparisons to the ostensibly greater threat of the flu. The WHO, the CDC, and the U.S. surgeon general urged people not to wear masks, hoping to preserve the limited stocks for health-care workers. These messages were offered without nuance or acknowledgement of uncertainty, so when they were reversed—the virus is worse than the flu; wear masks—the changes seemed like befuddling flip-flops.

The media added to the confusion. Drawn to novelty, journalists gave oxygen to fringe anti-lockdown protests while most Americans quietly stayed home. They wrote up every incremental scientific claim, even those that hadn’t been verified or peer-reviewed.

There were many such claims to choose from. By tying career advancement to the publishing of papers, academia already creates incentives for scientists to do attention-grabbing but irreproducible work. The pandemic strengthened those incentives by prompting a rush of panicked research and promising ambitious scientists global attention.

In March, a small and severely flawed French study suggested that the antimalarial drug hydroxychloroquine could treat COVID19. Published in a minor journal, it likely would have been ignored a decade ago. But in 2020, it wended its way to Donald Trump via a chain of credulity that included Fox News, Elon Musk, and Dr. Oz. Trump spent months touting the drug as a miracle cure despite mounting evidence to the contrary, causing shortages for people who actually needed it to treat lupus and rheumatoid arthritis. The hydroxychloroquine story was muddied even further by a study published in a top medical journal, The Lancet, that claimed the drug was not effective and was potentially harmful. The paper relied on suspect data from a small analytics company called Surgisphere, and was retracted in June.**

Science famously self-corrects. But during the pandemic, the same urgent pace that has produced valuable knowledge at record speed has also sent sloppy claims around the world before anyone could even raise a skeptical eyebrow. The ensuing confusion, and the many genuine unknowns about the virus, has created a vortex of fear and uncertainty, which grifters have sought to exploit. Snake-oil merchants have peddled ineffectual silver bullets (including actual silver). Armchair experts with scant or absent qualifications have found regular slots on the nightly news. And at the center of that confusion is Donald Trump.

During a pandemic, leaders must rally the public, tell the truth, and speak clearly and consistently. Instead, Trump repeatedly contradicted public-health experts, his scientific advisers, and himself. He said that “nobody ever thought a thing like [the pandemic] could happen” and also that he “felt it was a pandemic long before it was called a pandemic.” Both statements cannot be true at the same time, and in fact neither is true.

A month before his inauguration, I wrote that “the question isn’t whether [Trump will] face a deadly outbreak during his presidency, but when.” Based on his actions as a media personality during the 2014 Ebola outbreak and as a candidate in the 2016 election, I suggested that he would fail at diplomacy, close borders, tweet rashly, spread conspiracy theories, ignore experts, and exhibit reckless self-confidence. And so he did.

No one should be shocked that a liar who has made almost 20,000 false or misleading claims during his presidency would lie about whether the U.S. had the pandemic under control; that a racist who gave birth to birtherism would do little to stop a virus that was disproportionately killing Black people; that a xenophobe who presided over the creation of new immigrant-detention centers would order meatpacking plants with a substantial immigrant workforce to remain open; that a cruel man devoid of empathy would fail to calm fearful citizens; that a narcissist who cannot stand to be upstaged would refuse to tap the deep well of experts at his disposal; that a scion of nepotism would hand control of a shadow coronavirus task force to his unqualified son-in-law; that an armchair polymath would claim to have a “natural ability” at medicine and display it by wondering out loud about the curative potential of injecting disinfectant; that an egotist incapable of admitting failure would try to distract from his greatest one by blaming China, defunding the WHO, and promoting miracle drugs; or that a president who has been shielded by his party from any shred of accountability would say, when asked about the lack of testing, “I don’t take any responsibility at all.”

Left: A woman hugs her grandmother through a plastic sheet in Wantagh, New York. Right: An elderly woman has her oxygen levels tested in Yonkers, New York. (Al Bello / Getty; Andrew Renneisen / The New York Times / Redux)

Trump is a comorbidity of the COVID19 pandemic. He isn’t solely responsible for America’s fiasco, but he is central to it. A pandemic demands the coordinated efforts of dozens of agencies. “In the best circumstances, it’s hard to make the bureaucracy move quickly,” Ron Klain said. “It moves if the president stands on a table and says, ‘Move quickly.’ But it really doesn’t move if he’s sitting at his desk saying it’s not a big deal.”

In the early days of Trump’s presidency, many believed that America’s institutions would check his excesses. They have, in part, but Trump has also corrupted them. The CDC is but his latest victim. On February 25, the agency’s respiratory-disease chief, Nancy Messonnier, shocked people by raising the possibility of school closures and saying that “disruption to everyday life might be severe.” Trump was reportedly enraged. In response, he seems to have benched the entire agency. The CDC led the way in every recent domestic disease outbreak and has been the inspiration and template for public-health agencies around the world. But during the three months when some 2 million Americans contracted COVID19 and the death toll topped 100,000, the agency didn’t hold a single press conference. Its detailed guidelines on reopening the country were shelved for a month while the White House released its own uselessly vague plan.

Again, everyday Americans did more than the White House. By voluntarily agreeing to months of social distancing, they bought the country time, at substantial cost to their financial and mental well-being. Their sacrifice came with an implicit social contract—that the government would use the valuable time to mobilize an extraordinary, energetic effort to suppress the virus, as did the likes of Germany and Singapore. But the government did not, to the bafflement of health experts. “There are instances in history where humanity has really moved mountains to defeat infectious diseases,” says Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security. “It’s appalling that we in the U.S. have not summoned that energy around COVID-19.”

Instead, the U.S. sleepwalked into the worst possible scenario: People suffered all the debilitating effects of a lockdown with few of the benefits. Most states felt compelled to reopen without accruing enough tests or contact tracers. In April and May, the nation was stuck on a terrible plateau, averaging 20,000 to 30,000 new cases every day. In June, the plateau again became an upward slope, soaring to record-breaking heights.

Trump never rallied the country. Despite declaring himself a “wartime president,” he merely presided over a culture war, turning public health into yet another politicized cage match. Abetted by supporters in the conservative media, he framed measures that protect against the virus, from masks to social distancing, as liberal and anti-American. Armed anti-lockdown protesters demonstrated at government buildings while Trump egged them on, urging them to “LIBERATE” Minnesota, Michigan, and Virginia. Several public-health officials left their jobs over harassment and threats.

It is no coincidence that other powerful nations that elected populist leaders—Brazil, Russia, India, and the United Kingdom—also fumbled their response to COVID-19. “When you have people elected based on undermining trust in the government, what happens when trust is what you need the most?” says Sarah Dalglish of the Johns Hopkins Bloomberg School of Public Health, who studies the political determinants of health.

“Trump is president,” she says. “How could it go well?”

The countries that fared better against COVID-19 didn’t follow a universal playbook. Many used masks widely; New Zealand didn’t. Many tested extensively; Japan didn’t. Many had science-minded leaders who acted early; Hong Kong didn’t—instead, a grassroots movement compensated for a lax government. Many were small islands; large, continental Germany was not. Each nation succeeded because it did enough things right.

Meanwhile, the United States underperformed across the board, and its errors compounded. The dearth of tests allowed unconfirmed cases to create still more cases, which flooded the hospitals, which ran out of masks, which are necessary to limit the virus’s spread. Twitter amplified Trump’s misleading messages, which raised fear and anxiety among people, which led them to spend more time scouring for information on Twitter. Even seasoned health experts underestimated these compounded risks. Yes, having Trump at the helm during a pandemic was worrying, but it was tempting to think that national wealth and technological superiority would save America. “We are a rich country, and we think we can stop any infectious disease because of that,” says Michael Osterholm, the director of the Center for Infectious Disease Research and Policy at the University of Minnesota. “But dollar bills alone are no match against a virus.”

Public-health experts talk wearily about the panic-neglect cycle, in which outbreaks trigger waves of attention and funding that quickly dissipate once the diseases recede. This time around, the U.S. is already flirting with neglect, before the panic phase is over. The virus was never beaten in the spring, but many people, including Trump, pretended that it was. Every state reopened to varying degrees, and many subsequently saw record numbers of cases. After Arizona’s cases started climbing sharply at the end of May, Cara Christ, the director of the state’s health-services department, said, “We are not going to be able to stop the spread. And so we can’t stop living as well.” The virus may beg to differ.

At times, Americans have seemed to collectively surrender to COVID-19. The White House’s coronavirus task force wound down. Trump resumed holding rallies, and called for less testing, so that official numbers would be rosier. The country behaved like a horror-movie character who believes the danger is over, even though the monster is still at large. The long wait for a vaccine will likely culminate in a predictable way: Many Americans will refuse to get it, and among those who want it, the most vulnerable will be last in line.

Still, there is some reason for hope. Many of the people I interviewed tentatively suggested that the upheaval wrought by COVID-19 might be so large as to permanently change the nation’s disposition. Experience, after all, sharpens the mind. East Asian states that had lived through the SARS and MERS epidemics reacted quickly when threatened by SARS-CoV-2, spurred by a cultural memory of what a fast-moving coronavirus can do. But the U.S. had barely been touched by the major epidemics of past decades (with the exception of the H1N1 flu). In 2019, more Americans were concerned about terrorists and cyberattacks than about outbreaks of exotic diseases. Perhaps they will emerge from this pandemic with immunity both cellular and cultural.

There are also a few signs that Americans are learning important lessons. A June survey showed that 60 to 75 percent of Americans were still practicing social distancing. A partisan gap exists, but it has narrowed. “In public-opinion polling in the U.S., high-60s agreement on anything is an amazing accomplishment,” says Beth Redbird, a sociologist at Northwestern University, who led the survey. Polls in May also showed that most Democrats and Republicans supported mask wearing, and felt it should be mandatory in at least some indoor spaces. It is almost unheard-of for a public-health measure to go from zero to majority acceptance in less than half a year. But pandemics are rare situations when “people are desperate for guidelines and rules,” says Zoë McLaren, a health-policy professor at the University of Maryland at Baltimore County. The closest analogy is pregnancy, she says, which is “a time when women’s lives are changing, and they can absorb a ton of information. A pandemic is similar: People are actually paying attention, and learning.”

Redbird’s survey suggests that Americans indeed sought out new sources of information—and that consumers of news from conservative outlets, in particular, expanded their media diet. People of all political bents became more dissatisfied with the Trump administration. As the economy nose-dived, the health-care system ailed, and the government fumbled, belief in American exceptionalism declined. “Times of big social disruption call into question things we thought were normal and standard,” Redbird told me. “If our institutions fail us here, in what ways are they failing elsewhere?” And whom are they failing the most?

Americans were in the mood for systemic change. Then, on May 25, George Floyd, who had survived COVID-19’s assault on his airway, asphyxiated under the crushing pressure of a police officer’s knee. The excruciating video of his killing circulated through communities that were still reeling from the deaths of Breonna Taylor and Ahmaud Arbery, and disproportionate casualties from COVID-19. America’s simmering outrage came to a boil and spilled into its streets.

Defiant and largely cloaked in masks, protesters turned out in more than 2,000 cities and towns. Support for Black Lives Matter soared: For the first time since its founding in 2013, the movement had majority approval across racial groups. These protests were not about the pandemic, but individual protesters had been primed by months of shocking governmental missteps. Even people who might once have ignored evidence of police brutality recognized yet another broken institution. They could no longer look away.

It is hard to stare directly at the biggest problems of our age. Pandemics, climate change, the sixth extinction of wildlife, food and water shortages—their scope is planetary, and their stakes are overwhelming. We have no choice, though, but to grapple with them. It is now abundantly clear what happens when global disasters collide with historical negligence.

COVID-19 is an assault on America’s body, and a referendum on the ideas that animate its culture. Recovery is possible, but it demands radical introspection. America would be wise to help reverse the ruination of the natural world, a process that continues to shunt animal diseases into human bodies. It should strive to prevent sickness instead of profiting from it. It should build a health-care system that prizes resilience over brittle efficiency, and an information system that favors light over heat. It should rebuild its international alliances, its social safety net, and its trust in empiricism. It should address the health inequities that flow from its history. Not least, it should elect leaders with sound judgment, high character, and respect for science, logic, and reason.

The pandemic has been both tragedy and teacher. Its very etymology offers a clue about what is at stake in the greatest challenges of the future, and what is needed to address them. Pandemic. Pan and demos. All people.

* This article has been updated to clarify why 3.1 million Americans still cannot afford health insurance.

** This article originally mischaracterized similarities between two studies that were retracted in June, one in The Lancet and one in the New England Journal of Medicine. It has been updated to reflect that the latter study was not specifically about hydroxychloroquine. It appears in the September 2020 print edition with the headline “Anatomy of an American Failure.”

Ed Yong is a staff writer at The Atlantic, where he covers science.

Why a Traffic Flow Suddenly Turns Into a Traffic Jam

Those aggravating slowdowns aren’t one driver’s fault. They’re everybody’s fault. August 3rd 2020

Nautilus

  • Benjamin Seibold

Photo by Raymond Depardon / Magnum Photos.

Few experiences on the road are more perplexing than phantom traffic jams. Most of us have experienced one: The vehicle ahead of you suddenly brakes, forcing you to brake, and making the driver behind you brake. But, soon afterward, you and the cars around you accelerate back to the original speed—and it becomes clear that there were no obstacles on the road, and apparently no cause for the slowdown.

Because traffic quickly resumes its original speed, phantom traffic jams usually don’t cause major delays. But neither are they just minor nuisances. They are hot spots for accidents because they force unexpected braking. And the unsteady driving they cause is not good for your car, causing wear and tear and poor gas mileage.

So what is going on, exactly? To answer this question mathematicians, physicists, and traffic engineers have devised many types of traffic models. For instance, microscopic models resolve the paths of the individual vehicles, and are good at describing vehicle–vehicle interactions. In contrast, macroscopic models describe traffic as a fluid, in which cars are interpreted as fluid particles. They are effective at capturing large-scale phenomena that involve many vehicles. Finally, cellular models divide the road into segments and prescribe rules by which cars move from cell to cell, providing a framework for capturing the uncertainty that is inherent in real traffic.

In setting out to understand how a phantom traffic jam forms, we first have to be aware of the many effects present in real traffic that could conceivably contribute to a jam: different types of vehicles and drivers, unpredictable behavior, on- and off-ramps, and lane switching, to name just a few. We might expect that some combination of these effects is necessary to cause a phantom jam. One of the great advantages of studying mathematical models is that these various effects can be turned off in theoretical analysis or computer simulations. This creates a host of identical, predictable drivers on a single-lane highway without any ramps. In other words, your perfect commute home.

Surprisingly, when all these effects are turned off, phantom traffic jams still occur! This observation tells us that phantom jams are not the fault of individual drivers, but result instead from the collective behavior of all drivers on the road. It works like this. Envision a uniform traffic flow: All vehicles are evenly distributed along the highway, and all drive with the same velocity. Under perfect conditions, this ideal traffic flow could persist forever. However, in reality, the flow is constantly exposed to small perturbations: imperfections on the asphalt, tiny hiccups of the engines, half-seconds of driver inattention, and so on. To predict the evolution of this traffic flow, the big question is to decide whether these small perturbations decay, or are amplified.

If they decay, the traffic flow is stable and there are no jams. But if they are amplified, the uniform flow becomes unstable, with small perturbations growing into backwards-traveling waves called “jamitons.” These jamitons can be observed in reality, are visible in various types of models and computer simulations, and have also been reproduced in tightly controlled experiments.
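
To see this instability for yourself, the sketch below (my own illustration, not code from the article or its authors) runs the classic Bando “optimal velocity” car-following model on a circular road. The function names and parameter values are assumptions chosen for the demo: at high density a small nudge to one car grows into a backward-travelling wave of stop-and-go traffic, while on a sparser road the same nudge simply dies away.

```python
# A minimal sketch (my own, not the article's model) of perturbation growth,
# using the Bando "optimal velocity" car-following model on a ring road.
# All parameter values are assumptions chosen for the demo.
import numpy as np

def final_speed_spread(n_cars=100, road_length=200.0, a=1.0, t_end=1000.0, dt=0.05):
    """Return the standard deviation of speeds after simulating the ring road."""
    def V(gap):                                  # target ("optimal") speed for a given gap
        return np.tanh(gap - 2.0) + np.tanh(2.0)

    x = np.linspace(0.0, road_length, n_cars, endpoint=False)  # evenly spaced cars
    x[0] += 0.5                                                # one small perturbation
    v = np.full(n_cars, V(road_length / n_cars))               # start at equilibrium speed

    for _ in range(int(t_end / dt)):
        gap = (np.roll(x, -1) - x) % road_length   # distance to the car in front
        v += a * (V(gap) - v) * dt                 # relax toward the target speed
        v = np.maximum(v, 0.0)                     # no reversing
        x = (x + v * dt) % road_length
    return float(np.std(v))

# Dense road (average gap 2.0): the perturbation grows into a jamiton, so final
# speeds are spread out. Sparse road (average gap 5.0): the perturbation decays.
print("dense road :", final_speed_spread(road_length=200.0))
print("sparse road:", final_speed_spread(road_length=500.0))
```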

In macroscopic, or “fluid-dynamical,” models, each driver—interpreted as a traffic-fluid particle—observes the local density of traffic around her at any instant in time and accordingly decides on a target velocity: fast, when few cars are nearby, or slow, when the congestion level is high. Then she accelerates or decelerates towards this target velocity. In addition, she anticipates what the traffic will do next. This predictive driving effect is modeled by a “traffic pressure,” which acts in many ways like the pressure in a real fluid.

The mathematical analysis of traffic models reveals that these two are competing effects. The delay before drivers reach their target velocity causes the growth of perturbations, while traffic pressure makes perturbations decay. A uniform flow profile is stable if the anticipation effect dominates, which it does when traffic density is low. The delay effect dominates when traffic densities are high, causing instabilities and, ultimately, phantom jams.

The transition from uniform traffic flow to jamiton-dominated flow is similar to water turning from a liquid state into a gas state. In traffic, this phase transition occurs once traffic density reaches a particular, critical threshold at which the drivers’ anticipation exactly balances the delay effect in their velocity adjustment. The most fascinating aspect of this phase transition is that the character of the traffic changes dramatically while individual drivers do not change their driving behavior at all.

The occurrence of jamiton traffic waves, then, can be explained by phase transition behavior. To think about how to prevent phantom jams, though, we also need to understand the details of the structure of a fully established jamiton. In macroscopic traffic models, jamitons are the mathematical analog of detonation waves, which naturally occur in explosions. All jamitons have a localized region of high traffic density and low vehicle velocity. The transition from high to low speed is extremely abrupt—like a shock wave in a fluid. Vehicles that run into the shock front are forced to brake heavily. After the shock is a “reaction zone,” in which drivers attempt to accelerate back to their original velocity. Finally, at the end of the phantom jam, from the drivers’ perspective, is the “sonic point.”

The name “sonic point” comes from the analogy with detonation waves. In an explosion, it is at this point that the flow turns from supersonic to subsonic. This has crucial implications for the information flow within a detonation wave, as well as in a jamiton. The sonic point provides an information boundary, similar to the event horizon in a black hole: no information from further downstream can affect the jamiton through the sonic point. This makes dispersing jamitons rather difficult—a vehicle can’t affect the jamiton through its driving behavior after passing through.

Instead, the driving behavior of a vehicle must be affected before it runs into a jamiton. Wireless communication between vehicles provides one possibility to achieve this goal, and today’s mathematical models allow us to develop appropriate ways to use tomorrow’s technology. For example, once a vehicle detects a sudden braking event followed by an immediate acceleration, it can broadcast a “jamiton warning” to the vehicles following it within a mile distance. The drivers of those vehicles can then, at the least, prepare for unexpected braking; or, better still, increase their headway so that they can eventually contribute to the dissipation of the traffic wave.
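
As a rough illustration of the detection step described above (my own sketch; the thresholds, sampling interval and function name are invented for the example, not taken from the article or any real vehicle-to-vehicle protocol), a car could scan its own speed trace for a sharp deceleration followed shortly by a sharp acceleration before broadcasting a warning:

```python
# Toy sketch of the "sudden braking followed by immediate acceleration" rule.
# Thresholds and window length are arbitrary assumptions for the example.
def detects_jamiton(speeds, dt=1.0, brake_threshold=-3.0, accel_threshold=2.0, window=10):
    """speeds: the vehicle's own speed samples in m/s, taken every dt seconds."""
    accel = [(later - earlier) / dt for earlier, later in zip(speeds, speeds[1:])]
    for i, a in enumerate(accel):
        if a <= brake_threshold:                      # sudden braking event...
            recovery = accel[i + 1 : i + 1 + window]  # ...followed soon after
            if any(x >= accel_threshold for x in recovery):
                return True                           # time to broadcast a warning
    return False

# Cruising, hard brake, quick re-acceleration: the signature of passing through a jamiton.
trace = [30, 30, 29, 22, 14, 10, 10, 13, 17, 22, 27, 30]
print(detects_jamiton(trace))   # True
```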

The insights we glean from fluid-dynamical traffic models can help with many other real-world problems. For example, supply chains exhibit a queuing behavior reminiscent of traffic jams. Jamming, queuing, and wave phenomena can also be observed in gas pipelines, information webs, and flows in biological networks—all of which can be understood as fluid-like flows.

Besides being an important mathematical case study, the phantom traffic jam is, perhaps, also an interesting and instructive social system. Whenever jamitons arise, they are caused by the collective behavior of all drivers—not a few bad apples on the road. Those who drive preventively can dissipate jamitons, and benefit all of the drivers behind them. It is a classic example of the effectiveness of the Golden Rule.

So the next time you are caught in a warrantless, pointless, and spontaneous traffic jam, remember just how much more it is than it seems.

Benjamin Seibold is an Assistant Professor of Mathematics at Temple University.

Could Air-Conditioning Fix Climate Change?

Researchers proposed a carbon-neutral “synthetic oil well” on every rooftop. August 2nd 2020

Scientific American

  • Richard Conniff

Photo from 4FR / Getty Images.

It is one of the great dilemmas of climate change: We take such comfort from air conditioning that worldwide energy consumption for that purpose has already tripled since 1990. It is on track to grow even faster through mid-century—and assuming fossil-fuel–fired power plants provide the electricity, that could cause enough carbon dioxide emissions to warm the planet by another deadly half-degree Celsius.

A paper published in Nature Communications proposes a partial remedy: Heating, ventilation and air conditioning (or HVAC) systems move a lot of air. They can replace the entire air volume in an office building five or 10 times an hour. Machines that capture carbon dioxide from the atmosphere—a developing fix for climate change—also depend on moving large volumes of air. So why not save energy by tacking the carbon capture machine onto the air conditioner?

This futuristic proposal, from a team led by chemical engineer Roland Dittmeyer at Germany’s Karlsruhe Institute of Technology, goes even further. The researchers imagine a system of modular components, powered by renewable energy, that would not just extract carbon dioxide and water from the air. It would also convert them into hydrogen, and then use a multistep chemical process to transform that hydrogen into liquid hydrocarbon fuels. The result: “Personalized, localized and distributed, synthetic oil wells” in buildings or neighborhoods, the authors write. “The envisioned model of ‘crowd oil’ from solar refineries, akin to ‘crowd electricity’ from solar panels,” would enable people “to take control and collectively manage global warming and climate change, rather than depending on the fossil power industrial behemoths.”

The research group has already developed an experimental model that can complete several key steps of the process, Dittmeyer says, adding, “The plan in two or three years is to have the first experimental showcase where I can show you a bottle of hydrocarbon fuel from carbon dioxide captured in an air-conditioning unit.”

Neither Dittmeyer nor co-author Geoffrey Ozin, a chemical engineer at the University of Toronto, would predict how long it might take before building owners could purchase and install such units. But Ozin claims much of the necessary technology is already commercially available. He says the carbon capture equipment could come from a Swiss “direct air capture” company called Climeworks, and the electrolyzers to convert carbon dioxide and water into hydrogen are available from Siemens, Hydrogenics or other companies. “And you use Roland’s amazing microstructure catalytic reactors, which convert the hydrogen and carbon dioxide into a synthetic fuel,” he adds. Those reactors are being brought to market by the German company Ineratec, a spinoff from Dittmeyer’s research. Because the system would rely on advanced forms of solar energy, Ozin thinks of the result as “photosynthetic buildings.”

The authors calculate that applying this system to the HVAC in one of Europe’s tallest skyscrapers, the MesseTurm, or Trade Fair Tower, in Frankfurt, would extract and convert enough carbon dioxide to yield at least 2,000 metric tons (660,000 U.S. gallons) of fuel a year. The office space in the entire city of Frankfurt could yield more than 370,000 tons (122 million gallons) annually, they say.
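
As a back-of-the-envelope check of those figures (my own arithmetic, assuming a liquid-fuel density of roughly 0.78 kg per litre, which the article does not state), the tons-to-gallons conversion lands in the same ballpark as the numbers quoted:

```python
# Rough sanity check of the quoted yields (my own arithmetic, not from the paper).
# Assumption: synthetic liquid fuel density of about 0.78 kg/L.
FUEL_DENSITY_KG_PER_L = 0.78
LITERS_PER_US_GALLON = 3.785

def metric_tons_to_us_gallons(tons: float) -> float:
    liters = tons * 1000.0 / FUEL_DENSITY_KG_PER_L
    return liters / LITERS_PER_US_GALLON

print(f"MesseTurm : {metric_tons_to_us_gallons(2_000):,.0f} gallons")    # ~680,000 (article: 660,000)
print(f"Frankfurt : {metric_tons_to_us_gallons(370_000):,.0f} gallons")  # ~125 million (article: 122 million)
```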

“This is a wonderful concept—it made my day,” says David Keith, a Harvard professor of applied physics and public policy, who was not involved in the new paper. He suggests that the best use for the resulting fuels would be to “help solve two of our biggest energy challenges”: providing a carbon-neutral fuel to fill the gaps left by intermittent renewables such as wind and solar power, and providing fuel for “the hard-to-electrify parts of transportation and industry,” such as airplanes, large trucks and steel- or cement-making. Keith is already targeting some of these markets through Carbon Engineering, a company he founded focused on direct air capture of carbon dioxide for large-scale liquid fuel production. But he says he is “deeply skeptical” about doing it on a distributed building or neighborhood basis. “Economies of scale can’t be wished away. There’s a reason we have huge wind turbines,” he says—and a reason we do not have backyard all-in-one pulp-and-paper mills for disposing of our yard wastes. He believes it is simply “faster and cheaper” to take carbon dioxide from the air and turn it into fuel “by doing it an appropriate scale.”

Other scientists who were not involved in the new paper note two other potential problems. “The idea that Roland has presented is an interesting one,” says Jennifer Wilcox, a chemical engineer at Worcester Polytechnic Institute, “but more vetting needs to be done in order to determine the true potential of the approach.” While it seems to make sense to take advantage of the air movement already being generated by HVAC systems, Wilcox says, building and operating the necessary fans is not what makes direct air capture systems so expensive. “The dominant capital cost,” she says, “is the solid adsorbent materials”—that is, substances to which the carbon dioxide adheres—and the main energy cost is the heat needed to recover the carbon dioxide from these materials afterward. Moreover, she contends that any available solar or other carbon-free power source would be put to better use in replacing fossil-fuel-fired power plants, to reduce the amount of carbon dioxide getting into the air in the first place.

“The idea of converting captured carbon into liquid fuel is persuasive,“ says Matthew J. Realff, a chemical engineer at Georgia Institute of Technology. “We have an enormous investment in our liquid fuel infrastructure, and using that has tremendous value. You wouldn’t have to build a whole new infrastructure. But this concept of doing it at the household level is a little bit fantastical”—partly because the gases involved (carbon monoxide and hydrogen) are toxic and explosive. The process to convert them to a liquid fuel is well understood, Realff says, but it produces a range of products that now typically get separated out in massive refineries—requiring huge amounts of energy. “It’s possible that it could be worked out at the scale that is being proposed,” he adds. “But we haven’t done it at this point, and it may not turn out to be the most effective way from an economic perspective.” There is, however, an unexpected benefit of direct air capture of carbon dioxide, says Realff, and it could help stimulate market acceptance of the technology: One reason office buildings replace their air so frequently is simply to protect workers from elevated levels of carbon dioxide. His research suggests that capturing the carbon dioxide from the air stream may be one way to cut energy costs, by reducing the frequency of air changes.

Dittmeyer disputes the argument that thinking big is always better. He notes that small, modular plants are a trend in some areas of chemical engineering, “because they are more flexible and don’t involve such a financial risk.” He also anticipates that cost will become less of a barrier as governments face up to the urgency of achieving a climate solution, and as jurisdictions increasingly impose carbon taxes or mandate strict energy efficiency standards for buildings.

“Of course, it’s a visionary perspective,” he says, “it relies on this idea of a decentralized product empowering people, not leaving it to industry. Industrial players observe the situation, but as long as there is no profit in the short term, they won’t do anything. If we have the technology that is safe and affordable, though maybe not as cheap, we can generate some momentum” among individuals, much as happened in the early stages of the solar industry. “And then I would expect the industrial parties to act, too.”

Richard Conniff is an award-winning science writer. His books include The Species Seekers: Heroes, Fools, and the Mad Pursuit of Life on Earth (W. W. Norton, 2011).

Could Consciousness All Come Down to the Way Things Vibrate?

A resonance theory of consciousness suggests that the way all matter vibrates, and the tendency for those vibrations to sync up, might be a way to answer the so-called ‘hard problem’ of consciousness.

The Conversation

  • Tam Hunt

What do synchronized vibrations add to the mind/body question? Photo by agsandrew / Shutterstock.com.

Why is my awareness here, while yours is over there? Why is the universe split in two for each of us, into a subject and an infinity of objects? How is each of us our own center of experience, receiving information about the rest of the world out there? Why are some things conscious and others apparently not? Is a rat conscious? A gnat? A bacterium?

These questions are all aspects of the ancient “mind-body problem,” which asks, essentially: What is the relationship between mind and matter? It’s resisted a generally satisfying conclusion for thousands of years.

The mind-body problem enjoyed a major rebranding over the last two decades. Now it’s generally known as the “hard problem” of consciousness, after philosopher David Chalmers coined this term in a now classic paper and further explored it in his 1996 book, “The Conscious Mind: In Search of a Fundamental Theory.”

Chalmers thought the mind-body problem should be called “hard” in comparison to what, with tongue in cheek, he called the “easy” problems of neuroscience: How do neurons and the brain work at the physical level? Of course they’re not actually easy at all. But his point was that they’re relatively easy compared to the truly difficult problem of explaining how consciousness relates to matter.

Over the last decade, my colleague, University of California, Santa Barbara psychology professor Jonathan Schooler and I have developed what we call a “resonance theory of consciousness.” We suggest that resonance – another word for synchronized vibrations – is at the heart of not only human consciousness but also animal consciousness and of physical reality more generally. It sounds like something the hippies might have dreamed up – it’s all vibrations, man! – but stick with me.

How do things in nature – like flashing fireflies – spontaneously synchronize? Photo by Suzanne Tucker /Shutterstock.com.

All About the Vibrations

All things in our universe are constantly in motion, vibrating. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at various frequencies. Resonance is a type of motion, characterized by oscillation between two states. And ultimately all matter is just vibrations of various underlying fields. As such, at every scale, all of nature vibrates.

Something interesting happens when different vibrating things come together: They will often start, after a little while, to vibrate together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious. This is described as the phenomenon of spontaneous self-organization.

Mathematician Steven Strogatz provides various examples from physics, biology, chemistry and neuroscience to illustrate “sync” – his term for resonance – in his 2003 book “Sync: How Order Emerges from Chaos in the Universe, Nature, and Daily Life,” including:

  • When fireflies of certain species come together in large gatherings, they start flashing in sync, in ways that can still seem a little mystifying.
  • Lasers are produced when photons of the same power and frequency sync up.
  • The moon’s rotation is exactly synced with its orbit around the Earth such that we always see the same face.

Examining resonance leads to potentially deep insights about the nature of consciousness and about the universe more generally.

External electrodes can record a brain’s activity. Photo by vasara / Shutterstock.com.

Sync Inside Your Skull

Neuroscientists have identified sync in their research, too. Large-scale neuron firing occurs in human brains at measurable frequencies, with mammalian consciousness thought to be commonly associated with various kinds of neuronal sync.

For example, German neurophysiologist Pascal Fries has explored the ways in which various electrical patterns sync in the brain to produce different types of human consciousness.

Fries focuses on gamma, beta and theta waves. These labels refer to the speed of electrical oscillations in the brain, measured by electrodes placed on the outside of the skull. Groups of neurons produce these oscillations as they use electrochemical impulses to communicate with each other. It’s the speed and voltage of these signals that, when averaged, produce EEG waves that can be measured at signature cycles per second.

Each type of synchronized activity is associated with certain types of brain function. Image from artellia / Shutterstock.com.

Gamma waves are associated with large-scale coordinated activities like perception, meditation or focused consciousness; beta with maximum brain activity or arousal; and theta with relaxation or daydreaming. These three wave types work together to produce, or at least facilitate, various types of human consciousness, according to Fries. But the exact relationship between electrical brain waves and consciousness is still very much up for debate.
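
For readers unfamiliar with the labels, the small sketch below maps an oscillation frequency to the band names used above. The cut-off frequencies are the conventional approximate ranges used in EEG work, not values given by Fries or by Hunt and Schooler, so treat them as illustrative only.

```python
# Illustrative helper (not from the article): map an EEG oscillation frequency to
# the band names used above. Cut-offs are the commonly quoted approximate ranges.
def eeg_band(freq_hz: float) -> str:
    if 4 <= freq_hz < 8:
        return "theta"   # relaxation, daydreaming
    if 12 <= freq_hz < 30:
        return "beta"    # arousal, maximum brain activity
    if 30 <= freq_hz <= 100:
        return "gamma"   # large-scale coordinated activity, focused consciousness
    return "other (delta, alpha, high-gamma, ...)"

print(eeg_band(6), eeg_band(20), eeg_band(40))   # theta beta gamma
```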

Fries calls his concept “communication through coherence.” For him, it’s all about neuronal synchronization. Synchronization, in terms of shared electrical oscillation rates, allows for smooth communication between neurons and groups of neurons. Without this kind of synchronized coherence, inputs arrive at random phases of the neuron excitability cycle and are ineffective, or at least much less effective, in communication.

A Resonance Theory of Consciousness

Our resonance theory builds upon the work of Fries and many others, with a broader approach that can help to explain not only human and mammalian consciousness, but also consciousness more broadly.

Based on the observed behavior of the entities that surround us, from electrons to atoms to molecules, to bacteria to mice, bats, rats, and on, we suggest that all things may be viewed as at least a little conscious. This sounds strange at first blush, but “panpsychism” – the view that all matter has some associated consciousness – is an increasingly accepted position with respect to the nature of consciousness.

The panpsychist argues that consciousness did not emerge at some point during evolution. Rather, it’s always associated with matter and vice versa – they’re two sides of the same coin. But the large majority of the mind associated with the various types of matter in our universe is extremely rudimentary. An electron or an atom, for example, enjoys just a tiny amount of consciousness. But as matter becomes more interconnected and rich, so does the mind, and vice versa, according to this way of thinking.

Biological organisms can quickly exchange information through various biophysical pathways, both electrical and electrochemical. Non-biological structures can only exchange information internally using heat/thermal pathways – much slower and far less rich in information in comparison. Living things leverage their speedier information flows into larger-scale consciousness than what would occur in similar-size things like boulders or piles of sand, for example. There’s much greater internal connection and thus far more “going on” in biological structures than in a boulder or a pile of sand.

Under our approach, boulders and piles of sand are “mere aggregates,” just collections of highly rudimentary conscious entities at the atomic or molecular level only. That’s in contrast to what happens in biological life forms where the combinations of these micro-conscious entities together create a higher level macro-conscious entity. For us, this combination process is the hallmark of biological life.

The central thesis of our approach is this: the particular linkages that allow for large-scale consciousness – like those humans and other mammals enjoy – result from a shared resonance among many smaller constituents. The speed of the resonant waves that are present is the limiting factor that determines the size of each conscious entity in each moment.

As a particular shared resonance expands to more and more constituents, the new conscious entity that results from this resonance and combination grows larger and more complex. So the shared resonance in a human brain that achieves gamma synchrony, for example, includes a far larger number of neurons and neuronal connections than is the case for beta or theta rhythms alone.

What about larger inter-organism resonance like the cloud of fireflies with their little lights flashing in sync? Researchers think their bioluminescent resonance arises due to internal biological oscillators that automatically result in each firefly syncing up with its neighbors.

Is this group of fireflies enjoying a higher level of group consciousness? Probably not, since we can explain the phenomenon without recourse to any intelligence or consciousness. But in biological structures with the right kind of information pathways and processing power, these tendencies toward self-organization can and often do produce larger-scale conscious entities.

Our resonance theory of consciousness attempts to provide a unified framework that includes neuroscience, as well as more fundamental questions of neurobiology and biophysics, and also the philosophy of mind. It gets to the heart of the differences that matter when it comes to consciousness and the evolution of physical systems.

It is all about vibrations, but it’s also about the type of vibrations and, most importantly, about shared vibrations.

Tam Hunt is an Affiliate Guest in Psychology at the University of California, Santa Barbara.

Science or Compliance?

Getting There Social Theory July 25th 2020

There is a lot going on in the world, much of it quite bad. As a trained social scientist from the early 1970s, I was taught sociological and economic theories no longer popular with the global monstrously rich ruling elite.  By the way, in case you don’t know, you do not have to hold political office to be a part of that elite.  I was also taught philosophy and economic history of Britain and the United States.

So, firstly let’s get economics dealt with.  I was taught about its founders, men like Jeremy Bentham, also a philosopher, Malthus, Jevons and Marshall – the latter bringing order, principles and so called ‘rational economic man’ into the discipline.

Society was pretty rigid, wars were the way countries got richer, and class systems became more objectified.  All went well until the post-World War One ‘Great Depression’ when, in spite of rapidly falling interest rates, the rich decided to let the poor sink while they retrenched and had a good time – stirring up Nazism.

Economics revolutionary John Maynard Keynes concluded that governments needed to tax the rich and borrow to spend their way out of depression.  Britain’s elite would have none of it, but the U.S.A., Italy and Germany took it up.  I admit to being a nerd: I was reading Keynes’s ‘General Theory of Employment, Interest and Money’ in my teens, which was much more interesting to me than following football.

Meanwhile Russia was locked out as a pariah. Britain had done its best to discredit and destroy Russia because they killed the British Royal family’s treacherous cousins – because they were terrible and corrupt rulers of Russia – and terrified the British rich with a fear of communism from the lower orders rising up.  

Only World War Two saved them, offering a wonderful opportunity to slaughter more of the lower orders.  In the process, their empire was exposed and fell apart in the post-war age – a ghost of it surviving as the Commonwealth (sic).

So we come to sociology.  Along the way, through this Industrial Revolution, empire building, oppression and decline, a so-called ‘Science of Society’ had been developing, with substantial data collected and poured into theories.  Marxism was the most famous, with Karl Marx’s forgotten friend, the industrialist Friedrich Engels, well placed to collect the data.

The essence of Marxist theory, which was based primarily on the Hegelian dialectic and Marx’s historical studies, was that capitalism contained the seeds of its own destruction due to an inherent conflict between those who owned the means of production and the slaves being exploited for profit and greed.  Taking the opportunity provided by incompetent Russian elite rule in 1917, Germany helped smuggle Lenin back into Russia to foment the Russian Revolution.

That revolution, and Russia itself, have terrified the rich Western elites ever since, with all manner of methods and episodes used to undermine it.  It is no wonder, leaving the vile Stalin to one side, that Russia developed police state methods, leading to what the windbag Churchill called an Iron Curtain descending and dividing Europe.

By 1991, the West – dominated by the British and U.S. elites who had the most to lose from what the U.S. had called ‘The Domino Theory’ of one country after another falling to communism, because the masses might realise how they were being exploited – thought their day had come.  Gorbachev got rid of the Berlin Wall; the U.S. undermined him to get their friend Yeltsin into power.

But it didn’t last once Putin stepped up.  Oligarchs, allowed by Yeltsin to rip off state assets, rushed to Britain, even donating to Tory Party funds.  Ever since, the Western elite have been in overdrive to discredit Putin in spite of the progress he has inspired and directed.

Anglo-U.S. sanctions aren’t working fast enough and Germany wants to buy Russian gas – Nord Stream 2.  So now we have a fake socialist – former head of Britain’s corrupt CPS and now Labour’s top man (sic) – wanting RT (Russia Today) closed down.  There is a lot of worry about people not watching BBC TV despite being forced to pay for an expensive licence, even though they do not want to watch the BBC’s smug upper-middle-class drivel and biased news.  This is where sociology comes back into the picture.

The discipline (sic) of actual sociology derived from French thinkers like Auguste Comte, who predicted that sociologists would become the priesthood of modern society – see where our government gets its ‘the science’ mantra from.

As with economics, sociology was about understanding the increasingly complex way of life in an increasingly industrialised world.  Early schools of sociological thought drew comparisons with Darwin’s idea of organisms evolving: society’s head was its government, the transport system was its veins and arteries, and so on, with every part working towards functional integration.

Herbert Spencer – whose girlfriend Mary Ann Evans wrote social-science-orientated novels under the name George Eliot – and the Frenchman Emile Durkheim founded this ‘functionalist’ school of sociology.  Durkheim, inspired by thinkers from the French Revolutionary era, took that school a stage further.  His theory considered dysfunctions, which he called ‘pathological’ factors, like suicide.  Robert K. Merton went on after 1945 to write about dysfunctional aspects of society, building on Durkheim’s work.  Both men had a concept of ‘anomie’: Durkheim talked of normlessness, Merton of people and societies never satisfied, having ever-receding horizons.

To an old-school person like myself, these ideas are still useful, as is Keynes on economics.  One just has to look behind today’s self-interested, pseudo-scientific jargon about ‘experts say’, ‘the science’ and ‘studies reveal’.  The important thing to remember about any social science – and epidemiology is one of them – is that you get out, or predict, according to what you put in.  As far as Covid-19 is concerned, there are too many vested interests now to take anything they say seriously.  It is quite clear that there is no evidence that lockdown works.  There is clear evidence that certain groups make themselves vulnerable or are deluded that death does not come with old age.

I am old.  I am one of the ‘I’ and ‘Me’ generation whose interests should not come first, and nor should BAME interests.  The same goes for Africa, the Indian sub-continent and the Middle East, where overpopulation, foreign aid, corruption, Oxfam, ignorance, dictators and religious bigotry are not solutions to Covid-19 or anything else.  If our pathetic, fake-caring politicians carry on like this, grovelling to the likes of the WHO, then we are all doomed.

As for little Greta, she is a rather noisy, poorly educated, opinionated stooge.  She has no idea what she is talking about.  As for modern sociology, it is pure feminist, narrow-minded dogma, popular on police training courses for morons to use for profiling and fitting up innocent men.  They go on toilet-paper degree courses, getting rather impressive letters – BSc – to make them look and sound like experts.

Robert Cook

New Alien Theory July 18th 2020

After decades of searching, we still haven’t discovered a single sign of extraterrestrial intelligence. Probability tells us life should be out there, so why haven’t we found it yet?

The problem is often referred to as Fermi’s paradox, after the Nobel Prize–winning physicist Enrico Fermi, who once asked his colleagues this question at lunch. Many theories have been proposed over the years. It could be that we are simply alone in the universe or that there is some great filter that prevents intelligent life progressing beyond a certain stage. Maybe alien life is out there, but we are too primitive to communicate with it, or we are placed inside some cosmic zoo, observed but left alone to develop without external interference. Now, three researchers think they may have another potential answer to Fermi’s question: Aliens do exist; they’re just all asleep.

According to a research paper accepted for publication in the Journal of the British Interplanetary Society, extraterrestrials are sleeping while they wait. In the paper, authors from Oxford’s Future of Humanity Institute and the Astronomical Observatory of Belgrade – Anders Sandberg, Stuart Armstrong, and Milan Cirkovic – argue that the universe is too hot right now for advanced, digital civilizations to make the most efficient use of their resources. The solution: Sleep and wait for the universe to cool down, a process known as aestivating (like hibernation but sleeping until it’s colder).

Understanding the new hypothesis first requires wrapping your head around the idea that the universe’s most sophisticated life may elect to leave biology behind and live digitally. Having essentially uploaded their minds onto powerful computers, the civilizations choosing to do this could enhance their intellectual capacities or inhabit some of the harshest environments in the universe with ease.

The idea that life might transition toward a post-biological form of existence is gaining ground among experts. “It’s not something that is necessarily unavoidable, but it is highly likely,” Cirkovic told me in an interview.

Once you’re living digitally, Cirkovic explained, it’s important to process information efficiently. Each computation has a certain cost attached to it, and this cost is tightly coupled with temperature. The colder it gets, the lower the cost is, meaning you can do more with the same amount of resources. This is one of the reasons why we cool powerful computers. Though humans may find the universe to be a pretty frigid place (the background radiation hovers about 3 kelvins above absolute zero, the very lower limit of the temperature scale), digital minds may find it far too hot.

But why aestivate? Surely any aliens wanting more efficient processing could cool down their systems manually, just as we do with computers. In the paper, the authors concede this is a possibility. “While it is possible for a civilization to cool down parts of itself to any low temperature,” the authors write, that, too, requires work. So it wouldn’t make sense for a civilization looking to maximize its computational capacity to waste energy on the process. As Sandberg and Cirkovic elaborate in a blog post, it’s more likely that such artificial life would be in a protected sleep mode today, ready to wake up in colder futures.

If such aliens exist, they’re in luck. The universe appears to be cooling down on its own. Over the next trillions of years, as it continues to expand and the formation of new stars slows, the background radiation will reduce to practically zero. Under those conditions, Sandberg and Cirkovic explain, this kind of artificial life would get “tremendously more done.” Tremendous isn’t an understatement, either. The researchers calculate that by employing such a strategy, they could achieve up to 10³⁰ times more than if done today. That’s a 1 with 30 zeroes after it.
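
The physical intuition can be made concrete with Landauer’s principle, which says that erasing one bit of information costs at least k·T·ln 2 of energy, so the minimum cost of irreversible computation falls in direct proportion to temperature. The snippet below is my own illustration, not a calculation from the paper; the far-future temperature is an arbitrary placeholder chosen only to show how a factor of roughly 10³⁰ could arise.

```python
# Landauer's principle: erasing one bit costs at least k_B * T * ln 2 joules,
# so the energy cost of irreversible computation scales linearly with temperature.
# Illustrative only -- the "future" temperature is an arbitrary placeholder,
# not a figure from the aestivation paper.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def min_joules_per_bit(temperature_kelvin):
    return K_B * temperature_kelvin * math.log(2)

today = min_joules_per_bit(2.7)        # roughly today's cosmic background temperature
future = min_joules_per_bit(2.7e-30)   # hypothetical far-future temperature (assumed)

print(f"per-bit cost today : {today:.3e} J")
print(f"per-bit cost later : {future:.3e} J")
print(f"improvement factor : {today / future:.1e}")   # ~1e30 more bits per joule
```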

But just because the aliens are asleep doesn’t mean we can’t find signs of them. Any aestivating civilization has to preserve resources it intends to use in the future. Processes that waste or threaten these resources, then, should be conspicuously absent, thanks to interference from the aestivators. (If they are sufficiently advanced to upload their minds and aestivate, they should be able to manipulate space.) This includes galaxies colliding, galactic winds venting matter into intergalactic space, and stars converting into black holes, which can push resources beyond the reach of the sleeping civilization or change them into less-useful forms.

Another strategy for finding the sleeping aliens, Cirkovic said, might be to meddle with the aestivators’ possessions and territory, which we may already reside within. One way of doing this would be to send self-replicating probes out into the universe to steal the aestivators’ things. Any competent species ought to have measures in place to respond to these kinds of threats. “It could be an exceptionally dangerous test,” he cautioned, “but if there really are very old and very advanced civilizations out there, we can assume there is a potential for danger in anything we do.”

Interestingly, neither Sandberg nor Cirkovic said they have much faith in finding anything. Sandberg, writing on his blog, states that he does not believe the hypothesis to be a likely one: “I personally think the likeliest reason we are not seeing aliens is not that they are aestivating.” He writes that he feels it’s more likely that “they do not exist or are very far away.”

Cirkovic concurred. “I don’t find it very likely, either,” he said in our interview. “I much prefer hypotheses that do not rely on assuming intentional decisions made by extraterrestrial societies. Any assumption is extremely speculative.” There could be forms of energy that we can’t even conceive of using now, he said—producing antimatter in bulk, tapping evaporating black holes, using dark matter. Any of this could change what we might expect to see from an advanced technical civilization.

Yet, he said, the theory has a place. It’s important to cover as much ground as possible. You need to test a wide set of hypotheses one by one—falsifying them, pruning them—to get closer to the truth. “This is how science works. We need to have as many hypotheses and explanations for Fermi’s paradox as possible,” he said.

Plus, there’s a modest likelihood their aestivating aliens idea might be part of the answer, Cirkovic said. We shouldn’t expect a single hypothesis to account for Fermi’s paradox. It will be more of a “patchwork-quilt kind of solution,” he said.

And it’s important to keep exploring solutions. Fermi’s paradox is so much more than an intellectual exercise. It’s about trying to understand what might be out there and how this might explain our past and guide our future.

“I would say that 90-plus percent of hypotheses that were historically proposed to account for Fermi’s paradox have practical consequences,” Cirkovic said. They allow us to think proactively about some of the problems we as a species face, or may one day face, and prompt us to develop strategies to actively shape a more prosperous and secure future for humanity. “We can apply this reasoning to our past, to the emergence of life and complexity. We can also apply similar reasoning to thinking about our future. It can help us avoid catastrophes and help us understand the most likely fate of intelligent species in the universe.”

Stephen Hawking Left Us Bold Predictions on AI, Superhumans, and Aliens

The great physicist’s thoughts on the future of the human race and the fragility of planet Earth.

Quartz

  • Max de Haldevang

The late physicist Stephen Hawking’s last writings predict that a breed of superhumans will take over, having used genetic engineering to surpass their fellow beings.

In Brief Answers to the Big Questions, published in October 2018 and excerpted in the UK’s Sunday Times (paywall), Hawking pulls no punches on subjects like machines taking over, the biggest threat to Earth, and the possibilities of intelligent life in space.

Artificial Intelligence

Hawking delivers a grave warning on the importance of regulating AI, noting that “in the future AI could develop a will of its own, a will that is in conflict with ours.” A possible arms race over autonomous weapons should be stopped before it can start, he writes, asking what would happen if a crash similar to the 2010 stock market Flash Crash happened with weapons. He continues:

In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

Earth’s Bleak Future, Gene Editing, and Superhumans

The bad news: At some point in the next 1,000 years, nuclear war or environmental calamity will “cripple Earth.” However, by then, “our ingenious race will have found a way to slip the surly bonds of Earth and will therefore survive the disaster.” The Earth’s other species probably won’t make it, though.

The humans who do escape Earth will probably be new “superhumans” who have used gene-editing technology like CRISPR to outpace others. They’ll do so by defying laws against genetic engineering, improving their memories, disease resistance, and life expectancy, he says.

Hawking seems curiously enthusiastic about this final point, writing, “There is no time to wait for Darwinian evolution to make us more intelligent and better natured.”

Once such superhumans appear, there are going to be significant political problems with the unimproved humans, who won’t be able to compete. Presumably, they will die out, or become unimportant. Instead, there will be a race of self-designing beings who are improving themselves at an ever-increasing rate. If the human race manages to redesign itself, it will probably spread out and colonise other planets and stars.

Intelligent Life in Space

Hawking acknowledges there are various explanations for why intelligent life hasn’t been found or has not visited Earth. His predictions here aren’t so bold, but his preferred explanation is that humans have “overlooked” forms of intelligent life that are out there.

Does God Exist?

No, Hawking says.

The question is, is the way the universe began chosen by God for reasons we can’t understand, or was it determined by a law of science? I believe the second. If you like, you can call the laws of science “God”, but it wouldn’t be a personal God that you would meet and put questions to.

The Biggest Threats to Earth

Threat number one is an asteroid collision, like the one that killed the dinosaurs. However, “we have no defense” against that, Hawking writes. More immediately: climate change. “A rise in ocean temperature would melt the ice caps and cause the release of large amounts of carbon dioxide,” Hawking writes. “Both effects could make our climate like that of Venus with a temperature of 250°C.”

The Best Idea Humanity Could Implement

Nuclear fusion power. That would give us clean energy with no pollution or global warming.


Could Invisible Aliens Really Exist Among Us? An Astrobiologist Explains

The Earth may be crawling with undiscovered creatures with a biochemistry that differs from life as we know it. July 13th 2020

The Conversation

  • Samantha Rolfe

They probably won’t look anything like this. Credit: Martina Badini / Shutterstock.

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, But Not as We Know It

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world, as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-Based Life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90 percent of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Credit: Zita.

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, this is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks, and that life on Earth is chemically quite unlike the bulk composition of the planet. In fact, the chemical composition of life on Earth has an approximate correlation with the chemical composition of the sun, with 98 percent of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.

So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Samantha Rolfe is a Lecturer in Astrobiology and Principal Technical Officer at the University of Hertfordshire’s Bayfordbury Observatory.

Memories Can Be Injected and Survive Amputation and Metamorphosis July 13th 2020

If a headless worm can regrow a memory, then where is the memory stored? And, if a memory can regenerate, could you transfer it?

Nautilus

  • Marco Altamirano

The study of memory has always been one of the stranger outposts of science. In the 1950s, an unknown psychology professor at the University of Michigan named James McConnell made headlines—and eventually became something of a celebrity—with a series of experiments on freshwater flatworms called planaria. These worms fascinated McConnell not only because they had, as he wrote, a “true synaptic type of nervous system” but also because they had “enormous powers of regeneration…under the best conditions one may cut [the worm] into as many as 50 pieces” with each section regenerating “into an intact, fully-functioning organism.” 

In an early experiment, McConnell trained the worms à la Pavlov by pairing an electric shock with flashing lights. Eventually, the worms recoiled to the light alone. Then something interesting happened when he cut the worms in half. The head of one half of the worm grew a tail and, understandably, retained the memory of its training. Surprisingly, however, the tail, which grew a head and a brain, also retained the memory of its training. If a headless worm can regrow a memory, then where is the memory stored, McConnell wondered. And, if a memory can regenerate, could he transfer it? 

Perhaps. Swedish neurobiologist Holger Hydén had suggested, in the 1960s, that memories were stored in neuron cells, specifically in RNA, the messenger molecule that takes instructions from DNA and links up with ribosomes to make proteins, the building blocks of life. McConnell, having become interested in Hydén’s work, scrambled to test for a speculative molecule that he called “memory RNA” by grafting portions of trained planaria onto the bodies of untrained planaria. His aim was to transfer RNA from one worm to another but, encountering difficulty getting the grafts to stick, he turned to a “more spectacular type of tissue transfer, that of ‘cannibalistic ingestion.’” Planaria, accommodatingly, are cannibals, so McConnell merely had to blend trained worms and feed them to their untrained peers. (Planaria lack the acids and enzymes that would completely break down food, so he hoped that some RNA might be integrated into the consuming worms.) 

Shockingly, McConnell reported that cannibalizing trained worms induced learning in untrained planaria. In other experiments, he trained planaria to run through mazes and even developed a technique for extracting RNA from trained worms in order to inject it into untrained worms in an effort to transmit memories from one animal to another. Eventually, after his retirement in 1988, McConnell faded from view, and his work was relegated to the sidebars of textbooks as a curious but cautionary tale. Many scientists simply assumed that invertebrates like planaria couldn’t be trained, making the dismissal of McConnell’s work easy. McConnell also published some of his studies in his own journal, The Worm Runner’s Digest, alongside sci-fi humor and cartoons. As a result, there wasn’t a lot of interest in attempting to replicate his findings.

Nonetheless, McConnell’s work has recently experienced a sort of renaissance, taken up by innovative scientists like Michael Levin, a biologist at Tufts University specializing in limb regeneration, who has reproduced modernized and automated versions of his planarian maze-training experiments. The planarian itself has enjoyed a newfound popularity, too, after Levin cut the tail off a worm and shot a bioelectric current through the incision, provoking the worm to regrow another head in place of its tail (garnering Levin the endearing moniker of “young Frankenstein”). Levin also sent 15 worm pieces into space, with one returning, strangely enough, with two heads (“remarkably,” Levin and his colleagues wrote, “amputating this double-headed worm again, in plain water, resulted again in the double-headed phenotype.”) 

David Glanzman, a neurobiologist at the University of California, Los Angeles, has another promising research program that recently struck a chord reminiscent of McConnell’s memory experiments—although, instead of planaria, Glanzman’s lab works mostly with aplysia, the darling mollusk of neuroscience on account of its relatively simple nervous system. (Also known as “sea hares,” aplysia are giant, inky sea slugs that swim with undulating, ruffled wings.)

In 2015, Glanzman was testing the textbook theory on memory, which holds that memories are stored in synapses, the connective junctions between neurons. His team, attempting to create and erase a memory in aplysia, periodically delivered mild electric shocks to train the mollusk to prolong a reflex, one where it withdraws, upon touch, its siphon, a little breathing tube between the gill and the tail. After training, his lab witnessed new synaptic growth between the sensory neuron that felt touch and the motor neuron that triggered the siphon withdrawal reflex. Developing after the training, the increased connectivity between those neurons seemed to corroborate the theory that memories are stored in synaptic connections. Glanzman’s team tried to erase the memory of the training by dismantling the synaptic connections between the neurons and, sure enough, the snails subsequently behaved as if they’d lost the memory, further corroborating the synaptic memory theory. After Glanzman’s team administered a “reminder” shock to the snails, the researchers were surprised to quickly notice different, newer synaptic connections growing between the neurons. The snails then behaved, once again, as if they remembered the sensitizing training they seemed to have previously forgotten. 

If the memory persisted through such major synaptic change, where the synaptic connections that emerged through training had disappeared and completely different, newer connections had taken their place, then maybe, Glanzman thought, memories are not really stored in synapses after all. The experiment seems like something out of Eternal Sunshine of the Spotless Mind, a movie in which ex-lovers trying to forget each other undergo a questionable procedure that deletes the memory of a person, but evidently not to the point beyond recall. The lovers both hide a plan deep within their minds to meet in Montauk in the end. The movie suggests, in a way, that memories are never completely lost, that it always remains possible to go back, even to people and places that seem long forgotten.

But if memories aren’t stored in synaptic connections, where are they stored instead? Glanzman’s unpopular hypothesis was that they might reside in the nucleus of the neuron cell, where DNA and RNA sequences compose instructions for life processes. DNA sequences are fixed and unchanging, so most of an organism’s adaptability comes from supple epigenetic mechanisms, processes that regulate gene expression in response to environmental cues or pressures, which sometimes involve RNA. If DNA is printed sheet music, RNA-induced epigenetic mechanisms are like improvisational cuts and arrangements that might conduct learning and memory.

Perhaps memories reside in epigenetic changes induced by RNA, that improv molecule that scores protein-based adaptations of life. Glanzman’s team went back to their aplysia and trained them over two days to prolong their siphon-withdrawal reflex. They then dissected their nervous systems, extracting RNA involved in forming the memory of their training, and injected it into untrained aplysia, which were tested for learning a day later. Glanzman’s team found that the RNA from trained donors induced learning, while the RNA from untrained donors had no effect. They had transferred a memory, vaguely but surely, from one animal to another, and they had strong evidence that RNA was the memory-transferring agent.

Glanzman now believes that synapses are necessary for the activation of a memory, but that the memory is encoded in the nucleus of the neuron through epigenetic changes. “It’s like a pianist without hands,” Glanzman says. “He may know how to play Chopin, but he’d need hands to exercise the memory.” 

The work of Douglas Blackiston, an Allen Discovery Center scientist at Tufts University, who has studied memory in insects, paints a similar picture. He wanted to know if a butterfly could remember something about its life as a caterpillar, so he exposed caterpillars to the scent of ethyl acetate followed by a mild electric shock. After acquiring an aversion to ethyl acetate, the caterpillars pupated and, after emerging as adult butterflies several weeks later, were tested for memory of their aversive training. Surprisingly, the adult butterflies remembered—but how? The entire caterpillar becomes a cytoplasmic soup before it metamorphoses into a butterfly. “The remodeling is catastrophic,” Blackiston says. “After all, we’re moving from a crawling machine to a flying machine. Not only the body but the entire brain has to be rewired.”

It’s hard to study exactly what goes on during pupation in vivo, but there’s a subset of caterpillar neurons that may persist in what are called “mushroom bodies,” a pair of structures involved in olfaction that many insects have located near their antennae. In other words, some structure remains. “It’s not soup,” Blackiston says. “Well, maybe it’s soup, but it’s chunky.” There’s near complete pruning of neurons during pupation, and the few neurons that remain become disconnected from other neurons, dissolving the synaptic connections between them in the process, until they reconnect with other neurons during the remodeling into the butterfly brain. Like Glanzman, Blackiston employs a hand analogy: “It’s like a small group of neurons were holding hands, but then let go and moved around, finally reconnecting with different neurons in the new brain.” If the memory was stored anywhere, Blackiston suspects it was stored in the subset of neurons located in the mushroom bodies, the only known carryover material from the caterpillar to the butterfly. 

In the end, despite its whimsical caricature of the science of memory, Eternal Sunshine may have stumbled on a correct premise. Glanzman and Blackiston believe their experiments harbor hopeful news for Alzheimer’s patients: it might be possible to repair deteriorated neurons that could, at least theoretically, find their way back to lost memories, perhaps with the guidance of appropriate RNA.

Marco Altamirano is a writer based in New Orleans and the author of Time, Technology, and Environment: An Essay on the Philosophy of Nature.

Blindsight: a strange neurological condition that could help explain consciousness

July 2, 2020 11.31am BST

Author

  1. Henry Taylor Birmingham Fellow in Philosophy, University of Birmingham

Disclosure statement

Henry Taylor previously received funding from The Leverhulme Trust and Isaac Newton Trust, but they do not stand to benefit from publication of this article.

Partners

University of Birmingham

University of Birmingham provides funding as a founding partner of The Conversation UK.

The Conversation UK receives funding from these organisations

View the full list

CC BY ND. We believe in the free flow of information
Republish our articles for free, online or in print, under Creative Commons licence.

Imagine being completely blind but still being able to see. Does that sound impossible? Well, it happens. A few years ago, a man (let’s call him Barry) suffered two strokes in quick succession. As a result, Barry was completely blind, and he walked with a stick.

One day, some psychologists placed Barry in a corridor full of obstacles like boxes and chairs. They took away his walking stick and told him to walk down the corridor. The result of this simple experiment would prove dramatic for our understanding of consciousness. Barry was able to navigate around the obstacles without tripping over a single one.

Barry has blindsight, an extremely rare condition that is as paradoxical as it sounds. People with blindsight consistently deny awareness of items in front of them, but they are capable of amazing feats, which demonstrate that, in some sense, they must be able to see them.

In another case, a man with blindsight (let’s call him Rick) was put in front of a screen and told to guess (from several options) what object was on the screen. Rick insisted that he didn’t know what was there and that he was just guessing, yet he was guessing with over 90% accuracy.
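
Accuracy like that is wildly beyond chance. The sketch below uses hypothetical numbers (four options per trial, 50 trials; these are not figures from the actual study) simply to show how unlikely 90% correct would be if Rick really were guessing at random.

from math import comb

def prob_at_least(n_trials: int, n_correct: int, p_chance: float) -> float:
    """Probability of getting at least n_correct hits in n_trials by pure chance."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# Hypothetical task: 4 options per trial (chance = 0.25), 50 trials, 45 correct (90%).
print(f"Chance of >= 90% accuracy by guessing: {prob_at_least(50, 45, 0.25):.1e}")

With these assumed numbers the probability is vanishingly small, which is why such performance is taken as evidence of genuine, if unconscious, visual processing.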

Into the brain

Blindsight results from damage to an area of the brain called the primary visual cortex. This is one of the areas, as you might have guessed, responsible for vision. Damage to primary visual cortex can result in blindness – sometimes total, sometimes partial.

So how does blindsight work? The eyes receive light and convert it into information that is then passed into the brain. This information then travels through a series of pathways through the brain to eventually end up at the primary visual cortex. For people with blindsight, this area is damaged and cannot properly process the information, so the information never makes it to conscious awareness. But the information is still processed by other areas of the visual system that are intact, enabling people with blindsight to carry out the kind of tasks that we see in the case of Barry and Rick.

Some blind people appear to be able to ‘see’. Akemaster/Shutterstock

Blindsight serves as a particularly striking example of a general phenomenon, which is just how much goes on in the brain below the surface of consciousness. This applies just as much to people without blindsight as people with it. Studies have shown that naked pictures of attractive people can draw our attention, even when we are completely unaware of them. Other studies have demonstrated that we can correctly judge the colour of an object without any conscious awareness of it.

Blindsight debunked?

Blindsight has generated a lot of controversy. Some philosophers and psychologists have argued that people with blindsight might be conscious of what is in front of them after all, albeit in a vague and hard-to-describe way.

This suggestion presents a difficulty, because ascertaining whether someone is conscious of a particular thing is a complicated and highly delicate task. There is no “test” for consciousness. You can’t put a probe or a monitor next to someone’s head to test whether they are conscious of something – it’s a totally private experience.

We can, of course, ask them. But interpreting what people say about their own experiences can be a thorny task. Their reports sometimes seem to indicate that they have no consciousness at all of the objects in front of them (Rick once insisted that he did not believe that there really were any objects there). Other individuals with blindsight report feeling “visual pin-pricks” or “dark shadows” indicating the tantalising possibility that they did have some conscious awareness left over.

The boundaries of consciousness

So, what does blindsight tell us about consciousness? Exactly how you answer this question will heavily depend on which interpretation you accept. Do you think that those who have blindsight are in some sense conscious of what is out there or not?

The visual cortex. Geyer S, Weiss M, Reimann K, Lohmann G and Turner R/wikipedia, CC BY-SA

If they’re not, then blindsight provides an exciting tool that we can use to work out exactly what consciousness is for. By looking at what the brain can do without consciousness, we can try to work out which tasks ultimately require consciousness. From that, we may be able to work out what the evolutionary function of consciousness is, which is something that we are still relatively in the dark about.

On the other hand, if we could prove that people with blindsight are conscious of what is in front of them, this raises no less interesting and exciting questions about the limits of consciousness. What is their consciousness actually like? How does it differ from more familiar kinds of consciousness? And precisely where in the brain does consciousness begin and end? If they are conscious, despite damage to their visual cortex, what does that tell us about the role of this brain area in generating consciousness?

In my research, I am interested in the way that blindsight reveals the fuzzy boundaries at the edges of vision and consciousness. In cases like blindsight, it becomes increasingly unclear whether our normal concepts such as “perception”, “consciousness” and “seeing” are up to the task of adequately describing and explaining what is really going on. My goal is to develop more nuanced views of perception and consciousness that can help us understand their distinctly fuzzy edges.

To ultimately understand these cases, we will need to employ careful philosophical reflection on the concepts we use and the assumptions we make, just as much as we will need a thorough scientific investigation of the mechanics of the mind.


Scientists say most likely number of contactable alien civilisations is 36

New calculations come up with an estimate for worlds capable of communicating with others.

The Guardian

  • Nicola Davis

We’re listening … but is anything out there? Photo by dszc / Getty Images.

They may not be little green men. They may not arrive in a vast spaceship. But according to new calculations there could be more than 30 intelligent civilisations in our galaxy today capable of communicating with others.

Experts say the work not only offers insights into the chances of life beyond Earth but could shed light on our own future and place in the cosmos.

“I think it is extremely important and exciting because for the first time we really have an estimate for this number of active intelligent, communicating civilisations that we potentially could contact and find out there is other life in the universe – something that has been a question for thousands of years and is still not answered,” said Christopher Conselice, a professor of astrophysics at the University of Nottingham and a co-author of the research.

In 1961 the astronomer Frank Drake proposed what became known as the Drake equation, setting out seven factors that would need to be known to come up with an estimate for the number of intelligent civilisations out there. These factors ranged from the average number of stars that form each year in the galaxy through to the timespan over which a civilisation would be expected to be sending out detectable signals.

But few of the factors are measurable. “Drake equation estimates have ranged from zero to a few billion [civilisations] – it is more like a tool for thinking about questions rather than something that has actually been solved,” said Conselice.
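
For readers who want the arithmetic, the Drake equation is simply a product of seven factors: N = R* × fp × ne × fl × fi × fc × L. The sketch below uses placeholder values chosen only for illustration; they are not the assumptions used in the Nottingham paper.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Drake equation: expected number of detectable civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

# Placeholder values for illustration only (not the paper's figures):
n = drake(
    r_star=1.5,           # stars formed per year in the Milky Way
    f_p=1.0,              # fraction of stars with planets
    n_e=0.2,              # habitable planets per planetary system
    f_l=1.0,              # fraction of habitable planets that develop life
    f_i=0.1,              # fraction of those where intelligence emerges
    f_c=0.1,              # fraction that produce detectable signals
    lifetime_years=1000,  # years a civilisation keeps signalling
)
print(f"Communicating civilisations with these guesses: {n:.0f}")  # 3

Because every factor multiplies through, shrinking any single guess by a factor of ten shrinks the answer by the same factor, which is why published estimates span everything from zero to billions.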

Now Conselice and colleagues report in the Astrophysical Journal how they refined the equation with new data and assumptions to come up with their estimates.

“Basically, we made the assumption that intelligent life would form on other [Earth-like] planets like it has on Earth, so within a few billion years life would automatically form as a natural part of evolution,” said Conselice.

The assumption, known as the Astrobiological Copernican Principle, is fair as everything from chemical reactions to star formation is known to occur if the conditions are right, he said. “[If intelligent life forms] in a scientific way, not just a random way or just a very unique way, then you would expect at least this many civilisations within our galaxy,” he said.

He added that, while it is a speculative theory, he believes alien life would have similarities in appearance to life on Earth. “We wouldn’t be super shocked by seeing them,” he said.

Under the strictest set of assumptions – where, as on Earth, life forms between 4.5bn and 5.5bn years after star formation – there are likely between four and 211 civilisations in the Milky Way today capable of communicating with others, with 36 the most likely figure. But Conselice noted that this figure is conservative, not least as it is based on how long our own civilisation has been sending out signals into space – a period of just 100 years so far.

The team add that our civilisation would need to survive at least another 6,120 years for two-way communication. “They would be quite far away … 17,000 light years is our calculation for the closest one,” said Conselice. “If we do find things closer … then that would be a good indication that the lifespan of [communicating] civilisations is much longer than a hundred or a few hundred years, that an intelligent civilisation can last for thousands or millions of years. The more we find nearby, the better it looks for the long-term survival of our own civilisation.”

Dr Oliver Shorttle, an expert in extrasolar planets at the University of Cambridge who was not involved in the research, said several as yet poorly understood factors needed to be unpicked to make such estimates, including how life on Earth began and how many Earth-like planets considered habitable could truly support life.

Dr Patricia Sanchez-Baracaldo, an expert on how Earth became habitable, from the University of Bristol, was more upbeat, despite emphasising that many developments were needed on Earth for conditions for complex life to exist, including photosynthesis. “But, yes if we evolved in this planet, it is possible that intelligent life evolved in another part of the universe,” she said.

Prof Andrew Coates, of the Mullard Space Science Laboratory at University College London, said the assumptions made by Conselice and colleagues were reasonable, but the quest to find life was likely to take place closer to home for now.

“[The new estimate] is an interesting result, but one which it will be impossible to test using current techniques,” he said. “In the meantime, research on whether we are alone in the universe will include visiting likely objects within our own solar system, for example with our Rosalind Franklin Exomars 2022 rover to Mars, and future missions to Europa, Enceladus and Titan [moons of Jupiter and Saturn]. It’s a fascinating time in the search for life elsewhere.”

Nicola Davis writes about science, health and environment for the Guardian and Observer and was commissioning editor for Observer Tech Monthly. Previously she worked for the Times and other publications. She has a MChem and DPhil in Organic Chemistry from the University of Oxford. Nicola also presents the Science Weekly podcast.


‘Miles-wide anomaly’ over Texas sparks concerns HAARP weather manipulation has BEGUN

BIZARRE footage has emerged that proves the US government is testing weather manipulation technology, according to wild claims online.

The clip, captured in Texas, US, shows the moment radar was completely blotted out by an unknown source.

Another video shows a green blob forming above Sugar Land, quickly growing in size in a circular formation.

According to Travis Herzog, a meteorologist at ABC News, the phenomenon was caused by a flock of birds filling the sky.

But conspiracy theorist Tyler Glockner, who runs YouTube channel secureteam10, disagrees.

He posted a video yesterday speculating it could be something more sinister which was accidentally exposed by the news channel.

He also pointed out the number of birds needed to cause such an event would have been seen or recorded by someone.

And his video has now racked up more than 350,000 hits in less than 48 hours.

“I am willing to bet there is a power station near the centre of that burst. Some kind of HAARP technology,” one viewer suggested.

Another added: “This is HAARP and some kind of weather modification/manipulation technology.”

And a third simply claimed: “Scary weather manipulation in progress.”

The High-Frequency Active Auroral Research Programme was initiated as a research project between the US Air Force, Navy, University of Alaska Fairbanks and the Defense Advanced Research Projects Agency (DARPA).

Many conspiracists believe the US government is already using the HAARP programme to control weather occurrences through the use of chemtrailing.

Over the years, HAARP has been blamed for generating natural catastrophes such as thunderstorms and power loss as well as strange cloud formations.

But it was actually designed and built by BAE Advanced Technologies to help analyse the ionosphere and investigate the potential for developing enhanced technologies.

Climate change expert Janos Pasztor previously revealed to Daily Star Online how this technology could lead to weaponisation.

The following extract is from U.S. research on weather modification dating back to 1957. Posted July 5th 2020

COMMENT

From: Andrea Psoras-QEDI [apsoras@qedinternational.com]
Sent: Monday, May 05, 2008 3:08 PM
To: secretary
Subject: CFTC Requests Public Input on Possible Regulation of “Event Contracts”

Commodity Futures Trading Commission
Three Lafayette Centre, 1155 21st Street, NW, Washington, DC 20581
202-418-5000; fax 202-418-5521; TTY 202-418-5514
questions@cftc.gov

Dear Commissioners and Secretary:

Not everything is a commodity, nor should something that is typically covered by some sort of property and casualty insurance suddenly become exchange tradable. Insurance companies for a number of years have provided compensation of some sort for random, but periodic events. Where the insurance industry wants to off-load their risk at the expense of other commodities markets participants, contributes to sorts of moral hazards – which I vigorously oppose. If where there is ‘interest’ to develop these sorts of risk event instruments, to me it seems an admission that the insurance sector is perhaps marginal or worse, incompetent or too greedy to determine how to offer insurance for events presumably produced by nature. Now where there are the weather and earth shaking technologies, or some circles call these weather and electro-magnetic weapons, used insidiously unfortunately by our military, our intelligence apparatus, and perhaps our military contractors for purposes contrary to that to which our public servants take their oath of office to the Constitution,

I suggest prohibiting the use of that technology rather than leaving someone else holding the bag in the event of destruction produced by, and where so-called ‘natural’ events were produced by, military contractor technology in the guise of ‘mother nature’. * Consider that Rep. Dennis Kucinich as well as former Senator John Glenn attempted to have our Congress prohibit the use of space-based weapons. That class of weapons includes the ‘weather weapons’. http://www.globalresearch.ca/articles/CH0409F.html as well as other articles about this on the Global Research website. Respectfully, Andrea Psoras

“CFTC Requests Public Input on Possible Regulation of “Event Contracts”

Washington, DC – The Commodity Futures Trading Commission (CFTC) is asking for public comment on the appropriate regulatory treatment of financial agreements offered by markets commonly referred to as event, prediction, or information markets.

During the past several years, the CFTC has received numerous requests for guidance involving the trading of event contracts. These contracts typically involve financial agreements that are linked to events or measurable outcomes and often serve as information collection vehicles. The contracts are based on a broad spectrum of events, such as the results of presidential elections, world population levels, or economic measures. “Event markets are rapidly evolving, and growing, presenting a host of difficult policy and legal questions including: What public purpose is served in the oversight of these markets and what differentiates these markets from pure gambling outside the CFTC’s jurisdiction?” said CFTC Acting Chairman Walt Lukken.

“The CFTC is evaluating how these markets should be regulated with the proper protections in place and I encourage members of the public to provide their views.” In response to requests for guidance, and to promote regulatory certainty, the CFTC has commenced a comprehensive review of the Commodity Exchange Act’s applicability to event contracts and markets.

The CFTC is issuing a Concept Release to solicit the expertise and opinions of all interested parties, including CFTC registrants, legal practitioners, economists, state and federal regulatory authorities, academics, and event market participants. The Concept Release will be published in the Federal Register shortly; comments will be accepted for 60 days after publication in the Federal Register.” Comments may also be submitted electronically to secretary@cftc.gov. All comments received will be posted on the CFTC’s website.

* Weather as a Force Multiplier: Owning the Weather in 2025. A Research Paper Presented To Air Force 2025, August 1996.

Below are highlights contained within the actual report. Please remember that this research report was issued in 1996 (8 years ago) and that much of what was discussed as being in preliminary stages back then is now a reality.

In the United States, weather-modification will likely become a part of national security policy with both domestic and international applications. Our government will pursue such a policy, depending on its interests, at various levels. In this paper we show that appropriate application of weather-modification can provide battlespace dominance to a degree never before imagined. In the future, such operations will enhance air and space superiority and provide new options for battlespace shaping and battlespace awareness. “The technology is there, waiting for us to pull it all together” [General Gordon R. Sullivan, “Moving into the 21st Century: America’s Army and Modernization,” Military Review (July 1993), quoted in Mary Ann Seagraves and Richard Szymber, “Weather a Force Multiplier,” Military Review, November/December 1995, 75]. A global, precise, real-time, robust, systematic weather-modification capability would provide war-fighting CINCs [an acronym meaning “Commander in Chief” of a unified command] with a powerful force multiplier to achieve military objectives.

Since weather will be common to all possible futures, a weather-modification capability would be universally applicable and have utility across the entire spectrum of conflict. The capability of influencing the weather even on a small scale could change it from a force degrader to a force multiplier.

In 1957, the president’s advisory committee on weather control explicitly recognized the military potential of weather-modification, warning in their report that it could become a more important weapon than the atom bomb [William B. Meyer, “The Life and Times of US Weather: What Can We Do About It?” American Heritage 37, no. 4 (June/July 1986), 48]. Today [since 1969], weather-modification is the alteration of weather phenomena over a limited area for a limited period of time [Herbert S. Appleman, An Introduction to Weather-modification (Scott AFB, Ill.: Air Weather Service/MAC, September 1969), 1]. In the broadest sense, weather-modification can be divided into two major categories: suppression and intensification of weather patterns. In extreme cases, it might involve the creation of completely new weather patterns, attenuation or control of severe storms, or even alteration of global climate on a far-reaching and/or long-lasting scale.

Extreme and controversial examples of weather modification-creation of made-to-order weather, large-scale climate modification, creation and/or control (or “steering”) of severe storms, etc.-were researched as part of this study … the weather-modification applications proposed in this report range from technically proven to potentially feasible. Applying Weather-modification to Military Operations How will the military, in general, and the USAF, in particular, manage and employ a weather-modification capability?

We envision this will be done by the weather force support element (WFSE), whose primary mission would be to support the war-fighting CINCs with weather-modification options, in addition to current forecasting support. Although the WFSE could operate anywhere as long as it has access to the GWN and the system components already discussed, it will more than likely be a component within the AOC or its 2025-equivalent. With the CINC’s intent as guidance, the WFSE formulates weather-modification options using information provided by the GWN, local weather data network, and weather-modification forecast model.

The options include range of effect, probability of success, resources to be expended, the enemy’s vulnerability, and risks involved. The CINC chooses an effect based on these inputs, and the WFSE then implements the chosen course, selecting the right modification tools and employing them to achieve the desired effect. Sensors detect the change and feed data on the new weather pattern to the modeling system which updates its forecast accordingly. The WFSE checks the effectiveness of its efforts by pulling down the updated current conditions and new forecast(s) from the GWN and local weather data network, and plans follow-on missions as needed. This concept is illustrated in figure 3-2.

Two key technologies are necessary to meld an integrated, comprehensive, responsive, precise, and effective weather-modification system. Advances in the science of chaos are critical to this endeavor. Also key to the feasibility of such a system is the ability to model the extremely complex nonlinear system of global weather in ways that can accurately predict the outcome of changes in the influencing variables. Researchers have already successfully controlled single variable nonlinear systems in the lab and hypothesize that current mathematical techniques and computer capacity could handle systems with up to five variables.

Advances in these two areas would make it feasible to affect regional weather patterns by making small, continuous nudges to one or more influencing factors. Conceivably, with enough lead time and the right conditions, you could get “made-to-order” weather [William Brown, “Mathematicians Learn How to Tame Chaos,” New Scientist (30 May 1992): 16]. The total weather-modification process would be a real-time loop of continuous, appropriate, measured interventions, and feedback capable of producing desired weather behavior. The essential ingredient of the weather-modification system is the set of intervention techniques used to modify the weather.
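
The “small, continuous nudges” idea is a standard result from chaos control: in a chaotic system, tiny and well-timed parameter perturbations can steer the state toward a chosen target. The toy sketch below stabilises the chaotic logistic map in this way; it is purely a mathematical illustration of that principle, not a method taken from the report.

def logistic(x, r):
    """One step of the logistic map, a classic single-variable chaotic system."""
    return r * x * (1 - x)

r0 = 3.9                      # nominal parameter, chosen in the chaotic regime
x_star = 1 - 1 / r0           # unstable fixed point we want to hold the system at
dfdx = r0 * (1 - 2 * x_star)  # local slope at the fixed point (|slope| > 1: unstable)
dfdr = x_star * (1 - x_star)  # sensitivity of the map to small parameter changes

x = 0.5
for _ in range(500):
    nudge = 0.0
    if abs(x - x_star) < 0.01:                 # act only when the orbit wanders close
        nudge = -(dfdx / dfdr) * (x - x_star)  # linear correction aimed at the fixed point
        nudge = max(-0.1, min(0.1, nudge))     # keep each intervention small
    x = logistic(x, r0 + nudge)

print(f"Final state {x:.4f}, target {x_star:.4f}")  # the tiny nudges pin the orbit near the target

Without the nudges the orbit never settles; with them, parameter perturbations of a few percent are enough to hold it at an otherwise unstable point, which is the flavour of control the report gestures at for weather systems.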

The number of specific intervention methodologies is limited only by the imagination, but with few exceptions they involve infusing either energy or chemicals into the meteorological process in the right way, at the right place and time. The intervention could be designed to modify the weather in a number of ways, such as influencing clouds and precipitation, storm intensity, climate, space, or fog.

PRECIPITATION

“… significant beneficial influences can be derived through judicious exploitation of the solar absorption potential of carbon black dust” [William M. Gray et al., “Weather-modification by Carbon Dust Absorption of Solar Energy,” Journal of Applied Meteorology 15 (April 1976): 355]. The study ultimately found that this technology could be used to enhance rainfall on the mesoscale, generate cirrus clouds, and enhance cumulonimbus (thunderstorm) clouds in otherwise dry areas… If we are fortunate enough to have a fairly large body of water available upwind from the targeted battlefield, carbon dust could be placed in the atmosphere over that water. Assuming the dynamics are supportive in the atmosphere, the rising saturated air will eventually form clouds and rainshowers downwind over the land. Numerous dispersal techniques [of carbon dust] have already been studied, but the most convenient, safe, and cost-effective method discussed is the use of afterburner-type jet engines to generate carbon particles while flying through the targeted air.

This method is based on injection of liquid hydrocarbon fuel into the afterburner’s combustion gases [this explains why contrails have now become chemtrails]. To date, much work has been done on UAVs [Unmanned Aerial Vehicles] which can closely (if not completely) match the capabilities of piloted aircraft. If this UAV technology were combined with stealth and carbon dust technologies, the result could be a UAV aircraft invisible to radar while en route to the targeted area, which could spontaneously create carbon dust in any location. If clouds were seeded (using chemical nuclei similar to those used today or perhaps a more effective agent discovered through continued research) before their downwind arrival to a desired location, the result could be a suppression of precipitation. In other words, precipitation could be “forced” to fall before its arrival in the desired territory, thereby making the desired territory “dry.”

FOG

Field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility. Smart materials based on nanotechnology are currently being developed with gigaops computer capability at their core.

They could adjust their size to optimal dimensions for a given fog seeding situation and even make adjustments throughout the process. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. They will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and can also change their temperature and polarity to improve their seeding effects [J. Storrs Hall, “Overview of Nanotechnology,” adapted from papers by Ralph C. Merkle and K. Eric Drexler, Rutgers University, November 1995]. As mentioned above, UAVs could be used to deliver and distribute these smart materials. Recent army research lab experiments have demonstrated the feasibility of generating fog.

They used commercial equipment to generate thick fog in an area 100 meters long. Further study has shown fogs to be effective at blocking much of the UV/IR/visible spectrum, effectively masking emitters of such radiation from IR weapons [Robert A. Sutherland, “Results of Man-Made Fog Experiment,” Proceedings of the 1991 Battlefield Atmospherics Conference (Fort Bliss, Tex.: Hinman Hall, 3-6 December 1991)].

STORMS

The damage caused by storms is indeed horrendous. For instance, a tropical storm has an energy equal to 10,000 one-megaton hydrogen bombs [Louis J. Battan, Harvesting the Clouds (Garden City, N.Y.: Doubleday & Co., 1960), 120]. At any instant there are approximately 2,000 thunderstorms taking place. In fact 45,000 thunderstorms, which contain heavy rain, hail, microbursts, wind shear, and lightning, form daily [Gene S. Stuart, “Whirlwinds and Thunderbolts,” Nature on the Rampage (Washington, D.C.: National Geographic Society, 1986), 130]. Weather-modification technologies might involve techniques that would increase latent heat release in the atmosphere, provide additional water vapor for cloud cell development, and provide additional surface and lower atmospheric heating to increase atmospheric instability.

The focus of the weather-modification effort would be to provide additional “conditions” that would make the atmosphere unstable enough to generate cloud and eventually storm cell development. One area of storm research that would significantly benefit military operations is lightning modification … but some offensive military benefit could be obtained by doing research on increasing the potential and intensity of lightning. Possible mechanisms to investigate would be ways to modify the electropotential characteristics over certain targets to induce lightning strikes on the desired targets as the storm passes over their location. In summary, the ability to modify battlespace weather through storm cell triggering or enhancement would allow us to exploit the technological “weather” advances.

SPACE WEATHER-MODIFICATION

This section discusses opportunities for control and modification of the ionosphere and near-space environment for force enhancement. A number of methods have been explored or proposed to modify the ionosphere, including injection of chemical vapors and heating or charging via electromagnetic radiation or particle beams (such as ions, neutral particles, x-rays, MeV particles, and energetic electrons) [Peter M. Banks, “Overview of Ionospheric Modification from Space Platforms,” in Ionospheric Modification and Its Potential to Enhance or Degrade the Performance of Military Systems (AGARD Conference Proceedings 485, October 1990), 19-1].

It is important to note that many techniques to modify the upper atmosphere have been successfully demonstrated experimentally. Ground-based modification techniques employed by the FSU include vertical HF heating, oblique HF heating, microwave heating, and magnetospheric modification [Capt Mike Johnson, Upper Atmospheric Research and Modification-Former Soviet Union (U), DST-18205-475-92 (Foreign Aerospace Science and Technology Center, AF Intelligence Command, 24 September 1992)]. Creation of an artificial uniform ionosphere was first proposed by Soviet researcher A. V. Gurevich in the mid-1970s. An artificial ionospheric mirror (AIM) would serve as a precise mirror for electromagnetic [EM] radiation of a selected frequency or a range of frequencies.

[Figure: Artificial Ionospheric Mirrors, showing a ground-based AIM generator and transmitting/receiving stations.]

While most weather-modification efforts rely on the existence of certain preexisting conditions, it may be possible to produce some weather effects artificially, regardless of preexisting conditions. For instance, virtual weather could be created by influencing the weather information received by an end user.

Nanotechnology also offers possibilities for creating simulated weather. A cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could provide tremendous capability. Interconnected, atmospherically buoyant, and having navigation capability in three dimensions, such clouds could be designed to have a wide range of properties … Even if power levels achieved were insufficient to be an effective strike weapon [if power levels WERE sufficient, they would be an effective strike weapon], the potential for psychological operations in many situations could be fantastic. One major advantage of using simulated weather to achieve a desired effect is that unlike other approaches, it makes what are otherwise the results of deliberate actions appear to be the consequences of natural weather phenomena. In addition, it is potentially relatively inexpensive to do. According to J. Storrs Hall, a …

The Electronic Frontier Foundation, an advocate for freedom of information on the Internet, has condemned Santorum’s bill. “It is a terrible precedent for information policy,” said staff member Ren Bucholz. “If the rule is, data provided by taxpayer money can’t be provided to the public but through a private entity, we won’t have a very useful public agency.”

Andrea Psoras, Senior Vice President, QED International Associates, Inc., US Agent for Rapid Ratings International, 708 Third Avenue, 23rd Fl, New York, NY 10017, (212) 953-40580, apsoras@gmail.com, (646) 709-9629c, apsoras@qedinternational.com, http://www.qedinternational.com

  • 07-13-19

Apollo 11 really landed on the Moon—and here’s how you can be sure (sorry, conspiracy nuts)

We went to the Moon. Here’s all the proof you’ll ever need.

By Charles Fishman | 7 minute read

This is the 43rd in an exclusive series of 50 articles, one published each day until July 20, exploring the 50th anniversary of the first-ever Moon landing. You can check out 50 Days to the Moon here every day.

The United States sent astronauts to the Moon, they landed, they walked around, they drove around, they deployed lots of instruments, they packed up nearly half a ton of Moon rocks, and they flew home.

No silly conspiracy was involved.

There were no Hollywood movie sets.

Anybody who writes about Apollo and talks about Apollo is going to be asked how we actually know that we went to the Moon.

Not that the smart person asking the question has any doubts, mind you, but how do we know we went, anyway?

It’s a little like asking how we know there was a Revolutionary War. Where’s the evidence? Maybe it’s just made up by the current government to force us to think about America in a particular way.

How do we know there was a Titanic that sank?

And by the way, when I go to the battlefields at Gettysburg—or at Normandy, for that matter—they don’t look much like battlefields to me. Can you prove we fought a Civil War? World War II?

In the case of Apollo, in the case of the race to the Moon, there is a perfect reply.

The race to the Moon in the 1960s was, in fact, an actual race.

The success of the Soviet space program—from Sputnik to Strelka and Belka to Yuri Gagarin—was the reason for Apollo. John Kennedy launched America to the Moon precisely to beat the Russians to the Moon.

When Kennedy was frustrated with the fact that the Soviets were first to achieve every important milestone in space, he asked Vice President Lyndon Johnson to figure it out—fast. The opening question of JFK’s memo to LBJ:

“Do we have a chance of beating the Soviets by putting a laboratory in space, or by a trip around the Moon, or by a rocket to land on the Moon, or by a rocket to go to the Moon and back with a man. Is there any other space program which promises dramatic results in which we could win?”

Win. Kennedy wanted to know how to beat the Soviets—how to win in space.

That memo was written a month before Kennedy’s dramatic “go to the Moon” speech. The race to the Moon he launched would last right up to the moment, almost 100 months later, when Apollo 11 would land on the Moon.

The race would shape the American and Soviet space programs in subtle and also dramatic ways.

Apollo 8 was the first U.S. mission that went to the Moon: The Apollo capsule and the service module, with Frank Borman, Bill Anders, and Jim Lovell, flew to the Moon at Christmastime in 1968, but without a lunar module. The lunar modules were running behind, and there wasn’t one ready for the flight.

Apollo 8 represented a furious rejuggling of the NASA flight schedule to accommodate the lack of a lunar module. The idea was simple: Let’s get Americans to the Moon quick, even if they weren’t ready to land on the Moon. Let’s “lasso the Moon” before the Soviets do.

At the moment when the mission was conceived and the schedule redone to accommodate a different kind of Apollo 8, in late summer 1968, NASA officials were worried that the Russians might somehow mount exactly the same kind of mission: Put cosmonauts in a capsule and send them to orbit the Moon, without landing. Then the Soviets would have made it to the Moon first.

Apollo 8 was designed to confound that, and it did.

In early December 1968, in fact, the rivalry remained alive enough that Time magazine did a cover story on it. “Race for the Moon” was the headline, and the cover was an illustration of an American astronaut and a Soviet cosmonaut, in spacesuits, leaping for the surface of the Moon.

Seven months later, when Apollo 11, with Michael Collins, Neil Armstrong, and Buzz Aldrin aboard, entered orbit around the Moon on July 19, 1969, there was a Soviet spaceship there to meet them. It was Luna 15, and it had been launched a few days before Apollo 11. Its goal: Land on the Moon, scoop up Moon rocks and dirt, and then dash back to a landing in the Soviet Union before Collins, Aldrin, and Armstrong could return with their own Moon rocks.

If that had happened, the Soviets would at least have been able to claim that they had gotten Moon rocks back to Earth first (and hadn’t needed people to do it).

So put aside for a moment the pure ridiculousness of a Moon landing conspiracy that somehow doesn’t leak out. More than 410,000 Americans worked on Apollo, on behalf of 20,000 companies. Was their work fake? Were they all in on the conspiracy? And then, also, all their family members—more than 1 million people—not one of whom ever whispered a word of the conspiracy?

What of the reporters? Hundreds of reporters covering space, writing stories not just of the dramatic moments, but about all the local companies making space technology, from California to Delaware.

Put aside as well the thousands of hours of audio recordings—between spacecraft and mission control; in mission control, where dozens of controllers talked to each other; in the spacecraft themselves, where there were separate recordings of the astronauts just talking to each other in space. There were 2,502 hours of Apollo spaceflight, more than 100 days. It’s an astonishing undertaking not only to script all that conversation, but then to get people to enact it with authenticity, urgency, and emotion. You can now listen to all of it online, and it would take you many years to do so.

For those who believe the missions were fake, all that can, somehow, be waved off. A puzzling shadow in a picture from the Moon, a quirk in a single moment of audio recording, reveals that the whole thing was a vast fabrication. (With grace and straight-faced reporting, the Associated Press this week reviewed, and rebutted, the most popular sources of the conspiracy theories.)

Forget all that.

If the United States had been faking the Moon landings, one group would not have been in on the conspiracy: The Soviets.

The Soviet Union would have revealed any fraud in the blink of an eye, and not just without hesitation, but with joy and satisfaction.

In fact, the Russians did just the opposite. The Soviet Union was one of the few places on Earth (along with China and North Korea) where ordinary people couldn’t watch the landing of Apollo 11 and the Moon walk in real time. It was real enough for the Russians that they didn’t let their own people see it.

That’s all the proof you need. If the Moon landings had been faked—indeed, if any part of them had been made up, or even exaggerated—the Soviets would have told the world. They were watching. Right to the end, they had their own ambitions to be first to the Moon, in the only way they could muster at that point.

And that’s a kind of proof that the conspiracy-meisters cannot wriggle around.

But another thing is true about the Moon landings: You’ll never convince someone who wants to think they were faked that they weren’t. There is nothing in particular you could ever say, no particular moment or piece of evidence you could produce, that would cause someone like that to light up and say, “Oh! You’re right! We did go to the Moon.”

Anyone who wants to live in a world where we didn’t go to the Moon should be happy there. That’s a pinched and bizarre place, one that defies not just the laws of physics but also the laws of ordinary human relationships.

I prefer to live in the real world, the one in which we did go to the Moon, because the work that was necessary to get American astronauts to the Moon and back was extraordinary. It was done by ordinary people, right here on Earth, people who were called to do something they weren’t sure they could, and who then did it, who rose to the occasion in pursuit of a remarkable goal.

That’s not just the real world, of course. It’s the best of America.

We went to the Moon, and on the 50th anniversary of that first landing, it’s worth banishing forever the nutty idea that we didn’t, and also appreciating what the achievement itself required, and what it says about the people who were able to do it.


A Mysterious Anomaly Under Africa Is Radically Weakening Earth’s Magnetic Field Posted June 29th 2020

PETER DOCKRILL 6 MARCH 2018

Around Earth, an invisible magnetic field traps electrons and other charged particles.
(Image: © NASA’s Goddard Space Flight Center)

Above our heads, something is not right. Earth’s magnetic field is in a state of dramatic weakening – and according to mind-boggling new research, this phenomenal disruption is part of a pattern lasting for over 1,000 years.

The Earth’s magnetic field is weakening between Africa and South America, causing issues for satellites and spacecraft.

Scientists studying the phenomenon observed that an area known as the South Atlantic Anomaly has grown considerably in recent years, though the reason for it is not entirely clear.

Using data gathered by the European Space Agency’s (ESA) Swarm constellation of satellites, researchers noted that the area of the anomaly dropped in strength by more than 8 per cent between 1970 and 2020.
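For a sense of scale, the short sketch below is my own back-of-envelope arithmetic, assuming a steady compounded decline (which the real field does not follow); it simply converts the quoted drop of roughly 8 per cent between 1970 and 2020 into an average annual rate.

# Rough arithmetic behind the "more than 8 per cent between 1970 and 2020"
# figure quoted above. Assumption: a constant compounded decline, for
# illustration only.
total_drop = 0.08                 # fractional loss of field strength
years = 2020 - 1970               # 50 years
annual = 1 - (1 - total_drop) ** (1 / years)
print(f"average decline ~{annual:.2%} per year")   # roughly 0.17% per year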

“The new, eastern minimum of the South Atlantic Anomaly has appeared over the last decade and in recent years is developing vigorously,” said Jürgen Matzka, from the German Research Centre for Geosciences.

“We are very lucky to have the Swarm satellites in orbit to investigate the development of the South Atlantic Anomaly. The challenge now is to understand the processes in Earth’s core driving these changes.”

Earth’s magnetic field doesn’t just give us our north and south poles; it’s also what protects us from solar winds and cosmic radiation – but this invisible force field is rapidly weakening, to the point scientists think it could actually flip, with our magnetic poles reversing.

As crazy as that sounds, this actually does happen over vast stretches of time. The last time it occurred was about 780,000 years ago, although it got close again around 40,000 years back.

When it takes place, it’s not quick, with the polarity reversal slowly occurring over thousands of years.

Nobody knows for sure if another such flip is imminent, and one of the reasons for that is a lack of hard data.

The region that concerns scientists the most at the moment is called the South Atlantic Anomaly – a huge expanse of the field stretching from Chile to Zimbabwe. The field is so weak within the anomaly that it’s hazardous for Earth’s satellites to enter it, because the additional radiation it’s letting through could disrupt their electronics.

“We’ve known for quite some time that the magnetic field has been changing, but we didn’t really know if this was unusual for this region on a longer timescale, or whether it was normal,” says physicist Vincent Hare from the University of Rochester in New York.

One of the reasons scientists don’t know much about the magnetic history of this region of Earth is it lacks what’s called archeomagnetic data – physical evidence of magnetism in Earth’s past, preserved in archaeological relics from bygone ages.

One such bygone age belonged to a group of ancient Africans, who lived in the Limpopo River Valley – which borders Zimbabwe, South Africa, and Botswana: regions that fall within the South Atlantic Anomaly of today.

Approximately 1,000 years ago, these Bantu peoples observed an elaborate, superstitious ritual in times of environmental hardship.

During times of drought, they would burn down their clay huts and grain bins, in a sacred cleansing rite to make the rains come again – never knowing they were performing a kind of preparatory scientific fieldwork for researchers centuries later.

“When you burn clay at very high temperatures, you actually stabilise the magnetic minerals, and when they cool from these very high temperatures, they lock in a record of the earth’s magnetic field,” one of the team, geophysicist John Tarduno explains.

As such, an analysis of the ancient artefacts that survived these burnings reveals much more than just the cultural practices of the ancestors of today’s southern Africans.

“We were looking for recurrent behaviour of anomalies because we think that’s what is happening today and causing the South Atlantic Anomaly,” Tarduno says.

“We found evidence that these anomalies have happened in the past, and this helps us contextualise the current changes in the magnetic field.”

Like a “compass frozen in time immediately after [the] burning”, the artefacts revealed that the weakening in the South Atlantic Anomaly isn’t a standalone phenomenon of history.

Similar fluctuations occurred in the years 400-450 CE, 700-750 CE, and 1225-1550 CE – and the fact that there’s a pattern tells us that the position of the South Atlantic Anomaly isn’t a geographic fluke.

“We’re getting stronger evidence that there’s something unusual about the core-mantle boundary under Africa that could be having an important impact on the global magnetic field,” Tarduno says.

The current weakening in Earth’s magnetic field – which has been taking place for the last 160 years or so – is thought to be caused by a vast reservoir of dense rock called the African Large Low Shear Velocity Province, which sits about 2,900 kilometres (1,800 miles) below the African continent.

“It is a profound feature that must be tens of millions of years old,” the researchers explained in The Conversation last year.

“While thousands of kilometres across, its boundaries are sharp.”

This dense region, existing in between the hot liquid iron of Earth’s outer core and the stiffer, cooler mantle, is suggested to somehow be disturbing the iron that helps generate Earth’s magnetic field.

There’s a lot more research to do before we know more about what’s going on here.

As the researchers explain, the conventional idea of pole reversals is that they can start anywhere in the core – but the latest findings suggest what happens in the magnetic field above us is tied to phenomena at special places in the core-mantle boundary.

If they’re right, a big piece of the field weakening puzzle just fell in our lap – thanks to a clay-burning ritual a millennium ago. What this all means for the future, though, no-one is certain.

“We now know this unusual behaviour has occurred at least a couple of times before the past 160 years, and is part of a bigger long-term pattern,” Hare says.

“However, it’s simply too early to say for certain whether this behaviour will lead to a full pole reversal.”

The findings are reported in Geophysical Research Letters.

Extending from Earth like invisible spaghetti is the planet’s magnetic field. Created by the churn of Earth’s core, this field is important for everyday life: It shields the planet from solar particles, it provides a basis for navigation and it might have played an important role in the evolution of life on Earth. 

But what would happen if Earth’s magnetic field disappeared tomorrow? A larger number of charged solar particles would bombard the planet, putting power grids and satellites on the fritz and increasing human exposure to higher levels of cancer-causing ultraviolet radiation. In other words, a missing magnetic field would have consequences that would be problematic but not necessarily apocalyptic, at least in the short term.

And that’s good news, because for more than a century, it’s been weakening. Even now, there are especially flimsy spots, like the South Atlantic Anomaly in the Southern Hemisphere, which create technical problems for low-orbiting satellites. 


One possibility, according to the ESA, is that the weakening field is a sign that the Earth’s magnetic field is about to reverse, whereby the North Pole and South Pole switch places.

The last time a “geomagnetic reversal” took place was 780,000 years ago, with some scientists claiming that the next one is long overdue. Typically, such events take place every 250,000 years.

The repercussions of such an event could be significant, as the Earth’s magnetic field plays an important role in protecting the planet from solar winds and harmful cosmic radiation.

Telecommunication and satellite systems also rely on it to operate, suggesting that computers and mobile phones could experience difficulties.

The South Atlantic Anomaly has been captured by the Swarm satellite constellation (Division of Geomagnetism, DTU Space)

The South Atlantic Anomaly is already causing issues with satellites orbiting Earth, the ESA warned, while spacecraft flying in the area could also experience “technical malfunctions”.

A 2018 study published in the scientific journal Proceedings of the National Academy of Sciences found that despite the weakening field, “Earth’s magnetic field is probably not reversing”.

The study also explained that the process is not an instantaneous one and could take tens of thousands of years to take place.

ESA said it would continue to monitor the weakening magnetic field with its constellation of Swarm satellites.

“The mystery of the origin of the South Atlantic Anomaly has yet to be solved,” the space agency stated. “However, one thing is certain: magnetic field observations from Swarm are providing exciting new insights into the scarcely understood processes of Earth’s interior.”

Alien life is out there, but our theories are probably steering us away from it May 22nd 2020

If we discovered evidence of alien life, would we even realise it? Life on other planets could be so different from what we’re used to that we might not recognise any biological signatures that it produces.

Recent years have seen changes to our theories about what counts as a biosignature and which planets might be habitable, and further turnarounds are inevitable. But the best we can really do is interpret the data we have with our current best theory, not with some future idea we haven’t had yet.

This is a big issue for those involved in the search for extraterrestrial life. As Scott Gaudi of Nasa’s Advisory Council has said: “One thing I am quite sure of, now having spent more than 20 years in this field of exoplanets … expect the unexpected.”

But is it really possible to “expect the unexpected”? Plenty of breakthroughs happen by accident, from the discovery of penicillin to the discovery of the cosmic microwave background radiation left over from the Big Bang. These often reflect a degree of luck on behalf of the researchers involved. When it comes to alien life, is it enough for scientists to assume “we’ll know it when we see it”?

Many results seem to tell us that expecting the unexpected is extraordinarily difficult. “We often miss what we don’t expect to see,” according to cognitive psychologist Daniel Simons, famous for his work on inattentional blindness. His experiments have shown how people can miss a gorilla banging its chest in front of their eyes. Similar experiments also show how blind we are to non-standard playing cards such as a black four of hearts. In the former case, we miss the gorilla if our attention is sufficiently occupied. In the latter, we miss the anomaly because we have strong prior expectations.

There are also plenty of relevant examples in the history of science. Philosophers describe this sort of phenomenon as “theory-ladenness of observation”. What we notice depends, quite heavily sometimes, on our theories, concepts, background beliefs and prior expectations. Even more commonly, what we take to be significant can be biased in this way.

For example, when scientists first found evidence of low amounts of ozone in the atmosphere above Antarctica, they initially dismissed it as bad data. With no prior theoretical reason to expect a hole, the scientists ruled it out in advance. Thankfully, they were minded to double check, and the discovery was made.

More than 200,000 stars captured in one small section of the sky by Nasa’s TESS mission. Nasa

Could a similar thing happen in the search for extraterrestrial life? Scientists studying planets in other solar systems (exoplanets) are overwhelmed by the abundance of possible observation targets competing for their attention. In the last 10 years scientists have identified more than 3,650 planets – more than one a day. And with missions such as NASA’s TESS exoplanet hunter this trend will continue.

Each and every new exoplanet is rich in physical and chemical complexity. It is all too easy to imagine a case where scientists do not double check a target that is flagged as “lacking significance”, but whose great significance would be recognised on closer analysis or with a non-standard theoretical approach.

The Müller-Lyer optical illusion. Fibonacci/Wikipedia, CC BY-SA

However, we shouldn’t exaggerate the theory-ladenness of observation. In the Müller-Lyer illusion, a line ending in arrowheads pointing outwards appears shorter than an equally long line with arrowheads pointing inwards. Yet even when we know for sure that the two lines are the same length, our perception is unaffected and the illusion remains. Similarly, a sharp-eyed scientist might notice something in her data that her theory tells her she should not be seeing. And if just one scientist sees something important, pretty soon every scientist in the field will know about it.

History also shows that scientists are able to notice surprising phenomena, even biased scientists who have a pet theory that doesn’t fit the phenomena. The 19th-century physicist David Brewster incorrectly believed that light is made up of particles travelling in a straight line. But this didn’t affect his observations of numerous phenomena related to light, such as what’s known as birefringence in bodies under stress. Sometimes observation is definitely not theory-laden, at least not in a way that seriously affects scientific discovery.

We need to be open-minded

Certainly, scientists can’t proceed by just observing. Scientific observation needs to be directed somehow. But at the same time, if we are to “expect the unexpected”, we can’t allow theory to heavily influence what we observe, and what counts as significant. We need to remain open-minded, encouraging exploration of the phenomena in the style of Brewster and similar scholars of the past.

Studying the universe largely unshackled from theory is not only a legitimate scientific endeavour – it’s a crucial one. The tendency to describe exploratory science disparagingly as “fishing expeditions” is likely to harm scientific progress. Under-explored areas need exploring, and we can’t know in advance what we will find.

In the search for extraterrestrial life, scientists must be thoroughly open-minded. And this means a certain amount of encouragement for non-mainstream ideas and techniques. Examples from past science (including very recent ones) show that non-mainstream ideas can sometimes be strongly held back. Space agencies such as NASA must learn from such cases if they truly believe that, in the search for alien life, we should “expect the unexpected”.

Could invisible aliens really exist among us? An astrobiologist explains. May 22nd 2020

Life is pretty easy to recognise. It moves, it grows, it eats, it excretes, it reproduces. Simple. In biology, researchers often use the acronym “MRSGREN” to describe it. It stands for movement, respiration, sensitivity, growth, reproduction, excretion and nutrition.

But Helen Sharman, Britain’s first astronaut and a chemist at Imperial College London, recently said that alien lifeforms that are impossible to spot may be living among us. How could that be possible?

While life may be easy to recognise, it’s actually notoriously difficult to define and has had scientists and philosophers in debate for centuries – if not millennia. For example, a 3D printer can reproduce itself, but we wouldn’t call it alive. On the other hand, a mule is famously sterile, but we would never say it doesn’t live.

As nobody can agree, there are more than 100 definitions of what life is. An alternative (but imperfect) approach is describing life as “a self-sustaining chemical system capable of Darwinian evolution”, which works for many cases we want to describe.

The lack of definition is a huge problem when it comes to searching for life in space. Not being able to define life other than “we’ll know it when we see it” means we are truly limiting ourselves to geocentric, possibly even anthropocentric, ideas of what life looks like. When we think about aliens, we often picture a humanoid creature. But the intelligent life we are searching for doesn’t have to be humanoid.

Life, but not as we know it

Sharman says she believes aliens exist and “there’s no two ways about it”. Furthermore, she wonders: “Will they be like you and me, made up of carbon and nitrogen? Maybe not. It’s possible they’re here right now and we simply can’t see them.”

Such life would exist in a “shadow biosphere”. By that, I don’t mean a ghost realm, but undiscovered creatures probably with a different biochemistry. This means we can’t study or even notice them because they are outside of our comprehension. Assuming it exists, such a shadow biosphere would probably be microscopic.

So why haven’t we found it? We have limited ways of studying the microscopic world, as only a small percentage of microbes can be cultured in a lab. This may mean that there could indeed be many lifeforms we haven’t yet spotted. We do now have the ability to sequence the DNA of unculturable strains of microbes, but this can only detect life as we know it – life that contains DNA.

If we find such a biosphere, however, it is unclear whether we should call it alien. That depends on whether we mean “of extraterrestrial origin” or simply “unfamiliar”.

Silicon-based life

A popular suggestion for an alternative biochemistry is one based on silicon rather than carbon. It makes sense, even from a geocentric point of view. Around 90% of the Earth is made up of silicon, iron, magnesium and oxygen, which means there’s lots to go around for building potential life.

Artist’s impression of a silicon-based life form. Zita

Silicon is similar to carbon in that it has four electrons available for creating bonds with other atoms. But silicon is heavier, with 14 protons (protons make up the atomic nucleus with neutrons) compared to the six in the carbon nucleus. While carbon can create strong double and triple bonds to form long chains useful for many functions, such as building cell walls, it is much harder for silicon. It struggles to create strong bonds, so long-chain molecules are much less stable.

What’s more, common silicon compounds, such as silicon dioxide (or silica), are generally solid at terrestrial temperatures and insoluble in water. Compare this to highly soluble carbon dioxide, for example, and we see that carbon is more flexible and provides many more molecular possibilities.

The composition of life on Earth is fundamentally different from the bulk composition of the Earth. Another argument against a silicon-based shadow biosphere is that too much silicon is locked up in rocks. In fact, the chemical composition of life on Earth has an approximate correlation with the chemical composition of the sun, with 98% of atoms in biology consisting of hydrogen, oxygen and carbon. So if there were viable silicon lifeforms here, they may have evolved elsewhere.

That said, there are arguments in favour of silicon-based life on Earth. Nature is adaptable. A few years ago, scientists at Caltech managed to breed a bacterial protein that created bonds with silicon – essentially bringing silicon to life. So even though silicon is inflexible compared with carbon, it could perhaps find ways to assemble into living organisms, potentially including carbon.

And when it comes to other places in space, such as Saturn’s moon Titan or planets orbiting other stars, we certainly can’t rule out the possibility of silicon-based life.

To find it, we have to somehow think outside of the terrestrial biology box and figure out ways of recognising lifeforms that are fundamentally different from the carbon-based form. There are plenty of experiments testing out these alternative biochemistries, such as the one from Caltech.

Regardless of the belief held by many that life exists elsewhere in the universe, we have no evidence for that. So it is important to consider all life as precious, no matter its size, quantity or location. The Earth supports the only known life in the universe. So no matter what form life elsewhere in the solar system or universe may take, we have to make sure we protect it from harmful contamination – whether it is terrestrial life or alien lifeforms.




So could aliens be among us? I don’t believe that we have been visited by a life form with the technology to travel across the vast distances of space. But we do have evidence for life-forming, carbon-based molecules having arrived on Earth on meteorites, so the evidence certainly doesn’t rule out the same possibility for more unfamiliar life forms.

Project HAARP: Is The US Controlling The Weather? – YouTube

www.youtube.com/watch?v=InoHOvYXJ0Q

23/07/2013 · Project HAARP: US Weather Control? A secretive government radio energy experiment in Alaska, with the potential to control the weather or a simple scientific experiment?

The Science of Corona Spread according to Neil Ferguson et al of Imperial College London Posted May 14th 2020

Note this report is about spread, and guesswork as to the nature and structure of Corona, with particular regard to mutation and the effects of the Corona Virus. It is about a maths model of predicted spread and rate of spread, with R representing the reinfection rate. R at 1 means each person with Corona can be expected, or predicted, to infect one other person, who will go on to infect one other, and so on.
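As a minimal illustration of how R behaves, and emphatically not a reconstruction of Ferguson’s model, the sketch below treats spread as simple geometric growth, with each case infecting R others on average; the starting figure of 100 cases is invented purely for illustration.

# Minimal sketch of the reproduction number R: each infected person infects
# R others on average, so new cases per generation scale as R**generation.
# This is plain geometric growth for illustration only, not Ferguson's model.
def cases_in_generation(initial_cases: int, r: float, generation: int) -> float:
    return initial_cases * r ** generation

for r in (0.8, 1.0, 1.5):
    print(f"R = {r}: generation 10 has "
          f"{cases_in_generation(100, r, 10):,.0f} new cases")
# R below 1: the outbreak shrinks; R = 1: it ticks over; R above 1: it grows
# exponentially.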

What Ferguson does know for certain, as a basis for his modelling, is that the virtually privatised, asset-stripped, debt-loaded, poorly equipped, run-down and management-top-heavy NHS will fail massively, especially in densely populated urban areas of high ethnic diversity, religious bigotry, poverty and squalor.

He also knows that privatised, very expensive, profit-based care homes will fail hideously, so those already close to natural death, especially if they have previous health conditions, will die sooner with Corona, which the squalor of the homes will make sure they get.

So operation smokescreen needs the Ferguson maths to justify putting key at-risk voters’ peace of mind above the wider national interest – to hell with the young, scare them to death, blind them with science like the following report, which they won’t understand and upon which there will be further analysis and comment here soon.

On the wider scene, Britain has been a massively malign influence on Europe, the U.S. and beyond, so Ferguson must factor in no limit to borders, air traffic or illegal immigrants. He clearly did not believe his own advice, though, because he broke it at least twice for sexual contact with a married mother.

The maths of his assessment of his affair with a married woman was simple: M + F = S, where M represents male, F represents female and S represents sex. But we do not need algebra to explain the obvious, any more than we need what is below, from Ferguson’s 14-page report.

We might also consider that M + F, because of other human factors and variables, could equal D, where D represents divorce, or MB, where MB represents male bankruptcy, or a number of other possibilities.

But for Ferguson and operation smokescreen, blinding people with science, there is only one possibility: LOCKDOWN, because that is what the government wanted, the media wanted and a lot of workers now want, especially teachers who do not want to go back to work. Britain is ridiculing and patronising European countries for doing the sensible thing and easing out of lockdown. People with brains should fear the British elite more than Europe’s.

Public sector workers are paid to stay at home. Furloughed private sector workers are going to be bankrolled by the taxpayer; the Chancellor said so. Lockdown is costing £14 billion a day. Imagine if all that money had been invested in an NHS fit to cope with the mass of legal and illegal third world immigrants and an ageing population. But moron politicians are always economical with the truth, out to feed their own egos and winging it.

As an ex-maths teacher, I could convert all of this into algebra and probable outcomes. British people are more likely to believe what they can’t understand, which is why so many still believe in God. So if God made everything, then God made ‘the science’, so it must be true.

It is not necessary to tell us that if someone catches a cold it is an airborne virus which will spread to anyone in its path, the poorly and old being vulnerable to a cold turning fatal. That is the reality of Corona.

Ferguson made his report on the basis of probability and some limits on the masses, regardless of the long-term damage caused, because he got paid, would look good, and would enhance his own and pompous Imperial College’s reputation.

Robert Cook

10 February 2020 Imperial College London COVID-19 Response Team
DOI: https://doi.org/10.25561/77154 Page 1 of 14

Report 4: Severity of 2019-novel coronavirus (nCoV)

Ilaria Dorigatti+, Lucy Okell+, Anne Cori, Natsuko Imai, Marc Baguelin, Sangeeta Bhatia, Adhiratha Boonyasiri, Zulma Cucunubá, Gina Cuomo-Dannenburg, Rich FitzJohn, Han Fu, Katy Gaythorpe, Arran Hamlet, Wes Hinsley, Nan Hong, Min Kwun, Daniel Laydon, Gemma Nedjati-Gilani, Steven Riley, Sabine van Elsland, Erik Volz, Haowei Wang, Raymond Wang, Caroline Walters, Xiaoyue Xi, Christl Donnelly, Azra Ghani, Neil Ferguson*. With support from other volunteers from the MRC Centre.1

WHO Collaborating Centre for Infectious Disease Modelling, MRC Centre for Global Infectious Disease Analysis, Abdul Latif Jameel Institute for Disease and Emergency Analytics (J-IDEA), Imperial College London

*Correspondence: neil.ferguson@imperial.ac.uk  1 See full list at end of document.  +These two authors contributed equally.

Summary

We present case fatality ratio (CFR) estimates for three strata of 2019-nCoV infections. For cases detected in Hubei, we estimate the CFR to be 18% (95% credible interval: 11%-81%). For cases detected in travellers outside mainland China, we obtain central estimates of the CFR in the range 1.2-5.6% depending on the statistical methods, with substantial uncertainty around these central values. Using estimates of underlying infection prevalence in Wuhan at the end of January derived from testing of passengers on r
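For readers unfamiliar with the term, the sketch below shows only the crude arithmetic behind a case fatality ratio, deaths divided by detected cases. The report’s 18% central estimate for Hubei comes from more elaborate methods that adjust for reporting delays and under-detection, and the numbers here are invented purely for illustration.

# Crude case fatality ratio (CFR): deaths divided by detected cases.
# Illustration only; the figures below are made up and are NOT from the report.
def crude_cfr(deaths: int, confirmed_cases: int) -> float:
    return deaths / confirmed_cases

print(f"{crude_cfr(deaths=9, confirmed_cases=50):.1%}")   # 18.0%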