Science - Archive 2

Science is firstly a methodology. It is not firstly a body of proven knowledge, as rather pompous, self-important Covid-19 ‘ex-spurts’ and Little Greta would have us believe. Scientists should proceed from observation to hypothesis, prediction and experiment, and thus to cautious conclusion.

An honest scientist sets out to disprove the hypothesis, not prove it. But in these days of chasing grants and accolades, this golden rule is too frequently ignored, often with serious harm done.

I suspect that is the case with the highly politicised Covid-19 pronouncements of ‘The Science.’ In the process, unscrupulous nonentities are advanced up the science career ladder, where they feed off the work of brighter PhD students and in many cases claim the credit.

In the case of Covid-19, the whole outpouring of advice on how to deal with, and exaggerate, this problem brings such so-called experts to the fore, with guesswork passed off as science and no account taken of the wider harm done to health and mass survival. R.J Cook

Previous Science material is now on the Science Archive Page.

December 19th 2022

Why scientists can’t give up the hunt for alien life

There will always be “wolf-criers” whose claims wither under scrutiny. But aliens are certainly out there, if science dares to find them.

The planets and moons that formed in our own Solar System likely arose from a protoplanetary disk that developed instabilities, which then grew, and the largest survivors continued to attract the surrounding matter. The biggest winners developed their own circumplanetary disks and held onto large, massive, volatile atmospheres, forming the gas giants. Each planet has its own unique features and history, but it’s the ones with solid surfaces that make the best candidates for life. (Credit: NASA/Dana Berry)

Key Takeaways

  • All throughout human history, we’ve gazed up at the stars and wondered if we’re truly alone in the Universe, or if other life — possibly even intelligent life — is out there for us to find.
  • Although there have been many who claim that aliens exist and that we’ve already contacted them, those claims have all withered under scrutiny, with their claimants having cried “wolf” with insufficient evidence. 
  • Nevertheless, the scientific case remains extremely compelling for suspecting that life, and possibly even intelligent life, is out there somewhere. Here’s why we must keep looking.

Ethan Siegel

Despite all we’ve learned about ourselves and the physical reality that we all inhabit, the giant question of whether we’re alone in the Universe remains unanswered. We’ve explored the surfaces and atmospheres of many worlds in our own Solar System, but only Earth shows definitive signs of life: past or present. We’ve discovered more than 5,000 exoplanets over the past 30 years, identifying many Earth-sized, potentially habitable worlds among them. Still, none of them have revealed themselves as actually inhabited, although the prospects for finding extraterrestrial life in the near future are tantalizing.

And finally, we’ve begun searching directly for any signals from space that might indicate the presence of an intelligent, technologically advanced civilization, through endeavors such as SETI (the Search for Extra-Terrestrial Intelligence) and Breakthrough Listen. All of these searches have returned only null results so far, despite memorably loud claims to the contrary. However, the fact that we haven’t met with success just yet should in no way discourage us from continuing to search for life on all three fronts, to the limits of our scientific capabilities. After all, when it comes to the biggest existential question of all, we have no right to expect that the lowest-hanging branches on the cosmic tree of life should bear fruit so easily.

(Credit: Ryan Somma/flickr)

Each of the three main ways we have of searching for life beyond the life that arose and continues to thrive on planet Earth has its own sets of advantages and disadvantages.

  1. We can access the surfaces and atmospheres of other worlds in our Solar System, enabling us to look for even tiny, microscopic signs of biological activity, including imprints left by ancient, now-extinct forms of life. But we may have to dig through tens of kilometers of ice to find it, or recognize life forms wholly unrelated to life-as-we-know-it on Earth.
  2. With thousands of exoplanets now known, the imminent technological advances that will enable transit spectroscopy and/or direct imaging of Earth-sized worlds could lead us to discover living planets with unmistakable biosignatures in their atmospheres. If life thriving on an Earth-sized world is common, positive detections are only a matter of time and resources.
  3. And searches for extraterrestrial intelligence offer the most profound rewards: a chance to make contact with another, perhaps even a technologically superior, intelligent species. The odds are unknown, but the payoff could be unfathomable.

For these (and other) reasons, the only sensible strategy is to continue to pursue all three methods to the limits of our capabilities, as without superior information, we have no way of knowing what sorts of probabilities any of these methods will have of yielding our first positive detection. After all, “Absence of evidence is not evidence of absence,” and that saying certainly applies to life in the Universe.

(Credit: NASA/CXC/M. Weiss)

From a cosmic perspective, the laws that govern the Universe as well as the nature of the components that make it up indicate that the potential for life as a common occurrence might be absolutely inevitable. Initially, at the start of the hot Big Bang, our Universe was hot, dense, and filled with particles, antiparticles, and radiation moving at or indistinguishably close to the speed of light. In these beginning stages, neither the ingredients nor the conditions necessary for chemical-based life were in place; the Universe was born life-free. And yet, as time went on, the potential for biological activity would rise and rise.

As the Universe expanded and cooled, the following steps sequentially occurred:

  • the particles and antiparticles annihilated away, leaving a tiny excess of matter behind,
  • quarks and gluons formed bound states, giving rise to protons and neutrons,
  • fusion reactions occurred, creating the light elements,
  • atoms formed from these atomic nuclei and the surrounding bath of electrons,
  • gravitational contraction and collapse took place, giving rise to stars,
  • star clusters and other clumps of matter attracted one another, forming galaxies,
  • and within those galaxies, successive generations of stars formed, creating heavy elements.

Once a galaxy becomes enriched enough with these heavy elements, the new generations of stars that follow can form with rocky worlds within those stellar systems, many of which will have the potential for life.

(Credit: Mike Malaska; ISAS/JAXA, NASA, IKI, NASA/JPL, ESA/NASA/JPL)

Within our observable Universe, since the dawn of the hot Big Bang, sextillions of stars have formed. The majority of them are found in large, massive, rich galaxies: galaxies comparable in size and mass to the Milky Way or greater. By the time billions of years have gone by, most of the new stars will have sufficiently large fractions of heavy elements to lead to the formation of rocky planets and of molecules known as precursors to life. These precursor molecules have been found everywhere, from comets and asteroids to the interstellar medium to stellar outflows to planet-forming disks.

And, at this critical step, we find ourselves face-to-face with the end of our scientific certainty.

  • Where, and under what conditions, does life come into existence?
  • On those worlds where life arises, how frequently does it survive and thrive, persisting for billions of years?
  • How often does that life saturate its habitable regions, transforming and feeding back on its biosphere?
  • Where this occurs, how often does life diversify, becoming complex and differentiated?
  • And where that occurs, how frequently does life become intelligent enough to become technologically advanced, capable of communicating across or even traversing the vast interstellar distances?

These questions aren’t merely there for us to philosophically ponder; they’re there for us to gather information about, and eventually, to draw scientifically valid conclusions about such probabilities.
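
These nested questions are, in essence, the factors of the famous Drake equation. As a purely illustrative sketch (the article quotes no numbers, and every value below is a placeholder guess rather than a measurement), here is how such a factorized estimate works in Python:

```python
# Toy Drake-style estimate: communicating civilizations in the galaxy
# as a product of the unknown factors the questions above enumerate.
# All numerical values are placeholder guesses, not measured quantities.

star_formation_rate = 1.5     # new stars per year in the Milky Way
f_with_planets      = 0.9     # fraction of stars hosting planetary systems
n_habitable         = 0.4     # potentially habitable worlds per system
f_life_arises       = 0.1     # fraction where life arises (unknown!)
f_becomes_complex   = 0.1     # fraction where life becomes complex (unknown!)
f_intelligent       = 0.01    # fraction developing technological intelligence (unknown!)
detectable_years    = 10_000  # years a civilization stays detectable (unknown!)

n_civilizations = (star_formation_rate * f_with_planets * n_habitable
                   * f_life_arises * f_becomes_complex * f_intelligent
                   * detectable_years)

print(f"Estimated detectable civilizations: {n_civilizations:.2f}")
# Sliding the unknown fractions by a few orders of magnitude swings the
# result from "we are alone" to "the galaxy is crowded", which is why
# gathering data on each factor matters so much.
```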

(Credit: NOAA Office of Ocean Exploration and Research)

Of course, there are plenty of valid explanations for why we haven’t succeeded in our searches for life just yet. The most sobering — and the most pessimistic — is that it’s possible that one or more of the steps required to give rise to the type of life we’d be sensitive to are particularly difficult, and only rarely can the Universe achieve them. In other words, it’s possible that any one of life, sustained life, complex and differentiated life, or intelligent and technologically advanced life are rare, and none of the worlds we’ve surveyed possess them. That’s a possibility we have to keep in mind so long as we’re committed to remaining intellectually honest.

But there’s no reason, at least so far, to suspect that’s the primary reason we haven’t discovered life beyond Earth just yet. The old saying, “If at first you don’t succeed, try, try again,” applies wherever the odds of success are unknown, but we have every indication that success is possible under the right circumstances. Here on Earth, the evidence strongly indicates that our home planet is an example of such circumstances, and hence it’s likely that there are places all throughout space and time where life sustains itself, evolves to become complex and differentiated, and achieves a level of technological advancement sufficient for interstellar communication.

The big unknowns are in the probabilities of the various types of alien life that are actually out there, not in the question of whether such achievements are possible within our Universe.

(Credit: J. Wang (UC Berkeley) & C. Marois (Herzberg Astrophysics), NExSS (NASA), Keck Obs.)

That doesn’t mean that we should take seriously every claim that’s been made — even by scientists — that alien life has been found. The “Wow!” signal, for example, was a high-powered radio signal received over the span of 72 seconds back in 1977; although its nature is unknown, it has never been replicated, either back at the original source or anywhere else. Without confirmation or repeatability, we can draw no affirmative, definitive conclusions.

Fast radio bursts, like many signatures observed astronomically, appear in many locations both in and out of our galaxy, but show no indication that they were intelligently created; they are most likely a natural astronomical phenomenon whose origins have yet to be determined.

NASA’s Viking landers on Mars conducted numerous tests for life on the Martian surface, with one experiment (the Labeled Release experiment) giving a positive signature. However, the possibility of contamination, the lack of reproducibility, and the lack of a verified follow-up experiment have cast tremendous doubt on the “biological positive” interpretation of the experiment.

And despite the possibility of encountering interstellar space probes, direct alien contact, or even the ubiquity of alien abduction stories, no robust verification of any of these claims has ever come forth. We have to keep our minds open while at the same time remaining skeptical of any grand claims. The conclusions we draw can only be as strong as the supporting evidence for them.

(Credit: Danielle Futselaar)

It’s primarily for these reasons — we have every indication that the Universe has all the necessary ingredients for life, but no indication that we’ve found it just yet — that it’s so vital to keep looking in a scientifically scrupulous way. When we do announce that we’ve found extraterrestrial life, we don’t want it to be another instance of crying “wolf” with insufficient evidence that we’ve found a wolf; we want the claim to be supported by overwhelming, unassailable evidence.

  1. That means building fleets of orbiters, landers, sample-return missions, and laboratory-equipped rovers to explore a wide variety of worlds in our Solar System: Venus’s atmosphere, Mars’s surface, Titan’s lakes, and the oceans of Europa, Enceladus, Triton, and Pluto, among others.
  2. That means building superior coronagraphs on world-class space-based and ground-based telescopes, considering the construction of a starshade, and investing in transit spectroscopy. By imaging the atmospheres and surfaces of exoplanets, including breaking down their molecular and atomic constituents and abundances over time, we should be able to identify any world with a life-saturated biosphere.
  3. And that means continuing to search, with greater precision and sensitivity across the electromagnetic spectrum, for any signals that might come from an intelligent species seeking to communicate or announce their presence.

If you don’t find fruit on the lowest-hanging branches, that doesn’t necessarily mean you give up on the tree; it means you find a way to climb higher, where the fruit may be present but out of your present reach.

(Credit: NASA’s Earth Observatory/NOAA/DOD)

This might also include expanding our searches for extraterrestrial intelligence. While most searches focus on far-reaching radio transmissions, it’s possible that alien civilizations who seek to communicate across the stars and galaxies will rely on a different technology. Perhaps we should be monitoring the tails of water maser lines or the 21 cm spin-flip transition of hydrogen. Perhaps we should be looking for patterns in pulsar signals, including correlating signals between pulsars. Perhaps we should even be looking for extraterrestrial intelligences in gravitational wave signals that we have yet to discover. Wherever a signal can be encoded by a sufficiently advanced species, humanity should be looking and listening.

There are also avenues to explore that won’t reveal alien life, but could help us understand how it arose and arises throughout the Universe. We can recreate the atmospheric conditions found on other worlds or even as they were on Earth long ago in the lab, with an eye toward recreating the origin of life-from-non-life. We can continue exploring the possibility of having nucleic acids (RNAs, DNAs, even PNAs: peptide-based nucleic acids) coevolve with peptides in an early prebiotic environment: perhaps the most compelling candidate for how life first arose on Earth.

(Credit: A. Chotera et al., Chemistry Europe, 2018)

The rewards of finding out that we’re not alone in the Universe would be immeasurable. Perhaps we could learn how to survive the great environmental threats that face us: hazardous asteroids, a changing climate, or violent space weather events. Perhaps there are even more important lessons to be learned about how to overcome our own insufficiencies as human beings: the great challenge of moving beyond our primal nature. Perhaps other civilizations have success stories to offer us, recounting how, in the early days of their technological infancy, they overcame such issues as:

  • overconsumption, where they devoured their planet’s resources beyond the point of sustainability,
  • short-term thinking, where they addressed the immediate, urgent problems at the expense of long-term ones that threatened their existence,
  • or the emergence of disease, famine, pestilence, and ecological collapse, resulting from the global changes wrought by a post-industrial society.

Our impulses toward greed, plunder, and self-gratification may not be unique, and there may be more experienced, wiser species out there that have figured out solutions that elude us today. Perhaps, if we’re lucky, they may have lessons waiting in the wings for us that could guide us toward a more successful future.

(Credit: Andrew Z. Colvin/Wikimedia Commons)

Many of us can imagine two different futures unfolding for the enterprise of human civilization. There’s the one we should strive to avoid: where we resort to infighting, squabbling over the limited resources of our world, descending into ideologically-driven wars and ensuring our own eventual destruction. If we never find life beyond Earth — never find anyone else to communicate with, exchange information and culture with, and to give us hope that there’s a future for humanity out there among the stars — perhaps extinction will indeed be our most likely outcome.

But there’s another possible outcome for humanity: a future where we come together collectively to face the gargantuan challenges facing humans, the environment, planet Earth, and our long-term future. Perhaps the discovery of life beyond Earth — and potentially, of one or more intelligent, spacefaring, extraterrestrial civilizations — might give us not only the guidance and knowledge we need to survive our growing pains, but something far grander than any terrestrial achievements to hope for. Until that day arrives, we must make do with the knowledge that, at present, we have only one another to extend our kindnesses and compassion to.

September 9th 2022

How Darwin’s ‘Descent of Man’ Holds Up Over 150 Years After Publication

Questions still swirl around the author’s theories about sexual selection and the evolution of minds and morals.

Smithsonian Magazine

  • Dan Falk

Photo by dan_wrench/Getty Images

Charles Darwin’s On the Origin of Species rattled Victorian readers in 1859, even though it said almost nothing about how the idea of evolution applied to human beings. A dozen years later, in 1871, he tackled that subject head-on. In The Descent of Man, and Selection in Relation to Sex, published over 150 years ago, Darwin argued forcefully that all creatures were subject to the same natural laws, and that humans had evolved over countless eons, just as other animals had. “Man,” he wrote, “still bears in his bodily frame the indelible stamp of his lowly origin.”

In Descent, Darwin details a theory that he calls “sexual selection”—the idea that, in many species, males battle other males for access to females, while in other species females choose the biggest or most attractive males to bond with. The male-combat theory would explain, for example, the development of a bull’s horns, or a moose’s antlers, while the quintessential example of “female choice” is seen in peahens, which, Darwin argued, prefer to mate with peacocks having the biggest, most colorful tails. For Darwin, sexual selection was just as important as natural selection, which he had outlined in Origin—the idea that organisms with favorable traits are more likely to reproduce, thus passing on those traits to their offspring. Both mechanisms helped to explain how species evolved over time.

“I think for Darwin, sexual selection was what connected humans with non-human animals,” says Ian Hesketh, a historian of science at the University of Queensland in Australia. “It provided the continuity in Darwin’s system, from animals to humans.”

In Descent, Darwin illustrates this continuity by noting the similarities of the human body to those of our primate cousins and to other mammals, focusing on anatomical structure—such as the similarity of their skeletons—and also on embryology—the embryos of related animals can be almost indistinguishable.

Descent, like Origin, became a huge bestseller. As writer Cyril Aydon put it in A Brief Guide to Charles Darwin: His Life and Times: “With Darwin’s name on the cover, and monkeys and sex on the inside pages, it was a publisher’s dream.” Descent is still seen as a landmark in the history of the life sciences—though, inevitably, some passages strike modern readers as offensive, especially where Darwin speculates on issues of race and on gender roles. He also tackled difficult problems that continue to spark debate today, such as the evolution of minds and of moral beliefs.

Many aspects of sexual selection seemed implausible to Darwin’s contemporaries. For example, the theory attempted to explain the development of so-called secondary sexual characteristics, such as the peacock’s tail or other traits that made a male animal more appealing to a female. If these traits are selected by the female, they can develop to extremes over the course of time—at which point they may hamper, rather than aid in, survival. For example, an overly colorful tail could attract predators. Darwin’s argument also seemed to suggest that animals possessed a sophisticated ability to rate the attractiveness of each potential mate against a kind of checklist of criteria.

“The most contentious aspect [of the book] has to do with how it relates to the development of coloration and what he called ‘charms’—anything that had to do with wooing the female,” says Hesketh. “No one seemed to be on board with that, because it suggested that animals had an aesthetic sense, and that they were making mate-choices based on really minuscule observations.”

The two aspects of sexual selection were not equally well received: The male-combat idea, which casts males as aggressive and females as passive, seemed plausible enough to Darwin’s contemporaries, as it meshed with the prevailing prejudices of the time. But the other part of the theory, in which females appear to have the power of choice by selecting from among an array of prospective males, struck many as a radical notion. For humans, however, Darwin switched it up; in our own species, he argued, it was the male that did the choosing.

“The argument here is that males have ‘seized the power of selection’ from females, because they’re more powerful, in body and mind, than women,” says Evelleen Richards, a historian at the University of Sydney and the author of Darwin and the Making of Sexual Selection. In Descent, Darwin writes of “man attaining a higher eminence in whatever he takes up than woman can attain—whether requiring deep thought, reason or imagination, or merely the use of the senses and hands.” He added, “Thus man has ultimately become superior to woman.”

Passages like that reveal Darwin’s “androcentric bias,” as Richards puts it, noting that his views on sex and sex differences were very much derived from a male perspective and were a product of Victorian society.

To complicate matters, Darwin’s views about sex were intimately tied up with his theories about race. A much-debated question in Darwin’s time was the puzzling diversity of humanity. Did the various races emerge independently of one another? That view, known as “polygenism,” was popular among members of the Anthropological Society of London, which Richards describes as an “out-and-out racist” organization. The Society supported the Confederacy in the U.S. Civil War, and its leader, a speech therapist named James Hunt, declared that we “know that the Races of Europe have now much in their mental and moral nature which the races of Africa have not got.” Others, including Darwin, argued that all races shared a common origin, a view known as “monogenism.”

But monogenists still had to explain what caused the diversity seen today. This is where sexual selection comes in. Darwin argued that differing judgements of attractiveness held the key; he believed that men of one tribe or group were naturally most attracted to members of their own tribe. He wrote that “the differences between the tribes, at first very slight, would gradually and inevitably be increased to a greater and greater degree.” Few of Darwin’s readers found this plausible, says Richards, because they imagined European ideals of beauty to be universal; they simply couldn’t imagine, for example, “that black skin could be attractive to anyone,” she says.

All of this, Richards says, highlights the complexity of Darwin’s views on race. In contrast to many of his contemporaries, he believed in “the brotherhood of man,” as Richards puts it, and found slavery repugnant—and yet he believed, as most Victorians did, in a racial hierarchy with Europeans at the top. Even so, some of his ideas—like the notion of Africans being attracted to Africans—struck his contemporaries as too radical.

Perhaps the most difficult puzzle for Darwin was the remarkable cognitive power of humans, and, especially, the human capacity for moral reasoning. Some of Darwin’s contemporaries, notably Alfred Russel Wallace, saw the human mind as evidence that a divine intelligence was guiding evolution. Wallace, who co-discovered natural selection, came to embrace spiritualism in his later years. Historians see Descent largely as a rebuttal to Wallace, that is, as an attempt to set forth a purely naturalistic explanation for the mind and for moral behavior. While the details were far from clear, Darwin saw minds and morals as rooted, ultimately, in biology. For example, he argues that a primitive kind of moral sense can be seen in certain animals—those “endowed with the social instincts” and which “take pleasure in one another’s company, warn one another of danger, defend and aid one another in many ways.” As such instinctive behavior is “highly beneficial to the species, they have in all probability been acquired through natural selection.”

Unlike Origin, which was immediately hailed as a groundbreaking scientific work, Descent has had a checkered history. The idea of sexual selection, in particular, languished in the decades following its publication. That’s partly because of lingering doubts over animals’ aesthetic sense and the idea of female choice, and partly because Darwin was never able to convince his old allies—people like Wallace and also Thomas Henry Huxley—that sexual selection was an important facet of evolution. Others, meanwhile, were uncomfortable with naturalistic accounts of minds and morals. “By the turn of the century, sexual selection, for all intents and purposes, is basically dead,” says Henry-James Meiring, a PhD student working with Hesketh at Queensland.

In the 20th century, however, it began to make a comeback. Biologists absorbed many of the ideas in Descent into the so-called modern synthesis that combined Darwin’s theory of evolution with the new science of genetics; later, aspects of sexual selection received support from evolutionary theories of social behavior. By the 1970s, sexual selection “starts making a comeback in modern science, and in some form has continued ever since,” says Meiring. Evelleen Richards adds that sexual selection has only recently “come back on the agenda as a scientific fact in its own right.”

On the bigger picture—the unity of all living things—Darwin was on the right track. That unity, he reasoned, applies not just to bodies but also to minds. True, scientists continue to debate the question of exactly how the brain (a biological organ) gives rise to a mind (with its intangible mental processes), but it is clear that brains are what make minds possible, and these evolved just as our bodies did. In this sense we’re no different from our primate cousins; Darwin argued that the cognitive powers of human beings differ from those of apes in degree, not in kind. Darwin’s thinking on these problems today “enjoys broad support in disciplines such as neuroscience and evolutionary psychology,” says Meiring.

Other aspects of Darwin’s thinking in Descent continue to spark debate. Some scholars have criticized attempts to explain social behavior in terms of biology as overly reductionist, and many facets of evolutionary psychology, in particular, have faced skepticism in recent years. For example, some anthropologists argue that we don’t know enough about the environment in which early humans lived, or the advantages that particular behavioral traits conferred, to be certain that behaviors observed today are the result of early conditions. And puzzles persist over the origins of language, music and religion.

“Darwin, like any other scientific figure of the past, got some things right and got a lot of things wrong,” says Meiring. “And his own biases around gender and race had an impact on the way that he theorized and thought about science.” In Descent, he says, Darwin grappled with “things that we are still arguing about, and that we have still not resolved. I think that’s possibly its greatest legacy.”


Dan Falk is a science journalist based in Toronto. His books include The Science of Shakespeare and In Search of Time.

Can We Time Travel? A Theoretical Physicist Provides Some Answers

Theories exploring the possibility of time travel rely on the existence of types of matter and energy that we do not understand yet.

The Conversation

  • Peter Watson

Our curiosity about time travel is thousands of years old. Photo by Shutterstock

Time travel makes regular appearances in popular culture, with innumerable time travel storylines in movies, television and literature. But it is a surprisingly old idea: one can argue that the Greek tragedy Oedipus Rex, written by Sophocles over 2,500 years ago, is the first time travel story.

But is time travel in fact possible? Given the popularity of the concept, this is a legitimate question. As a theoretical physicist, I find that there are several possible answers to this question, not all of which are contradictory.

The simplest answer is that time travel cannot be possible because if it was, we would already be doing it. One can argue that it is forbidden by the laws of physics, like the second law of thermodynamics or relativity. There are also technical challenges: it might be possible but would involve vast amounts of energy.

There is also the matter of time-travel paradoxes; we can — hypothetically — resolve these if free will is an illusion, if many worlds exist or if the past can only be witnessed but not experienced. Perhaps time travel is impossible simply because time must flow in a linear manner and we have no control over it, or perhaps time is an illusion and time travel is irrelevant.

Some time travel theories suggest that one can observe the past like watching a movie, but cannot interfere with the actions of people in it. Photo by Rodrigo Gonzales/Unsplash

Laws of Physics

Since Albert Einstein’s theory of relativity — which describes the nature of time, space and gravity — is our most profound theory of time, we would like to think that time travel is forbidden by relativity. Unfortunately, one of his colleagues from the Institute for Advanced Study, Kurt Gödel, invented a universe in which time travel was not just possible, but the past and future were inextricably tangled.

We can actually design time machines, but most of these (in principle) successful proposals require negative energy, or negative mass, which does not seem to exist in our universe. If you drop a tennis ball of negative mass, it will fall upwards. This argument is rather unsatisfactory, since it explains why we cannot time travel in practice only by involving another idea — that of negative energy or mass — that we do not really understand.

Mathematical physicist Frank Tipler conceptualized a time machine that does not involve negative mass, but requires more energy than exists in the universe.

Time travel also violates the second law of thermodynamics, which states that entropy or randomness must always increase. Time can only move in one direction — in other words, you cannot unscramble an egg. More specifically, by travelling into the past we are going from now (a high entropy state) into the past, which must have lower entropy.

This argument originated with the English cosmologist Arthur Eddington, and is at best incomplete. Perhaps it stops you travelling into the past, but it says nothing about time travel into the future. In practice, it is just as hard for me to travel to next Thursday as it is to travel to last Thursday.

Resolving Paradoxes

There is no doubt that if we could time travel freely, we would run into paradoxes. The best known is the “grandfather paradox”: one could hypothetically use a time machine to travel to the past and murder their grandfather before their father’s conception, thereby eliminating the possibility of their own birth. Logically, you cannot both exist and not exist.

Kurt Vonnegut’s anti-war novel Slaughterhouse-Five, published in 1969, describes how to evade the grandfather paradox. If free will simply does not exist, it is not possible to kill one’s grandfather in the past, since he was not killed in the past. The novel’s protagonist, Billy Pilgrim, can only travel to other points on his world line (the timeline he exists in), but not to any other point in space-time, so he could not even contemplate killing his grandfather.

The universe in Slaughterhouse-Five is consistent with everything we know. The second law of thermodynamics works perfectly well within it and there is no conflict with relativity. But it is inconsistent with some things we believe in, like free will — you can observe the past, like watching a movie, but you cannot interfere with the actions of people in it.

Could we allow for actual modifications of the past, so that we could go back and murder our grandfather — or Hitler? There are several multiverse theories that suppose that there are many timelines for different universes. This is also an old idea: in Charles Dickens’ A Christmas Carol, Ebenezer Scrooge experiences two alternative timelines, one of which leads to a shameful death and the other to happiness.

Time Is a River

Roman emperor Marcus Aurelius wrote that:

“Time is like a river made up of the events which happen, and a violent stream; for as soon as a thing has been seen, it is carried away, and another comes in its place, and this will be carried away too.”

We can imagine that time does flow past every point in the universe, like a river around a rock. But it is difficult to make the idea precise. A flow is a rate of change — the flow of a river is the amount of water that passes a specific point in a given time. Hence if time is a flow, it is at the rate of one second per second, which is not a very useful insight.

Theoretical physicist Stephen Hawking suggested that a “chronology protection conjecture” must exist, an as-yet-unknown physical principle that forbids time travel. Hawking’s concept originates from the idea that we cannot know what goes on inside a black hole, because we cannot get information out of it. But this argument is redundant: we cannot time travel because we cannot time travel!

Researchers are investigating a more fundamental theory, where time and space “emerge” from something else. This is referred to as quantum gravity, but unfortunately it does not exist yet.

So is time travel possible? Probably not, but we don’t know for sure!

Peter Watson is an emeritus professor of physics at Carleton University.

August 28th 2022

The Big Bang no longer means what it used to

As we gain new knowledge, our scientific picture of how the Universe works must evolve. This is a feature of the Big Bang, not a bug.

From a pre-existing state, inflation predicts that a series of universes will be spawned as inflation continues, with each one being completely disconnected from every other one, separated by more inflating space. One of these “bubbles,” where inflation ended, gave birth to our Universe some 13.8 billion years ago, where our entire visible Universe is just a tiny portion of that bubble’s volume. Each individual bubble is disconnected from all of the others, and each place where inflation ends gives rise to its own hot Big Bang.

Key Takeaways

  • The idea that the Universe had a beginning, or a “day without a yesterday” as it was originally known, goes all the way back to Georges Lemaître in 1927.
  • Although it’s still a defensible position to state that the Universe likely had a beginning, that stage of our cosmic history has very little to do with the “hot Big Bang” that describes our early Universe.
  • Although many laypersons (and even a minority of professionals) still cling to the idea that the Big Bang means “the very beginning of it all,” that definition is decades out of date. Here’s how to get caught up.

If there’s one hallmark inherent to science, it’s that our understanding of how the Universe works is always open to revision in the face of new evidence. Whenever our prevailing picture of reality — including the rules it plays by, the physical contents of a system, and how it evolved from its initial conditions to the present time — gets challenged by new experimental or observational data, we must open our minds to changing our conceptual picture of the cosmos. This has happened many times since the dawn of the 20th century, and the words we use to describe our Universe have shifted in meaning as our understanding has evolved.

Yet, there are always those who cling to the old definitions, much like linguistic prescriptivists, who refuse to acknowledge that these changes have occurred. But unlike the evolution of colloquial language, which is largely arbitrary, the evolution of scientific terms must reflect our current understanding of reality. Whenever we talk about the origin of our Universe, the term “the Big Bang” comes to mind, but our understanding of our cosmic origins has evolved tremendously since the idea that our Universe even had an origin, scientifically, was first put forth. Here’s how to resolve the confusion and bring you up to speed on what the Big Bang originally meant versus what it means today.

(Credit: British Broadcasting Company)

The first time the phrase “the Big Bang” was uttered was over 20 years after the idea was first described. In fact, the term itself comes from one of the theory’s greatest detractors: Fred Hoyle, who was a staunch advocate of the rival idea of a Steady-State cosmology. In 1949, he appeared on BBC radio and advocated for what he called the perfect cosmological principle: the notion that the Universe was homogeneous in both space and time, meaning that any observer not only anywhere but anywhen would perceive the Universe to be in the same cosmic state. He went on to deride the opposing notion as a “hypothesis that all matter of the universe was created in one Big Bang at a particular time in the remote past,” which he then called “irrational” and claimed to be “outside science.”

But the idea, in its original form, wasn’t simply that all of the Universe’s matter was created in one moment in the finite past. That notion, derided by Hoyle, had already evolved from its original meaning. Originally, the idea was that the Universe itself, not just the matter within it, had emerged from a state of non-being in the finite past. And that idea, as wild as it sounds, was an inevitable but difficult-to-accept consequence of the new theory of gravity put forth by Einstein back in 1915: General Relativity.

Instead of an empty, blank, three-dimensional grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. In General Relativity, we treat space and time as continuous, but all forms of energy, including but not limited to mass, contribute to spacetime curvature. The deeper you are in a gravitational field, the more severely all three dimensions of your space is curved, and the more severe the phenomena of time dilation and gravitational redshift become.

When Einstein first cooked up the general theory of relativity, our conception of gravity forever shifted from the prevailing notion of Newtonian gravity. Under Newton’s laws, the way that gravitation worked was that any and all masses in the Universe exerted a force on one another, instantaneously across space, in direct proportion to the product of their masses and inversely proportional to the square of the distance between them. But in the aftermath of his discovery of special relativity, Einstein and many others quickly recognized that there was no such thing as a universally applicable definition of what “distance” was or even what “instantaneously” meant with respect to two different locations.
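
In symbols, the Newtonian force law just described is the familiar inverse-square relation:

```latex
F = \frac{G \, m_1 m_2}{r^2}
```

Here G is the gravitational constant, m₁ and m₂ are the two masses, and r is the distance between them; the assumption that this force acts instantaneously across any distance r is precisely what relativity undermined.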

With the introduction of Einsteinian relativity — the notion that observers in different frames of reference would all have their own unique, equally valid perspectives on what distances between objects were and how the passage of time worked — it followed almost immediately that the previously absolute concepts of “space” and “time” were woven together into a single fabric: spacetime. All objects in the Universe moved through this fabric, and the task for a novel theory of gravity would be to explain how not just masses, but all forms of energy, shaped this fabric that underpinned the Universe itself.

If you begin with a bound, stationary configuration of mass, and there are no non-gravitational forces or effects present (or they’re all negligible compared to gravity), that mass will always inevitably collapse down to a black hole. It’s one of the main reasons why a static, non-expanding Universe is inconsistent with Einstein’s General Relativity.

Although the laws that governed how gravitation worked in our Universe were put forth in 1915, the critical information about how our Universe was structured had not yet come in. While some astronomers favored the notion that many objects in the sky were actually “island Universes” that were located well outside the Milky Way galaxy, most astronomers at the time thought that the Milky Way galaxy represented the full extent of the Universe. Einstein sided with this latter view, and — thinking the Universe was static and eternal — added a special type of fudge factor into his equations: a cosmological constant.

Although it was mathematically permissible to make this addition, the reason Einstein did so was because without one, the laws of General Relativity would ensure that a Universe that was evenly, uniformly distributed with matter (which ours seemed to be) would be unstable against gravitational collapse. In fact, it was very easy to demonstrate that any initially uniform distribution of motionless matter, regardless of shape or size, would inevitably collapse into a singular state under its own gravitational pull. By introducing this extra term of a cosmological constant, Einstein could tune it so that it would balance out the inward pull of gravity by proverbially pushing the Universe out with an equal and opposing action.

Edwin Hubble’s original plot of galaxy distances versus redshift (left), establishing the expanding Universe, versus a more modern counterpart from approximately 70 years later (right). In agreement with both observation and theory, the Universe is expanding, and the slope of the line relating distance to recession speed is a constant.

Two developments — one theoretical and one observational — would quickly change this early story that Einstein and others had told themselves.

  1. In 1922, Alexander Friedmann worked out, fully, the equations that governed a Universe that was isotropically (the same in all directions) and homogeneously (the same in all locations) filled with any type of matter, radiation, or other form of energy. He found that such a Universe would never remain static, not even in the presence of a cosmological constant, and that it must either expand or contract, depending on the specifics of its initial conditions (his governing equation is written out just after this list).
  2. In 1923, Edwin Hubble became the first to determine that the spiral nebulae in our skies were not contained within the Milky Way, but rather were located many times farther away than any of the objects that comprised our home galaxy. The spirals and ellipticals found throughout the Universe were, in fact, their own “island Universes,” now known as galaxies, and that moreover — as had previously been observed by Vesto Slipher — the vast majority of them appeared to be moving away from us at remarkably rapid speeds.
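
For reference, the expansion law Friedmann derived (point 1 above) is usually written today as the first Friedmann equation; the article describes it only in words, so this is the standard textbook form rather than a quotation:

```latex
H^2 \equiv \left( \frac{\dot{a}}{a} \right)^2
    = \frac{8 \pi G}{3} \rho \;-\; \frac{k c^2}{a^2} \;+\; \frac{\Lambda c^2}{3}
```

Here a is the scale factor of the Universe, ρ is the total energy density, k encodes spatial curvature, and Λ is Einstein’s cosmological constant. A static Universe requires ȧ = 0 for all time; even with Λ tuned to allow it, the slightest perturbation tips such a solution into expansion or contraction.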

In 1927, Georges Lemaître became the very first person to put these pieces of information together, recognizing that the Universe today is expanding, and that if things are getting farther apart and less dense today, then they must have been closer together and denser in the past. Extrapolating this back all the way to its logical conclusion, he deduced that the Universe must have expanded to its present state from a single point-of-origin, which he called either the “cosmic egg” or the “primeval atom.”
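
A back-of-the-envelope version of Lemaître’s backward extrapolation: if the expansion rate had always been today’s value, the time since everything overlapped is roughly 1/H₀. A minimal Python sketch, assuming the modern value H₀ ≈ 70 km/s/Mpc (a figure Lemaître did not have):

```python
# Naive cosmic age from running the expansion backward at a constant
# rate: t ~ 1/H0. Assumes H0 ~ 70 km/s/Mpc, a modern measured value.
KM_PER_MPC       = 3.0857e19  # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_per_second = 70.0 / KM_PER_MPC                 # expansion rate in 1/s
age_years = 1.0 / H0_per_second / SECONDS_PER_YEAR

print(f"Naive age of the Universe: {age_years:.2e} years")  # ~1.4e10
# The full calculation must track how the expansion rate changed through
# radiation-, matter-, and dark-energy-dominated eras, yet this one-liner
# already lands close to the measured 13.8 billion years.
```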

This image shows Catholic priest and theoretical cosmologist Georges Lemaître at the Catholic University of Leuven, ca. 1933. Lemaître was among the first to conceptualize the Big Bang as the origin of our Universe within the framework of General Relativity, even though he didn’t use that name himself.

This was the original notion of what would grow into the modern theory of the Big Bang: the idea that the Universe had a beginning, or a “day without a yesterday.” It was not, however, generally accepted for some time. Lemaître originally sent his ideas to Einstein, who infamously dismissed Lemaître’s work by responding, “Your calculations are correct, but your physics is abominable.”

Despite the resistance to his ideas, however, Lemaître would be vindicated by further observations of the Universe. Many more galaxies would have their distances and redshifts measured, leading to the overwhelming conclusion the Universe was and still is expanding, equally and uniformly in all directions on large cosmic scales. In the 1930s, Einstein conceded, referring to his introduction of the cosmological constant in an attempt to keep the Universe static as his “greatest blunder.”

However, the next great development in formulating what we know of as the Big Bang wouldn’t come until the 1940s, when George Gamow — perhaps not so coincidentally, an advisee of Alexander Friedmann — came along. In a remarkable leap forward, he recognized that the Universe was not only full of matter, but also radiation, and that radiation evolved somewhat differently from matter in an expanding Universe. This would be of little consequence today, but in the early stages of the Universe, it mattered tremendously.

While matter (both normal and dark) and radiation become less dense as the Universe expands owing to its increasing volume, dark energy, and also the field energy during inflation, is a form of energy inherent to space itself. As new space gets created in the expanding Universe, the dark energy density remains constant. Note that individual quanta of radiation are not destroyed, but simply dilute and redshift to progressively lower energies, stretching to longer wavelengths and lower energies as space expands.

Matter, Gamow realized, was made up of particles, and as the Universe expanded and the volume that these particles occupied increased, the number density of matter particles would drop in direct proportion to how the volume grew.

But radiation, while also made up of a fixed number of particles in the form of photons, had an additional property: the energy inherent to each photon is determined by the photon’s wavelength. As the Universe expands, the wavelength of each photon gets lengthened by the expansion, meaning that the amount of energy present in the form of radiation decreases faster than the amount of energy present in the form of matter in the expanding Universe.

But in the past, when the Universe was smaller, the opposite would have been true. If we were to extrapolate backward in time, the Universe would have been in a hotter, denser, more radiation-dominated state. Gamow leveraged this fact to make three great, generic predictions about the young Universe.
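
Gamow’s insight can be stated compactly. In terms of the scale factor a that tracks the Universe’s expansion, the standard dilution laws consistent with the description above are:

```latex
\rho_{\text{matter}} \propto a^{-3}, \qquad
\rho_{\text{radiation}} \propto a^{-4}, \qquad
E_{\gamma} = \frac{hc}{\lambda} \propto \frac{1}{a}
```

Radiation picks up the extra factor of 1/a because each photon’s wavelength stretches with the expansion. Run the clock backward, toward a → 0, and radiation inevitably comes to dominate: exactly the hot, dense, radiation-filled early state Gamow envisioned.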

  1. At some point, the Universe’s radiation was hot enough so that every neutral atom would have been ionized by a quantum of radiation, and that this leftover bath of radiation should still persist today at only a few degrees above absolute zero.
  2. At some even earlier point, it would have been too hot to even form stable atomic nuclei, and so an early stage of nuclear fusion should have occurred, where an initial mix of protons-and-neutrons should have fused together to create an initial set of atomic nuclei: an abundance of elements that predates the formation of atoms.
  3. And finally, this means that there would be some point in the Universe’s history, after atoms had formed, where gravitation pulled this matter together into clumps, leading to the formation of stars and galaxies for the first time.
(Credit: S. G. Djorgovski et al., Caltech; Caltech Digital Media Center)

These three major points, along with the already-observed expansion of the Universe, form what we know today as the four cornerstones of the Big Bang. Although one was still free to extrapolate the Universe back to an arbitrarily small, dense state — even to a singularity, if you’re daring enough to do so — that was no longer the part of the Big Bang theory that had any predictive power to it. Instead, it was the emergence of the Universe from a hot, dense state that led to our concrete predictions about the Universe.

Over the 1960s and 1970s, as well as ever since, a combination of observational and theoretical advances unequivocally demonstrated the success of the Big Bang in describing our Universe and predicting its properties.

  • The discovery of the cosmic microwave background and the subsequent measurement of its temperature and the blackbody nature of its spectrum eliminated alternative theories like the Steady State model.
  • The measured abundances of the light elements throughout the Universe verified the predictions of Big Bang nucleosynthesis, while also demonstrating the need for fusion in stars to provide the heavy elements in our cosmos.
  • And the farther away we look in space, the less grown-up and evolved galaxies and stellar populations appear to be, while the largest-scale structures like galaxy groups and clusters are less rich and abundant the farther back we look.

The Big Bang, as verified by our observations, accurately and precisely describes the emergence of our Universe, as we see it, from a hot, dense, almost-perfectly uniform early stage.

But what about the “beginning of time?” What about the original idea of a singularity, and an arbitrarily hot, dense state from which space and time themselves could have first emerged?

(Credit: NASA/CXC/M. Weiss)

That’s a different conversation, today, than it was back in the 1970s and earlier. Back then, we knew that we could extrapolate the hot Big Bang back in time: back to the first fraction-of-a-second of the observable Universe’s history. Between what we could learn from particle colliders and what we could observe in the deepest depths of space, we had lots of evidence that this picture accurately described our Universe.

But at the absolute earliest times, this picture breaks down. There was a new idea — proposed and developed in the 1980s — known as cosmological inflation, that made a slew of predictions that contrasted with those that arose from the idea of a singularity at the start of the hot Big Bang. In particular, inflation predicted:

  • A curvature for the Universe that was indistinguishable from flat, to the level of between 99.99% and 99.9999%; by comparison, a singularly hot Universe made no prediction at all.
  • Equal temperatures and properties for the Universe even in causally disconnected regions; a Universe with a singular beginning made no such prediction.
  • A Universe devoid of exotic high-energy relics like magnetic monopoles; an arbitrarily hot Universe would possess them.
  • A Universe seeded with small-magnitude fluctuations that were almost, but not perfectly, scale invariant; a non-inflationary Universe produces large-magnitude fluctuations that conflict with observations.
  • A Universe where 100% of the fluctuations are adiabatic and 0% are isocurvature; a non-inflationary Universe has no preference.
  • A Universe with fluctuations on scales larger than the cosmic horizon; a Universe originating solely from a hot Big Bang cannot have them.
  • And a Universe that reached a finite maximum temperature that’s well below the Planck scale; as opposed to one whose maximum temperature reached all the way up to that energy scale.

The first three were post-dictions of inflation; the latter four were predictions that had not yet been observed when they were made. On all of these accounts, the inflationary picture has succeeded in ways that the hot Big Bang, without inflation, has not.

(Credit: E. Siegel; ESA/Planck and the DOE/NASA/NSF Interagency Task Force on CMB research)

During inflation, the Universe must have been devoid of matter-and-radiation and instead contained some sort of energy — whether inherent to space or as part of a field — that didn’t dilute as the Universe expanded. This means that inflationary expansion, unlike matter-and-radiation expansion, didn’t follow a power law that leads back to a singularity, but rather was exponential in character. One of the fascinating aspects of this is that something that grows exponentially never reaches a singular beginning, even if you extrapolate it back to arbitrarily early times, even to a time where t → -∞.
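
The contrast is easy to see in the standard textbook scale-factor solutions (a paraphrase of the argument above, not equations quoted from the article):

```latex
a(t) \propto t^{1/2} \;\; \text{(radiation-dominated)}, \qquad
a(t) \propto t^{2/3} \;\; \text{(matter-dominated)}, \qquad
a(t) \propto e^{Ht} \;\; \text{(inflation)}
```

The power-law solutions reach a = 0 at t = 0: a singularity. The exponential solution only approaches a = 0 as t → -∞, and never actually arrives there.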

Now, there are many reasons to believe that the inflationary state wasn’t one that was eternal to the past, that there might have been a pre-inflationary state that gave rise to inflation, and that, whatever that pre-inflationary state was, perhaps it did have a beginning. There are theorems that have been proven and loopholes discovered to those theorems, some of which have been closed and some of which remain open, and this remains an active and exciting area of research.

(Credit: E. Siegel)

But one thing is for certain.

Whether there was a singular, ultimate beginning to all of existence or not, it no longer has anything to do with the hot Big Bang that describes our Universe from the moment that:

  • inflation ended,
  • the hot Big Bang occurred,
  • the Universe became filled with matter and radiation and more,
  • and it began expanding, cooling, and gravitating,

eventually leading to the present day. There is still a minority of astronomers, astrophysicists, and cosmologists who use “the Big Bang” to refer to this theorized beginning and emergence of time-and-space, but not only is that no longer a foregone conclusion, it doesn’t have anything to do with the hot Big Bang that gave rise to our Universe. The original definition of the Big Bang has now changed, just as our understanding of the Universe has changed. If you’re still behind, that’s OK; the best time to catch up is always right now.


August 22nd 2022

Quantum Physics Could Finally Explain Consciousness, Scientists Say

We asked a theoretical physicist, an experimental physicist, and a professor of philosophy to weigh in.

By Robert Lea Aug 16, 2022

During the 20th century, researchers pushed the frontiers of science further than ever before with great strides made in two very distinct fields. While physicists discovered the strange counter-intuitive rules that govern the subatomic world, our understanding of how the mind works burgeoned.

Yet, in the newly created fields of quantum physics and cognitive science, difficult and troubling mysteries still linger, and occasionally entwine. Why do quantum states suddenly resolve when they’re measured, making it at least superficially appear that observation by a conscious mind has the capacity to change the physical world? What does that tell us about consciousness?

Popular Mechanics spoke to three researchers from different fields for their views on a potential quantum consciousness connection. Stop us if you’ve heard this one before: a theoretical physicist, an experimental physicist, and a professor of philosophy walk into a bar …

Quantum Physics and Consciousness Are Weird

Early quantum physicists noticed through the double-slit experiment that the act of attempting to measure photons as they pass through wavelength-sized slits to a detection screen on the other side changed their behavior.

This measurement attempt caused wave-like behavior to be destroyed, forcing light to behave more like a particle. While this experiment answered the question “is light a wave or a particle?” — it’s neither, with properties of both, depending on the circumstance — it left behind a more troubling question in its wake. What if the act of observation with the human mind is actually causing the world to manifest changes, albeit on an incomprehensibly small scale?

Renowned and reputable scientists such as Eugene Wigner, John Bell, and later Roger Penrose, began to consider the idea that consciousness could be a quantum phenomenon. Eventually, so did researchers in cognitive science (the scientific study of the mind and its processes), but for different reasons.

Ulf Danielsson, an author and a professor of theoretical physics at Uppsala University in Sweden, believes one of the reasons for the association between quantum physics and consciousness—at least from the perspective of cognitive science—is the fact that processes on a quantum level are completely random. This is different from the deterministic way in which classical physics proceeds, and means even the best calculations that physicists can come up with in regard to quantum experiments are mere probabilities.

This rationalization isn’t convincing to Jeffrey Barrett, the professor of philosophy of the trio, however. “I don’t think that there’s any reason to suppose from the cognitive science direction that quantum mechanics has anything to do with explaining consciousness,” Barrett says.

From the quantum perspective, however, Barrett sees a clear reason why physicists first proposed the connection to consciousness.

“If it wasn’t for the quantum measurement problem, nobody, including the physicists involved in this early discussion, would be thinking that consciousness and quantum mechanics had anything to do with each other,” he says.

Superposition and Schrödinger’s Cat

At the heart of quantum “weirdness” and the measurement problem, there is a concept called “superposition.”

Because the possible states of a quantum system are described using wave mathematics — or more precisely, wave functions — a quantum system can exist in many overlapping states, or a superposition. The weird thing is, these states can be contradictory.
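
As a rough illustration of what a superposition means operationally, here is a minimal Python sketch of a two-state system; the amplitudes are arbitrary choices for the example, not values tied to any particular experiment. Each simulated measurement resolves the superposition into one definite outcome, with probabilities given by the squared amplitudes (the Born rule):

```python
import numpy as np

# A hypothetical two-state system in the superposition a|0> + b|1>.
# Any amplitudes with |a|^2 + |b|^2 = 1 would do; these are arbitrary.
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)
assert np.isclose(abs(a) ** 2 + abs(b) ** 2, 1.0)  # normalisation check

rng = np.random.default_rng(seed=0)

# Before measurement, the state is both terms at once. Each measurement
# yields a single definite outcome, with Born-rule probabilities.
outcomes = rng.choice([0, 1], size=100_000, p=[abs(a) ** 2, abs(b) ** 2])

print("fraction of 0s:", (outcomes == 0).mean())  # ~1/3
print("fraction of 1s:", (outcomes == 1).mean())  # ~2/3
```

Nothing in the sketch says *why* a single outcome emerges on each measurement; that gap is the measurement problem the interviewees are discussing.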

To see how counter-intuitive this can be, we can refer to one of history’s most famous thought experiments, the Schrödinger’s Cat paradox.

Devised by Erwin Schrödinger, the experiment sees an unfortunate cat placed in a box with what the physicist described as a “diabolical device” for an hour. The device releases a deadly poison if an atom in the box decays during that period. Because the decay of atoms is completely random, there is no way for the experimenter to predict if the cat is dead or alive until the hour is up and the box is opened. Until that moment, the quantum description treats the atom, and with it the cat, as a superposition of decayed and not-decayed, dead and alive; only observation resolves it one way or the other.

August 16th 2022

Hormones in hair may reveal how chronically stressed you are — study

Long-term stress isn’t good for you, and your hair knows it.

Stress can do a number on your body — and that includes the hairs on your head. Stress releases hormones that affect hair pigmentation, turning your luscious locks gray or white. It can also make your hair fall out, triggering hair follicles to enter a “dormant” phase that results in hair loss. And now, according to a paper published Wednesday in the journal PLOS Global Public Health, researchers have discovered that stress levels are also reflected in how much of the hormone cortisol is stored in your hair.

Sloppy Use of Machine Learning Is Causing a ‘Reproducibility Crisis’ in Science

AI hype has researchers in fields from medicine to sociology rushing to use techniques that they don’t always understand—causing a wave of spurious results.


History shows civil wars to be among the messiest, most horrifying of human affairs. So Princeton professor Arvind Narayanan and his PhD student Sayash Kapoor got suspicious last year when they discovered a strand of political science research claiming to predict when a civil war will break out with more than 90 percent accuracy, thanks to artificial intelligence.

A series of papers described astonishing results from using machine learning, the technique beloved by tech giants that underpins modern AI. Applying it to data such as a country’s gross domestic product and unemployment rate was said to beat more conventional statistical methods at predicting the outbreak of civil war by almost 20 percentage points.

Yet when the Princeton researchers looked more closely, many of the results turned out to be a mirage. Machine learning involves feeding an algorithm data from the past that tunes it to operate on future, unseen data. But in several papers, researchers failed to properly separate the pools of data used to train and test their code’s performance, a mistake termed “data leakage” that results in a system being tested with data it has seen before, like a student taking a test after being provided the answers.
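
That failure mode is easy to reproduce. The sketch below is a deliberately contrived scikit-learn example, not the civil-war pipelines themselves: the features are pure noise, yet selecting “informative” features on the full dataset before cross-validation makes the model look far better than chance, while doing the selection inside each training fold exposes the truth.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10_000))  # 10,000 features of pure noise
y = rng.integers(0, 2, size=100)    # labels carrying no real signal

# WRONG: feature selection on the full dataset lets held-out labels
# influence which features the model sees -- a form of data leakage.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)

# RIGHT: put selection inside the pipeline so it is re-fit on each
# training fold and never sees the test fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20),
                     LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5)

print(f"leaky accuracy:  {leaky.mean():.2f}")   # typically well above 0.5
print(f"honest accuracy: {honest.mean():.2f}")  # close to 0.5, i.e. chance
```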

“They were claiming near-perfect accuracy, but we found that in each of these cases, there was an error in the machine-learning pipeline,” says Kapoor. When he and Narayanan fixed those errors, in every instance they found that modern AI offered virtually no advantage.

That experience prompted the Princeton pair to investigate whether misapplication of machine learning was distorting results in other fields—and to conclude that incorrect use of the technique is a widespread problem in modern science.

AI has been heralded as potentially transformative for science because of its capacity to unearth patterns that may be hard to discern using more conventional data analysis. Researchers have used AI to make breakthroughs in predicting protein structures, controlling fusion reactors, and probing the cosmos.

Yet Kapoor and Narayanan warn that AI’s impact on scientific research has been less than stellar in many instances. When the pair surveyed areas of science where machine learning was applied, they found that other researchers had identified errors in 329 studies that relied on machine learning, across a range of fields.

Kapoor says that many researchers are rushing to use machine learning without a comprehensive understanding of its techniques and their limitations. Dabbling with the technology has become much easier, in part because the tech industry has rushed to offer AI tools and tutorials designed to lure newcomers, often with the goal of promoting cloud platforms and services. “The idea that you can take a four-hour online course and then use machine learning in your scientific research has become so overblown,” Kapoor says. “People have not stopped to think about where things can potentially go wrong.”

Excitement around AI’s potential has prompted some scientists to bet heavily on its use in research. Tonio Buonassisi, a professor at MIT who researches novel solar cells, uses AI extensively to explore novel materials. He says that while it is easy to make mistakes, machine learning is a powerful tool that should not be abandoned. Errors can often be ironed out, he says, if scientists from different fields develop and share best practices. “You don’t need to be a card-carrying machine-learning expert to do these things right,” he says.


Kapoor and Narayanan organized a workshop late last month to draw attention to what they call a “reproducibility crisis” in science that makes use of machine learning. They were hoping for 30 or so attendees but received registrations from over 1,500 people, a surprise that they say suggests issues with machine learning in science are widespread.

During the event, invited speakers recounted numerous examples of situations where AI had been misused, from fields including medicine and social science. Michael Roberts, a senior research associate at Cambridge University, discussed problems with dozens of papers claiming to use machine learning to fight Covid-19, including cases where data was skewed because it came from a variety of different imaging machines. Jessica Hullman, an associate professor at Northwestern University, compared problems with studies using machine learning to the phenomenon of major results in psychology proving impossible to replicate. In both cases, Hullman says, researchers are prone to using too little data, and misreading the statistical significance of results.

This popular anti-aging goo can help regrow muscle — study

How muscle stem cells activate themselves to repair damaged tissue has boggled scientists. Now we know hyaluronic acid might be the key.

Alec Smith, a regenerative medicine researcher at the University of Washington’s Institute for Stem Cell and Regenerative Medicine, who was not involved in the study, says that this finding may help individuals who lose muscle and can’t grow it back due to extensive trauma, such as gunshot wounds or car accidents.

“The big goal in this area of muscle regeneration is to work out how we can enhance the process to allow people to overcome more significant wounds,” he explains to Inverse.

Hyaluronic acid is best known for its cosmetic and anti-aging benefits. (Shutterstock)

Here’s the background — Scientists already knew that muscle stem cells help out with the healing process and are critical to muscle regeneration. But they tend to arrive long after inflammation sets in, Jeff Dilworth, the study’s lead researcher and an epigeneticist at The Ottawa Hospital Research Institute in Canada, tells Inverse.

“What happens after muscle injury is that there’s inflammation, immune cells come into the injured muscle to remove damaged tissues,” he says. “Your muscle doesn’t want to waste making new muscle until you’ve got the inflammation resolved so stem cells don’t actually start working until almost two days after injury.”

During inflammation, immune cells like macrophages (which eat injured cells) spit out a deluge of chemicals called cytokines, which tell other cells what to do. To muscle stem cells, also called satellite cells, cytokines tell them not to wake up until most of the inflammatory storm is over.

The discovery — How exactly muscle stem cells can counter their chemical snooze button and activate themselves has boggled scientists for some time. Dilworth and his team stumbled across hyaluronic acid’s role when investigating an enzyme — a protein that acts as a biological catalyst to speed up chemical reactions in cells — that seemed to work only during inflammation.

Looking at mice and muscle cells grown in Petri dishes, the researchers found that in response to the influx of cytokines, the enzyme JMJD3 rouses muscle stem cells awake. More specifically, the enzyme cozies up to a gene involved in hyaluronic acid production called Has2.

“The stem cells, as it’s receiving those signals [from cytokines], is going to start making hyaluronic acid and coats itself,” explains Dilworth. “[This] sort of creates a force field or a barrier around [the muscle stem cells] that’s going to protect it from these negative signals coming from the cytokines.”

Once that happens, muscle stem cells covered in hyaluronic body armor amass at the site of injury in droves. They start replicating, expanding the workforce available to restore and make new muscle.

When a muscle fiber is damaged, stem cells (in pink) start producing and coating themselves with hyaluronic acid (pale green outline). Once the coating gets thick enough, it causes the muscle stem cells to wake up. (Dr. Kiran Nakka / The Ottawa Hospital Research Institute)

Why it matters — While this is a mouse study and needs human studies to validate the results, “this is a really exciting finding,” says Anthony Atala, director of Wake Forest University’s Institute for Regenerative Medicine. He was not involved in the study.

“It’s basically advancing our knowledge of how muscle regenerates. It’s also advancing our knowledge of how to make sure we can overcome the challenges for regeneration.”

These challenges lie in tackling diseases that affect the muscles, like muscular dystrophy, which currently has no cure and where gene mutations interfere with healthy muscle growth, leading to a gradual, debilitating, and life-threatening muscle loss. Experiencing acute trauma, like a car accident, results in a similar issue.

Currently, there are efforts to hack or jumpstart the regeneration process using medication, small molecule therapies, and gene therapies — but these are all very much experimental.

“This [study] is a really valuable piece of the puzzle,” says Smith. “But we’re still going to need to understand more about how exactly the regeneration is activated to be able to move towards designing therapies that can enhance muscle regeneration.”

What’s next — Dilworth and his team are continuing their investigations to fully suss out the dynamic between hyaluronic acid, muscle stem cells, and inflammation and whether other chemical or cellular players are involved.

“We’re going to use [this research] as a tool for basic science to try and understand the complete pathway of communication between the immune system and the stem cell, which is poorly defined at the moment,” he says.

The researchers want to channel their findings into clinical applications, like finding ways to turn up hyaluronic acid production in the muscle stem cells of older people.

Until then, Dilworth does not recommend you slap some hyaluronic acid on that achy, bruised muscle and hope for a miracle. That definitely won’t work — better to save the tonic for your face instead (for now).

Walk into any cosmetics store and turn into the skincare aisle. Among all the lotions and potions, you will find it hard to avoid the reigning heavyweight of anti-aging ingredients: hyaluronic acid. This sugar molecule typically found in the body reportedly delivers some serious benefits, from ironing out wrinkles and plumping skin to safeguarding eye and joint health.

Now hyaluronic acid can add revitalizing muscles to its glowy resume, according to a paper published Thursday in Science. Researchers discovered that after muscle damage, hyaluronic acid swoops in and nudges muscle stem cells to get cracking on making repairs. The finding could lead to interventions that boost our bodies’ mending mechanisms, particularly in people who experience severe trauma or injury or those with a muscle-wasting condition like muscular dystrophy.

https://www.inverse.com/mind-body/hyaluronic-acid?utm_source=pocket-newtab-global-en-GB

May 18th 2022

Science confirms these parts of the Bible are true

Stars Insider


Like any other religious texts in history, the Bible is open to interpretation and it’s not confirmed by science to be factually accurate in every account. This, however, is not the case for every bit of text in the best-selling book of all time. In fact, some of these verses have been proved by science to be true.

Intrigued? Click through the following gallery and discover the parts of the Bible that have been confirmed by science.

Read More Science confirms these parts of the Bible are true (msn.com)

As for the core of the Earth being hot, well, Job 28:5 mentions it: “The earth, from which food comes, is transformed below as by fire.”
Blowing it all away. All climate change records blown this year.

April 29th 2022

Liver Care

If you’re struggling to get rid of those extra pounds or feel tired and low on energy, you could have an overworked liver. Here are 7 liver-damaging mistakes to watch out for:

1. Not drinking enough water — The golden rule is to drink at least 8 full glasses of water every day. According to Dr. Michele Neil-Sherwood from the Functional Medical Institute, “Dehydration can have a direct effect on our liver’s ability to properly detoxify our body.” When your liver can’t clear those toxins, the risk of illness increases.

2. Eating heavy meals or high glycemic foods before bed — Eating heavy meals before bed is a guaranteed way to make your liver work overtime. High glycemic foods are the worst culprits here. This includes foods like breads, white rice, sweets, even “healthy” fruits like watermelon and pineapple. Experts recommend avoiding these liver-taxing foods before bed. If you’re craving a snack, go for fresh carrots which help cleanse your liver.

3. Eating too many trans fats — Trans fats are dangerous preservatives commonly found in prepackaged foods. They often show up as “hydrogenated oils” in the ingredient list. Consuming too many trans fats increases weight gain, packing more fat onto your liver and belly.

4. Eating too much sugar — Refined sugar and high-fructose corn syrup wreak havoc on your liver. Some studies suggest it can damage your liver as much as alcohol does, even if you’re not overweight. Fructose is converted to fat in the body, which increases your risk of developing a fatty liver.

5. Not getting enough exercise — Getting a proper amount of exercise is important, even if you’re not overweight. Not only does exercise help you work up a good sweat, but it improves liver detoxification too. Even several brisk walks every week can pay huge benefits.

6. Consuming too much vitamin A — Out of this entire list, this one might surprise you the most! Vitamin A delivers many great benefits at normal doses, such as protecting your eyes and supporting a healthy immune system. But too much vitamin A is toxic to your liver. How much is too much? Generally, doses over 40,000 IU daily. Most people won’t have to worry about going overboard with vitamin A. However, if you take multiple vitamins and supplements that contain vitamin A, pay close attention to the total amount you’re consuming.

7. Taking the wrong herbal supplements — Certain herbal extracts, such as kava kava, can be harmful to your liver. That’s why taking the right nutrients is crucial for rejuvenating your liver and restoring a healthy metabolism.

Now there’s a new at-home breakthrough that detoxifies and rejuvenates your liver from the inside. Even better, it can ignite a healthy metabolism and could skyrocket energy in just 30 seconds every morning.

April 23rd 2022

Is Time Travel Possible?

The Short Answer: Although humans can’t hop into a time machine and go back in time, we do know that clocks on airplanes and satellites travel at a different speed than those on Earth.

We all travel in time! We travel one year in time between birthdays, for example. And we are all traveling in time at approximately the same speed: 1 second per second.

We typically experience time at one second per second. Credit: NASA/JPL-Caltech

NASA’s space telescopes also give us a way to look back in time. Telescopes help us see stars and galaxies that are very far away. It takes a long time for the light from faraway galaxies to reach us. So, when we look into the sky with a telescope, we are seeing what those stars and galaxies looked like a very long time ago.

However, when we think of the phrase “time travel,” we are usually thinking of traveling faster than 1 second per second. That kind of time travel sounds like something you’d only see in movies or science fiction books. Could it be real? Science says yes!

This image from the Hubble Space Telescope shows galaxies that are very far away as they existed a very long time ago. Credit: NASA, ESA and R. Thompson (Univ. Arizona)

How do we know that time travel is possible?

More than 100 years ago, a famous scientist named Albert Einstein came up with an idea about how time works. He called it relativity. This theory says that time and space are linked together. Einstein also said our universe has a speed limit: nothing can travel faster than the speed of light (186,000 miles per second).

Einstein’s theory of relativity says that space and time are linked together. Credit: NASA/JPL-Caltech

What does this mean for time travel? Well, according to this theory, the faster you travel, the slower you experience time. Scientists have done some experiments to show that this is true.

For example, there was an experiment that used two clocks set to the exact same time. One clock stayed on Earth, while the other flew in an airplane (going in the same direction Earth rotates).

After the airplane flew around the world, scientists compared the two clocks. The clock on the fast-moving airplane was slightly behind the clock on the ground. So, the clock on the airplane was traveling slightly slower in time than 1 second per second.

Credit: NASA/JPL-Caltech
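
The size of that airplane-clock offset can be estimated from the velocity effect alone. The numbers below are illustrative assumptions (an airliner at roughly 900 km/h flying for about 45 hours), and the sketch ignores the gravitational and Earth-rotation effects the real flown-clock experiments also had to account for:

```python
# Special-relativistic lag of a moving clock, velocity effect only.
c = 299_792_458.0        # speed of light, m/s
v = 900 / 3.6            # assumed airliner speed: 900 km/h -> 250 m/s
flight_time = 45 * 3600  # assumed total flying time: ~45 h, in seconds

# For v much smaller than c, the fractional slowdown is ~ v^2 / (2 c^2).
fractional_lag = v**2 / (2 * c**2)
print(f"fractional slowdown: {fractional_lag:.1e}")                   # ~3.5e-13
print(f"clock lags by ~{fractional_lag * flight_time * 1e9:.0f} ns")  # tens of ns
```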

Can we use time travel in everyday life?

We can’t use a time machine to travel hundreds of years into the past or future. That kind of time travel only happens in books and movies. But the math of time travel does affect the things we use every day.

For example, we use GPS satellites to help us figure out how to get to new places. (Check out our video about how GPS satellites work.) NASA scientists also use a high-accuracy version of GPS to keep track of where satellites are in space. But did you know that GPS relies on time-travel calculations to help you get around town?

GPS satellites orbit around Earth very quickly at about 8,700 miles (14,000 kilometers) per hour. This slows down GPS satellite clocks by a small fraction of a second (similar to the airplane example above).

GPS satellites orbit around Earth at about 8,700 miles (14,000 kilometers) per hour. Credit: GPS.gov

However, the satellites are also orbiting Earth about 12,550 miles (20,200 km) above the surface. This actually speeds up GPS satellite clocks by a slightly larger fraction of a second.

Here’s how: Einstein’s theory also says that gravity curves space and time, causing the passage of time to slow down. High up where the satellites orbit, Earth’s gravity is much weaker. This causes the clocks on GPS satellites to run faster than clocks on the ground.

The combined result is that the clocks on GPS satellites experience time at a rate slightly faster than 1 second per second. Luckily, scientists can use math to correct these differences in time.

Credit: NASA/JPL-Caltech

If scientists didn’t correct the GPS clocks, there would be big problems. GPS satellites wouldn’t be able to correctly calculate their position or yours. The errors would add up to a few miles each day, which is a big deal. GPS maps might think your home is nowhere near where it actually is!
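
Both corrections can be estimated with the standard weak-field formulas and textbook constants. This is a back-of-the-envelope sketch, not the full relativistic model that GPS engineers actually apply:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # mass of Earth, kg
c = 299_792_458.0   # speed of light, m/s
R = 6.371e6         # Earth's radius, m (ground clock)
r = R + 20_200e3    # GPS orbital radius, m (~20,200 km altitude)
day = 86_400        # seconds per day

v = math.sqrt(G * M / r)  # circular orbital speed, ~3.9 km/s

# Special relativity: the fast-moving satellite clock runs SLOW.
sr = (v**2 / (2 * c**2)) * day * 1e6

# General relativity: the clock higher in Earth's gravity runs FAST.
gr = (G * M / c**2) * (1 / R - 1 / r) * day * 1e6

print(f"velocity effect: -{sr:.1f} microseconds per day")      # ~ -7
print(f"gravity effect:  +{gr:.1f} microseconds per day")      # ~ +46
print(f"net drift:       +{gr - sr:.1f} microseconds per day")  # ~ +38
```

Left uncorrected, a drift of roughly 38 microseconds per day corresponds to about 11 kilometres of ranging error per day (multiply by the speed of light), which is the miles-per-day error build-up described above.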

In Summary:

Yes, time travel is indeed a real thing. But it’s not quite what you’ve probably seen in the movies. Under certain conditions, it is possible to experience time passing at a different rate than 1 second per second. And there are important reasons why we need to understand this real-world form of time travel.

Read More Is Time Travel Possible? | NASA Space Place – NASA Science for Kids

March 31st 2022

Aliens are among us and abducting people from earth, professor says

Becca Monaghan

A professor has made some frighteningly bold claims. Not only are “alien hybrids walking among us”, they’re also abducting humans and utilising mind control powers to prepare for a mass takeover, apparently.

Dr David Jacobs, professor of history at Temple University in Pennsylvania specialising in ufology, has written several books on alien abductions.

Read More Aliens are among us and abducting people from earth, professor says (msn.com)

February 2022

What happened before the Big Bang?

Paul Sutter

In the beginning, there was an infinitely dense, tiny ball of matter. Then, it all went bang, giving rise to the atoms, molecules, stars and galaxies we see today.

Or at least, that’s what we’ve been told by physicists for the past several decades. 

But new theoretical physics research has recently revealed a possible window into the very early universe, showing that it may not be “very early” after all. Instead it may be just the latest iteration of a bang-bounce cycle that has been going on for … well, at least once, and possibly forever. 

Read More What happened before the Big Bang? (msn.com)

Einstein mystery solved after 100 years as experts make stunning Relativity breakthrough

John Varga 

Almost a hundred years ago, Albert Einstein won a Nobel Prize for his research on the phenomenon. The photoelectric effect is the emission of electrons when electromagnetic radiation, such as light, hits a material. Electrons emitted in this manner are called photoelectrons.

Scientists have studied the photoelectric effect in molecules extensively.

However, they haven’t yet determined its evolution over time in an experimental measurement.

Read More Einstein mystery solved after 100 years as experts make stunning Relativity breakthrough (msn.com)

NASA solar probe sees Venus glowing ‘like an iron from the forge’

Amanda Kooser

NASA’s Parker Solar Probe is on a daring mission to study the sun, but it’s picked up a hobby along the way: studying Venus. On Wednesday, the space agency unveiled stunning new views of the planet that’s been called Earth’s twin. 

Read More NASA solar probe sees Venus glowing ‘like an iron from the forge’ (msn.com)

It looks like Jeff Bezos has plans to cheat death. He’s backing a new biotech company working on “cellular rejuvenation programming.” (© Mark Ralston – Getty Images)

The founder and former CEO of Amazon has reportedly made an investment in the freshly launched Altos Labs, a biotech startup focused on “cellular rejuvenation programming to restore cell health and resilience, with the goal of reversing disease to transform medicine,” according to a January 19 press release. With $3 billion in backing on day one, Altos Labs has hit the ground running with what may be the single largest funding round for a biotech company, according to the Financial Times.

Read More Jeff Bezos Is Paying for a Way to Make Humans Immortal (msn.com)

January 2022

Science breakthrough: Space satellite makes ‘incredible’ discovery of ‘deformed’ planet

Joel Day 

Space: Scientists were stunned by the rugby ball-shaped exoplanet

A planet is round because of gravity; its gravity pulls equally from all sides, from the centre to the edges, like the spokes of a bicycle wheel. The result is an overall sphere, a three-dimensional circle. The biggest differences between planets come from other things like their size, their terrain, their chemical makeup and distance to their star.

However, a recent discovery made by the European Space Agency‘s (ESA) exoplanet-hunting satellite Cheops has thrown this previously undisputed conclusion on its head.

Scanning the Universe for exoplanets – those worlds outside our Solar System – it caught sight of a planet deformed by the strong tidal pull of its host star.

Read more Science breakthrough: Space satellite makes ‘incredible’ discovery of ‘deformed’ planet (msn.com)

The James Webb Space Telescope’s 1st target star is in the Big Dipper. Here’s where to see it.

Doris Elin Urrutia

The James Webb Space Telescope is setting its eyes towards the Big Dipper.

The star HD 84406 is located in the constellation Ursa Major, near the Big Dipper.

Although the spacecraft is still months away from beginning its official scientific observations, one particularly bright star named HD 84406 will soon be the object of JWST’s attention.

“Star light, star bright … the first star Webb will see is HD 84406, a sun-like star about 260 light years away,” NASA officials tweeted on Friday (Jan. 28) via JWST’s official Twitter account. 

Comment The James Webb Space Telescope’s 1st target star is in the Big Dipper. Here’s where to see it. (msn.com)

‘X particle’ from the dawn of time detected inside the Large Hadron Collider

Ben Turner 

Physicists at the world’s largest atom smasher have detected a mysterious, primordial particle from the dawn of time.

The particle was produced inside the Large Hadron Collider at CERN.

About 100 of the short-lived “X” particles — so named because of their unknown structures — were spotted for the first time amid trillions of other particles inside the Large Hadron Collider (LHC), the world’s largest particle accelerator, located near Geneva at CERN (the European Organization for Nuclear Research). 

Read More ‘X particle’ from the dawn of time detected inside the Large Hadron Collider (msn.com)

Blasts of light coming from ‘mystery’ space object – and Twitter think it’s a sign

Becca Monaghan 

The “mystery” object pulsed three times an hour, brightening for 30 to 60 seconds. Natasha Hurley-Walker, from the International Centre for Radio Astronomy Research, who led the research, said: “This object was appearing and disappearing over a few hours during our observations.”

“That was completely unexpected. It was kind of spooky for an astronomer because there’s nothing known in the sky that does that.”


Read More Blasts of light coming from ‘mystery’ space object – and Twitter think it’s a sign (msn.com)

Battery breakthrough achieves energy density necessary for electric planes

Anthony Cuthbertson 

Electric planes capable of carrying hundreds of passengers have been held back by battery technology development. (Wright Electric)

Researchers have achieved a world-leading energy density with a next-generation battery design, paving the way for long-distance electric planes.

The lithium-air battery, developed at the Japanese National Institute for Materials Science (NIMS), had an energy density of over 500 Wh/kg. By comparison, lithium-ion batteries found in Tesla vehicles have an energy density of 260 Wh/kg.
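
Energy density matters for aviation because it sets the pack mass needed to carry a given energy budget. The 5,000 kWh budget below is a made-up illustrative figure, not one from the article:

```python
# Battery pack mass needed for an assumed flight-energy budget.
flight_energy_wh = 5_000 * 1000  # hypothetical 5,000 kWh budget, in Wh

for name, wh_per_kg in [("lithium-ion (Tesla-style)", 260),
                        ("NIMS lithium-air", 500)]:
    mass_kg = flight_energy_wh / wh_per_kg
    print(f"{name}: {mass_kg / 1000:.1f} tonnes")

# lithium-ion (Tesla-style): 19.2 tonnes
# NIMS lithium-air: 10.0 tonnes
```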

Read More Battery breakthrough achieves energy density necessary for electric planes (msn.com)

Astronomers spot mysterious object unlike anything they have seen before

By Nina Massey, PA Science Correspondent

A mysterious object unlike anything ever seen before has been spotted by astronomers. (ICRAR/PA)

Observations of the “spooky” item showed it releasing a giant burst of energy three times an hour.

As it spins through space the strange object sends out a beam of radiation, and for one minute in every twenty minutes it is one of the brightest objects in the sky.

The researchers think the object could be a neutron star or a white dwarf – collapsed cores of stars – with an ultra-powerful magnetic field.

Read More Astronomers spot mysterious object unlike anything they have seen before (msn.com)

Moon-landing hoax still lives on. But why?

Elizabeth Howell 



The moon-landing hoax still lives on, more than 50 years after Apollo 11, the first crewed mission to land on the moon, and there seems to be no convincing some people. Here astronaut Buzz Aldrin walks on the surface of the moon near the leg of the lunar module Eagle during the Apollo 11 mission in July 1969. Mission commander Neil Armstrong took this photograph with a 70-millimeter lunar surface camera.

Read More Moon-landing hoax still lives on. But why? (msn.com)

How is metal 3D printing transforming space travel?

Ailsa Harvey 

Metal 3D printing can produce the most intricate rocket parts, using combustion-resistant material. While 3D printing isn’t new, how has the technology advanced to face the more extreme conditions of space?

Read More How is metal 3D printing transforming space travel? (msn.com) 

New study provides first evidence of non-random mutations in DNA

Harry Baker  

Researchers studying the genetic mutations in a common roadside weed, thale cress (Arabidopsis thaliana), have discovered that the plant can shield the most “essential” genes in its DNA from the changes, while leaving other sections of its genome to build up more alterations. 


Read More New study provides first evidence of non-random mutations in DNA (msn.com)


Genetic changes that crop up in an organism’s DNA may not be completely random, new research suggests. That would upend one of the key assumptions of the theory of evolution.

Comment It was actually a U.S. scientist who pioneered DNA testing for crime solving. He said it wasn’t at all foolproof, and rather like inventing a car before wheels existed. It is a different story in the U.K., with a scientist from Leicester getting the credit, and the British police and judiciary accepting it as infallible evidence – so long as it suits their purposes.

It is also not true, when DNA is reduced to numbers, to say that two people cannot have the same DNA code; that statement was never true. I was taught on the principle that you can never say with certainty that all crows are black, because you will never see them all. Fortunately I went to two excellent universities before higher education became a right and a tick-box exercise.

It is now known that significant DNA changes take place in humans over time, allied to life stresses and age. Criminals have also realised that they can lay false DNA trails. It is certainly not the wonder tool it is claimed to be, and it is subject to police malice and to blinding juries with science. Here in Britain the police and CPS have a reputation for withholding and fabricating evidence, which is even more serious criminal behaviour.

R J Cook

Scientists finally have explanation for incredibly bright light that came from deep in space

Andrew Griffin 


Three years ago, astronomers were stunned to see a bright blue flash that came out of the spiral arm of a distant galaxy, some 200 million light-years away.

The initial detection of the event known as AT2018cow happened in June 2018, when it was seen by a survey in Hawaii, which quickly sent out global alerts to tell other telescopes to look towards it.

Read More Scientists finally have explanation for incredibly bright light that came from deep in space (msn.com)

Archive 2021

NASA discovers new Jupiter-sized planet which is 120 times bigger than Earth

Antony Thrower  

A new planet with three times the mass of Jupiter has been discovered in the galaxy.

Dubbed TOI-2180 b, the planet was found by citizen scientist Tom Jacobs using data collected by Nasa’s Transiting Exoplanet Survey Satellite (TESS).

NASA discovers new Jupiter-sized planet which is 120 times bigger than Earth (msn.com)

Omicron’s not the last variant we’ll see. Will the next one be bad? – January 17th 2022

Nicoletta Lanese  


The new year rode in on a wave of omicron cases, but will this be the last of the variants, or will a brand-new “variant of concern” emerge in 2022?

Comment Omicron’s not the last variant we’ll see. Will the next one be bad? (msn.com)


Antarctica discovery of ‘warm caves’ offered stunning evidence for life beneath ice

Charlie Pittock 

Mount Erebus is Antarctica’s second-highest active volcano, after Mount Sidley, and the southernmost active volcano on Earth. With a summit elevation of 3,684 metres, it is located on Ross Island, an island formed by four volcanoes in the Ross Sea. It has been active for around 1.3 million years and could help provide evidence for a secret world of animals and plants beneath the icy continent.

Read More Antarctica discovery of ‘warm caves’ offered stunning evidence for life beneath ice (msn.com)

James Webb Space Telescope unfurls massive sunshield in major deployment milestone

Mike Wall  



One of the James Webb Space Telescope’s most nail-biting deployment steps is safely in the books.

The sunshield of NASA’s James Webb Space Telescope is seen here fully unfurled during prelaunch testing on Earth. Webb unfolded the sunshield in space on Dec. 31, 2021.

The $10 billion observatory unfurled its huge sunshield on Friday (Dec. 31), carefully unfolding the five-layer structure by sequentially deploying two booms.

Read More James Webb Space Telescope unfurls massive sunshield in major deployment milestone (msn.com)

December 30th 2021

Nasa’s alien-hunting James Webb Space Telescope gets first surprise breakthrough as its lifetime ‘significantly’ extended

Andrew Griffin  



Nasa’s new James Webb Space Telescope has seen its first major breakthrough, with the agency announcing it will last “significantly” longer than previously expected.

Read More Nasa’s alien-hunting James Webb Space Telescope gets first surprise breakthrough as its lifetime ‘significantly’ extended (msn.com)

November 28th 2021

US and China Race to Control the Future Through Artificial Intelligence

By Shi Shan and Anne Zhang, November 27, 2021 (Updated: November 28, 2021)

News Analysis

As every aspect of modern life becomes more and more digitized, not just the economies of nations, but their sovereign influence will rely more and more on their command of technology, and especially the emerging technology of artificial intelligence (AI).

In the 21st-century information technology revolution, whoever reaches a breakthrough in developing AI will come to dominate the world.

Read More US and China Race to Control the Future Through Artificial Intelligence (theepochtimes.com)

November 16th 2021

Black hole breakthrough as portal ‘wormholes’ could be shortcuts across universe



Since the early 20th century, physicists have fantasized about deep space travel involving shortcuts through the very fabric of space and time (spacetime). These theoretical wormholes, or portals between two black holes, would allow spacecraft to bypass the vast distances between celestial bodies by bridging two separate points in the universe. In theory, this would allow a spacecraft to cover vast distances in a much shorter period of time and possibly allow for some basic time travel.

Read More Black hole breakthrough as portal ‘wormholes’ could be shortcuts across universe (msn.com)

Scientists solve why some people die from Covid and not others

Laura Sharman  



The mystery behind why some people die from Covid and not others has been revealed in a new study.

Scientists at Oxford University discovered a piece of DNA that doubles the risk of death from coronavirus.

The gene, called LZTFL1, stops lung cells from fighting off coronavirus and can lead to respiratory failure where oxygen cannot reach vital organs.

Some 15% of Britons and Europeans could have the gene while people of South Asian heritage are at greater risk, with more than three in five thought to have it.

Read More Scientists solve why some people die from Covid and not others (msn.com)

Comment So it is bioengineered to kill Mongoloids in Russia and China.

R J Cook.

NASA Mars rover reveals landing site was a huge lake, could hold signs of alien life

October 8th 2021



Mars rovers may soon be given a few new checkpoints to explore. Geologists say specific zones of an ancient river delta, near where NASA’s Perseverance rover is stationed on the rocky red planet, could hold fossilized evidence of extraterrestrial life.

Perseverance’s view across the Jezero Crater toward the ancient river delta. (NASA/JPL-Caltech)

Read More NASA Mars rover reveals landing site was a huge lake, could hold signs of alien life (msn.com)

Black hole breakthrough as Einstein’s theory challenged with find: ‘Might need a new one’ September 28th 2021

Sebastian Kettley

The general theory of relativity, or simply general relativity, has been touted as the biggest scientific breakthrough of the 20th century. Published by Albert Einstein in 1915, the theory changed our understanding of Newtonian gravity as a force between bodies into a warping of the very fabric of space and time – spacetime. But the theory is not entirely foolproof and there are situations, particularly in the world of black holes and quantum physics, where cracks start to appear.

Read More Black hole breakthrough as Einstein’s theory challenged with find: ‘Might need a new one’ (msn.com)

Comment Newton wrote more about religion than physics. He followed Galileo, who had been shown the rack at the Vatican as a warning against stating that the earth was not the centre of the universe. Newton explained matters in terms of motion in straight lines; Einstein saw it all as curved. Time has always been a mystery, as has how the universe started. Hawking’s theory was a fascinating attempt to explain how it all started, but to me consciousness remains the big mystery. As mine weakens with age and despair, I am reminded that objectivity is a problem in science. I am also aware that science is funded by and to serve the ruling elite. It follows that any serious discoveries would be censored as far as public offerings are concerned. However, it is natural and fun to speculate.

R.J Cook


The Observer
UFOs

‘What I saw that night was real’: is it time to take aliens more seriously? – September 17th 2021

‘I don’t know if I was abducted by aliens or not. The whole point of my work is to describe what happened to me’: Whitley Strieber. Illustration: Ana Yael

The Pentagon has been quietly investigating unidentified flying objects since 2007. The fact that they think they might exist is good news to those who claim to have seen them.

Daniel Lavelle, Sun 12 Sep 2021 11.00 BST

In June, the US government published a long-awaited report into UFOs. Although the report did not, as many had hoped, admit to the existence of little green men, it did reveal that not only were objects appearing in our skies that the Pentagon – which controls the US military – could not explain, but some clearly pose “a safety of flight issue and may pose a challenge to US national security”.

The Pentagon also revealed that it has been taking UFOs so seriously that in 2007 it discreetly set up the Advanced Aerospace Threat Identification Program (AATIP), which has been gathering data on Unexplained Aerial Phenomena (UAPs) ever since.

The unclassified version of the report (there was also a classified version seen only by US lawmakers) found “no clear indications that there is any non-terrestrial explanation” for the sightings. But neither did it rule it out. The report offered five typically mundane possible explanations for the UFOs and, crucially, one catch-all “other” bin.

Read More ‘What I saw that night was real’: is it time to take aliens more seriously? | UFOs | The Guardian

Climate crisis: ‘Tipping points’ that triggered extreme change 55 million years ago found- September 3rd 2021

Sebastian Kettley 



The groundbreaking discovery challenges previous assumptions about the so-called Paleocene-Eocene Thermal Maximum (PETM) – one of the most extreme periods of global warming in Earth’s history. Until now, scientists have believed the PETM was exacerbated by intense volcanic activity dumping large amounts of carbon dioxide (CO2) into the atmosphere. But the trigger behind this prolonged warming period has remained a mystery.

New research carried out at the University of Exeter has now identified heightened mercury levels in rock samples collected from the North Sea.

The mercury, which has been detected just before the PETM began, may have been caused by extensive volcanic activity.


Read More Climate crisis: ‘Tipping points’ that triggered extreme change 55 million years ago found (msn.com)

Earth could be SWALLOWED as dozens of rogue black holes found lurking in Milky Way – September 2nd 2021

Jacob Paul  


Researchers from Harvard University found that the black holes go rogue when their host galaxy collides with another, usually larger, galaxy – and knocks the hole from its central spot. They discovered this by simulating the formation and movement of supermassive black holes over billions of years of universal evolution. They did so by running a series of so-called ‘ROMULUS’ cosmological simulations, tracking the paths of wandering black holes.

Read More Earth could be SWALLOWED as dozens of rogue black holes found lurking in Milky Way (msn.com)

Astronomers spotted a distant planet in the middle of making its own moon

Morgan McFall-Johnsen

July 24th 2021


The PDS 70 system, located nearly 400 light-years away and still in the process of being formed, with planet PDS 70c circled in blue. (ALMA (ESO/NAOJ/NRAO)/Benisty et al.)

For the first time, astronomers have captured an alien moon in the making.

Two enormous Jupiter-like planets are orbiting a star about 400 light-years away – and one of them seems to be forming a moon. Researchers aimed the radio dishes of the Atacama Large Millimeter/submillimeter Array (ALMA) at the distant planetary system and captured a ring of material surrounding the planet.

That “disc” is exactly how astronomers think moons form. A planet’s gravity captures surrounding dust and gas, then its rotation whips that material into a spinning disc. Over time, the dust and gas falls together into moons. Astronomers still don’t fully understand how this process works, so they could learn a lot from studying this planet.

Similarly, the star itself has a disc – material that could one day coalesce into new planets.

“These new observations are also extremely important to prove theories of planet formation that could not be tested until now,” Jaehan Bae, a researcher at the Earth and Planets Laboratory of the Carnegie Institution for Science, who co-authored the study, said in a press release.


The first image, above, shows the star at the center of the disc. The system is called PDS 70, and the planet with the moon disc is called PDS 70c.

The planet’s moon-making halo, captured below, is about the width of the distance between Earth and the sun. That’s about 500 times larger than Saturn’s rings.

The planet PDS 70c and its disc. (ALMA (ESO/NAOJ/NRAO)/Benisty et al.)

The disc contains enough material to make three moons the size of our own moon, according to the astronomers, who published their research in The Astrophysical Journal Letters on Thursday.

“In short, it is still unclear when, where, and how planets and moons form,” Stefano Facchini, a co-author of the study and research fellow at the European Southern Observatory, said in the release. “This system therefore offers us a unique opportunity to observe and study the processes of planet and satellite formation.”

The other planet circling this star does not show signs of a disc. The researchers said this could indicate that the moon-disc planet gobbled up all the available material, starving its twin. In addition to forming moons, the disc is likely helping the planet grow larger as material slowly falls into it.

The researchers plan to look at this star and its disc-adorned planet more closely with the Extremely Large Telescope, still under construction in Chile’s Atacama Desert. Once built, it will be the largest visible- and infrared-light telescope on Earth.

Read the original article on Business Insider

NASA official Bill Nelson says we are ‘not alone’ in the universe – Posted June 29th 2021

Matt McNulty for DailyMail.com


We are not alone in the universe, according to NASA official and former astronaut Bill Nelson.

Nelson’s statement comes just days after the Pentagon was unable to offer an explanation for UFOs spotted by US military personnel in a newly-declassified report released on Friday, which confirmed at least 144 cases of unidentified aerial phenomena (UAP) sightings.

‘Yes, I’ve seen the classified report. It says basically what we thought. We don’t know the answer to what those Navy pilots saw, they know that they saw something, they tracked it, they locked their radar onto it, they followed it, it would suddenly move quickly from one location to another,’ Nelson told CNN on Monday of the Pentagon’s declassified report.

‘And what the report does tell us that is public, is that there have been over 140 of these sightings. So naturally, what I ask our scientists to do is to see if there’s any kind of explanation, from a scientific point of view, and I’m awaiting their report,’ he added, before stating that he had spoken with Navy pilots after a briefing on the matter while he was still serving as Florida‘s senior senator from 2001 to 2019.

‘… I talked to the Navy pilots, when we were briefed in the Senate Armed Services Committee, and my feeling is that there is clearly something there. It may not necessarily be an extraterrestrial, but if it is a technology that some of our adversaries have, then we better be concerned.’

The secrets of human chromosomes have not yet been cracked by scientists, study suggests Posted May 30th 2021

Joe Pinkstone

Scientists have finally weighed a full set of human chromosomes and discovered they are 20 times heavier than expected – declaring there could be “missing components”.

Researchers told the Sunday Telegraph they have no idea what those components may be.

Chromosomes are bundles of genetic material which exist inside almost every cell of all complex lifeforms, from bacteria to humans and everything in between.

Most humans have 46 chromosomes — 23 pairs — all of different size and shape, but other species have varying numbers.

For example, possums have just 22, foxes have 34 and a great white shark has 82. But Atlas blue butterflies have around 450 and the Adder’s tongue fern has a staggering 1,440.

But regardless of number or organism, all chromosomes follow the same basic structure.

Individual bases of DNA, called A, G, C and T, pair up and form short, double helix-shaped chains which wrap around a ball of eight proteins to create bundles called nucleosomes.

These little packages of genetic material are joined to one another by a thin piece of connective material, and experts refer to them as ‘beads on a string’.

But while we know all this, and that the complete copy of a human genome contains more than 6.4 billion base pairs of DNA, the exact and total mass of our chromosomes has never been known.

Scientists from UCL used a powerful X-ray beam at the Diamond Light Source in Didcot, Oxfordshire, to weigh a complete set of human chromosomes for the first time.

The researchers bombarded individual chromosomes with X-rays and assessed how much the beam scattered. This diffraction pattern was used to produce a 3D reconstruction of the chromosome’s structure.

The brightness of the Diamond machinery, which outshines the Sun by billions of times, allowed for a highly detailed image.

Professor Ian Robinson and colleagues published their paper in the journal Chromosome Research, finding the mass of all 46 human chromosomes to be 242 picograms.

The heaviest is chromosome 1, which is also the largest, and it weighs 10.9 picograms. One picogram is a trillionth of a gram; for comparison, even a tiny grain of sand weighs many millions of picograms.

A red blood cell, which does not have a nucleus and therefore is devoid of genetic material, weighs around 27 picograms.

“There may be quite a lot of missing components to our chromosomes that are yet to be discovered,” Professor Ian Robinson, senior author of the new study from UCL, told The Sunday Telegraph:

“Chromosomes have been investigated by scientists for 130 years but there are still parts of these complex structures that are poorly understood.” 

He went on: “The mass of DNA we know from the Human Genome Project, but this is the first time we have been able to precisely measure the masses of chromosomes that include this DNA.

“Our measurement suggests the 46 chromosomes in each of our cells weigh 242 picograms.

“This is heavier than we would expect, and, if replicated, points to unexplained excess mass in chromosomes.”
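
The ‘excess mass’ claim is simple to sanity-check. The Python sketch below assumes roughly 650 daltons per base pair and that metaphase chromosomes carry two copies of the genome’s 6.4 billion base pairs (the DNA having already been duplicated before cell division); both figures are my assumptions rather than the paper’s, but the result lands in the same ballpark as the reported factor of 20.

```python
# Back-of-envelope check of the "unexplained excess mass" claim. Assumptions
# (mine, not the paper's): ~650 daltons per DNA base pair, and two copies of
# the 6.4 billion base pairs, since metaphase DNA has already been duplicated.

AVOGADRO = 6.022e23        # particles per mole
BP_MASS_DALTONS = 650      # approximate average mass of one base pair
BASE_PAIRS = 6.4e9         # base pairs in a full (diploid) human genome
MEASURED_MASS_PG = 242     # reported mass of all 46 chromosomes

bp_mass_g = BP_MASS_DALTONS / AVOGADRO           # grams per base pair
dna_mass_pg = BASE_PAIRS * 2 * bp_mass_g * 1e12  # x2 for metaphase duplication

print(f"expected DNA mass: {dna_mass_pg:.1f} pg")   # ~13.8 pg
print(f"measured mass:     {MEASURED_MASS_PG} pg")
print(f"ratio:             {MEASURED_MASS_PG / dna_mass_pg:.0f}x")  # ~18x
```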

In order to accurately measure the chromosomal mass, the researchers blasted them with X-rays when the cells were in metaphase, before they underwent the splitting process.

Scientists are constantly trying to learn more about the human body, and the mapping of the genome was a key step in that.

However, this study lays bare the fact there is still a long way to go before we fully understand the nuances of our own body.

Archana Bhartiya, a PhD student at the London Centre for Nanotechnology at UCL and lead author of the paper, said: “A better understanding of chromosomes may have important implications for human health.

“A vast amount of study of chromosomes is undertaken in medical labs to diagnose cancer from patient samples.

“Any improvements in our abilities to image chromosomes would therefore be highly valuable.”

Small Modular Reactors May 16th 2021

A consortium led by Rolls-Royce is on the hunt for orders for its £2 billion nuclear reactors after a redesign that means each will power 100,000 more homes. 

The Mail on Sunday can reveal that the UK Small Modular Reactor (SMR) project has revamped the proposed mini reactors to increase their output. The factory-built reactors will now generate 470 megawatts, enough to provide electricity to a million homes. 

The project, launched in 2015, aims to bring ten mini nuclear reactors into use by 2035, with the first due to enter service around 2030. 
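
The claim that 470 megawatts covers about a million homes is easy to check. The rough Python sketch below assumes an average UK household uses about 3,700 kWh of electricity a year (a typical regulator figure, not one given in the article), which works out at a little over a million homes per reactor at full output.

```python
# Sanity check: can 470 MW really supply about a million homes? Assumption
# (mine, not the article's): an average UK household uses roughly 3,700 kWh
# of electricity per year.

REACTOR_OUTPUT_MW = 470
HOUSEHOLD_KWH_PER_YEAR = 3_700

avg_household_kw = HOUSEHOLD_KWH_PER_YEAR / (24 * 365)   # ~0.42 kW average
homes_supplied = REACTOR_OUTPUT_MW * 1_000 / avg_household_kw

print(f"average household demand: {avg_household_kw:.2f} kW")
print(f"homes supplied at full output: {homes_supplied:,.0f}")   # ~1.1 million
```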

Humans Were Actually Apex Predators For 2 Million Years, New Study Finds Posted April 18th 2021

MIKE MCRAE

Paleolithic cuisine was anything but lean and green, according to a recent study on the diets of our Pleistocene ancestors. For a good 2 million years, Homo sapiens and their ancestors ditched the salad and dined heavily on meat, putting them at the top of the food chain.

It’s not quite the balanced diet of berries, grains, and steak we might picture when we think of ‘paleo’ food. But according to anthropologists from Israel’s Tel Aviv University and the University of Minho in Portugal, modern hunter-gatherers have given us the wrong impression of what we once ate.

“This comparison is futile, however, because 2 million years ago hunter-gatherer societies could hunt and consume elephants and other large animals – while today’s hunter gatherers do not have access to such bounty,” says Miki Ben‐Dor from Israel’s Tel Aviv University.

A look through hundreds of previous studies on everything from modern human anatomy and physiology to measures of the isotopes inside ancient human bones and teeth suggests we were primarily apex predators until roughly 12,000 years ago.

Reconstructing the grocery list of hominids who lived as far back as 2.5 million years ago is made all the more difficult by the fact that plant remains don’t preserve as easily as animal bones, teeth, and shells.

Other studies have used chemical analysis of bones and tooth enamel to find localized examples of diets heavy in plant material. But extrapolating this to humanity as a whole isn’t so straightforward.

We can find ample evidence of game hunting in the fossil record, but to determine what we gathered, anthropologists have traditionally turned to modern day ethnography based on the assumption that little has changed.

According to Ben-Dor and his colleagues, this is a huge mistake.

“The entire ecosystem has changed, and conditions cannot be compared,” says Ben‐Dor.

The Pleistocene epoch was a defining time in Earth’s history for us humans. By the end of it, we were marching our way into the far corners of the globe, outliving every other hominid on our branch of the family tree.

Graph showing where Homo sapiens sat on the spectrum of carnivore to herbivore during the Pleistocene and Upper Pleistocene (UP). (Credit: Dr Miki Ben-Dor)

Dominated by the last great ice age, most of what is today Europe and North America was regularly buried under thick glaciers.

With so much water locked up as ice, ecosystems around the world were vastly different to what we see today. Large beasts roamed the landscape, including mammoths, mastodons, and giant sloths – in far greater numbers than we see today.

Of course it’s no secret that Homo sapiens used their ingenuity and uncanny endurance to hunt down these massive meal-tickets. But the frequency with which they preyed on these herbivores hasn’t been so easy to figure out.

Rather than rely solely on the fossil record, or make tenuous comparisons with pre-agricultural cultures, the researchers turned to the evidence embedded in our own bodies and compared it with our closest cousins.

“We decided to use other methods to reconstruct the diet of stone-age humans: to examine the memory preserved in our own bodies, our metabolism, genetics and physical build,” says Ben‐Dor.

“Human behavior changes rapidly, but evolution is slow. The body remembers.”

For example, compared with other primates, our bodies need more energy per unit of body mass. Especially when it comes to our energy-hungry brains. Our social time, such as when it comes to raising children, also limits the amount of time we can spend looking for food.

We have higher fat reserves, and can make use of them by rapidly turning fats into ketones when the need arises. Unlike other omnivores, where fat cells are few but large, ours are small and numerous, echoing those of a predator.

Our digestive systems are also suspiciously like that of animals higher up the food chain. Having unusually strong stomach acid is just the thing we might need for breaking down proteins and killing harmful bacteria you’d expect to find on a week-old mammoth chop.

Even our genomes point to a heavier reliance on a meat-rich diet than a sugar-rich one.

“For example, geneticists have concluded that areas of the human genome were closed off to enable a fat-rich diet, while in chimpanzees, areas of the genome were opened to enable a sugar-rich diet,” says Ben‐Dor.

The team’s argument is extensive, touching upon evidence in tool use, signs of trace elements and nitrogen isotopes in Paleolithic remains, and dental wear.

It all tells a story where our genus’ trophic level – Homo’s position in the food web – became highly carnivorous for us and our cousins, Homo erectus, roughly 2.5 million years ago, and remained that way until the upper Paleolithic around 11,700 years ago.

From there, studies on modern hunter-gatherer communities become a little more useful as a decline in populations of large animals and fragmentation of cultures around the world saw to more plant consumption, culminating in the Neolithic revolution of farming and agriculture.

None of this is to say we ought to eat more meat. Our evolutionary past isn’t an instruction guide on human health, and as the researchers emphasize, our world isn’t what it used to be.

But knowing where our ancestors sat in the food web has a big impact on understanding everything from our own health and physiology, to our influence over the environment in times gone by.

This research was published in the American Journal of Physical Anthropology.

Big Bang & Big Questions April 14th 2021 – Robert Cook

I always regret abandoning physics and the hard sciences for the social ones – though I did have the pleasure of teaching maths and physics in my 30s, discovering how dully and unimaginatively the subjects were presented and treated in state schools.

I found the following article very interesting. As a university student, giving up my passion for athletics and cross-country running in my final undergraduate year, I became very involved with some highly intelligent students, including a long-lost girlfriend used as muse in my novel exploring the reality of gender consciousness, as opposed to simple sex characteristics (Man, Maid, Woman – 2003). Consciousness-expanding drugs were part of the trendy student and wider subculture. This was the generation that would never grow old and die. Peace and love was the theme, unisex was the label, and I was put off my ambition to be a military pilot.

Men are easily seduced by pretty posh blondes, expensive cars and false promises. Image Appledene Archives.

That final year nearly cost me my degree, but I was always told I was something of a mad scientist – that’s another story. One of my biggest weaknesses is a duty to empiricism that bogs me down and wastes time. Brilliant, and even just plain good, scientists need courage and imagination. It is often said that not enough women go into the hard sciences.

As someone judged by the system’s people to be a paranoid schizophrenic transsexual female for what I have said and written – oddly adding, ‘with a secure female identity’ – one must wonder whether females have such courage and imagination, and at the prejudiced and constrictive nature of the soft sciences, as we see with psychiatrists’ reliance on their DSM bible and the dubious science of epidemiology during the so-called pandemic.

Whatever the case, a country that once led the world in science is failing to inspire youngsters regardless of sex, while scientific research is warped towards military hardware and software alongside biowarfare. The following is a breath of fresh air. Robert Cook

Confirmed! We Live in a Simulation Posted April 14th 2021

Fouad Khan

Fouad Khan is a senior editor at Nature Energy and tweets at @fouadmkhan.

We must never doubt Elon Musk again

Ever since the philosopher Nick Bostrom proposed in the Philosophical Quarterly that the universe and everything in it might be a simulation, there has been intense public speculation and debate about the nature of reality. Such public intellectuals as Tesla leader and prolific Twitter gadfly Elon Musk have opined about the statistical inevitability of our world being little more than cascading green code. Recent papers have built on the original hypothesis to further refine its statistical bounds, arguing that the chance we live in a simulation may be 50–50.

The claims have been afforded some credence by repetition by luminaries no less esteemed than Neil deGrasse Tyson, the director of Hayden Planetarium and America’s favorite science popularizer. Yet there have been skeptics. Physicist Frank Wilczek has argued that there’s too much wasted complexity in our universe for it to be simulated. Building complexity requires energy and time. Why would a conscious, intelligent designer of realities waste so many resources on making our world more complex than it needs to be? It’s a hypothetical question, but one that still may need answering. Others, such as physicist and science communicator Sabine Hossenfelder, have argued that the question is not scientific anyway. Since the simulation hypothesis does not arrive at a falsifiable prediction, we can’t really test or disprove it, and hence it’s not worth seriously investigating.

However, all these discussions and studies of the simulation hypothesis have, I believe, missed a key element of scientific inquiry: plain old empirical assessment and data collection. To understand if we live in a simulation we need to start by looking at the fact that we already have computers running all kinds of simulations for lower level “intelligences” or algorithms. For easy visualization, we can imagine these intelligences as the non-player characters in any video game that we play, but in essence any algorithm operating on any computing machine would qualify for our thought experiment. We don’t need the intelligence to be conscious, and we don’t need it to even be very complex, because the evidence we are looking for is “experienced” by all computer programs, simple or complex, running on all machines, slow or fast.

All computing hardware leaves an artifact of its existence within the world of the simulation it is running. This artifact is the processor speed. If for a moment we imagine that we are a software program running on a computing machine, the only and inevitable artifact of the hardware supporting us, within our world, would be the processor speed. All other laws we would experience would be the laws of the simulation or the software we are a part of. If we were a Sim or a Grand Theft Auto character these would be the laws of the game. But anything we do would also be constrained by the processor speed no matter the laws of the game. No matter how complete the simulation is, the processor speed would intervene in the operations of the simulation.

In computing systems, of course, this intervention of the processing speed into the world of the algorithm being executed happens even at the most fundamental level. Even for simple operations such as addition or subtraction, the processing speed dictates a physical reality onto the operation that is detached from the simulated reality of the operation itself.

Here’s a simple example. A 64-bit processor would perform a subtraction between say 7,862,345 and 6,347,111 in the same amount of time as it would take to perform a subtraction between two and one (granted all numbers are defined as the same variable type). In the simulated reality, seven million is a very large number, and one is a comparatively very small number. In the physical world of the processor, the difference in scale between these two numbers is irrelevant. Both subtractions in our example constitute one operation and would take the same time. Here we can clearly now see the difference between a “simulated” or abstract world of programmed mathematics and a “real” or physical world of microprocessor operations.
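
For what it’s worth, this constant-time behaviour is easy to poke at on an ordinary machine. The Python sketch below times the two subtractions from the example; Python integers are arbitrary-precision objects and interpreter overhead dominates, so this only loosely mirrors the single machine instruction a 64-bit processor would execute, but both timings should come out essentially identical.

```python
# Rough illustration of the constant-time claim using Python's timeit.
# Caveat: Python integers are arbitrary-precision objects and interpreter
# overhead dominates, so this only loosely mirrors the single instruction a
# 64-bit ALU executes; still, both timings should come out nearly identical.
import timeit

small = timeit.timeit("a - b", setup="a, b = 2, 1", number=10_000_000)
large = timeit.timeit("a - b", setup="a, b = 7_862_345, 6_347_111", number=10_000_000)

print(f"small operands: {small:.3f} s")
print(f"large operands: {large:.3f} s")   # expect roughly the same
```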

Within the abstract world of programmed mathematics, the processing speed of operations per second will be observed, felt, experienced, noted as an artifact of underlying physical computing machinery. This artifact will appear as an additional component of any operation that is unaffected by the operation in the simulated reality. The value of this additional component of the operation would simply be defined as the time taken to perform one operation on variables up to a maximum limit that is the memory container size for the variable. So, in an eight-bit computer, for instance, to oversimplify, this would be 256. The value of this additional component will be the same for all numbers up to the maximum limit. The additional hardware component will thus be irrelevant for any operations within the simulated reality except when it is discovered as the maximum container size. The observer within the simulation has no frame for quantifying the processor speed except when it presents itself as an upper limit.

If we live in a simulation, then our universe should also have such an artifact. We can now begin to articulate some properties of this artifact that would help us in our search for such an artifact in our universe.

  • The artifact is an additional component of every operation that is unaffected by the magnitude of the variables being operated upon and is irrelevant within the simulated reality until a maximum variable size is observed.
  • The artifact presents itself in the simulated world as an upper limit.
  • The artifact cannot be explained by underlying mechanistic laws of the simulated universe. It has to be accepted as an assumption or “given” within the operating laws of the simulated universe.
  • The effect of the artifact or the anomaly is absolute. No exceptions.

Now that we have some defining features of the artifact, of course it becomes clear what the artifact manifests itself as within our universe. The artifact is manifested as the speed of light.

Space is to our universe what numbers are to the simulated reality in any computer. Matter moving through space can simply be seen as operations happening on the variable space. If matter is moving at say 1,000 miles per second, then 1,000 miles worth of space is being transformed by a function, or operated upon every second. If there were some hardware running the simulation called “space” of which matter, energy, you, me, everything is a part, then one telltale sign of the artifact of the hardware within the simulated reality “space” would be a maximum limit on the container size for space on which one operation can be performed. Such a limit would appear in our universe as a maximum speed.

This maximum speed is the speed of light. We don’t know what hardware is running the simulation of our universe or what properties it has, but one thing we can say now is that the memory container size for the variable space would be about 300,000 kilometers if the processor performed one operation per second.

This helps us arrive at an interesting observation about the nature of space in our universe. If we are in a simulation, as it appears, then space is an abstract property written in code. It is not real. It is analogous to the numbers seven million and one in our example, just different abstract representations on the same size memory block. Up, down, forward, backward, 10 miles, a million miles, these are just symbols. The speed of anything moving through space (and therefore changing space or performing an operation on space) represents the extent of the causal impact of any operation on the variable “space.” This causal impact cannot extend beyond about 300,000 km given the universe computer performs one operation per second. Advertisement

We can see now that the speed of light meets all the criteria of a hardware artifact identified in our observation of our own computer builds. It remains the same irrespective of observer (simulated) speed, it is observed as a maximum limit, it is unexplainable by the physics of the universe, and it is absolute. The speed of light is a hardware artifact showing we live in a simulated universe.

But this is not the only indication that we live in a simulation. Perhaps the most pertinent indication has been hiding right in front of our eyes. Or rather behind them. To understand what this critical indication is, we need to go back to our empirical study of simulations we know of. Imagine a character in a role-playing game (RPG), say a Sim or the player character in Grand Theft Auto. The algorithm that represents the character and the algorithm that represents the game environment in which the character operates are intertwined at many levels. But even if we assume that the character and the environment are separate, the character does not need a visual projection of its point of view in order to interact with the environment.

The algorithms take into account some of the environmental variables and some of the character’s state variables to project and determine the behavior of both the environment and the character. The visual projection or what we see on the screen is for our benefit. It is a subjective projection of some of the variables within the program so that we can experience the sensation of being in the game. The audiovisual projection of the game is an integrated subjective interface for the benefit of us, essentially someone controlling the simulation. The integrated subjective interface has no other reason to exist except to serve us. A similar thought experiment can be run with movies. Movies often go into the point of view of characters and try to show us things from their perspective. Whether or not a particular movie scene does that or not, what’s projected on the screen and the speakers—the integrated experience of the film—has no purpose for the characters in the film. It is entirely for our benefit.

Pretty much since the dawn of philosophy we have been asking the question: Why do we need consciousness? What purpose does it serve? Well, the purpose is easy to extrapolate once we concede the simulation hypothesis. Consciousness is an integrated (combining five senses) subjective interface between the self and the rest of the universe. The only reasonable explanation for its existence is that it is there to be an “experience.” That’s its primary raison d’être. Parts of it may or may not provide any kind of evolutionary advantage or other utility. But the sum total of it exists as an experience and hence must have the primary function of being an experience. An experience by itself as a whole is too energy-expensive and information-restrictive to have evolved as an evolutionary advantage. The simplest explanation for the existence of an experience or qualia is that it exists for the purpose of being an experience.

There is nothing in philosophy or science, no postulates, theories or laws, that would predict the emergence of this experience we call consciousness. Natural laws do not call for its existence, and it certainly does not seem to offer us any evolutionary advantages. There can only be two explanations for its existence. First is that there are evolutionary forces at work that we don’t know of or haven’t theorized yet that select for the emergence of the experience called consciousness. The second is that the experience is a function we serve, a product that we create, an experience we generate as human beings. Who do we create this product for? How do they receive the output of the qualia generating algorithms that we are? We don’t know. But one thing’s for sure, we do create it. We know it exists. That’s the only thing we can be certain about. And that we don’t have a dominant theory to explain why we need it.

So here we are generating this product called consciousness that we apparently don’t have a use for, that is an experience and hence must serve as an experience. The only logical next step is to surmise that this product serves someone else.

Now, one criticism that can be raised of this line of thinking is that, unlike the RPG characters in, say, Grand Theft Auto, we actually experience the qualia ourselves. If this is a product for someone else, then why are we experiencing it? Well, the fact is the characters in Grand Theft Auto also experience some part of the qualia of their existence. The experience of the characters is very different from the experience of the player of the game, but between the empty character and the player there is a gray area where parts of the player and parts of the character combine into some type of consciousness.

The players feel some of the disappointments and joys that are designed for the character to feel. The character experiences the consequences of the player’s behavior. This is a very rudimentary connection between the player and the character, but already with virtual reality devices we are seeing the boundaries blur. When we are riding a roller coaster as a character in, say, the Oculus VR device, we feel the gravity.

Where is that gravity coming from? It exists somewhere in the space between the character that is riding the roller coaster and our minds occupying the “mind” of the character. It can certainly be imagined that in the future this in-between space would be wider. It is certainly possible that as we experience the world and generate qualia, we are experiencing some teeny tiny part of the qualia ourselves while maybe a more information-rich version of the qualia is being projected to some other mind for whose benefit the experience of consciousness first came into existence.  

So, there you have it. The simplest explanation for the existence of consciousness is that it is an experience being created, by our bodies, but not for us. We are qualia-generating machines. Like characters in Grand Theft Auto, we exist to create integrated audiovisual outputs. Also, as with characters in Grand Theft Auto, our product most likely is for the benefit of someone experiencing our lives through us.

What are the implications of this monumental find? Well, first of all we can’t question Elon Musk again. Ever. Secondly, we must not forget what the simulation hypothesis really is. It is the ultimate conspiracy theory. The mother of all conspiracy theories, the one that says that everything, with the exception of nothing, is fake and a conspiracy designed to fool our senses. All our worst fears about powerful forces at play controlling our lives unbeknownst to us, have now come true. And yet this absolute powerlessness, this perfect deceit offers us no way out in its reveal. All we can do is come to terms with the reality of the simulation and make of it what we can.

Here, on earth. In this life.

Preface to the following April 12th 2021

Bringing science into social issues indicates the increasing grip of social engineering. The only ‘equality’ about life forms is atoms, their consequent molecules, and how they are arranged, form and decay. It is all down to numbers and the artificial measuring stick of time.

Equality is a con, encouraged as a Utopian concept. It hides the extreme and growing gap between the all-powerful protected controlling elite and the ever expanding masses.

The value of each member of the masses falls as the numbers run out of control. Their total value is their usefulness to the elite, divided by their increasing numbers. So it falls every year, all the more so because ever advancing technology makes them redundant, leaving a residual value for some as cannon fodder and for others as victims of our injustice system – with ever more extreme punishment as examples and a deterrent to wealth-threatening rebellion.

A related equation is called ‘Divide and Rule.’ It was used to build empires. It is used now to divide and rule ethnic groups and genders. Reducing everything to algorithms is patronising, showing contempt for the concept of individuality – the number of individuals presenting to the system appearing as a cancer in elite eyes. Cancers need blasting to stop them growing, protecting the elite’s body. Their advancing weaponry and wars are good for that.

Robert Cook

A Computer Scientist Who Tackles Inequality Through Algorithms

Rediet Abebe uses the tools of theoretical computer science to understand pressing social problems — and try to fix them.

Rachel Crowell, Contributing Writer


April 1, 2021


When Rediet Abebe arrived at Harvard University as an undergraduate in 2009, she planned to study mathematics. But her experiences with the Cambridge public schools soon changed her plans.

Abebe, 29, is from Addis Ababa, Ethiopia’s capital and largest city. When residents there didn’t have the resources they needed, she attributed it to community-level scarcity. But she found that argument unconvincing when she learned about educational inequality in Cambridge’s public schools, which she observed struggling in an environment of abundance.

To learn more, Abebe started attending Cambridge school board meetings. The more she discovered about the schools, the more eager she became to help. But she wasn’t sure how that desire aligned with her goal of becoming a research mathematician.

“I thought of these interests as different,” said Abebe, a junior fellow of the Harvard Society of Fellows and an assistant professor at the University of California, Berkeley. “At some point, I actually thought I had to choose, and I was like, ‘OK, I guess I’ll choose math and the other stuff will be my hobby.’”

After college Abebe was accepted into a doctoral program in mathematics, but she ended up deferring to attend an intensive one-year math program at the University of Cambridge. While there, she decided to switch her focus to computer science, which allowed her to combine her talent for mathematical thinking with her strong desire to address social problems related to discrimination, inequity and access to opportunity. She ended up getting a doctorate in computer science at Cornell University.

Today, Abebe uses the tools of theoretical computer science to help design algorithms and artificial intelligence systems that address real-world problems. She has modeled the role played by income shocks, like losing a job or government benefits, in leading people into poverty, and she’s looked at ways of optimizing the allocation of government financial assistance. She’s also working with the Ethiopian government to better account for the needs of a diverse population by improving the algorithm the country uses to match high school students with colleges.

Abebe is a co-founder of the organizations Black in AI — a community of Black researchers working in artificial intelligence — and Mechanism Design for Social Good, which brings together researchers from different disciplines to address social problems.

Quanta Magazine spoke with Abebe recently about her childhood fear that she’d be forced to become a medical doctor, the social costs of bad algorithmic design, and how her background in math sharpens her work. This interview is based on multiple phone interviews and has been condensed and edited for clarity.

You’re currently involved in a project to reform the Ethiopian national educational system. The work was born in part from your own negative experiences with it. What happened?

In the Ethiopian national system, when you finished 12th grade, you’d take this big national exam and submit your preferences for the 40-plus public universities across the country. There was a centralized assignment process that determined what university you were going to and what major you would have. I was so panicked about this.

Why?

I realized I was a high-scoring student when I was in middle school. And the highest-scoring students tended to be assigned to medicine. I was like 12 and super panicked that I might have to be a medical doctor instead of studying math, which is what I really wanted to do.

What did you end up doing?

I thought, “I may have to go abroad.” I learned that in the U.S., you can get full financial aid if you do really well and get into the top schools.

So you went to Harvard as an undergraduate and planned to become a research mathematician. But then you had an experience that changed your plans. What happened?

I was excited to study math at Harvard. At the same time, I was interested in what was going on in the city of Cambridge. There was a massive achievement gap in elementary schools in Cambridge. A lot of students who were Black, Latinx, low-income or students with disabilities, or immigrant students, were performing two to four grades below their peers in the same classroom. I was really interested in why this was happening.

Video: Abebe switched fields from math to computer science in order to learn tools she could apply to social problems like poverty and educational inequality. (Credit: Constanza Hevia for Quanta Magazine)

You eventually switched focus from math to computer science. What about computer science made you think that it was a place you could work on social issues that you care about?

It’s an inherently outward-looking field. Let’s take a government organization that has income subsidies it can give out. And it has to do so under budget constraints. You have some objective you’re trying to optimize for and some constraints around fairness or efficiency. So you have to formalize that.

From there, you can design algorithms and prove things about them. So you can say, “I can guarantee that the algorithm does this; I can guarantee that it gives you the optimal solution or at least it’s this close to the optimal solution.”
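
To illustrate the kind of formalization Abebe describes, here is a minimal Python sketch of a subsidy-allocation problem: maximize the total estimated welfare gain under a budget constraint, using a simple greedy heuristic. The households, numbers and heuristic are hypothetical illustrations, not her actual models.

```python
# Minimal sketch of formalizing a subsidy-allocation problem: maximize the
# total estimated welfare gain subject to a budget constraint. All of the
# households, costs and gains below are hypothetical illustrations.

BUDGET = 10_000

# (household id, cost of the subsidy, estimated welfare gain)
candidates = [
    ("A", 4_000, 9.0),
    ("B", 3_000, 7.5),
    ("C", 5_000, 8.0),
    ("D", 2_000, 6.0),
]

# Greedy knapsack-style heuristic: fund the best welfare gain per unit cost
# first. An exact optimum would need dynamic programming; the greedy order
# is a simple baseline that makes the trade-off explicit.
chosen, spent = [], 0
for household, cost, gain in sorted(candidates, key=lambda c: c[2] / c[1], reverse=True):
    if spent + cost <= BUDGET:
        chosen.append(household)
        spent += cost

print(f"Fund households {chosen}, spending {spent} of {BUDGET}")
```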

Does your math background still help?

Math and theoretical computer science force you to be precise. Ambiguity is a bug in mathematics. If I give you a proof and it’s vague, then it’s not complete. On the algorithmic side of things, it forces you to be very explicit about what your goals are and what the input is.

Within computer science, what would you say is your research community?

I’m one of the co-founders and an organizer for Mechanism Design for Social Good. We started in 2016 as a small online reading group that was interested in understanding how we can use techniques from theoretical computer science, economics and operations research communities to improve access to opportunity. We were inspired by how algorithmic and mechanism design techniques have been used in problems like improving kidney exchange and the way students are assigned to schools. We wanted to explore where else these techniques, combined with insights from the social sciences and humanistic studies, can be used.

The group grew steadily. Now it’s massive and spans over 50 countries and multiple disciplines, including computer science, economics, operations research, sociology, public policy and social work.

The term “mechanism design” may not be immediately familiar to a lot of people. What does it mean?

Mechanism design is like if you had an algorithm designed, but you were aware that the input data is something that could be strategically manipulated. So you’re trying to create something that’s robust to that.
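
The textbook illustration of such robustness (not one drawn from this interview) is the second-price, or Vickrey, auction, where the winner pays the runner-up’s bid and truthful bidding is therefore a dominant strategy. A minimal Python sketch:

```python
# Classic textbook illustration of a manipulation-robust mechanism (not from
# the interview): in a second-price auction the winner pays the second-highest
# bid, so shading your bid below your true value can only lose you the item;
# it never lowers the price you would pay.

def second_price_auction(bids):
    """Return (winner, price) where the winner pays the runner-up's bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

true_values = {"alice": 100, "bob": 80, "carol": 60}

# Bidding truthfully: Alice wins and pays Bob's bid of 80, not her own 100.
print(second_price_auction(true_values))

# Alice tries to game the mechanism by under-bidding: now she simply loses.
print(second_price_auction({**true_values, "alice": 75}))
```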

When you see a social problem that you want to work on, what’s your process for getting started?

Let’s say I’m interested in income shocks and what impact those have on people’s economic welfare. First I go and learn from people from other disciplines. I talk to social workers, policymakers and nonprofits. I try to absorb as much information as I can and understand as best I can what other experts find useful.

And I let this very bottom-up process determine what types of questions I should tackle. So sometimes that ends up being like, there’s some really interesting data set and people are like, “Here’s what we’ve done with it, but maybe you can do more.” Or it ends up being a modeling question, where there’s some phenomenon that the algorithmic side of my work allows us to capture and model, and then I ask questions around some sort of intervention.

Does your work address any issues tied to the COVID-19 pandemic?

My income-shocks work is extremely timely. If you’re losing a job, or a lot of people are getting sick, those are shocks. Medical expenses are a shock. There’s been this massive global disruption that we all have to deal with. But certain people have to deal with more of it and different types of it than others.

Abebe is co-founder of the organization Black in AI, a community of Black researchers working in artificial intelligence. (Credit: Constanza Hevia for Quanta Magazine)

How did you start to dig into this as a research topic?

We were first interested in how to best model welfare when we know individuals are experiencing income shocks. We wanted to see whether we could provide a model of welfare that captures people’s income and wealth, as well as the frequency with which they may experience income shocks and the severity of those shocks.

Once we created a model, we were then able to ask questions around how to provide assistance, such as income subsidies.

And what did you find?

We find, for example, that if the assistance is a wealth subsidy, which gives people a one-time, upfront subsidy, rather than an income subsidy, which is a month-to-month commitment, then the set of individuals you should target can be completely different from one another.

These types of qualitative insights have been useful in discussions with individuals working in policy and nonprofit organizations. Often in discussions around poverty-alleviation programs, we hear statements like, “This program would like to assist the most number of people,” but we ignore that there are a lot of decisions that have to be made to translate such a statement into a concrete allocation scheme.
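
As a toy illustration of why the two subsidy types can target different people, the Monte Carlo sketch below compares a one-off wealth subsidy against a month-by-month income subsidy of equal total cost. Every parameter is hypothetical; this is not the model from the paper.

```python
# Toy Monte Carlo sketch of the wealth-vs-income subsidy distinction.
# Every parameter here is hypothetical; this is not the paper's model.
import random

def months_solvent(wealth, income, expenses, shock_p, shock_size,
                   monthly_subsidy=0.0, months=24):
    """Months before savings run out under random income shocks."""
    for month in range(months):
        wealth += income + monthly_subsidy - expenses
        if random.random() < shock_p:   # e.g. job loss or a medical bill
            wealth -= shock_size
        if wealth < 0:
            return month
    return months

def ruin_rate(trials=10_000, **kwargs):
    return sum(months_solvent(**kwargs) < 24 for _ in range(trials)) / trials

base = dict(income=1_500, expenses=1_450, shock_p=0.1, shock_size=2_000)
print("no assistance:  ", ruin_rate(wealth=1_000, **base))
# Same total cost, delivered two different ways:
print("wealth subsidy: ", ruin_rate(wealth=1_000 + 2_400, **base))
print("income subsidy: ", ruin_rate(wealth=1_000, monthly_subsidy=100, **base))
```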

You also published a paper exploring how different types of big, adverse life events relate to poverty. What did you find?

Predicting what factors lead someone into poverty is very hard. But we can still get qualitative insights about what things help you predict poverty better than others. We find that for male respondents, interactions with the criminal justice system, like being stopped by the police or being a victim of a crime, seem to be very predictive of experiencing poverty in the future. Whereas for female respondents, we find that financial shocks like income decreases, major expenses, benefit decreases and so on seem to hold a lot more predictive power.

You are also the co-founder of the Equity and Access in Algorithms, Mechanisms, and Optimization conference, which is being held for the first time later this year and which engages lots of the types of questions we’ve been talking about. What is its focus?

We are providing an international venue for researchers and practitioners to come together to discuss problems that impact marginalized communities, like housing instability and homelessness, equitable access to education and health care, and digital and data rights. It is inspiring to see the investment folks make to identify the right questions, provide holistic solutions, think critically about unintended consequences, and iterate many times over as needed.

You also have that ongoing work with Ethiopia’s government on its national education system. How are you trying to change the way this assignment process works?

I’m working with the Ethiopian Ministry of Education to understand and inform the matching process of seniors in high school to public universities. We’re still in the beginning stages of this.

Ethiopia has over 80 different ethnic groups and an incredibly diverse people. There are diversity considerations. You have different genders, different ethnic groups and different regions that they came from.

You might say, “We’re still going to try to make sure that everyone gets one of their top three choices.” But we want to make sure that you don’t end up in a school that has everyone from the same region or everyone is one gender.

What are the costs of getting the matching process wrong?

I mean, in any of the social problems that I work on, the cost of getting something wrong is super high. With this matching case, once you match to something, that’s probably where you’re going to go because the outside options might not be good. And so I’m deciding whether you end up close to home versus really, really far away from home. I’m deciding whether you end up in a region or in a school that has studies, classes and research that align with your work or not. It really, really matters.

Comment on Quantum Darwinism April 10th 2021

I am surprised that talk of Darwinism at any level is still legal. The main tenet of Darwin was ‘survival of the fittest.’ In a world where the white man’s technology has ripped off the oddly named BAME over-populated countries and continents, and continues to do so, it is a moot point as to what fitness means – with greed- and war-led technology creating misery along with religious delusions for the masses, and obscene wealth for the powerful minority, including elite members of all three stem races.

However, everything comes down to numbers and concepts like critical mass and last straws. So one could visualise the following complexities in terms of how one copes with impossible stress for a long time before exploding into anger and/or action. You can’t see your thoughts as words, but there is theoretically a quantum level to the brain chemistry of consciousness and perception.

The danger of state-funded scientists laying down laws about what they think we are and why we behave as we do should be apparent from the crimes of Nazi scientists. The word expert is meant to comfort us but often presages danger to our freedom of thought, speech and action, and their consequences.

Robert Cook

Quantum Darwinism, an Idea to Explain Objective Reality, Passes First Tests

Three experiments have vetted quantum Darwinism, a theory that explains how quantum possibilities can give rise to objective, classical reality.

Quanta Magazine

  • Philip Ball

It’s not surprising that quantum physics has a reputation for being weird and counterintuitive. The world we’re living in sure doesn’t feel quantum mechanical. And until the 20th century, everyone assumed that the classical laws of physics devised by Isaac Newton and others — according to which objects have well-defined positions and properties at all times — would work at every scale. But Max Planck, Albert Einstein, Niels Bohr and their contemporaries discovered that down among atoms and subatomic particles, this concreteness dissolves into a soup of possibilities. An atom typically can’t be assigned a definite position, for example — we can merely calculate the probability of finding it in various places. The vexing question then becomes: How do quantum probabilities coalesce into the sharp focus of the classical world?

Physicists sometimes talk about this changeover as the “quantum-classical transition.” But in fact there’s no reason to think that the large and the small have fundamentally different rules, or that there’s a sudden switch between them. Over the past several decades, researchers have achieved a greater understanding of how quantum mechanics inevitably becomes classical mechanics through an interaction between a particle or other microscopic system and its surrounding environment.

One of the most remarkable ideas in this theoretical framework is that the definite properties of objects that we associate with classical physics — position and speed, say — are selected from a menu of quantum possibilities in a process loosely analogous to natural selection in evolution: The properties that survive are in some sense the “fittest.” As in natural selection, the survivors are those that make the most copies of themselves. This means that many independent observers can make measurements of a quantum system and agree on the outcome — a hallmark of classical behavior.

This idea, called quantum Darwinism (QD), explains a lot about why we experience the world the way we do rather than in the peculiar way it manifests at the scale of atoms and fundamental particles. Although aspects of the puzzle remain unresolved, QD helps heal the apparent rift between quantum and classical physics.

Only recently, however, has quantum Darwinism been put to the experimental test. Three research groups, working independently in Italy, China and Germany, have looked for the telltale signature of the natural selection process by which information about a quantum system gets repeatedly imprinted on various controlled environments. These tests are rudimentary, and experts say there’s still much more to be done before we can feel sure that QD provides the right picture of how our concrete reality condenses from the multiple options that quantum mechanics offers. Yet so far, the theory checks out.

Survival of the Fittest

At the heart of quantum Darwinism is the slippery notion of measurement — the process of making an observation. In classical physics, what you see is simply how things are. You observe a tennis ball traveling at 200 kilometers per hour because that’s its speed. What more is there to say?

In quantum physics that’s no longer true. It’s not at all obvious what the formal mathematical procedures of quantum mechanics say about “how things are” in a quantum object; they’re just a prescription telling us what we might see if we make a measurement. Take, for example, the way a quantum particle can have a range of possible states, known as a “superposition.” This doesn’t really mean it is in several states at once; rather, it means that if we make a measurement we will see one of those outcomes. Before the measurement, the various superposed states interfere with one another in a wavelike manner, producing outcomes with higher or lower probabilities.
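
The arithmetic behind that wavelike interference fits in a few lines. The Python sketch below is a stock Mach-Zehnder-style illustration, not something from the article: two paths of equal amplitude recombine, and the detection probability swings between 1 and 0 with the relative phase, where classical mixing would always give 0.5.

```python
# A stock Mach-Zehnder-style illustration of interference (not from the
# article): two equal-amplitude paths recombine, and the probability of
# detection comes from the squared magnitude of the summed amplitudes.
import numpy as np

for phase in [0, np.pi / 2, np.pi]:
    amp_out = (1 + np.exp(1j * phase)) / 2   # sum of the two path amplitudes
    p_quantum = abs(amp_out) ** 2            # squared magnitude -> probability
    print(f"relative phase {phase:.2f} rad -> detection probability {p_quantum:.2f}")

# Classical mixing of the two paths would give 0.5 regardless of the phase;
# interference pushes the probability anywhere between 1 and 0.
```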

But why can’t we see a quantum superposition? Why can’t all possibilities for the state of a particle survive right up to the human scale?

The answer often given is that superpositions are fragile, easily disrupted when a delicate quantum system is buffeted by its noisy environment. But that’s not quite right. When any two quantum objects interact, they get “entangled” with each other, entering a shared quantum state in which the possibilities for their properties are interdependent. So say an atom is put into a superposition of two possible states for the quantum property called spin: “up” and “down.” Now the atom is released into the air, where it collides with an air molecule and becomes entangled with it. The two are now in a joint superposition. If the atom is spin-up, then the air molecule might be pushed one way, while, if the atom is spin-down, the air molecule goes another way — and these two possibilities coexist. As the particles experience yet more collisions with other air molecules, the entanglement spreads, and the superposition initially specific to the atom becomes ever more diffuse. The atom’s superposed states no longer interfere coherently with one another because they are now entangled with other states in the surrounding environment — including, perhaps, some large measuring instrument. To that measuring device, it looks as though the atom’s superposition has vanished and been replaced by a menu of possible classical-like outcomes that no longer interfere with one another.

This process by which “quantumness” disappears into the environment is called decoherence. It’s a crucial part of the quantum-classical transition, explaining why quantum behavior becomes hard to see in large systems with many interacting particles. The process happens extremely fast. If a typical dust grain floating in the air were put into a quantum superposition of two different physical locations separated by about the width of the grain itself, collisions with air molecules would cause decoherence — making the superposition undetectable — in about 10⁻³¹ seconds. Even in a vacuum, light photons would trigger such decoherence very quickly: You couldn’t look at the grain without destroying its superposition.

Surprisingly, although decoherence is a straightforward consequence of quantum mechanics, it was only identified in the 1970s, by the late German physicist Heinz-Dieter Zeh. The Polish-American physicist Wojciech Zurek further developed the idea in the early 1980s and made it better known, and there is now good experimental support for it.

But to explain the emergence of objective, classical reality, it’s not enough to say that decoherence washes away quantum behavior and thereby makes it appear classical to an observer. Somehow, it’s possible for multiple observers to agree about the properties of quantum systems. Zurek, who works at Los Alamos National Laboratory in New Mexico, argues that two things must therefore be true.

First, quantum systems must have states that are especially robust in the face of disruptive decoherence by the environment. Zurek calls these “pointer states,” because they can be encoded in the possible states of a pointer on the dial of a measuring instrument. A particular location of a particle, for instance, or its speed, the value of its quantum spin, or its polarization direction can be registered as the position of a pointer on a measuring device. Zurek argues that classical behavior — the existence of well-defined, stable, objective properties — is possible only because pointer states of quantum objects exist.

What’s special mathematically about pointer states is that the decoherence-inducing interactions with the environment don’t scramble them: Either the pointer state is preserved, or it is simply transformed into a state that looks nearly identical. This implies that the environment doesn’t squash quantumness indiscriminately but selects some states while trashing others. A particle’s position is resilient to decoherence, for example. Superpositions of different locations, however, are not pointer states: Interactions with the environment decohere them into localized pointer states, so that only one can be observed. Zurek described this “environment-induced superselection” of pointer states in the 1980s.

But there’s a second condition that a quantum property must meet to be observed. Although immunity to interaction with the environment assures the stability of a pointer state, we still have to get at the information about it somehow. We can do that only if it gets imprinted in the object’s environment. When you see an object, for example, that information is delivered to your retina by the photons scattering off it. They carry information to you in the form of a partial replica of certain aspects of the object, saying something about its position, shape and color. Lots of replicas are needed if many observers are to agree on a measured value — a hallmark of classicality. Thus, as Zurek argued in the 2000s, our ability to observe some property depends not only on whether it is selected as a pointer state, but also on how substantial a footprint it makes in the environment. The states that are best at creating replicas in the environment — the “fittest,” you might say — are the only ones accessible to measurement. That’s why Zurek calls the idea quantum Darwinism.

It turns out that the same stability property that promotes environment-induced superselection of pointer states also promotes quantum Darwinian fitness, or the capacity to generate replicas. “The environment, through its monitoring efforts, decoheres systems,” Zurek said, “and the very same process that is responsible for decoherence should inscribe multiple copies of the information in the environment.”

Information Overload

It doesn’t matter, of course, whether information about a quantum system that gets imprinted in the environment is actually read out by a human observer; all that matters for classical behavior to emerge is that the information get there so that it could be read out in principle. “A system doesn’t have to be under study in any formal sense” to become classical, said Jess Riedel, a physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a proponent of quantum Darwinism. “QD putatively explains, or helps to explain, all of classicality, including everyday macroscopic objects that aren’t in a laboratory, or that existed before there were any humans.”

About a decade ago, while Riedel was working as a graduate student with Zurek, the two showed theoretically that information from some simple, idealized quantum systems is “copied prolifically into the environment,” Riedel said, “so that it’s necessary to access only a small amount of the environment to infer the value of the variables.” They calculated that a grain of dust one micrometer across, after being illuminated by the sun for just one microsecond, will have its location imprinted about 100 million times in the scattered photons.

It’s because of this redundancy that objective, classical-like properties exist at all. Ten observers can each measure the position of a dust grain and find that it’s in the same location, because each can access a distinct replica of the information. In this view, we can assign an objective “position” to the speck not because it “has” such a position (whatever that means) but because its position state can imprint many identical replicas in the environment, so that different observers can reach a consensus.

What’s more, you don’t have to monitor much of the environment to gather most of the available information — and you don’t gain significantly more by monitoring more than a fraction of the environment. “The information one can gather about the system quickly saturates,” Riedel said.

This redundancy is the distinguishing feature of QD, explained Mauro Paternostro, a physicist at Queen’s University Belfast who was involved in one of the three new experiments. “It’s the property that characterizes the transition towards classicality,” he said.

Quantum Darwinism challenges a common myth about quantum mechanics, according to the theoretical physicist Adán Cabello of the University of Seville in Spain: namely, that the transition between the quantum and classical worlds is not understood and that measurement outcomes cannot be described by quantum theory. On the contrary, he said, “quantum theory perfectly describes the emergence of the classical world.”

Just how perfectly remains contentious, however. Some researchers think decoherence and QD provide a complete account of the quantum-classical transition. But although these ideas attempt to explain why superpositions vanish at large scales and why only concrete “classical” properties remain, there’s still the question of why measurements give unique outcomes. When a particular location of a particle is selected, what happens to the other possibilities inherent in its quantum description? Were they ever in any sense real? Researchers are compelled to adopt philosophical interpretations of quantum mechanics precisely because no one can figure out a way to answer that question experimentally.

Into the Lab

Quantum Darwinism looks fairly persuasive on paper. But until recently that was as far as it got. In the past year, three teams of researchers have independently put the theory to the experimental test by looking for its key feature: how a quantum system imprints replicas of itself on its environment.

The experiments depended on the ability to closely monitor what information about a quantum system gets imparted to its environment. That’s not feasible for, say, a dust grain floating among countless billions of air molecules. So two of the teams created a quantum object in a kind of “artificial environment” with only a few particles in it. Both experiments — one by Paternostro and collaborators at Sapienza University of Rome, and the other by the quantum-information expert Jian-Wei Pan and co-authors at the University of Science and Technology of China — used a single photon as the quantum system, with a handful of other photons serving as the “environment” that interacts with it and broadcasts information about it.

Both teams passed laser photons through optical devices that could combine them into multiply entangled groups. They then interrogated the environment photons to see what information they encoded about the system photon’s pointer state — in this case its polarization (the orientation of its oscillating electromagnetic fields), one of the quantum properties able to pass through the filter of quantum Darwinian selection.

A key prediction of QD is the saturation effect: Pretty much all the information you can gather about the quantum system should be available if you monitor just a handful of surrounding particles. “Any small fraction of the interacting environment is enough to provide the maximal classical information about the observed system,” Pan said.

The two teams found precisely this. Measurements of just one of the environment photons revealed a lot of the available information about the system photon’s polarization, and measuring an increasing fraction of the environment photons provided diminishing returns. Even a single photon can act as an environment that introduces decoherence and selection, Pan explained, if it interacts strongly enough with the lone system photon. When interactions are weaker, a larger environment must be monitored.

The third experimental test of QD, led by the quantum-optical physicist Fedor Jelezko at Ulm University in Germany in collaboration with Zurek and others, used a very different system and environment, consisting of a lone nitrogen atom substituting for a carbon atom in the crystal lattice of a diamond — a so-called nitrogen-vacancy defect. Because the nitrogen atom has one more electron than carbon, this excess electron cannot pair up with those on neighboring carbon atoms to form a chemical bond. As a result, the nitrogen atom’s unpaired electron acts as a lone “spin,” which is like an arrow pointing up or down or, in general, in a superposition of both possible directions.

This spin can interact magnetically with those of the roughly 0.3% of carbon nuclei present in the diamond as the isotope carbon-13, which, unlike the more abundant carbon-12, also has spin. On average, each nitrogen-vacancy spin is strongly coupled to four carbon-13 spins within a distance of about 1 nanometer.

By controlling and monitoring the spins using lasers and radio-frequency pulses, the researchers could measure how a change in the nitrogen spin is registered by changes in the nuclear spins of the environment. As they reported in a preprint last September, they too observed the characteristic redundancy predicted by QD: The state of the nitrogen spin is “recorded” as multiple copies in the surroundings, and the information about the spin saturates quickly as more of the environment is considered.

Zurek says that because the photon experiments create copies in an artificial way that simulates an actual environment, they don’t incorporate a selection process that picks out “natural” pointer states resilient to decoherence. Rather, the researchers themselves impose the pointer states. In contrast, the diamond environment does elicit pointer states. “The diamond scheme also has problems, because of the size of the environment,” Zurek added, “but at least it is, well, natural.”

Generalizing Quantum Darwinism

So far, so good for quantum Darwinism. “All these studies see what is expected, at least approximately,” Zurek said.

Riedel says we could hardly expect otherwise, though: In his view, QD is really just the careful and systematic application of standard quantum mechanics to the interaction of a quantum system with its environment. Although this is virtually impossible to do in practice for most quantum measurements, if you can sufficiently simplify a measurement, the predictions are clear, he said: “QD is most like an internal self-consistency check on quantum theory itself.”

But although these studies seem consistent with QD, they can’t be taken as proof that it is the sole description for the emergence of classicality, or even that it’s wholly correct. For one thing, says Cabello, the three experiments offer only schematic versions of what a real environment consists of. What’s more, the experiments don’t cleanly rule out other ways to view the emergence of classicality. A theory called “spectrum broadcasting,” for example, developed by Pawel Horodecki at the Gdańsk University of Technology in Poland and collaborators, attempts to generalize QD. Spectrum broadcast theory (which has only been worked through for a few idealized cases) identifies those states of an entangled quantum system and environment that provide objective information that many observers can obtain without perturbing it. In other words, it aims to ensure not just that different observers can access replicas of the system in the environment, but that by doing so they don’t affect the other replicas. That too is a feature of genuinely “classical” measurements.

Horodecki and other theorists have also sought to embed QD in a theoretical framework that doesn’t demand any arbitrary division of the world into a system and its environment, but just considers how classical reality can emerge from interactions between various quantum systems. Paternostro says it might be challenging to find experimental methods capable of identifying the rather subtle distinctions between the predictions of these theories.

Still, researchers are trying, and the very attempt should refine our ability to probe the workings of the quantum realm. “The best argument for performing these experiments probably is that they are good exercise,” Riedel said. “Directly illustrating QD can require some very difficult measurements that will push the boundaries of existing laboratory techniques.” The only way we can find out what measurement really means, it seems, is by making better measurements.

Philip Ball is a science writer and author based in London who contributes frequently to Nature, New Scientist, Prospect, Nautilus and The Atlantic, among other publications.

Gods of Science April 9th 2021

There is a problem with deifying any scientist, and it happens now more than ever, for a gullible public: the powers that be quote 'the science' as an absolute. Once the church set Galileo free, the same deification was applied to Newton – 80% of whose writings were on religion and God.

Science ceases to be science once it becomes a matter of blind faith and acceptance. Religion is not compatible with science; 'religious science' is a contradiction in terms. Yet Einstein was deified after his beautiful theory of relativity, and he jealously protected his paradigm, resisting the quantum theory that followed it.

I received Hawking’s ‘A Brief History of Time’ as a Christmas present. As a one-time maths and science teacher, I understood every word. But it was theoretical, with much inductive logic and God thrown in for good measure to please his religious wife.

One wonders whether he would have become half the celebrity he was without his terrible illness and electronic voice synthesiser, along with guest appearances on ‘The Big Bang Theory’. For truth we need to focus on scientific method, not bland references to ‘the science’ of the kind we keep hearing over Covid 19. The Nazis had their own versions of science, used to justify many crimes against humanity.

The greatest crime of science so far is the use of our knowledge of the atom for mass murder. Von Braun’s flying rocket bombs were a close second, and the two were brilliantly combined to create a new ICBM age of fear. Robert Cook

Renaissance painter, architect, inventor – Leonardo da Vinci is often remembered as one of history’s greatest artists, yet this overlooks his radical approach to the challenges of flight, manufacturing and war. Here, we chart the important chapters in Leonardo’s professional life, from his boyhood apprenticeship in Florence to his final years – plus his genius visions for the future… Posted April 7th 2021

Leonardo da Vinci’s rise to success: a timeline

What were the turning points in Leonardo da Vinci’s long career? Art historian Maya Corry explores…
1. Leonardo moves to Florence – and an artist’s workshop

Around 1464, the young Leonardo went to Florence to live with his father. Although he did not have the full advantages of those born in wedlock, his illegitimacy was not a serious hindrance. While the church stridently condemned sex outside marriage, the realities of life, love and lust meant that many children were the result of such unions. Leonardo was welcomed into his father’s home, and Ser Piero provided for him just as he did for his legitimate offspring. The boy would have received a basic education, being taught to read, write and do sums.

At 12 years old, Leonardo reached the age when boys of his status started to learn a profession, but due to his illegitimacy he could not follow his father and become a notary. His artistic talent was perhaps already apparent by this time, for Ser Piero arranged for him to be apprenticed to the Florentine artist Andrea del Verrocchio. Apprenticeships lasted around six years and were often formalised with a contract. These listed the responsibilities of the master: to keep the lad fed, housed, clean and well-dressed, and to teach him all the skills necessary to succeed in his line of work. In return, the child promised to be diligent, honest and – in a sign of the unhappiness endured by some apprentices – not to run away.

A Flemish engraving dating from the 16th century depicting a master and his apprentices carrying out their duties in a busy artist’s workshop. (Image by Alamy)

Verrocchio was a prosperous painter and sculptor. He ran a busy workshop, a space for both living and working, in which he trained apprentices and employed assistants to help him produce the many works of art that his patrons commissioned. Initially, Leonardo would have risen early to light the fire, grind the pigments to make paint, prime panels and prepare all the materials needed for the day’s work. In time, he would have graduated to more skilled and important jobs, learning all that he needed to know along the way.
2. The apprentice blossoms into artistic maturity

Throughout the next years, Leonardo continued to work closely with Verrocchio, and by 1473 had likely graduated to the position of a paid collaborator. Successful Renaissance artists commonly employed assistants to help them complete large commissions, with several people often working on a single painting. Contracts sometimes specified how much of a picture was to be by the master’s own hand – the greater the proportion, the more expensive it was. He tended to be responsible for the most important parts, such as faces and main figures, with patrons happy to leave background details to assistants.

Verrocchio depended on this kind of arrangement to produce his Baptism of Christ altarpiece, on which at least three different artists worked. Giorgio Vasari, the great 16th-century writer on art, claimed that Leonardo contributed the left-hand angel in the painting, and that its great beauty prompted fierce jealousy in Verrocchio. Although Vasari wrote decades after the events and we have to take his words with a pinch of salt, many art historians nevertheless agree that the angel – and some parts of the landscape – were painted by the young artist.

The Baptism of Christ (1475–78) by Verrocchio and assistants. Leonardo’s skilfully painted angel (far left) was said to have made his master jealous. (Photo by DeAgostini /Getty Images)

By this point, Leonardo was also producing works of art that were entirely his own efforts, such as the Annunciation. This picture might have been his ‘masterpiece’: the work that proved he had mastered his profession and was eligible to join the painters’ guild. It shows the young Madonna interrupted in her reading by the arrival of Gabriel, winged like a bird of prey, who tells her she will give birth to the son of God. They appear in a beautiful garden, the ground strewn with flowers. In the background the vista fades away into misty mountains. Both the Virgin and angel are delicate beauties, in the same vein as the Baptism of Christ’s left-hand angel. In these early paintings, we can see themes that were to preoccupy Leonardo throughout his career: the workings of light and vision; emotional interaction between figures; the careful observation of the natural world; and the depiction of ideal beauty.

3. Leonardo proves his worth to the Duke of Milan

Around 1482, Leonardo left Tuscany and journeyed north to Milan, seeking the patronage of the city’s ruler, Ludovico Sforza. For ambitious artists, writers, scholars and musicians, there was nothing better than an official position at the court of a great lord or lady. It came with a salary, providing freedom from the usual pressure to hustle for commissions and stick to agreed deadlines.

This was clearly an attractive prospect for Leonardo, and he presented himself to Ludovico with a hard sell. With a canny awareness of what would most appeal to the duke, he laid out his skills in a letter. First and foremost, he declared, he was a master of “instruments of war”, who could build ingenious weapons for Ludovico that would “cause terror to the enemy” (this was a time of almost constant conflict). Most of the letter is taken up with descriptions of these “secret” military inventions, but Leonardo also mentions the bronze equestrian monument Ludovico wished to erect in honour of his late father, Francesco, boasting that he would be able to make this “to the immortal glory and eternal honour… of the illustrious house of Sforza”.

A 19th-century relief at Piazza della Scala, Milan, showing Leonardo presenting his new navigation system for the Navigli canal to Ludovico. (Image by Alamy)

Leonardo concluded by listing his other talents: in architecture, hydraulics, sculpture and, finally, painting. During the Renaissance, it was common for painters to have several strings to their bow. Many were also skilled in other fields, such as sculpture, metalwork, manuscript illumination or engineering. Some read classical texts and published learned treatises on these topics. Leonardo was not entirely unusual then, but the range of areas in which he claimed to be a master was broad, making him an attractive prospect to a ruler such as Ludovico.

Although the duke was rich, he was not profligate, and Leonardo did not secure the salary he coveted until 1489. In the meantime, he took on commissions such as the Virgin of the Rocks altarpiece. This shows the apocryphal meeting of the little cousins Christ and John the Baptist in a mysterious rocky landscape, watched over by the Virgin and an angel. The carefully arranged composition is suffused with a gentle light and sense of calm majesty, the figures united by gestures and gazes. The painting showcases his talents and was swiftly celebrated.

4. Leonardo’s intimate court paintings break new ground

In July 1493, Leonardo noted that a woman named ‘Caterina’ had joined his household in Milan. This could have been a housekeeper, but it may be that after many years, he was finally reunited with his mother. This would have presumably brought additional happiness at a time of general prosperity and success for the artist, who had been given quarters in the Corte Vecchia, an old ducal palace. There he had a large workshop space, allowing him to build a huge model of the monument to Ludovico’s father. Included among the members of his workshop were young Milanese artists such as Giovanni Antonio Boltraffio and Marco d’Oggiono, as well as apprentices including Gian Giacomo Caprotti da Oreno, better known as Salaì. Under Leonardo’s influence, they produced numerous drawings and paintings of exquisite young men and women.

Leonardo was fascinated by physical loveliness, but the activities of the workshop were also shaped by the tastes of the courtly circle that surrounded Ludovico. This included nobles, scholars, poets, musicians and physicians, many of whom were also interested in ideal beauty, and what it communicated about those who possessed it. Leonardo and Boltraffio (who was of noble blood) were welcomed into this world. Pleasurable time was passed debating the key intellectual questions of the day, and Leonardo was praised for his knowledge and verbal skill. During this period, he produced a number of portraits of members of the court: a musician who was probably his friend Atalante Migliorotti (Portrait of a Musician); the educated and erudite Cecilia Gallerani, Ludovico’s teenage mistress (Lady with an Ermine); and a self-possessed, dark-haired woman, possibly Lucrezia Crivelli (La Belle Ferronnière), pictured below.

Contemporaries spoke with admiration of Leonardo’s ability to encapsulate an individual’s inner world in a single image. (Photo by Leemage/Corbis via Getty Images)

In these paintings, Leonardo employed traditional methods of identifying a sitter – the musician, for example, holds a sheet of music – and potent symbolism. The ermine caressed by Cecilia represents both chastity and lust, and is a play on her name (the Greek word for ‘weasel’ is similar to Gallerani). But he also sought psychological realism, rejecting the more traditional profile format in favour of dynamic poses that highlight the life and movement of each sitter, and make viewing feel like a truly interactive experience. Contemporaries spoke with admiration of Leonardo’s ability to encapsulate an individual’s inner world in a single image. The court poet Bernardino Bellincioni wrote that the painted Cecilia “appears to be listening”, and that she would remain “alive and beautiful” for all eternity thanks to Leonardo’s skill.

5. A religious masterpiece is born

Relatively early in the 1490s, Leonardo received another major commission. He was asked to paint a mural of the Last Supper in the refectory of the Dominican monastery of Santa Maria delle Grazie in Milan, where the ducal family often worshipped. The task of depicting Christ’s final meal with his disciples, when he revealed to them foreknowledge of his terrible betrayal and death, must have been exciting for Leonardo. It allowed him to explore visually his beliefs about how the body communicates inner states of being.

Fascination with this question drove both his artistic and scientific investigations, for it is impossible to clearly divide one from the other. Leonardo’s notes are full of assertions that the painter ought to be constantly aware of how the “motions of the mind” are visible in bodily movements, gestures and facial expressions. He even recorded the faces of passers-by that struck him as particularly interesting and animated. As ever, he wanted to comprehend the underlying mechanisms of these processes, and his skull studies also reveal a probing effort to understand how the intellect, or soul, is linked to the body’s physical apparatus.

Detail from The Last Supper showing the reaction of Christ’s disciples as he predicts his betrayal and death. (Photo by: Leemage/UIG via Getty Images)

The Last Supper gave Leonardo the opportunity to put his theories on display. Astonished and devastated by Christ’s announcement that one of them would cause his death, the disciples convey their feelings with fierce clarity through their body language. The Apostle James flings his arms out in shock, his face registering horror. John the Evangelist turns away from Jesus in pain, as St Peter grabs his knife and gestures in disbelief. Judas’s pose reveals his guilt: unlike the others, he does not gesture wildly or in sorrow, but simply turns to Christ in surprise and clutches to himself a bag of coins, the payment for his betrayal. Jesus is the calm centre of the composition, and our eyes are led inexorably to him by the spatial arrangement of the picture and its vanishing point.

While the subject of the picture was much to Leonardo’s liking, its size posed a challenge. He preferred to work slowly and delicately, but fresco painting had to be done quickly. To solve this problem, he developed a new method of applying the pigment, allowing him to move at his preferred pace. Over the years the duke became impatient with the slow progress of the painting, and Leonardo had to mollify him with promises that he was getting on with it. Ultimately, Ludovico was much pleased with the work, and he rewarded Leonardo with the gift of a vineyard near Porta Vercellina. The picture’s fame spread, although Leonardo’s experiments with the new way of applying the pigment soon caused it to begin to deteriorate.

6. From military architecture to the Mona Lisa

Having spent the previous year working as a military architect and engineer for Cesare Borgia, captain of the papal armies, in 1503 Leonardo sought a new patron. He wrote to the Ottoman Sultan Bayezid II describing his prowess in hydraulics and engineering, and offering to build bridges: one “as high as a building, and even tall ships will be able to sail under it”; another “across the Bosporus to allow people to travel between Europe and Asia”. Nothing came of this overture and Leonardo, who was now 51, must have been frustrated by the loss of security and, above all, freedom that he had experienced since leaving Milan. He had to return to the world of the jobbing artist, bound by the terms of contracts, with his time spoken for.

A study for the mural of the Battle of Anghiari. (Photo by © Alinari Archives/CORBIS/Corbis via Getty Images)

Leonardo came to be employed by the Florentine republic to manage the diversion of the river Arno, and was commissioned to produce an enormous mural of the battle of Anghiari in the city’s Great Council Hall. The painting, in the seat of power where government was conducted, was to celebrate Florentine military prowess, and was intended to match another mural, of the battle of Cascina, by Michelangelo. The plan thus pitted the two great Tuscan artists against one another in direct competition. Leonardo’s surviving drawings for his mural reveal tangles of men and horses caught in the heat of battle. Faces contort with tension, rage and valour; as with The Last Supper, he wanted viewers to be immersed in the emotion of the scene. There is another similarity with The Last Supper: once more, Leonardo experimented with painting techniques, and once more he was not successful. The colours of the mural ran together, and parts were obscured.

In the same year Leonardo began work on a portrait of Lisa Gherardini, the wife of the merchant Francesco del Giocondo. He could not have known that this little painting, with its clever play on Lisa’s name – her smile indicating that she was giocondo (jocund) – would become the most famous work of art ever created.

7. Leonardo deepens his anatomical investigations

By 1510, Leonardo was settled in Milan and in receipt of a salary from the French king Louis XII, allowing him to focus his attentions on his own interests rather than a major commission. Probably working alongside Marcantonio della Torre, a professor of anatomy from the nearby University of Pavia, he had ready access to bodies for dissection. He started compiling a treatise on anatomy, beginning with the study of “a perfect man” and then discussing the bodies of an old man, an infant and a woman, taking in the development of the foetus in the womb. Leonardo also produced a series of drawings of the skeleton and musculature that remain breathtaking in their detail, clarity and beauty. They not only demonstrate his desire to reveal the body’s secrets, but also an extraordinary level of artistic innovation.

Partly thanks to his experience in architecture and engineering, Leonardo developed new methods of depicting the complexity of bodily systems and structures in two dimensions that communicate clearly with no loss of information. These included exploded and layered views, and sequential drawings in series. His anatomical work in this period was driven by empirical observation, but in his notes, we find references to the infinite wisdom of the twin creators, nature and God (“il maestro”), thanks to whom the internal workings of the body are organised so perfectly.

A study of a foetus in the womb accompanied by detailed hand-written notes, from Leonardo’s notebook, c1510. (Image by Alamy)

In these years the artist was accompanied by Francesco Melzi, a young Milanese nobleman who became a sort of adopted son to him (formal or informal adoptions were common in the Renaissance, often utilised by those who did not have a natural heir). When, in December 1511, warfare once again forced Leonardo to leave Milan, Melzi hosted him in his family’s villa at Vaprio d’Adda, Lombardy.

While staying in the Melzi villa, Leonardo reverted to his interest in the dissection of animals – a mainstay of anatomical investigation at a time when it was not always easy to access human bodies. His fervent desire to comprehend the workings of the heart is revealed in the copious notes and drawings he made of ox hearts, in which he carefully observed the passage of blood through the valves.
8. Gathering a lifetime’s meditations

In 1516 Leonardo went to live in France, at the invitation of the new king Francis I. In 1517, he received a visit from Cardinal Luigi d’Aragona. The cardinal’s secretary recorded that, on a previous occasion, he had visited The Last Supper in Milan, which was “most excellent” but “beginning to deteriorate”. Now he encountered Leonardo, himself “an old man”, who showed them three paintings: a “Florentine woman done from life” (likely the Mona Lisa), Saint John the Baptist and a Virgin and Child with Saint Anne. All three were “most perfect”. It was unusual for an artist to keep paintings with him for such lengthy periods and not part with them, but the fact that Leonardo did so indicates the pictures’ importance to him. It was also convenient to have them ready to display to important guests of the king. Leonardo’s fame was well established by this point, and it would have been politically useful for Francis I to be able to bask in the reflected glory of being his patron.

Unfortunately Leonardo was no longer capable of painting owing to his age and infirmity. He still did some teaching, but mainly spent his working days organising his voluminous notes for publication. The cardinal’s secretary recalled being shown writings on machines and hydraulics and many anatomical drawings by Leonardo, who told them he had performed 30 dissections over his lifetime.

In peace and security, the artist concluded his final years, marshalling a lifetime’s work of meditation on the mysteries of life: the forces of nature; God’s movement in the universe; and the perfection of the human body and soul. His fascination with these weighty themes drove his activities in painting, sculpture, anatomy, natural science, architecture, optics and hydraulics.

Although today we consider the realms of art and science to be separate, this is not something that Leonardo and his Renaissance contemporaries would have acknowledged. Rather than seeking to compartmentalise his many spheres of activity, we come closer to Leonardo when we recognise the underlying interests that motivated and fuelled them all.

Maya Corry is an art historian at the University of Oxford, whose research focuses on early modern Italy. She is the author of the upcoming Beautiful Bodies: Spirituality, Sexuality and Gender in Leonardo’s Milan (OUP). This article first appeared in BBC History Magazine’s Leonardo da Vinci Special Edition


7 of Leonardo da Vinci’s visions for the future

Marina Wallace explores seven of Leonardo da Vinci’s most forward-thinking ideas and inventions for BBC History Magazine – from the telescope to the flying machine…

Works of art

Drawing was, for da Vinci, primarily a learning exercise: a type of brainstorming on paper. Always keen to experiment with new techniques, da Vinci would make clay models, cover them with linen dipped in wet clay, and then draw from them. Black and white pigment was then applied with a brush as a way of executing studies in light and shade – known as chiaroscuro.

One of da Vinci’s most famous works, Mona Lisa, exemplifies the sfumato technique he is known for, where colours are blurred like smoke to produce softened outlines. In the words of da Vinci himself, “the eye does not know the edge of any body”.

Da Vinci was not afraid to adopt unorthodox methods in painting. In his c1498 work The Last Supper he rejected traditional fresco techniques of the day (pigment mixed with water and sometimes egg yolk on moist plaster). Instead, he experimented with other water and oil-based mediums in order to create his masterpiece.

Technical examination of panel paintings, such as his c1501 work Madonna of the Yarnwinder, has also revealed that da Vinci used strikingly complex underdrawings in his work. Spolvero marks (charcoal dust) have been discovered beneath several of his paintings, which confirms he used a cartoon – a full-size preparatory study for a painting transferred onto the panel via a method similar to tracing.

His use of hand- and fingerprints to blend shadows also distinguishes his paintings from those of his contemporaries, and his use of light influenced many artists after him. His unique way of viewing drawing as an investigative technique still influences artists, including Joseph Beuys who, in 1975, produced several conceptual works influenced by da Vinci’s manuscripts in the Codex Madrid (1490–1505).

Human Anatomy

Throughout his career da Vinci strove for accuracy in his anatomical drawings. Although most of these were based on studies of live subjects, they reveal his knowledge of the underlying structures observed by dissection. Da Vinci acquired a human skull in 1489, and his first documented human dissection was of a 100-year-old man, whose peaceful death he witnessed in a Florentine hospital in 1506.

Curious about the structures and functions of the body, da Vinci dissected around 30 corpses in his lifetime.

Human dissection was tightly regulated by the church, which objected to what it saw as desecration of the dead. Nevertheless, da Vinci’s dissections were carried out openly in the Hospital of Santa Maria Nuova in Florence. Among his drawings is an ink and chalk sketch of a baby in utero, probably made by dissecting a miscarried foetus and a woman who had died in childbirth.

Da Vinci perceived the workings of the human body to be a perfect reflection of engineering and vice versa. In 1508, his studies of hydrodynamics coincided with the study of the aortic valve and the flow of blood to the heart. He annotated instructions for wax casts and glass models of the aorta and recorded experiments with flowing water, using grass seeds to track the flow of ‘blood’. Through these experiments he observed that the orifice of a heart’s open valve is triangular and that the heart has four chambers.

Da Vinci’s anatomical discoveries weren’t widely disseminated, and it was another century before the rest of the world began to catch up: William Harvey didn’t publish his theories on the circulation of blood until 1628.

Study of Optics

A number of da Vinci’s manuscripts contain writings on vision, including important studies of optics as well as theories relating to shadow, light and colour. For da Vinci, the eye was the most important of the sense organs: “the window of the soul”, as he put it. We now know how the eye works, but in the artist’s time, sight was a mystery. To complicate matters further, the eye was a difficult organ to dissect. When cut into, it collapses and the lens takes on a more spherical shape.

Da Vinci boiled his eye specimens, unknowingly distorting their lenses. After close examination he concluded that the eye was a geometrical body, comprising two concentric spheres: the outer “albugineous sphere”, and the inner “vitreous” or “crystalline sphere”. At the back of the eye, opposite the pupil, he observed, was an opening into the optic nerve by which images were sent to the imprensiva in the brain, where all sensory information was collated.

Da Vinci, intrigued by vision, recorded his experiments to find out how the eye works, as in his ‘Sections of a Man’s Head’. (Photo by Print Collector/Getty Images)

Leonardo da Vinci’s observations on the workings of the eye preceded Johannes Kepler’s fundamental studies in the 17th century on the inner workings of the human retina, convex and concave lenses, and other properties of light and astronomy.

And like Kepler a century later, da Vinci was also fascinated by his observations of celestial bodies. He stated: “The moon is not luminous in itself. It does not shine without the sun.” In his notes he includes a reminder to himself to construct glasses through which to see the moon magnified. Although da Vinci never built his telescope – the first example wasn’t created until 1608 – the initial idea was his.

Manned Flight

Da Vinci was fascinated by the phenomenon of flight. He felt that if he could arrive at a full understanding of how birds fly, he would be able to apply this knowledge to constructing a machine that allowed man to take to the skies. He attempted to combine the dynamic potential of the human body with an imitation of natural flight.

In his notes, da Vinci cites bats, kites and other birds as models to emulate, referring to his flying machine as the “great bird”. He made attempts at solving the problem of manned flight as early as 1478 and his many studies of the flight of birds and plans for flying machines are contained in his Codex on the Flight of Birds, 1505. He explored the mechanism of bird flight in detail, recording how they achieve balanced dynamism through the science of the motions of air.

Leonardo da Vinci's design for fixed-wing aircraft

Da Vinci’s talents ranged from anatomy to military engineering, and many of his innovations were not explored further for several centuries. (Photo by SSPL/Getty Images)

One of the innovations da Vinci sketched out was an ornithopter, a bird-like system with a prone man operating two wings through foot pedals. For safety reasons he suggested that the machine should be tested over a lake and that a flotation device be placed under the structure to keep it from sinking if it fell into the water.

Da Vinci’s flight designs are not complete and most were impractical, like his sketch of an aerial screw design, which has been described as a predecessor of the helicopter. However, his hang glider has since been successfully constructed. After da Vinci, the 17th and 18th centuries witnessed several attempts at man-powered flight. The first rigorous study of the physics of flight was made in the 1840s by Sir George Cayley, who has been called the ‘father of aviation’.

Technical Drawing

Automation of industrial processes is often seen as a 19th-century concept, but da Vinci’s design for a file cutter shows the same idea. The operator turns a crank to raise a weight. After this the machine operates autonomously.

Some of da Vinci’s most modern-looking drawings are his studies of basic industrial machines. His best examples are designed to translate simple movement by the operator into a complex set of actions to automate a process. One particularly interesting device was for grinding convex mirrors, while his Codex Atlanticus shows a hoist that translates the backward and forward motion of a handle into the rotation of wheels to raise or lower weight. Next to simple drawings are exploded views (showing the order of assembly) to make the mechanism crystal clear.

The Codex Madrid, bound volumes with precise drawings concerning mainly the science of mechanisms, was rediscovered in 1966. Priority is given to the drawings, which are accompanied by a commentary or a caption. The care taken with the layout of each page and the finesse of the drawings indicates they are close to publishable form, either as a presentation manuscript or printed treatises. By showing component parts of machines in a clear fashion, da Vinci pioneered what was to come much later in the industrial age.

Almost all his industrial designs were proposals rather than inventions translated into concrete form. We might wonder how these could have revolutionised manufacturing had they been realised, but the real lesson da Vinci offers the world of science, mechanics, engineering and industry is less in his inventions and more in his highly innovative representational style and brilliantly drawn demonstrations.

Geology

Before da Vinci, very few scientists studied rocks trying to determine how they formed. The dominant belief about Earth science came from antiquity and Aristotle’s idea that rocks evolved over time, seeking to become perfect elements such as gold or mercury – a merging of geology with alchemy. Geological knowledge was based on the assumption that the Earth, surrounded by spheres of water, air and fire, was a divine creation. Deposits of fossils were thought to have been laid down by ‘the deluge’ (biblical flood) or to be of miraculous origin.

Da Vinci noted that fossils were too heavy to float: they could not have been carried to high ground by flood waters. Observing how in places there were several layers of fossils, he reasoned that such phenomena could not be the result of a single event. He observed layers of fossils in mountains high above sea level, concluding that the landscape was formed by repeated flooding and the erosive powers of water.

He wrote about his observations of rocks: “Drawn by my eager desire, wishing to see the great manifestation of the various strange shapes made by formative nature, I wandered some way among gloomy rocks, coming to the entrance of a great cavern, in front of which I stood for some time, stupefied and incomprehending such a thing.” In drawings such as A Deluge, and paintings such as the two versions of the Virgin of the Rocks, da Vinci captures his sense of mystery and wonder, replacing the divine with observation and physical explanations.

It was not until the 1830s that scientists including Charles Lyell and then Charles Darwin became convinced that the surface of Earth changes over time only slowly and gradually, not by sudden catastrophic events such as the biblical flood.

Da Vinci studied “strange shapes made by formative nature”, as seen in his painting Virgin of the Rocks

Da Vinci studied “strange shapes made by formative nature”, as seen in his painting Virgin of the Rocks. (Photo by Universal History Archive/Universal Images Group via Getty Images)

Engineering

Da Vinci’s extraordinary inventiveness led him to attempt to solve complex technical problems, such as transmitting motion from one plane onto another using intricate arrays of gears, cams, axles and levers. He was the first to design separate components that could be deployed in a variety of devices, ranging from complex units such as the gears for barrel springs and ring bearings for axles to simple hinges. His mechanics included levers, cranes and ball bearings. As we’ve already noted, he drew such devices with great attention to reality, knowing that drawings needed to be amplified with designs of the individual parts.

Da Vinci’s genius as an engineer lay in seeing clearly how design must be informed by the mathematical laws of physics rather than just practice. He undertook military, civil, hydraulic, mechanical and architectural engineering, first applying his talents aged 30, when he was employed in Milan by Ludovico Sforza as a military engineer, an occupation he held for many years. Da Vinci designed instruments for war, including catapults and other weapons, and had ideas for submarines and machine guns.

For Sforza, da Vinci designed several bridges, including a revolving bridge for use by armies on the move. With wheels, a rope-and-pulley system and a counterweight tank for balance, it could be packed away and transported. Some of his famous designs, such as the ‘tank’, were not practical devices but technological musings aimed at a patron. His civil engineering projects, meanwhile, included geometry studies and designs of canals and churches with domes.

Da Vinci’s innovative attitude about how things work made him a pioneer in what later became the science of mechanics.

Marina Wallace was a director of the Universal Leonardo project, which aimed to deepen understanding of da Vinci. Her most recent book is 30-Second Leonardo da Vinci (Ivy Press, 2018)

This article was first published in the May 2019 edition of BBC History Magazine

Is Physical Law an Alien Intelligence? Posted April 7th 2021

Alien life could be so advanced it becomes indistinguishable from physics.

Nautilus

By Caleb Scharf

Illustration by Tianhua Mao

Perhaps Arthur C. Clarke was being uncharacteristically unambitious. He once pointed out that any sufficiently advanced technology is going to be indistinguishable from magic. If you dropped in on a bunch of Paleolithic farmers with your iPhone and a pair of sneakers, you’d undoubtedly seem pretty magical. But the contrast is only middling: The farmers would still recognize you as basically like them, and before long they’d be taking selfies. But what if life has moved so far on that it doesn’t just appear magical, but appears like physics?

After all, if the cosmos holds other life, and if some of that life has evolved beyond our own waypoints of complexity and technology, we should be considering some very extreme possibilities. Today’s futurists and believers in a machine “singularity” predict that life and its technological baggage might end up so beyond our ken that we wouldn’t even realize we were staring at it. That’s quite a claim, yet it would neatly explain why we have yet to see advanced intelligence in the cosmos around us, despite the sheer number of planets it could have arisen on—the so-called Fermi Paradox.

For example, if machines continue to grow exponentially in speed and sophistication, they will one day be able to decode the staggering complexity of the living world, from its atoms and molecules all the way up to entire planetary biomes. Presumably life doesn’t have to be made of atoms and molecules, but could be assembled from any set of building blocks with the requisite complexity. If so, a civilization could then transcribe itself and its entire physical realm into new forms. Indeed, perhaps our universe is one of the new forms into which some other civilization transcribed its world.

These possibilities might seem wholly untestable, because part of the conceit is that sufficiently advanced life will not just be unrecognizable as such, but will blend completely into the fabric of what we’ve thought of as nature. But viewed through the warped bottom of a beer glass, we can pick out a few cosmic phenomena that—as crazy as it sounds—might fit the requirements.

***

For example, only about 5 percent of the mass-energy of the universe consists of ordinary matter: the protons, neutrons, and electrons that we’re composed of. A much larger 27 percent is thought to be unseen, still mysterious stuff. Astronomical evidence for this dark, gravitating matter is convincing, albeit still not without question. Vast halos of dark matter seem to lurk around galaxies, providing mass that helps hold things together via gravity. On even larger scales, the web-like topography traced by luminous gas and stars also hints at unseen mass.

Cosmologists usually assume that dark matter has no microstructure. They think it consists of subatomic particles that interact only via gravity and the weak nuclear force and therefore slump into tenuous, featureless swathes. They have arguments to support this point of view, but of course we don’t really know for sure. Some astronomers, noting subtle mismatches between observations and models, have suggested that dark matter has a richer inner life. At least some component may comprise particles that interact with one another via long-range forces. It may seem dark to us, but have its own version of light that our eyes cannot see.

In that case, dark matter could contain real complexity, and perhaps it is where all technologically advanced life ends up or where most life has always been. What better way to escape the nasty vagaries of supernova and gamma-ray bursts than to adopt a form that is immune to electromagnetic radiation? Upload your world to the huge amount of real estate on the dark side and be done with it.

If you’re a civilization that has learned how to encode living systems in different substrates, all you need to do is build a normal-matter-to-dark-matter data-transfer system: a dark-matter 3D printer. Perhaps the mismatch of astronomical models and observations is evidence not just of self-interacting dark matter, but of dark matter that is being artificially manipulated.

***

Or to take this a step further, perhaps the behavior of normal cosmic matter that we attribute to dark matter is brought on by something else altogether: a living state that manipulates luminous matter for its own purposes. Consider that at present we have neither identified the dark-matter particles nor come up with a compelling alternative to our laws of physics that would account for the behavior of galaxies and clusters of galaxies. Would an explanation in terms of life be any less plausible than a failure of established laws?

The universe does other funky and unexpected stuff. Notably, it began to expand at an accelerated rate about 5 billion years ago. This acceleration is conventionally chalked up to dark energy. But cosmologists don’t know why the cosmic acceleration began when it did. In fact, one explanation with a modicum of traction is that the timing has to do with life—an anthropic argument. The dark energy didn’t become significant until enough time had gone by for life to take hold on Earth. For many cosmologists, that means our universe must be part of a vast multiverse where the strength of dark energy varies from place to place. We live in one of the places suitable for life like us. Elsewhere, dark energy is stronger and blows the universe apart too quickly for cosmic structures to form and life to take root.

But perhaps there is another reason for the timing coincidence: that dark energy is related to the activities of living things. After all, any very early life in the universe would have already experienced 8 billion years of evolutionary time by the time expansion began to accelerate. It’s a stretch, but maybe there’s something about life itself that affects the cosmos, or maybe those well-evolved denizens decided to tinker with the expansion.

There are even possible motivations for that action. Life absorbs low-entropy energy (such as visible light from the sun), does useful work with that energy, and dumps higher-entropy energy back into the universe as waste heat. But if the surrounding universe ever got too warm—too filled with thermal refuse—things would stagnate. Luckily we live in an expanding and constantly cooling cosmos. What better long-term investment by some hypothetical life 5 billion years ago than to get the universe to cool even faster? To be sure, it may come to rue its decision: Hundreds of billions of years later the accelerating expansion would dilute matter so quickly that civilizations would run out of fresh sources of energy. Also, an accelerating universe does not cool forever, but eventually approaches a floor in temperature.

One idea for the mechanism of an accelerating cosmic expansion is called quintessence, a relative of the Higgs field that permeates the cosmos. Perhaps some clever life 5 billion years ago figured out how to activate that field. How? Beats me, but it’s a thought-provoking idea, and it echoes some of the thinking of the physicist Freeman Dyson’s famous 1979 paper “Time Without End,” where he looked at life’s ability in the far, far future to act on an astrophysical scale.

***

Once we start proposing that life could be part of the solution to cosmic mysteries, there’s no end to the fun possibilities. Although dark-matter life is a pretty exotic idea, it’s still conceivable that we might recognize what it is, even capturing it in our labs one day (or being captured by it). We can take a tumble down a different rabbit hole by considering that we don’t recognize advanced life because it forms an integral and unsuspicious part of what we’ve considered to be the natural world.

Life’s desire to avoid trouble points to some options. If it has a choice, life always looks for ways to lower its existential risk. You don’t build your nest on the weakest branch or produce trillions of single-celled clones unless you build in some variation and backup.

A species can mitigate risk by spreading, decentralizing, and seeding as much real estate as possible. In this context, hyper-advanced life is going to look for ways to get rid of physical locality and to maximize redundancy and flexibility. The quantum realm offers good options. The cosmos is already packed with electromagnetic energy. Today, at any instant, about 400 photons of cosmic microwave radiation are streaming through any cubic centimeter of free space. They collectively have less energy than ordinary particles such as protons and electrons, but vastly outnumber them. That’s a lot of potential data carriers. Furthermore, we could imagine that these photons are cleverly quantum-mechanically entangled to help with error control.
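The “about 400 photons” figure can be checked against the standard blackbody photon-density formula, n = (2ζ(3)/π²)(k_B T / ħc)³, applied to the 2.725 K microwave background. A minimal sketch (the formula and constants are textbook physics, not anything from the essay):

```python
import math

# Photon number density of blackbody radiation at temperature T:
#   n = (2 * zeta(3) / pi^2) * (k_B * T / (hbar * c))**3
k_B   = 1.380649e-23    # J/K, Boltzmann constant
hbar  = 1.054572e-34    # J*s, reduced Planck constant
c     = 2.99792458e8    # m/s, speed of light
zeta3 = 1.2020569       # Riemann zeta(3)

T = 2.725               # K, temperature of the cosmic microwave background
n = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3   # photons per m^3
print(f"{n * 1e-6:.1f} photons per cubic centimetre")       # ~410.5
```

The result, roughly 411 photons per cubic centimetre, is the source of the “about 400” quoted above.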

By storing its essential data in photons, life could give itself a distributed backup system. And it could go further, manipulating new photons emitted by stars to dictate how they interact with matter. Fronts of electromagnetic radiation could be reaching across the cosmos to set in motion chains of interstellar or planetary chemistry with exquisite timing, exploiting wave interference and excitation energies in atoms and molecules. The science-fiction writer Stanisław Lem put forward a similar idea, involving neutrinos rather than photons, in the novel His Master’s Voice.

That’s one way that life could disappear into ordinary physics. But even these ideas skirt the most disquieting extrapolations.

Toward the end of Carl Sagan’s 1985 science-fiction novel Contact, the protagonist follows the suggestion of an extraterrestrial to study transcendental numbers. After computing to 10^20 places, she finds a clearly artificial message embedded in the digits of this fundamental number. In other words, part of the fabric of the universe is a product of intelligence or is perhaps even life itself.

It’s a great mind-bending twist for a book. Perhaps hyper-advanced life isn’t just external. Perhaps it’s already all around. It is embedded in what we perceive to be physics itself, from the root behavior of particles and fields to the phenomena of complexity and emergence.

In other words, life might not just be in the equations. It might be the equations.

Caleb Scharf is an astrophysicist, the Director of Astrobiology at Columbia University in New York, and a founder of yhousenyc.org, an institute that studies human and machine consciousness. His latest book is The Copernicus Complex: Our Cosmic Significance in a Universe of Planets and Probabilities.

This Physicist’s Ideas of Time Will Blow Your Mind Posted March 29th 2021

Is time only in our head?

Quartz

By Ephrat Livni

Time is the space between memory and anticipation. Photo by EPA/Ralf Hirschberger

Time feels real to people. But it doesn’t even exist, according to quantum physics. “There is no time variable in the fundamental equations that describe the world,” theoretical physicist Carlo Rovelli tells Quartz.

If you met him socially, Rovelli wouldn’t assault you with abstractions and math to prove this point. He’d “rather not ruin a party with physics,” he says. We don’t have to understand the mechanics of the universe to go about our daily lives. But it’s good to take a step back every once in a while.

“Time is a fascinating topic because it touches our deepest emotions. Time opens up life and takes everything away. Wondering about time is wondering about the very sense of our life. This is [why] I have spent my life studying time,” Rovelli explains.

Rovelli’s book, The Order of Time, published in April 2018, is about our experience of time’s passage as humans, and the fact of its absence at minuscule and vast scales. He makes a compelling argument that chronology and continuity are just a story we tell ourselves in order to make sense of our existence.


Time As Illusion

Time, Rovelli contends, is merely a perspective, rather than a universal truth. It’s a point of view that humans share as a result of our biology and evolution, our place on Earth, and the planet’s place in the universe.

“From our perspective, the perspective of creatures who make up a small part of the world—we see that world flowing in time,” the physicist writes. At the quantum level, however, durations are so short that they can’t be divided and there is no such thing as time.
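Rovelli’s own field is loop quantum gravity, where the scale at which duration stops being divisible is usually taken to be the Planck time, t_P = √(ħG/c⁵). A minimal calculation from that standard definition, offered as orientation rather than as anything taken from the book:

```python
import math

# Planck time: t_P = sqrt(hbar * G / c**5), the scale below which
# quantum gravity is expected to make "shorter duration" meaningless.
hbar = 1.054572e-34    # J*s, reduced Planck constant
G    = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
c    = 2.99792458e8    # m/s, speed of light

t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time ~ {t_planck:.2e} s")   # ~5.39e-44 s
```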

In fact, Rovelli explains, there are actually no things at all. Instead, the universe is made up of countless events. Even what might seem like a thing—a stone, say—is really an event taking place at a rate we can’t register. The stone is in a continual state of transformation, and on a long enough timeline, even it is fleeting, destined to take on some other form.

In the “elementary grammar of the world, there is neither space nor time—only processes that transform physical quantities from one to another, from which it is possible to calculate possibilities and relations,” the scientist writes.

Rovelli argues that time only seems to pass in an ordered fashion because we happen to be on Earth, which has a certain, unique entropic relationship to the rest of the universe. Essentially, the way our planet moves creates a sensation of order for us that’s not necessarily the case everywhere in the universe. Just as orchids grow in Florida swamps and not in California’s deserts, so is time a product of the planet we are on and its relation to the surroundings; a fluke, not inherent to the universe.

The world seems ordered, going from past to present, linking cause and effect, because of our perspective. We superimpose order upon it, fixing events into a particular, linear series. We link events to outcomes, and this gives us a sense of time.

But the universe is much more complex and chaotic than we can allow for, according to Rovelli. Humans rely on approximate descriptions that actually ignore most of the other events, relations, and possibilities. Our limitations create a false, or incomplete, sense of order that doesn’t tell the whole story.

The physicist argues that, in fact, we “blur” the world to focus on it, blind ourselves to see. For that reason, Rovelli writes, “Time is ignorance.”


Wait, What?

If all this sounds terribly abstract, that’s because it is. But there is some relatively simple evidence to support the notion that time is a fluid, human concept—an experience, rather than something inherent to the universe.

Imagine, for example, that you are on Earth, viewing a far-off planet, called Proxima b, through a telescope. Rovelli explains that “now” doesn’t describe the same present on Earth and on that planet. The light you see on Earth when looking at Proxima b is old news, conveying what was happening on that planet four years ago. “There is no special moment of Proxima b that corresponds to the present here and now,” Rovelli writes.

This might sound strange, until you consider something as mundane as making an international call. You’re in New York, talking to friends in London. When their words reach your ears, milliseconds have passed, and “now” is no longer the same “now” as it was when the person on the line replied, “I can hear you now.”
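To put rough numbers on this, here is a quick, illustrative calculation in Python. The New York to London distance is an assumed round figure, and a real call adds routing and switching latency on top of the raw light travel time.

```python
# Back-of-the-envelope signal delays, showing that "now" is always local.
# The NY-London distance is an assumed round figure (~5,570 km great circle).

C = 299_792_458            # speed of light in vacuum, m/s
NY_LONDON_M = 5_570_000    # assumed New York to London distance, m
PROXIMA_LY = 4.25          # distance to Proxima Centauri, light-years

delay_ms = NY_LONDON_M / C * 1000
print(f"New York to London at light speed: {delay_ms:.1f} ms")       # ~18.6 ms
print(f"Light from Proxima b is about {PROXIMA_LY} years old on arrival")
```

Even at light speed, the reply you hear left London almost twenty milliseconds ago; the starlight case simply stretches the same gap to years.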

Consider, too, that we don’t share the same time in different places. Someone in London is always experiencing a different point in their day than someone in New York. Your New York morning is their afternoon. Your evening is their midnight. You only share the same time with people in a limited place, and even that is a relatively new invention.

It was not until the 19th century, when train travel demanded uniformity, that “noon” came at the same time in New York and Boston, say. Before we needed to agree on time precisely, every place—even relatively close villages—operated on slightly different times. “Noon” was when the sun was highest in the sky and, in Europe, church bells signaled when this time arrived—ringing at different times in every place. By the 20th century, we had agreed upon time zones. But it was a business decision, not a fact of the universe.

Time even passes at different rates from place to place, Rovelli notes. On a mountaintop, time passes faster than at sea level. Similarly, the hands of a clock on the floor will move slightly slower than the hands of a clock on a tabletop.
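The effect Rovelli describes here is tiny but calculable. In the standard weak-field approximation, a clock raised by a height h runs fast relative to one below it by a fraction of roughly gh/c². A minimal sketch, using textbook constants rather than anything specific to the book:

```python
# Weak-field gravitational time dilation: a raised clock runs faster by a
# fractional rate of about g*h/c^2 (a standard approximation for small
# heights near Earth's surface).

G_EARTH = 9.81             # surface gravity, m/s^2
C = 299_792_458            # speed of light, m/s

def fractional_rate_gain(height_m: float) -> float:
    """Fractional rate difference between a raised clock and one below it."""
    return G_EARTH * height_m / C**2

for h in (1.0, 1_000.0, 8_848.0):    # a tabletop, a tall hill, Everest
    gain = fractional_rate_gain(h)
    print(f"h = {h:7.0f} m: gains {gain * 86_400 * 1e9:8.2f} ns per day")
```

A clock one metre up gains about a hundredth of a nanosecond per day, which is why atomic clocks, not wristwatches, are needed to confirm it.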

Likewise, time will seem to pass slower or faster depending on what you’re doing. The minutes in a quantum physics class might crawl by, seeming interminable, while the hours of a party fly.

All these differences are evidence that “times are legion,” according to the physicist. And no single one of them is exactly true, a complete description of time in its entirety.

“Time is a multilayered, complex concept with multiple, distinct properties deriving from various different approximations,” Rovelli writes. “The temporal structure of the world is different from the naïve image that we have of it.” The simple sense of time that we share works, more or less, in our lives. But it just isn’t accurate when describing the universe “in its minute folds or its vastness.”


Time is a Story We’re Always Telling Ourselves

Though physics gives us insights into the mystery of time, ultimately, the scientist argues, that too is unsatisfactory to us as humans. The simple feeling we have that time passes by, or flows—borne of a fluke, naiveté, and limitations—is precisely what time is for us.

Rovelli argues that what we experience as time’s passage is a mental process happening in the space between memory and anticipation. “Time is the form in which we beings whose brains are made up essentially of memory and foresight interact with our world: it is the source of our identity,” he writes.

Basically, he believes, time is a story we’re always telling ourselves in the present tense, individually and together. It’s a collective act of introspection and narrative, record-keeping and expectation, that’s based on our relationship to prior events and the sense that happenings are impending.  It is this tale that gives us our sense of self as well, a feeling that many neuroscientists, mystics, and the physicist argue is a mass delusion.

Without a record—or memory—and expectations of continuation, we would not experience time’s passage or even know who we are, Rovelli contends. Time, then, is an emotional and psychological experience. “It’s loosely connected with external reality,” he says, “but it is mostly something that happens now in our head.”


Ephrat Livni is a writer and lawyer. She has worked around the world and now reports on government and the Supreme Court from Washington, DC.

Is Consciousness Everywhere?

Posted March 26th 2021

Experience is in unexpected places, including in all animals, large and small, and perhaps even in brute matter itself.

Honey bees can recognize faces, communicate the location and quality of food sources to their sisters via the waggle dance, and navigate complex mazes with the help of cues they store in short-term memory. Image: Boba Jaglicic/Unsplash

By: Christof Koch

What is common between the delectable taste of a favorite food, the sharp sting of an infected tooth, the fullness after a heavy meal, the slow passage of time while waiting, the willing of a deliberate act, and the mixture of vitality, tinged with anxiety, just before a competitive event?

All are distinct experiences. What cuts across each is that all are subjective states, and all are consciously felt. Accounting for the nature of consciousness appears elusive, with many claiming that it cannot be defined at all, yet defining it is actually straightforward. Here goes: Consciousness is experience.

This article is adapted from Christof Koch’s book “The Feeling of Life Itself.” Koch is chief scientist of the MindScope Program at the Allen Institute for Brain Science.

That’s it. Consciousness is any experience, from the most mundane to the most exalted. Some distinguish awareness from consciousness; I don’t find this distinction helpful and so I use these two words interchangeably. I also do not distinguish between feeling and experience, although in everyday use feeling is usually reserved for strong emotions, such as feeling angry or in love. As I use it, any feeling is an experience. Collectively taken, then, consciousness is lived reality. It is the feeling of life itself.

But who else, besides myself, has experiences? Because you are so similar to me, I abduce that you do. The same logic applies to other people. Apart from the occasional solitary solipsist, this is uncontroversial. But how widespread is consciousness in the cosmos at large? How far consciousness extends its dominion within the tree of life becomes more difficult to abduce as species become more alien to us.

One line of argument takes the principles of integrated information theory (IIT) to their logical conclusion. Some level of experience can be found in all organisms, it says, including perhaps in Paramecium and other single-cell life forms. Indeed, according to IIT, which aims to precisely define both the quality and the quantity of any one conscious experience, experience may not even be restricted to biological entities but might extend to non-evolved physical systems previously assumed to be mindless — a pleasing and parsimonious conclusion about the makeup of the universe.

How Widespread Is Consciousness in the Tree of Life?

The evolutionary relationship among bacteria, fungi, plants, and animals is commonly visualized using the tree of life metaphor. All living species, whether fly, mouse, or person, lie somewhere on the periphery of the tree, all equally adapted to their particular ecological niches.

Every living organism descends in an unbroken lineage from the last universal common ancestor (abbreviated to a charming LUCA) of planetary life. This hypothetical species lived an unfathomable 3.5 billion years ago, smack at the center of the tree-of-life mandala. Evolution explains not only the makeup of our bodies but also the constitution of our minds — for they don’t get a special dispensation.

The tree of life: Based on the complexity of their behavior and nervous systems, it is likely that it feels like something to be a bird, mammal (marked by *), insect, and cephalopod — represented here by a crow, dog, bee, and octopus. The extent to which consciousness is shared across the entire animal kingdom, let alone across all of life’s vast domain, is at present difficult to establish. The last universal common ancestor of all living things is at the center, with time radiating outward.

Given the similarities at the behavioral, physiological, anatomical, developmental, and genetic levels between Homo sapiens and other mammals, I have no reason to doubt that all of us experience the sounds and sights, the pains and pleasures of life, albeit not necessarily as richly as we do. All of us strive to eat and drink, to procreate, to avoid injury and death; we bask in the sun’s warming rays, we seek the company of conspecifics, we fear predators, we sleep, and we dream.

While mammalian consciousness depends on a functioning six-layered neocortex, this does not imply that animals without a neocortex do not feel. Again, the similarities between the structure, dynamics, and genetic specification of nervous systems of all tetrapods — mammals, amphibians, birds (in particular ravens, crows, magpies, parrots), and reptiles — allow me to abduce that they too experience the world. A similar inference can be made for other creatures with a backbone, such as fish.

But why be a vertebrate chauvinist? The tree of life is populated by a throng of invertebrates that move about, sense their environment, learn from prior experience, display all the trappings of emotions, communicate with others — insects, crabs, worms, octopuses, and on and on. We might balk at the idea that tiny buzzing flies or diaphanous pulsating jellyfish, so foreign in form, have experiences.

Yet honey bees can recognize faces, communicate the location and quality of food sources to their sisters via the waggle dance, and navigate complex mazes with the help of cues they store in short-term memory. A scent blown into a hive can trigger a return to the place where the bees previously encountered this odor, a type of associative memory. Bees have collective decision-making skills that, in their efficiency, put any academic faculty committee to shame. This “wisdom of the crowd” phenomenon has been studied during swarming, when a queen and thousands of her workers split off from the main colony and choose a new hive that must satisfy multiple demands crucial to group survival (think of that when you go house hunting). Bumble bees can even learn to use a tool after watching other bees use one.


Charles Darwin, in an 1881 book on earthworms, wanted “to learn how far the worms acted consciously and how much mental power they displayed.” Studying their feeding behaviors, Darwin concluded that there was no absolute threshold between complex and simple animals that assigned higher mental powers to one but not to the other. No one has discovered a Rubicon that separates sentient from nonsentient creatures.

Of course, the richness and diversity of animal consciousness will diminish as their nervous system becomes simpler and more primitive, eventually turning into a loosely organized neural net. As the pace of the underlying assemblies becomes more sluggish, the dynamics of the organisms’ experiences will slow down as well.

Does experience even require a nervous system? We don’t know. It has been asserted that trees, members of the kingdom of plants, can communicate with each other in unexpected ways, and that they adapt and learn. Of course, all of that can happen without experience. So I would say the evidence is intriguing but very preliminary. As we step down the ladder of complexity rung by rung, how far down do we go before there is not even an inkling of awareness? Again, we don’t know. We have reached the limits of abduction based on similarity with the only subject we have direct acquaintance with — ourselves.

Consciousness in the Universe

IIT offers a different chain of reasoning. The theory precisely answers the question of who can have an experience: anything with a non-zero maximum of integrated information (that is, anything with intrinsic causal power upon itself) is considered a Whole. What this Whole feels, its experience, is given by its maximally irreducible cause-effect structure. How much it exists is given by its integrated information.

In other words, the theory doesn’t stipulate that there is some magical threshold for experience to switch on. The degree of consciousness is instead measured with Φ, or phi. If phi is zero, then the system doesn’t exist for itself; anything with Φmax greater than zero exists for itself, has an inner view, and has some degree of irreducibility — the larger this number, the more conscious it is. And that means there are a lot of Wholes out there.
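Computing IIT’s Φ properly means building cause-effect repertoires and searching over every partition of the system, which is far beyond a few lines of code. But the core intuition of irreducibility, that the whole carries predictive information its parts cannot account for, can be caricatured in a toy calculation. This is an illustration only, not IIT’s actual algorithm:

```python
# Toy "integration" score for a two-unit deterministic system: mutual
# information between past and future states of the whole, minus the same
# quantity for each unit taken alone. A caricature of the intuition behind
# IIT's Phi -- NOT the real algorithm, which uses cause-effect repertoires
# and a search over all partitions.

import math
from itertools import product

def mi(pairs):
    """Mutual information (bits) of a list of equally likely (x, y) pairs."""
    n = len(pairs)
    pxy, px, py = {}, {}, {}
    for x, y in pairs:
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
    return sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

def integration(step):
    """MI(whole past; whole future) minus the per-unit mutual informations."""
    states = list(product([0, 1], repeat=2))
    whole = [(s, step(s)) for s in states]
    part = lambda i: [(s[i], step(s)[i]) for s in states]
    return mi(whole) - mi(part(0)) - mi(part(1))

swap = lambda s: (s[1], s[0])   # each unit copies the other: a feedback loop
copy = lambda s: (s[0], s[1])   # each unit copies itself: two isolated parts
print(integration(swap))        # 2.0 bits: the whole exceeds its parts
print(integration(copy))        # 0.0 bits: fully reducible, no integration
```

The looped system scores two bits because neither unit alone predicts its own future; the self-copying system scores zero because it decomposes cleanly into independent parts. Real Φ is far subtler, but the flavor is the same: integration is what is lost under partition.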

Certainly, this includes people and other mammals with neocortex, which we clinically know to be the substrate of experience. But fish, birds, reptiles, and amphibians also possess a telencephalon — the largest and most highly developed part of the brain — that is evolutionarily related to mammalian cortex. Given the attendant circuit complexity, the intrinsic causal power of the telencephalon is likely to be high.

When considering the neural architecture of creatures very different from us, such as the honey bee, we are confronted by vast and untamed neuronal complexity — about one million neurons within a volume the size of a grain of quinoa, a circuit density 10 times higher than that of our neocortex of which we are so proud. And unlike our cerebellum, the bee’s mushroom body, a central structure of its brain, is heavily recurrently connected. It is likely that this little brain forms a maximally irreducible cause-effect structure.

Integrated information is not about input–output processing, function or cognition, but about intrinsic cause-effect power. Having liberated itself from the myth that consciousness is intimately related to intelligence, the theory is free to discard the shackles of nervous systems and to locate intrinsic causal power in mechanisms that do not compute in any conventional sense.

A case in point is that of single-cell organisms, such as Paramecium, the animalcules discovered by the early microscopists in the late 17th century. Protozoa propel themselves through water by whiplash movements of tiny hairs, avoid obstacles, detect food, and display adaptive responses. Because of their minuscule size and strange habitats, we don’t think of them as sentient. But they challenge our presuppositions. One of the early students of such microorganisms, H. S. Jennings, expressed this well:

The writer is thoroughly convinced, after long study of the behavior of this organism, that if Amoeba were a large animal, so as to come within the everyday experience of human beings, its behavior would at once call forth the attribution to it of states of pleasure and pain, of hunger, desire, and the like, on precisely the same basis as we attribute these things to the dog.

Among the best-studied of all organisms are the even smaller Escherichia coli, bacteria that can cause food poisoning. Their rod-shaped bodies, about the size of a synapse, house several million proteins inside their protective cell wall. No one has modeled in full such vast complexity. Given this byzantine intricacy, the causal power of a bacterium upon itself is unlikely to be zero. Per IIT, it is likely that it feels like something to be a bacterium. It won’t be upset about its pear-shaped body; no one will ever study the psychology of a microorganism. But there will be a tiny glow of experience. This glow will disappear once the bacterium dissolves into its constituent organelles.

Let us travel down further in scale, transitioning from biology to the simpler worlds of chemistry and physics, and compute the intrinsic causal power of a protein molecule, an atomic nucleus or even a single proton. Per the standard model of physics, protons and neutrons are made out of three quarks with fractional electrical charge. Quarks are never observed by themselves. It is therefore possible that atoms constitute an irreducible Whole, a modicum of “enminded” matter. How does it feel to be a single atom, compared to the roughly 10²⁶ atoms making up a human brain? Presumably its integrated information is barely above zero: just a minute bagatelle of experience, a this-rather-than-not-this.


To wrap your mind around this possibility that violates Western cultural sensibilities, consider an instructive analogy. The average temperature of the universe is determined by the afterglow left over from the Big Bang, the cosmic microwave background radiation. It evenly pervades space at an effective temperature of 2.73° above absolute zero. This is utterly frigid, hundreds of degrees colder than any temperature terrestrial organisms can survive. But the fact that the temperature is non-zero implies a corresponding tiny amount of heat in deep space. By analogy, an integrated information barely above zero would imply a correspondingly tiny, but not absent, amount of experience.

To the extent that I’m discussing the mental with respect to single-cell organisms let alone atoms, I have entered the realm of pure speculation, something I have been trained all my life as a scientist to avoid. Yet three considerations prompt me to cast caution to the wind.

First, these ideas are straightforward extensions of IIT — constructed to explain human-level consciousness — to vastly different aspects of physical reality. This is one of the hallmarks of a powerful scientific theory — predicting phenomena by extrapolating to conditions far from the theory’s original remit. There are many precedents — that the passage of time depends on how fast you travel, that spacetime can break down at singularities known as black holes, that people, butterflies, vegetables, and the bacteria in your gut use the same mechanism to store and copy their genetic information, and so on.

Second, I admire the elegance and beauty of this prediction. (Yes, I’m perfectly cognizant that the last 40 years in theoretical physics have provided ample proof that chasing after elegant theories has yielded no new, empirically testable evidence describing the actual universe we live in.) The mental does not appear abruptly out of the physical. As Leibniz expressed it, natura non facit saltus, or nature does not make sudden leaps (Leibniz was, after all, the co-inventor of infinitesimal calculus). The absence of discontinuities is also a bedrock element of Darwinian thought.

Intrinsic causal power does away with the challenge of how mind emerges from matter. IIT stipulates that it is there all along.

Third, IIT’s prediction that the mental is much more widespread than traditionally assumed resonates with an ancient school of thought: panpsychism.

Many but Not All Things Are Enminded

Common to panpsychism in its various guises is the belief that soul (psyche) is in everything (pan), or is ubiquitous; not only in animals and plants but all the way down to the ultimate constituents of matter — atoms, fields, strings, or whatever. Panpsychism assumes that any physical mechanism either is conscious, is made out of conscious parts, or forms part of a greater conscious whole.

Some of the brightest minds in the West took the position that matter and soul are one substance. This includes the pre-Socratic Greek philosophers Thales and Anaxagoras. Plato espoused such ideas, as did the Renaissance cosmologist Giordano Bruno (burned at the stake in 1600), Arthur Schopenhauer, and the 20th-century paleontologist and Jesuit Teilhard de Chardin (whose books, defending evolutionary views on consciousness, were banned by his church until his death).

Particularly striking are the many scientists and mathematicians with well-articulated panpsychist views. Foremost, of course, is Leibniz. But we can also include the three scientists who pioneered psychology and psychophysics — Gustav Fechner, Wilhelm Wundt, and William James — as well as the astronomer Arthur Eddington and the mathematicians and philosophers Alfred North Whitehead and Bertrand Russell. With the modern devaluation of metaphysics and the rise of analytic philosophy, the last century evicted the mental entirely, not only from most university departments but also from the universe at large. But this denial of consciousness is now being viewed as the “Great Silliness,” and panpsychism is undergoing a revival within academia.

Debates concerning what exists are organized around two poles: materialism and idealism. Materialism, and its modern version known as physicalism, has profited immensely from Galileo Galilei’s pragmatic stance of removing mind from the objects it studies in order to describe and quantify nature from the perspective of an outside observer. It has done so at the cost of ignoring the central aspect of reality — experience. Erwin Schrödinger, one of the founders of quantum mechanics, after whom its most famous equation is named, expressed this clearly:

The strange fact [is] that on the one hand all our knowledge about the world around us, both that gained in everyday life and that revealed by the most carefully planned and painstaking laboratory experiments, rests entirely on immediate sense perception, while on the other hand this knowledge fails to reveal the relations of the sense perceptions to the outside world, so that in the picture or model we form of the outside world, guided by our scientific discoveries, all sensual qualities are absent.

Idealism, on the other hand, has nothing productive to say about the physical world, as it is held to be a figment of the mind. Cartesian dualism accepts both in a strained marriage in which the two partners live out their lives in parallel, without speaking to each other (this is the interaction problem: how does matter interact with the ephemeral mind?). Behaving like a thwarted lover, analytic, logical-positivist philosophy denies the legitimacy and, in its more extreme version, even the very existence of one partner in the mental-physical relationship. It does so to obfuscate its inability to deal with the mental.

Panpsychism is unitary. There is only one substance, not two. This elegantly eliminates the need to explain how the mental emerges out of the physical and vice versa. Both coexist.

But panpsychism’s beauty is barren. Besides claiming that everything has both intrinsic and extrinsic aspects, it has nothing constructive to say about the relationship between the two. Where is the experiential difference between one lone atom zipping around in interstellar space, the hundred trillion trillion making up a human brain, and the uncountable atoms making up a sandy beach? Panpsychism is silent on such questions.

IIT shares many insights with panpsychism, starting with the fundamental premise that consciousness is an intrinsic, fundamental aspect of reality. Both approaches argue that consciousness is present across the animal kingdom to varying degrees.


All else being equal, integrated information, and with it the richness of experience, increases as the complexity of the associated nervous system grows, although sheer number of neurons is not a guarantee, as shown by the cerebellum. Consciousness waxes and wanes diurnally with alertness and sleep. It changes across the lifespan — becoming richer as we grow from a fetus into a teenager and mature into an adult with a fully developed cortex. It increases when we become familiar with romantic and sexual relationships, with alcohol and drugs, and when we acquire appreciation for games, sports, novels, and art; and it will slowly disintegrate as our aging brains wear out.

Most importantly, though, IIT is a scientific theory, unlike panpsychism. IIT predicts the relationship between neural circuits and the quantity and quality of experience; how to build an instrument to detect experience; the possibility of pure experience (consciousness without any content); how to enlarge consciousness by brain-bridging; why certain parts of the brain support it and others do not (the posterior cortex versus the cerebellum); why brains with human-level consciousness evolved; and why conventional computers have only a tiny bit of it.

When lecturing about these matters, I often get the you’ve-got-to-be-kidding stare. This passes once I explain how neither panpsychism nor IIT claims that elementary particles have thoughts or other cognitive processes. Panpsychism does, however, have an Achilles’ heel — the combination problem, a problem that IIT has squarely solved.

On the Impossibility of Group Mind, or Why Your Neurons Are Not Conscious

William James gave a memorable example of the combination problem in the foundational text of American psychology, “The Principles of Psychology” (1890):

Take a sentence of a dozen words, and take twelve men and tell to each one word. Then stand the men in a row or jam them in a bunch, and let each think of his word as intently as he will; nowhere will there be a consciousness of the whole sentence.

Experiences do not aggregate into larger, superordinate experiences. Closely interacting lovers, dancers, athletes, soldiers, and so on do not give rise to a group mind, with experiences above and beyond those of the individuals making up the group. John Searle wrote:

Consciousness cannot spread over the universe like a thin veneer of jam; there has to be a point where my consciousness ends and yours begins.

Panpsychism has not provided a satisfactory answer as to why this should be so. But IIT does. IIT postulates that only maxima of integrated information exist. This is a consequence of the exclusion axiom — any conscious experience is definite, with borders. Certain aspects of experience are in, while a vast universe of possible feelings are out.

The mind–body problem resolved? Integrated information theory posits that any one conscious experience, here that of looking at a Bernese mountain dog, is identical to a maximally irreducible cause-effect structure. Its physical substrate, its Whole, is the operationally defined neural correlate of consciousness. The experience is formed by the Whole but is not identical to it.

Consider the image above, in which I’m looking at my dog Ruby and have a particular visual experience, a maximally irreducible cause-effect structure. It is constituted by the underlying physical substrate, the Whole, here a particular neural correlate of consciousness within the hot zone in my posterior cortex. But the experience is not identical to the Whole. My experience is not my brain.

This Whole has definite borders; a particular neuron is either part of it or not. The latter is true even if this neuron provides some synaptic input to the Whole. What defines the Whole is a maximum of integrated information, with the maximum being evaluated over all spatiotemporal scales and levels of granularities, such as molecules, proteins, subcellular organelles, single neurons, large ensembles of them, the environment the brain interacts with, and so on.

It is the irreducible Whole that forms my conscious experience, not the underlying neurons. So not only is my experience not my brain, but most certainly it is not my individual neurons. While a handful of cultured neurons in a dish may have an itsy-bitsy amount of experience, forming a mini-mind, the hundreds of millions of neurons making up my posterior cortex do not embody a collection of millions of mini-minds. There is only one mind, my mind, constituted by the Whole in my brain.

Other Wholes may exist in my brain, or my body, as long as they don’t share elements with the posterior hot zone Whole. Thus, it may feel like something to be my liver, but given the very limited interactions among liver cells, I doubt it feels like a lot.

The exclusion principle also explains why consciousness ceases during slow-wave sleep. At this time, delta waves dominate the EEG and cortical neurons have regular hyperpolarized down-states during which they are silent, interspersed with active up-states when neurons are more depolarized. These on- and off-periods are regionally coordinated. As a consequence, the cortical Whole breaks down, shattering into small cliques of interacting neurons. Each one probably has only a whit of integrated information. Effectively, “my” consciousness vanishes in deep sleep, replaced by a myriad of tiny Wholes, none of which is remembered upon awakening.

The exclusion postulate also dictates whether or not an aggregate of conscious entities — ants in a colony, cells making up a tree, bees in a hive, starlings in a murmurating flock, an octopus with its eight semiautonomous arms, or the hundreds of Chinese dancers and musicians during the choreographed opening ceremony of the 2008 Olympic games in Beijing — exists as a conscious entity in its own right. A herd of buffalo during a stampede or a crowd can act as if it had “one mind,” but this remains a mere figure of speech unless there is a phenomenal entity that feels like something above and beyond the experiences of the individuals making up the group. Per IIT, this would require the extinction of the individual Wholes, as the integrated information for each of them is less than the Φmax of the Whole. Everybody in the crowd would give up his or her individual consciousness to the mind of the group, like being assimilated into the hive mind of the Borg in the “Star Trek” universe.


IIT’s exclusion postulate does not permit the simultaneous existence of both individual and group mind. Thus, the Anima Mundi or world soul is ruled out, as it requires that the mind of all sentient beings be extinguished in favor of the all-encompassing soul. Likewise, it does not feel like anything to be the three hundred million citizens of the United States of America. As an entity, the United States has considerable extrinsic causal powers, such as the power to execute its citizens or start a war. But the country does not have maximally irreducible intrinsic cause-effect power. Countries, corporations, and other group agents exist as powerful military, economic, financial, legal, and cultural entities. They are aggregates but not Wholes. They have no phenomenal reality and no intrinsic causal power.

Thus, per IIT, single cells may have some intrinsic existence, but this does not necessarily hold for the microbiome or trees. Animals and people exist for themselves, but herds and crowds do not. Maybe even atoms exist for themselves, but certainly not spoons, chairs, dunes, or the universe at large.

IIT posits two sides to every Whole: an exterior aspect, known to the world and interacting with other objects, including other Wholes; and an interior aspect, what it feels like, its experience. It is a solitary existence, with no direct windows into the interior of other Wholes. Two or more Wholes can fuse to give rise to a larger Whole but at the cost of losing their previous identity.

Finally, panpsychism has nothing intelligible to say about consciousness in machines. But IIT does. Conventional digital computers, built out of circuit components with sparse connectivity and little overlap among their inputs and their outputs, do not constitute a Whole. Computers have only a tiny amount of highly fragmented intrinsic cause-effect power, no matter what software they are executing and no matter their computational power. Androids, if their physical circuitry is anything like today’s CPUs, cannot dream of electric sheep. It is, of course, possible to build computing machinery that closely mimics neuronal architectures. Such neuromorphic engineering artifacts could have lots of integrated information. But we are far from those.

IIT can be thought of as an extension of physics to the central fact of our lives — consciousness. Textbook physics deals with the interaction of objects with each other, dictated by extrinsic causal powers. My experiences and yours are what brains with irreducible intrinsic causal powers feel like from the inside.

IIT offers a principled, coherent, testable, and elegant account of the relationship between these two seemingly disparate domains of existence — the physical and the mental — grounded in extrinsic and intrinsic causal powers. Causal power of two different kinds is the only sort of stuff needed to explain everything in the universe. These powers constitute ultimate reality.

Further experimental work will be essential to validate, modify, or perhaps even reject these views. If history is any guide, future discoveries in laboratories and clinics, or perhaps off-planet, will surprise us.

We have come to the end of our voyage. Illuminated by the light of our pole star — consciousness — the universe reveals itself to be an orderly place. It is far more enminded than modernity, blinded by its technological supremacy over the natural world, takes it to be. It is a view more in line with earlier traditions that respected and feared the natural world.

Experience is in unexpected places, including in all animals, large and small, and perhaps even in brute matter itself. But consciousness is not in digital computers running software, even when they speak in tongues. Ever-more powerful machines will trade in fake consciousness, which will, perhaps, fool most. But precisely because of the looming confrontation between natural, evolved and artificial, engineered intelligence, it is absolutely essential to assert the central role of feeling to a lived life.


Christof Koch is Chief Scientist of both the MindScope Program at the Allen Institute for Brain Science and The Tiny Blue Dot Foundation, following 27 years as a Professor at the California Institute of Technology. He is the author of several books, including “Consciousness: Confessions of a Romantic Reductionist” and “The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed,” from which this article is adapted.

Scientists may have solved ancient mystery of ‘first computer’ Posted March 23rd 2021

Researchers claim breakthrough in study of 2,000-year-old Antikythera mechanism, an astronomical calculator found in sea

Computer model of how the Antikythera mechanism may have worked. Photograph: UCL

Ian Sample, Science editor
Fri 12 Mar 2021 10.00 GMT

From the moment it was discovered more than a century ago, scholars have puzzled over the Antikythera mechanism, a remarkable and baffling astronomical calculator that survives from the ancient world.

The hand-powered, 2,000-year-old device displayed the motion of the universe, predicting the movement of the five known planets, the phases of the moon and the solar and lunar eclipses. But quite how it achieved such impressive feats has proved fiendishly hard to untangle.

Now researchers at UCL believe they have solved the mystery – at least in part – and have set about reconstructing the device, gearwheels and all, to test whether their proposal works. If they can build a replica with modern machinery, they aim to do the same with techniques from antiquity.

“We believe that our reconstruction fits all the evidence that scientists have gleaned from the extant remains to date,” said Adam Wojcik, a materials scientist at UCL. While other scholars have made reconstructions in the past, the fact that two-thirds of the mechanism are missing has made it hard to know for sure how it worked.

The mechanism, often described as the world’s first analogue computer, was found by sponge divers in 1901 amid a haul of treasures salvaged from a merchant ship that met with disaster off the Greek island of Antikythera. The ship is believed to have foundered in a storm in the first century BC as it passed between Crete and the Peloponnese en route to Rome from Asia Minor.

The Antikythera mechanism is estimated to date back to around 80 BC. Photograph: X-Tek Group/AFP

The battered fragments of corroded brass were barely noticed at first, but decades of scholarly work have revealed the object to be a masterpiece of mechanical engineering. Originally encased in a wooden box one foot tall, the mechanism was covered in inscriptions – a built-in user’s manual – and contained more than 30 bronze gearwheels connected to dials and pointers. Turn the handle and the heavens, as known to the Greeks, swung into motion.

Michael Wright, a former curator of mechanical engineering at the Science Museum in London, pieced together much of how the mechanism operated and built a working replica, but researchers have never had a complete understanding of how the device functioned. Their efforts have not been helped by the remnants surviving in 82 separate fragments, making the task of rebuilding it equivalent to solving a battered 3D puzzle that has most of its pieces missing.

Writing in the journal Scientific Reports, the UCL team describe how they drew on the work of Wright and others, and used inscriptions on the mechanism and a mathematical method described by the ancient Greek philosopher Parmenides, to work out new gear arrangements that would move the planets and other bodies in the correct way. The solution allows nearly all of the mechanism’s gearwheels to fit within a space only 25mm deep.

According to the team, the mechanism may have displayed the movement of the sun, moon and the planets Mercury, Venus, Mars, Jupiter and Saturn on concentric rings. Because the device assumed that the sun and planets revolved around Earth, their paths were far more difficult to reproduce with gearwheels than if the sun was placed at the centre. Another change the scientists propose is a double-ended pointer they call a “Dragon Hand” that indicates when eclipses are due to happen.
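The article does not spell out the gear mathematics, but the core arithmetic task is easy to sketch: approximate an astronomical period ratio by a ratio of whole tooth counts small enough to cut as gears. The snippet below is an illustration of that task, not the UCL team’s actual method; it rediscovers the 19-year Metonic relation (235 synodic months in 19 years) that the mechanism is known to embody.

```python
# Approximating a celestial ratio with gearable whole-number tooth counts.
# An illustrative search, not the UCL reconstruction: small denominators
# quickly recover the Metonic relation (235 synodic months ~ 19 years).

from fractions import Fraction

SYNODIC_MONTH = 29.530589          # mean synodic month, days
TROPICAL_YEAR = 365.2422           # tropical year, days

months_per_year = TROPICAL_YEAR / SYNODIC_MONTH   # ~12.3683

for max_teeth in (8, 20, 100):
    approx = Fraction(months_per_year).limit_denominator(max_teeth)
    error = abs(float(approx) - months_per_year)
    print(f"<= {max_teeth:3d} teeth: {approx}  (error {error:.1e})")
```

With a denominator capped at 20, the search already lands on 235/19, accurate to about one part in eighty thousand; gear trains then realize such ratios as products of smaller tooth counts.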

Computer model of the mechanism’s gears. Photograph: UCL

The researchers believe the work brings them closer to a true understanding of how the Antikythera device displayed the heavens, but it is not clear whether the design is correct or could have been built with ancient manufacturing techniques. The concentric rings that make up the display would need to rotate on a set of nested, hollow axles, but without a lathe to shape the metal, it is unclear how the ancient Greeks would have manufactured such components.

“The concentric tubes at the core of the planetarium are where my faith in Greek tech falters, and where the model might also falter,” said Wojcik. “Lathes would be the way today, but we can’t assume they had those for metal.”

Whether or not the model works, more mysteries remain. It is unclear whether the Antikythera mechanism was a toy, a teaching tool or had some other purpose. And if the ancient Greeks were capable of such mechanical devices, what else did they do with the knowledge?

“Although metal is precious, and so would have been recycled, it is odd that nothing remotely similar has been found or dug up,” Wojcik said. “If they had the tech to make the Antikythera mechanism, why did they not extend this tech to devising other machines, such as clocks?”

Quantum Mischief Rewrites the Laws of Cause and Effect Posted March 21st 2021

Spurred on by quantum experiments that scramble the ordering of causes and their effects, some physicists are figuring out how to abandon causality altogether.

The mystery of indefinite causal order leaves the order of events uncertain. Cody Muir for Quanta Magazine

Natalie Wolchover, Senior Writer/Editor


March 11, 2021



Alice and Bob, the stars of so many thought experiments, are cooking dinner when mishaps ensue. Alice accidentally drops a plate; the sound startles Bob, who burns himself on the stove and cries out. In another version of events, Bob burns himself and cries out, causing Alice to drop a plate.

Over the last decade, quantum physicists have been exploring the implications of a strange realization: In principle, both versions of the story can happen at once. That is, events can occur in an indefinite causal order, where both “A causes B” and “B causes A” are simultaneously true.

“It sounds outrageous,” admitted Časlav Brukner, a physicist at the University of Vienna.

The possibility follows from the quantum phenomenon known as superposition, where particles maintain all possible realities simultaneously until the moment they’re measured. In labs in Austria, China, Australia and elsewhere, physicists observe indefinite causal order by putting a particle of light (called a photon) in a superposition of two states. They then subject one branch of the superposition to process A followed by process B, and subject the other branch to B followed by A. In this procedure, known as the quantum switch, A’s outcome influences what happens in B, and vice versa; the photon experiences both causal orders simultaneously.

Over the last five years, a growing community of quantum physicists has been implementing the quantum switch in tabletop experiments and exploring the advantages that indefinite causal order offers for quantum computing and communication. It’s “really something that could be useful in everyday life,” said Giulia Rubino, a researcher at the University of Bristol who led the first experimental demonstration of the quantum switch in 2017.

But the practical uses of the phenomenon only make the deep implications more acute.

Physicists have long sensed that the usual picture of events unfolding as a sequence of causes and effects doesn’t capture the fundamental nature of things. They say this causal perspective probably has to go if we’re ever to figure out the quantum origin of gravity, space and time. But until recently, there weren’t many ideas about how post-causal physics might work. “Many people think that causality is so basic in our understanding of the world that if we weaken this notion we would not be able to make coherent, meaningful theories,” said Brukner, who is one of the leaders in the study of indefinite causality.

That’s changing as physicists contemplate the new quantum switch experiments, as well as related thought experiments in which Alice and Bob face causal indefiniteness created by the quantum nature of gravity. Accounting for these scenarios has forced researchers to develop new mathematical formalisms and ways of thinking. With the emerging frameworks, “we can make predictions without having well-defined causality,” Brukner said.

Correlation, Not Causation

Progress has grown swifter recently, but many practitioners trace the origin of this line of attack on the quantum gravity problem to work 16 years ago by Lucien Hardy, a British-Canadian theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. “In my case,” said Brukner, “everything started with Lucien Hardy’s paper.”

Hardy was best known at the time for taking a conceptual approach made famous by Albert Einstein and applying it to quantum mechanics.

Einstein revolutionized physics not by thinking about what exists in the world, but by considering what individuals can possibly measure. In particular, he imagined people on moving trains making measurements with rulers and clocks. By using this “operational” approach, he was able to conclude that space and time must be relative.

Lucien Hardy originated the study of indefinite causality as a route to understanding the quantum nature of gravity. Gabriela Secara/Perimeter Institute

In 2001, Hardy applied this same approach to quantum mechanics. He reconstructed all of quantum theory starting from five operational axioms.

He then set out to apply it to an even bigger problem: the 80-year-old problem of how to reconcile quantum mechanics and general relativity, Einstein’s epic theory of gravity. “I’m driven by this idea that perhaps the operational way of thinking about quantum theory may be applied to quantum gravity,” Hardy told me over Zoom this winter.

The operational question is: In quantum gravity, what can we, in principle, observe? Hardy thought about the fact that quantum mechanics and general relativity each have a radical feature. Quantum mechanics is famously indeterministic; its superpositions allow for simultaneous possibilities. General relativity, meanwhile, suggests that space and time are malleable. In Einstein’s theory, massive objects like Earth stretch the space-time “metric” — essentially the distance between hash marks on a ruler, and the duration between ticks of clocks. The nearer you are to a massive object, for instance, the slower your clock ticks. The metric then determines the “light cone” of a nearby event — the region of space-time that the event can causally influence.

When you combine these two radical features, Hardy said, two simultaneous quantum possibilities will stretch the metric in different ways. The light cones of events become indefinite — and thus, so does causality itself.

Most work on quantum gravity elides one of these features. Some researchers, for instance, attempt to characterize the behavior of “gravitons,” quantum units of gravity. But the researchers have the gravitons interact against a fixed background time. “We’re so used to thinking about the world evolving in time,” Hardy noted. He reasons, though, that quantum gravity will surely inherit general relativity’s radical feature and lack fixed time and fixed causality. “So the idea is really to throw caution to the wind,” said the calm, serious physicist, “and really embrace this wild situation where you have no definite causal structure.”

Over Zoom, Hardy used a special projector to film a whiteboard, where he sketched out various thought experiments, starting with one that helped him see how to describe data entirely without reference to the causal order of events.

He imagined an array of probes drifting in space. They’re taking data — recording, say, the polarized light spewing out of a nearby exploding star, or supernova. Every second, each probe logs its location, the orientation of its polarizer (a device like polarized sunglasses that either lets a photon through or blocks it depending on its polarization), and whether a detector, located behind the polarizer, detects a photon or not. The probe transmits this data to a man in a room, who prints it on a card. After some time, the experimental run ends; the man in the room shuffles all the cards from all the probes and forms a stack.


The probes then rotate their polarizers and make a new series of measurements, producing a new stack of cards, and repeat the process, so that the man in the room ultimately has many shuffled stacks of out-of-order measurements. “His job is to try to make some sense of the cards,” Hardy said. The man wants to devise a theory that accounts for all the statistical correlations in the data (and, in this way, describes the supernova) without any information about the data’s causal relationships or temporal order, since those might not be fundamental aspects of reality.

How might the man do this? He could first arrange the cards by location, dealing out cards from each stack so that those pertaining to spacecraft in a certain region of space go in the same pile. In doing this for each stack, he could start to notice correlations between piles. He might note that whenever a photon is detected in one region, there’s a high detection probability in another region, so long as the polarizers are angled the same way in both places. (Such a correlation would mean that the light passing through these regions tends to share a common polarization.) He could then combine probabilities into expressions pertaining to larger composite regions, and in this way, he could “build up mathematical objects for bigger and bigger regions from smaller regions,” Hardy said.

What we normally think of as causal relationships — such as photons traveling from one region of the sky to another, correlating measurements made in the first region with measurements made later in the second region — act, in Hardy’s formalism, like data compression. There’s a reduction in the amount of information needed to describe the whole system, since one set of probabilities determines another.
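As a cartoon of what the man with the cards could do, consider simulated records from two regions, with every trace of ordering thrown away. This is a toy model, not Hardy’s formalism, and for simplicity each record bundles one run’s readings from both regions; the shared polarization of the simulated starlight survives purely as a correlation.

```python
# Toy version of the card-sorting exercise: measurement records carry no
# timestamps or causal labels. Each run logs, for two regions, the local
# polarizer setting (0 or 90 degrees) and whether a photon was detected;
# detection is idealized as deterministic. (An illustration only.)

import random

random.seed(0)
runs = []
for _ in range(20_000):
    src = random.choice([0, 90])                  # shared polarization (hidden)
    s1, s2 = random.choice([0, 90]), random.choice([0, 90])
    runs.append((s1, int(s1 == src), s2, int(s2 == src)))
random.shuffle(runs)                              # discard all ordering

def agreement(pairs):
    """Fraction of runs in which both regions detect, or both miss."""
    return sum(d1 == d2 for d1, d2 in pairs) / len(pairs)

same = [(d1, d2) for s1, d1, s2, d2 in runs if s1 == s2]
diff = [(d1, d2) for s1, d1, s2, d2 in runs if s1 != s2]
print(f"same polarizer settings: agreement {agreement(same):.2f}")   # ~1.00
print(f"different settings:      agreement {agreement(diff):.2f}")   # ~0.00
```

Nothing in the shuffled pile says which measurement “caused” which, yet the statistical regularity, matched settings giving matched outcomes, is fully recoverable.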

Hardy called his new formalism the “causaloid” framework, where the causaloid is the mathematical object used to calculate the probabilities of outcomes of any measurement in any region. He introduced the general framework in a dense 68-page paper in 2005, which showed how to formulate quantum theory in the framework (essentially by reducing its general probability expressions to the specific case of interacting quantum bits).

Hardy thought it should be possible to formulate general relativity in the causaloid framework too, but he couldn’t quite see how to proceed. If he could manage that, then, he wrote in another paper, “the framework might be used to construct a theory of quantum gravity.”

The Quantum Switch

A few years later, in Pavia, Italy, the quantum information theorist Giulio Chiribella and three colleagues were mulling over a different question: What kinds of computations are possible? They had in mind the canonical work of the theoretical computer scientist Alonzo Church. Church developed a set of formal rules for building functions — mathematical machines that take an input and yield an output. A striking feature of Church’s rulebook is that the input of a function can be another function.

The four Italian physicists asked themselves: What kinds of functions of functions might be possible in general, beyond what computers were currently capable of? They came up with a procedure that involves two functions, A and B, that get assembled into a new function. This new function — what they called the quantum switch — is a superposition of two options. In one branch of the superposition, the function’s input passes through A, then B. In the other, it passes through B, then A. They hoped that the quantum switch “could be the basis of a new model of computation, inspired by the one of Church,” Chiribella told me.

At first, the revolution sputtered. Physicists couldn’t decide whether the quantum switch was deep or trivial, or if it was realizable or merely hypothetical. Their paper took four years to get published.

By the time it finally came out in 2013, researchers were starting to see how they might build quantum switches.

Giulia Rubino, Philip Walther and their collaborators performed the first experimental demonstration of the quantum switch at the University of Vienna in 2017. Valeria Saggio

They might, for instance, shoot a photon toward an optical device called a beam splitter. According to quantum mechanics, the photon has a 50-50 chance of being transmitted or reflected, and so it does both.

The transmitted version of the photon hurtles toward an optical device that rotates the polarization direction of the light in some well-defined way. The photon next encounters a similar device that rotates it a different way. Let’s call these devices A and B, respectively.

Meanwhile, the reflected version of the photon encounters B first, then A. The end result of the polarization in this case is different.

We can think of these two possibilities — A before B, or B before A — as indefinite causal order. In the first branch, A causally influences B in the sense that if A hadn’t occurred, B’s input and output would be totally different. Likewise, in the second branch, B causally influences A in that the latter process couldn’t have happened otherwise.

After these alternative causal events have occurred, another beam splitter reunites the two versions of the photon. Measuring its polarization (and that of many other photons) yields a statistical spread of outcomes.
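For readers who want to see the two causal orders interfere explicitly, here is a minimal linear-algebra simulation of the switch. It is a sketch of the standard textbook construction, not of any particular laboratory’s optics: a control qubit in superposition routes a target polarization qubit through A then B or B then A, and because A and B do not commute, measuring the control in the +/− basis yields order-dependent statistics.

```python
# Minimal quantum-switch simulation. Measuring the control qubit in the
# +/- basis isolates the symmetrized (A@B + B@A)/2 and antisymmetrized
# (A@B - B@A)/2 actions on the target, so the outcome statistics depend
# on both causal orders at once.

import numpy as np

def rot(theta):
    """Polarization rotation by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

A = rot(np.pi / 8)                       # rotate polarization by 22.5 degrees
B = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection, so A @ B != B @ A

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

psi_in = np.kron(plus, ket0)             # control in |+>, target horizontal

# Switch unitary: |0><0| (x) (B @ A)  +  |1><1| (x) (A @ B)
switch = (np.kron(np.outer(ket0, ket0), B @ A)
          + np.kron(np.outer(ket1, ket1), A @ B))
psi_out = switch @ psi_in

for name, ctrl in (("+", plus), ("-", minus)):
    proj = np.kron(np.outer(ctrl, ctrl), np.eye(2))
    prob = np.linalg.norm(proj @ psi_out) ** 2
    print(f"P(control = {name}) = {prob:.3f}")   # 0.854 and 0.146 here
```

If A and B commuted, the control would always come out “+”; the non-zero “−” probability is the signature of the superposed orderings.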

Brukner and two collaborators devised ways to quantitatively test whether these photons are really experiencing an indefinite causal order. In 2012, the researchers calculated a ceiling on how statistically correlated the polarization results can be with the rotations performed at A and B if the rotations occurred in a fixed causal order. If the value exceeds this “causal inequality,” then causal influences must go in both directions; causal order must have been indefinite.

“The idea of the causal inequality was really cool, and a lot of people decided to jump in the field,” said Rubino, who jumped in herself in 2015. She and her colleagues produced a landmark demonstration of the quantum switch in 2017 that worked roughly like the one above. Using a simpler test devised by Brukner and company, they confirmed that causal order was indefinite.

Attention turned to what could be done with the indefiniteness. Chiribella and co-authors argued that far more information could be transmitted over noisy channels when sent through the channels in an indefinite order. Experimentalists at the University of Queensland and elsewhere have since demonstrated this communication advantage.

In “the most beautiful experiment” done so far, according to Rubino, Jian-Wei Pan at the University of Science and Technology of China in Hefei demonstrated in 2019 that two parties can compare long strings of bits exponentially more efficiently when transmitting bits in both directions at once rather than in a fixed causal order — an advantage proposed by Brukner and co-authors in 2016. A different group in Hefei reported in January that, whereas engines normally need a hot and cold reservoir to work, with a quantum switch they could extract heat from reservoirs of equal temperature — a surprising use suggested a year ago by Oxford theorists.

It’s not immediately clear how to extend this experimental work to investigate quantum gravity. All the papers about the quantum switch nod at the link between quantum gravity and indefinite causality. But superpositions of massive objects — which stretch the space-time metric in multiple ways at once — collapse so quickly that no one has thought of how to detect the resulting fuzziness of causal relationships. So instead researchers turn to thought experiments.

Quantum Equivalence Principle

You’ll recall Alice and Bob. Imagine they’re stationed in separate laboratory spaceships near Earth. Bizarrely (but not impossibly), Earth is in a quantum superposition of two different places. You don’t need a whole planet to be in superposition for gravity to create causal indefiniteness: Even a single atom, when it’s in a superposition of two places, defines the metric in two ways simultaneously. But when you’re talking about what’s measurable in principle, you might as well go big.

In one branch of the superposition, Earth is nearer to Alice’s lab, and so her clock ticks slower. In the other branch, Earth is nearer to Bob, so his clock ticks slower. When Alice and Bob communicate, causal order gets all switched up.

In a key paper in 2019, Magdalena Zych, Brukner and collaborators proved that this situation would allow Alice and Bob to achieve indefinite causal order.

Illustration: Samuel Velasco/Quanta Magazine

First, a photon is split by a beam splitter into two possible paths and heads to both Alice’s lab and Bob’s. The setup is such that in the branch of the superposition where Alice’s clock ticks slower, the photon reaches Bob’s lab first; he rotates its polarization and sends the photon to Alice, who then performs her own rotation and sends the photon on to a third person, Charlie, in a faraway third lab. In the other branch of the superposition, the photon reaches Alice first and goes from her to Bob to Charlie. Just as in the example of the quantum switch, this “gravitational quantum switch” creates a superposition of A then B and B then A.

Charlie then brings the photon’s two paths back together and measures its polarization. Alice, Bob and Charlie run the experiment over and over. They find that their rotations and measurement outcomes are so statistically correlated that the rotations must have happened in an indefinite causal order.

To analyze causal indefiniteness in scenarios such as this, the Vienna researchers developed a way of encoding probabilities for observing different outcomes in different locations without reference to a fixed background time, as in Hardy’s causaloid approach. Their “process matrix formalism” can handle probabilities that causally influence each other in neither direction, one direction or both at once. “You can very well define conditions under which you can preserve these probabilities but didn’t assume that probabilities are before or after,” Brukner said.

Meanwhile, Hardy achieved his goal of formulating general relativity in the causaloid framework in 2016. Essentially, he found a fancier way of sorting his stacks of cards. He showed that you can map any measurements that you might make onto an abstract space devoid of causal assumptions. You might, for instance, inspect a small patch of the universe and measure everything you can about it — the density of oxygen, the amount of dark energy, and so on. You can then plot the measurements of this patch as a single point in an abstract high-dimensional space, one that has a different axis for each measurable quantity. Repeat for as many patches of space-time as you please.

After you’ve mapped space-time’s contents in this other space, patterns and surfaces begin to appear. The plot retains all the correlations that existed in space-time, but now without any sense of background time, or cause and effect. You can then use the causaloid framework to build up expressions for probabilities pertaining to larger and larger regions of the plot.

This common framework for both quantum mechanics and general relativity may provide a language for quantum gravity, and Hardy is busy mulling next steps.

Časlav Brukner of the University of Vienna, Magdalena Zych of the University of Queensland and other theorists have developed new mathematical frameworks for analyzing situations in which gravity renders causality indefinite. IQOQI and Mateusz Kotyrba; Courtesy of Magdalena Zych

There’s one concept that both he and the Vienna theorists have recently identified as a potential bridge to future, post-causal physics: a “quantum equivalence principle” analogous to the equivalence principle that, a century ago, showed Einstein the way to general relativity. One way of stating Einstein’s equivalence principle is that even though space-time can wildly stretch and curve, local patches of it (such as the inside of a falling elevator) look flat and classical, and Newtonian physics applies. “The equivalence principle allowed you to find the old physics inside the new physics,” Hardy said. “That gave Einstein just enough.”

Here’s the analogous principle: Quantum gravity allows the space-time metric to curve wildly in multiple ways simultaneously. This means any event will have multiple mismatched light cones — in short, causality is indefinite.

But Hardy notes that if you look at different space-time metrics, you can find a way of identifying points so that the light cones match up, at least locally. Just as space-time looks Newtonian inside Einstein’s elevator, these points define a reference frame where causality looks definite. “Points that were in the future of one light cone are also in the future of the other ones, so their local causal structure agrees.”

Hardy’s quantum equivalence principle asserts that there will always be such points. “It’s a way to deal with the wildness of indefinite causal structure,” he said.

Einstein came up with his equivalence principle in 1907 and took until 1915 to work out general relativity; Hardy hopes to chart a similar course in his pursuit of quantum gravity, though he notes, “I’m not as smart as Einstein, nor as young.”

Brukner, Flaminia Giacomini and others are pursuing similar ideas about quantum reference frames and equivalence principles.

It’s not yet clear how these researchers’ operational approach to quantum gravity intersects efforts like string theory and loop quantum gravity, which more directly aim to quantize gravity into discrete units (invisibly small “strings” or “loops” in those two cases). Brukner notes that these latter approaches “do not have immediate operational implications.” Like Hardy, he prefers to “try to clarify concepts involved and try to connect them to things that we can, in principle, observe.”

But ultimately quantum gravity must be specific — answering not just the question “What can we observe?” but also “What exists?” That is, what are the quantum building blocks of gravity, space and time?

According to Zych, research on indefinite causal structures is helping with the search for the full theory of quantum gravity in two ways: by providing a mathematical framework, and by informing the development of specific theories, since the reasoning should hold in any approach to the quantization of gravity. She said, “We are building intuition about the phenomena associated with quantum features of temporal and causal order, which will help to get our heads around these issues within a complete quantum gravity theory.”

Hardy is currently participating in a large research collaboration called QISS, which aims to cross-fertilize researchers like him, whose backgrounds are in quantum foundations and quantum information, with the quantum gravity community. Carlo Rovelli, a well-known loop quantum gravity theorist at Aix-Marseille University in France who leads QISS, called Hardy “an accurate thinker” who approaches issues “from a different perspective and with a different language” that Rovelli finds useful.

Hardy thinks his causaloid framework might be compatible with loops or strings, potentially suggesting how to formulate those theories in a way that doesn’t envision objects evolving against a fixed background time. “We’re trying to find different routes up the mountain,” he said. He suspects that the surest route to quantum gravity is the one that “has at its heart this idea of indefinite causal structure.”

Do Not Think For Yourselves, You Are Not Educated, Trust Scientists – Posted March 20th 2021

5 ways to spot if someone is trying to mislead you when it comes to science Posted March 20th 2021

by Hassan Vally, The Conversation 

It’s not a new thing for people to try to mislead you when it comes to science. But in the age of COVID-19—when we’re being bombarded with even more information than usual, when there’s increased uncertainty, and when we may be feeling overwhelmed and fearful—we’re perhaps even more susceptible to being deceived.

The challenge is to be able to identify when this may be happening. Sometimes it’s easy, as often even the most basic fact-checking and logic can be potent weapons against misinformation.

But often, it can be hard. People who are trying either to make you believe something that isn’t true, or to doubt something that is true, use a variety of strategies that can manipulate you very effectively.

Here are five to look out for.

1. The ‘us versus them’ narrative

This is one of the most common tactics used to mislead. It taps into our intrinsic distrust of authority and paints those with evidence-based views as part of some other group that’s not to be trusted. This other group—whether people or an institution—is supposedly working together against the common good, and may even want to harm us.

Recently we’ve seen federal MP Craig Kelly use this device. He has repeatedly referred to “big government” being behind a conspiracy to withhold hydroxychloroquine and ivermectin from the public (these drugs currently don’t have proven benefits against COVID-19). Kelly is suggesting there are forces working to prevent doctors from prescribing these drugs to treat COVID-19, and that he’s on our side.

His assertion is designed to distract from, or completely dismiss, what the scientific evidence is telling us. It’s targeted at people who feel disenfranchised and are predisposed to believing these types of claims.

Although this is one of the least sophisticated strategies used to mislead, and easy to spot, it can be very effective.

2. ‘I’m not a scientist, but…’

People tend to use the phrase “I’m not a scientist, but…” as a sort of universal disclaimer which they feel allows them to say whatever they want, regardless of scientific accuracy.

A phrase with similar intent is “I know what the science says, but I’m keeping an open mind.” People who want to disregard what the evidence is showing, but at the same time want to appear reasonable and credible, often use these phrases.

Politicians are among the most frequent offenders. On an episode of Q&A in 2020, Senator Jim Molan indicated he was not “relying on the evidence” to form his conclusions about whether climate change was caused by humans. He was keeping an open mind, he said.

If you hear any statements that sound faintly like these ones, particularly from a politician, alarm bells should ring very loudly.

3. Reference to ‘the science not being settled’

This is perhaps one of the most powerful strategies used to mislead.

There are of course times when the science is not settled, and when this is the case, scientists openly argue different points of view based on the evidence available.

Currently, experts are having an important debate around the role of tiny airborne particles called aerosols in the transmission of COVID-19. As for most things COVID-related, we’re working with limited and uncertain evidence, and the landscape is in constant flux. This type of debate is healthy.

But people might suggest the science isn’t settled in a mischievous way, to overstate the degree of uncertainty in an area. This strategy exploits the broader community’s limited understanding of the scientific process, including the fact all scientific findings are associated with a degree of uncertainty.

It’s well documented the tobacco industry designed the playbook on this to dismiss the evidence that smoking causes lung cancer.

The goal here is to raise doubt, create confusion and undermine the science. The power in this strategy lies in the fact it’s relatively easy to employ—particularly in today’s digital age.

4. Overly simplistic explanations

Oversimplifications and generalizations are where many conspiracy theories are born.

Science is often messy, complex and full of nuance. The truth can be much harder to explain, and can sometimes sound less plausible, than a simple but incorrect explanation.

We’re naturally drawn to simple explanations. And if they tap into our fears and exploit our cognitive biases—systematic errors we make when we interpret information—they can be extremely seductive.

Conspiracy theories, such as the one suggesting 5G is the cause of COVID-19, take off because they offer a simple explanation for something frightening and complex. This particular claim also feeds into concerns some people may have about new technologies.

As a general rule, when something appears too good or too bad to be true, it usually is.

5. Cherry-picking

People who use this approach treat scientific studies like individual chocolates in a gift box, where you can choose the ones you like and disregard the ones you don’t. Of course, this isn’t how science works.

It’s important to understand not all studies are equal; some provide much stronger evidence than others. You can’t just conveniently put all your faith in the studies that align with your views, and ignore those that don’t.

When scientists evaluate evidence, they go through a systematic process to assess the whole body of evidence. This is a crucial task that requires expertise.

The cherry-picking tactic can be hard to counter because unless you’re across all the evidence, you’re not likely to know whether the studies being presented have been deliberately curated to mislead you.

This is yet another reason to rely on the experts who understand the full breadth of the evidence and can interpret it sensibly.

The pandemic has highlighted the speed at which misinformation can travel, and how dangerous this can be. Regardless of how sensible or educated we think we are, we can all be taken in by people trying to mislead us.

The key to preventing this is to understand some of the common tactics used to mislead, so we’ll be better placed to spot them, and this may prompt us to seek out more reliable sources of information.

Comment The above article references science being messy, implying it is beyond the brain power and education of the average person – state education being mainly about brainwashing, PSHE etc.

What the writer is really getting at is why we must hunker down for more lockdown as they hype up the menace of a third wave. We are all too stupid to understand why lockdown has worked so well and that millions more would have died without it. We mustn’t suspect that it hasn’t worked at all, and that the virus has been hyped up while borders have remained open. Only a deranged conspiracy theorist would suspect a hidden agenda. Leave it all to the ‘scientists.’

We are told that vaccines are no reason to lift lockdown whatever the apparent damage to the masses’ lives, income and sanity. Hence vaccination has been suspended in the U.K., and Europe’s leadership (sic) are bickering over which vaccines can and cannot be used. R.J Cook

Covid Lockdown Goalposts have wheels and will keep moving.


Mathematician Proves Huge Result on ‘Dangerous’ Problem March 9th 2021

Mathematicians regard the Collatz conjecture as a quagmire and warn each other to stay away. But Terence Tao has made more progress than anyone in decades.

Quanta Magazine

  • Kevin Hartnett

Read when you’ve got time to spare.

Take a number, any number. If it’s even, halve it. If it’s odd, multiply by 3 and add 1. Repeat. Do all starting numbers lead to 1? Credit: Lovasoa / Wikimedia Commons / Public Domain.

Experienced mathematicians warn up-and-comers to stay away from the Collatz conjecture. It’s a siren song, they say: Fall under its trance and you may never do meaningful work again.

The Collatz conjecture is quite possibly the simplest unsolved problem in mathematics — which is exactly what makes it so treacherously alluring.

“This is a really dangerous problem. People become obsessed with it and it really is impossible,” said Jeffrey Lagarias, a mathematician at the University of Michigan and an expert on the Collatz conjecture.

In 2019, one of the top mathematicians in the world dared to confront the problem — and came away with one of the most significant results on the Collatz conjecture in decades.

On September 8, 2019, Terence Tao posted a proof showing that — at the very least — the Collatz conjecture is “almost” true for “almost” all numbers. While Tao’s result is not a full proof of the conjecture, it is a major advance on a problem that doesn’t give up its secrets easily.

“I wasn’t expecting to solve this problem completely,” said Tao, a mathematician at the University of California, Los Angeles. “But what I did was more than I expected.”

The Collatz Conundrum

Lothar Collatz likely posed the eponymous conjecture in the 1930s. The problem sounds like a party trick. Pick a number, any number. If it’s odd, multiply it by 3 and add 1. If it’s even, divide it by 2. Now you have a new number. Apply the same rules to the new number. The conjecture is about what happens as you keep repeating the process.

Intuition might suggest that the number you start with affects the number you end up with. Maybe some numbers eventually spiral all the way down to 1. Maybe others go marching off to infinity.

But Collatz predicted that’s not the case. He conjectured that if you start with a positive whole number and run this process long enough, all starting values will lead to 1. And once you hit 1, the rules of the Collatz conjecture confine you to a loop: 1, 4, 2, 1, 4, 2, 1, on and on forever.
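The rules take only a few lines of code. Here is a minimal Python sketch that runs the process and counts steps; note that it quietly assumes the conjecture, since if some starting value never reached 1 the loop would never end:

```python
def collatz_steps(n):
    """Number of Collatz steps from n down to 1 (assumes the conjecture holds)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

for start in (6, 7, 27):
    print(start, collatz_steps(start))
```

Small inputs can behave very differently: 27 takes 111 steps and climbs as high as 9,232 before finally collapsing to 1.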

Over the years, many problem solvers have been drawn to the beguiling simplicity of the Collatz conjecture, or the “3x + 1 problem,” as it’s also known. Mathematicians have tested quintillions of examples (that’s 18 zeros) without finding a single exception to Collatz’s prediction. You can even try a few examples yourself with any of the many “Collatz calculators” online. The internet is awash in unfounded amateur proofs that claim to have resolved the problem one way or the other.

Try it for yourself here. Credit: Lucy Reading-Ikkanda / Quanta Magazine.

“You just need to know multiplying by 3 and dividing by 2 and you can start playing around with it right away. It’s very tempting to try,” said Marc Chamberland, a mathematician at Grinnell College who produced a popular YouTube video on the problem called “The Simplest Impossible Problem.”

But legitimate proofs are rare.

In the 1970s, mathematicians showed that almost all Collatz sequences — the list of numbers you get as you repeat the process — eventually reach a number that’s smaller than where you started — weak evidence, but evidence nonetheless, that almost all Collatz sequences incline toward 1. From 1994 until Tao’s result in 2019, Ivan Korec held the record for showing just how much smaller these numbers get. Other results have similarly picked at the problem without coming close to addressing the core concern.

“We really don’t understand the Collatz question well at all, so there hasn’t been much significant work on it,” said Kannan Soundararajan, a mathematician at Stanford University who has worked on the conjecture.

The futility of these efforts has led many mathematicians to conclude that the conjecture is simply beyond the reach of current understanding — and that they’re better off spending their research time elsewhere.

“Collatz is a notoriously difficult problem — so much so that mathematicians tend to preface every discussion of it with a warning not to waste time working on it,” said Joshua Cooper of the University of South Carolina in an email.

An Unexpected Tip

Lagarias first became intrigued by the conjecture as a student at least 40 years ago. For decades he has served as the unofficial curator of all things Collatz. He’s amassed a library of papers related to the problem, and in 2010 he published some of them as a book titled The Ultimate Challenge: The 3x + 1 Problem.

“Now I know lots more about the problem, and I’d say it’s still impossible,” Lagarias said.

Tao doesn’t normally spend time on impossible problems. In 2006 he won the Fields Medal, math’s highest honor, and he is widely regarded as one of the top mathematicians of his generation. He’s used to solving problems, not chasing pipe dreams.

“It’s actually an occupational hazard when you’re a mathematician,” he said. “You could get obsessed with these big famous problems that are way beyond anyone’s ability to touch, and you can waste a lot of time.”

But Tao doesn’t entirely resist the great temptations of his field. Every year, he tries his luck for a day or two on one of math’s famous unsolved problems. Over the years, he’s made a few attempts at solving the Collatz conjecture, to no avail.

Then, in August 2019, an anonymous reader left a comment on Tao’s blog. The commenter suggested trying to solve the Collatz conjecture for “almost all” numbers, rather than trying to solve it completely.

“I didn’t reply, but it did get me thinking about the problem again,” Tao said.

And what he realized was that the Collatz conjecture was similar, in a way, to the types of equations — called partial differential equations — that have featured in some of the most significant results of his career.

Inputs and Outputs

Partial differential equations, or PDEs, can be used to model many of the most fundamental physical processes in the universe, like the evolution of a fluid or the ripple of gravity through space-time. They arise in situations where the future position of a system — like the state of a pond five seconds after you’ve thrown a rock into it — depends on contributions from two or more factors, like the water’s viscosity and velocity.

Complicated PDEs wouldn’t seem to have much to do with a simple question about arithmetic like the Collatz conjecture.

But Tao realized there was something similar about them. With a PDE, you plug in some values, get other values out, and repeat the process — all to understand the future state of the system. For any given PDE, mathematicians want to know if some starting values eventually lead to infinite values as an output or whether an equation always yields finite values, regardless of the values you start with.
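That loop (plug in values, get values out, repeat) is concrete in even the simplest numerical PDE solver. The toy sketch below takes explicit finite-difference steps of the 1D heat equation; it is purely illustrative and not from Tao's paper, but its stability threshold gives a small taste of the finite-versus-infinite question, since pushing the step ratio past it makes the iterates blow up (numerically, in this case):

```python
import numpy as np

def heat_step(u, r=0.4):
    """One explicit step of u_t = u_xx on a periodic grid; r = dt/dx^2."""
    return u + r * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

u = np.zeros(50)
u[25] = 1.0                      # the rock thrown into the pond
for _ in range(1000):
    u = heat_step(u)             # plug in values, get values out, repeat
print(u.max())                   # finite for r <= 0.5; try r = 0.6 and it explodes
```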

For Tao, this goal had the same flavor as investigating whether you always eventually get the same number (1) from the Collatz process no matter what number you feed in. As a result, he recognized that techniques for studying PDEs could apply to the Collatz conjecture.

One particularly useful technique involves a statistical way of studying the long-term behavior of a small number of starting values (like a small number of initial configurations of the water in a pond) and extrapolating from there to the long-term behavior of all possible starting configurations of the pond.

In the context of the Collatz conjecture, imagine starting with a large sample of numbers. Your goal is to study how these numbers behave when you apply the Collatz process. If close to 100% of the numbers in the sample end up either exactly at 1 or very close to 1, you might conclude that almost all numbers behave the same way.

But for the conclusion to be valid, you’d have to construct your sample very carefully. The challenge is akin to generating a sample of voters in a presidential poll. To extrapolate accurately from the poll to the population as a whole, you’d need to weight the sample with the correct proportion of Republicans and Democrats, women and men, and so on.

Numbers have their own “demographic” characteristics. There are odd and even numbers, of course, and numbers that are multiples of 3, and numbers that differ from each other in even subtler ways. When you construct a sample of numbers, you can weight it toward containing certain kinds of numbers and not others — and the better you choose your weights, the more accurately you’ll be able to draw conclusions about numbers as a whole.

Weighty Choices

Tao’s challenge was much harder than just figuring out how to create an initial sample of numbers with the proper weights. At each step in the Collatz process, the numbers you’re working with change. One obvious change is that almost all numbers in the sample get smaller.

Another, maybe less obvious change is that the numbers might start to clump together. For example, you could begin with a nice, uniform distribution like the numbers from 1 to 1 million. But five Collatz iterations later, the numbers are likely to be concentrated in a few small intervals on the number line. In other words, you may start out with a good sample, but five steps later it’s hopelessly skewed.

“Ordinarily one would expect the distribution after the iteration to be completely different from the one you started with,” said Tao in an email.

Tao’s key insight was figuring out how to choose a sample of numbers that largely retains its original weights throughout the Collatz process.

For example, Tao’s starting sample is weighted to contain no multiples of 3, since the Collatz process quickly weeds out multiples of 3 anyway. Some of the other weights Tao came up with are more complicated. He weights his starting sample toward numbers that have a remainder of 1 after being divided by 3, and away from numbers that have a remainder of 2 after being divided by 3.
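The remark about multiples of 3 is easy to verify: 3n + 1 is never divisible by 3, and halving cannot introduce a factor of 3, so after its first odd step a trajectory never returns to a multiple of 3. A quick empirical sketch:

```python
def collatz_step(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Track what fraction of a large sample is divisible by 3, step by step.
sample = list(range(1, 100_001))
for step_no in range(5):
    frac = sum(n % 3 == 0 for n in sample) / len(sample)
    print(step_no, round(frac, 4))
    sample = [collatz_step(n) for n in sample]
```

The fraction starts near 1/3 and roughly halves at each step (only the even multiples of 3 survive a step, by halving to another multiple of 3), dwindling to nothing within a few iterations.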

The result is that the sample Tao starts with maintains its character even as the Collatz process proceeds.

“He found some way to continue this process further, so that after some number of steps you still know what’s going on,” Soundararajan said. “When I first saw the paper, I was very excited and thought that it was very striking.”

Tao used this weighting technique to prove that almost all Collatz starting values — 99% or more — eventually reach a value that is quite close to 1. This allowed him to draw conclusions along these lines: 99% of starting values greater than 1 quadrillion eventually reach a value below 200.
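The statement is statistical, but it is easy to probe empirically. This sketch samples random starting values above one quadrillion and checks how many reach a value below 200 (the step cap guards against the never-yet-observed possibility of a divergent trajectory):

```python
import random

def drops_below(n, bound=200, max_steps=1_000_000):
    """Iterate the Collatz map from n until it dips below bound (or we give up)."""
    steps = 0
    while n >= bound and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n < bound

trials = 10_000
hits = sum(drops_below(random.randrange(10**15, 10**16)) for _ in range(trials))
print(hits / trials)   # prints 1.0 in practice, consistent with Tao's bound
```

In practice every sampled trajectory falls below 200 within a few thousand steps; the hard part, as the rest of the article explains, is proving anything about the possible exceptions.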

It is arguably the strongest result in the long history of the conjecture.

“It’s a great advance in our knowledge of what’s happening on this problem,” said Lagarias. “It’s certainly the best result in a very long time.”

Tao’s method is almost certainly incapable of getting all the way to a full proof of the Collatz conjecture. The reason is that his starting sample still skews a little after each step in the process. The skewing is minimal as long as the sample still contains many different values that are far from 1. But as the Collatz process continues and the numbers in the sample draw closer to 1, the small skewing effect becomes more and more pronounced — the same way that a slight miscalculation in a poll doesn’t matter much when the sample size is large but has an outsize effect when the sample size is small.

Any proof of the full conjecture would likely depend on a different approach. As a result, Tao’s work is both a triumph and a warning to the Collatz curious: Just when you think you might have cornered the problem, it slips away.

“You can get as close as you want to the Collatz conjecture, but it’s still out of reach,” Tao said.

Kevin Hartnett is a senior writer at Quanta Magazine covering mathematics and computer science.

Scientists clone the first U.S. endangered species March 2nd 2021

A black-footed ferret was duplicated from the genes of an animal that died more than 30 years ago.

Elizabeth Ann, the first cloned black-footed ferret and first-ever cloned U.S. endangered species, at 50 days old on Jan. 29, 2021. (U.S. Fish and Wildlife Service via AP)

Feb. 19, 2021, 4:58 AM GMT / By The Associated Press

CHEYENNE, Wyo. — Scientists have cloned the first U.S. endangered species, a black-footed ferret duplicated from the genes of an animal that died over 30 years ago.

The slinky predator named Elizabeth Ann, born Dec. 10 and announced Thursday, is cute as a button. But watch out — unlike the domestic ferret foster mom who carried her into the world, she’s wild at heart.

“You might have been handling a black-footed ferret kit and then they try to take your finger off the next day,” U.S. Fish and Wildlife Service black-footed ferret recovery coordinator Pete Gober said Thursday. “She’s holding her own.”

Elizabeth Ann was born and is being raised at a Fish and Wildlife Service black-footed ferret breeding facility in Fort Collins, Colorado. She’s a genetic copy of a ferret named Willa who died in 1988 and whose remains were frozen in the early days of DNA technology.

Cloning eventually could bring back extinct species such as the passenger pigeon. For now, the technique holds promise for helping endangered species, including a Mongolian wild horse that was cloned and born last summer at a Texas facility.

“Biotechnology and genomic data can really make a difference on the ground with conservation efforts,” said Ben Novak, lead scientist with Revive & Restore, a biotechnology-focused conservation nonprofit that coordinated the ferret and horse clonings.

Black-footed ferrets are a type of weasel easily recognized by dark eye markings resembling a robber’s mask. Charismatic and nocturnal, they feed exclusively on prairie dogs while living in the midst of the rodents’ sometimes vast burrow colonies.

Even before cloning, black-footed ferrets were a conservation success story. They were thought extinct — victims of habitat loss as ranchers shot and poisoned off prairie dog colonies that made rangelands less suitable for cattle — until a ranch dog named Shep brought a dead one home in Wyoming in 1981.

Scientists gathered the remaining population for a captive breeding program that has released thousands of ferrets at dozens of sites in the western U.S., Canada and Mexico since the 1990s.

Lack of genetic diversity presents an ongoing risk. All ferrets reintroduced so far are the descendants of just seven closely related animals — genetic similarity that makes today’s ferrets potentially susceptible to intestinal parasites and diseases such as sylvatic plague.

Willa could have passed along her genes the usual way, too, but a male born to her named Cody “didn’t do his job” and her lineage died out, said Gober.

When Willa died, the Wyoming Game and Fish Department sent her tissues to a “frozen zoo” run by San Diego Zoo Global that maintains cells from more than 1,100 species and subspecies worldwide. Eventually scientists may be able to modify those genes to help cloned animals survive.

“With these cloning techniques, you can basically freeze time and regenerate those cells,” Gober said. “We’re far from it now as far as tinkering with the genome to confer any genetic resistance, but that’s a possibility in the future.”

Cloning makes a new plant or animal by copying the genes of an existing animal. Texas-based Viagen, a company that clones pet cats for $35,000 and dogs for $50,000, cloned a Przewalski’s horse, a wild horse species from Mongolia; the foal was born last summer.

Similar to the black-footed ferret, the 2,000 or so surviving Przewalski’s horses are descendants of just a dozen animals.

Viagen also cloned Willa through coordination by Revive & Restore, a wildlife conservation organization focused on biotechnology. Besides cloning, the nonprofit in Sausalito, California, promotes genetic research into imperiled life forms ranging from sea stars to jaguars.

“How can we actually apply some of those advances in science for conservation? Because conservation needs more tools in the toolbox. That’s our whole motivation. Cloning is just one of the tools,” said Revive & Restore co-founder and executive director Ryan Phelan.

Elizabeth Ann was born to a tame domestic ferret, which avoided putting a rare black-footed ferret at risk. Two unrelated domestic ferrets also were born by cesarean section; a second clone didn’t survive.

Elizabeth Ann and future clones of Willa will form a new line of black-footed ferrets that will remain in Fort Collins for study. There currently are no plans to release them into the wild, said Gober.

Novak, the lead scientist at Revive & Restore, calls himself the group’s “passenger pigeon guy” for his work to someday bring back the once common bird that has been extinct for over a century. Cloning birds is considered far more challenging than cloning mammals because of their eggs, yet the group’s projects even include trying to bring back a woolly mammoth, a creature extinct for thousands of years.

The seven-year effort to clone a black-footed ferret was far less theoretical, he said, and shows how biotechnology can help conservation now. In December, Novak loaded up a camper and drove to Fort Collins with his family to see the results firsthand.

“I absolutely had to see our beautiful clone in person,” Novak said. “There’s just nothing more incredible than that.”

In Science Fiction, We Are Never Home Posted February 25th 2021

Where technology leads to exile and yearning.

By Steve Erickson February 10, 2021

This essay first appeared in our “Home” issue way back in 2013. But somehow it feels so timely today.

Halfway through director Alfonso Cuarón’s Gravity, Sandra Bullock suffers the most cosmic case of homesick blues since Keir Dullea was hurled toward the infinite in 2001: A Space Odyssey nearly half a century ago. For Bullock, home is (as it was for Dullea) the Earth, looming below so huge it would seem she couldn’t miss it, if she could somehow just fall from her shattered spacecraft. She cares about nothing more than getting back to where she came from, even as 2001’s Dullea is in flight, accepting his exile and even embracing it.

Science fiction has long been distinguished by these dual impulses—leaving home and returning—when it’s not marked by the way that home leaves us, or deceives us when it’s no longer the place we recognize once we’re back. As a genre, science fiction has become the cultural expression of how progress and technology by definition distance us from what we recognize, turning the home that once was a sanctuary into a prison when we feel confined and then a destination when we’re lost. Earth-bound, claustrophobic, curbed by our dimensional limits, we’re compelled by the imperative of exploration; far-flung, rootless, untethered to reference points, we covet the familiar where we believe we’re safe, even if the familiar never really was all that safe.

Through intersecting themes of estrangement, memory, and exodus, science fiction, more than any art form, means to destabilize surroundings that are already precarious and constantly refine the nature of reality and what it is to be human. In the same way that science answers what’s previously unfathomable, science fiction confronts the established and traditional with which home is synonymous, challenging us to speculate on what might have been or what could be while where we live, which is the present, is transformed into the terrain of the future, where we’ve never been. The science fiction imagination is a nomad on an endless expedition and the result is a literature of the psyche, torn between the heart that yearns for home and the mind too restless to stay still.

Estrangement

In the first science fiction story, whose title novelist Arthur C. Clarke and director Stanley Kubrick would borrow for their own odyssey three millennia later, Ulysses departs home for glory in the Trojan wars, then overcomes one fantastic obstacle after another to arrive back in an Ithaca that’s no longer the place frozen in memory. In Gulliver’s Travels, Jonathan Swift’s hero impatiently sheds his known turf for the high seas and Lilliput only to escape to the home he thought he was escaping in the first place, where he finds himself more estranged than when he went. In The Wonderful Wizard of Oz, both the L. Frank Baum novel and the classic 1939 movie, the pull to home is so strong that Dorothy would happily give up a spectacular new residence of Technicolor marvels for the black-and-white, cyclone-blasted Dust Bowl and the preternatural inkling that the Kansan flatlands are where she belongs. All of these stories figure that home is where the heart is, until the heart finds its own coordinates in doubt. In the nuclear age home is that much less reliable, with the last half of the 20th century belying much more aggressively the security that always was an illusion—assuming home isn’t the storm cellar or bomb shelter from which we emerge to find the house gone once the squall passes.

In the 1950s, with atomic tests going off over the nearest ridge—dry runs for Armageddon—the now recognized giant of late-20th century science fiction, author Philip K. Dick, wrote some of his strangest books before succumbing to the publishing pressures of pure genre. These were surveys of the newly surreal suburbs that were a phenomenon of the post-War years. As conveyed by Dick’s Confessions of a Crap Artist and The Man Whose Teeth Were All Exactly Alike, suburbia fetishized the idea of home as it had lodged itself in the American dream—the manifestation of refuge and order and success where everyone went quietly nuts in the view of Dick, who was a little nuts himself: “We don’t feel at home anywhere we go,” he declares in Crap Artist’s opening lines. The traveling salesman of In Milton Lumky Territory returns home to fall in love with an older woman who, he comes to realize, was the second-grade teacher he detested; when she reduces his adult life to rubble much as she did his childhood, he retreats into a long reverie that ends the book where he and the ex-teacher find domestic happiness.

This subversive evocation of home—as an ideal that enthralls us in defiance of its reality, that continues to lure us even when every impression of home has been betrayed—later informed much of Dick’s more straightforward science fiction work of the ‘60s and ‘70s such as Dr. Bloodmoney, in which an earth consumed by nuclear conflagration leaves forsaken in space an astronaut who has only his survivor’s guilt to keep him company, and Flow My Tears, the Policeman Said, where the eponymous cop wandering the skies like a Flying Dutchman has too irrevocably fouled his own nest to resume his place there.

In the 1950s, with atomic tests going off over the nearest ridge, Philip K. Dick wrote, “We don’t feel at home anywhere we go.”

If the ’50s didn’t invent alienation (a pretty fancy word in those days), the decade identified it and then the ’60s and ’70s ran with it. In J. G. Ballard’s novels Concrete Island and The Drowned World, home becomes more contrived and artificial as it becomes less credible and steadfast. The eponymous structure of Ballard’s High Rise is home at its most hermetic, offering everything its inhabitants need or want, but as the building’s boundaries become a quarantine, the social contract within breaks down, “the psychology of high-rise life…exposed with damning results.” In Ballard’s work we’re left to our own psychological construct of home which is less organic and grounded: “Living in high rises required a special type of behavior, one that was acquiescent, restrained, even perhaps slightly mad.” In Samuel Delany’s mammoth Dhalgren, the city as a vast metropolitan home literally is a thing of words, as solid or ephemeral as words can be, a notion that 20 years later Mark Z. Danielewski would explode in his textual fantasia House of Leaves, where the Dr. Who-like abode that’s bigger on the inside than on the outside is fashioned from the pages of a book itself. By the ’80s the home writ large as a city no longer is the sum of parts: The Los Angeles of Ridley Scott’s Blade Runner—mutt urbanopolis with not a single natural element other than all its conflicting landscapes of sea and desert at war with each other—cancels out and replaces itself so rapaciously as to raise questions about the humanity that built it.

Memory

All great cities take on an identity of their own so overwhelming as to become oppressive. But if “home” and “human” sound like variations on each other—“human” comes from the same Latin word as “earth”—the hometown no longer feels like home when it becomes something other than human. What it means to be human becomes gauged not merely by what’s experienced but something more acute: In Dick’s novels (including the one that was the basis of Blade Runner) that measure is what is remembered. In his dying moments, Blade Runner’s murderous android played by Rutger Hauer lapses into a rhapsodic recollection that may or may not actually be his own, but by which he achieves humanity just by virtue of how much it moves him. Home is another name for the most profound of memories.

As the deprivation chamber of their craft severs their deepest bonds, the travelers of Stanislaw Lem’s novel Solaris find home has followed them in the form of memories they can bear to neither conjure nor relinquish, something progressively more metaphysical in the Andrei Tarkovsky screen adaptation and the Steven Soderbergh remake. “Where are we?” a startled cosmonaut asks the apparition of his dead fiancée, and when she answers, “At home,” he says, “Where’s that?” The banished begin to assume responsibility for their banishment; they’re afflicted by remorse for, first, having departed home so fecklessly, and second for whatever readjustments they begin to make when they can’t complete the round trip, even if the failure isn’t theirs. Solaris’ cosmonaut is pursued by the anger and culpability that attended the fiancée’s suicide back on the Earth he shared with her. Pitched from that home, he becomes home unto himself—home at the speed of light but slower than the speed of grief.

Battlestar Galactica’s paradox is that the nearer its travelers draw to their new home, the less familiar everything becomes.

With every definition of home in shambles, it would be a wonder the homeless still feel so lost if it weren’t for the humanity at stake. Ulysses was beset by shame and self-reproach over not only his abandonment of home but home’s abandonment of him: This is the prototype of the science-fiction character who hovers suspended in an impossible state of deportation. For Bullock in Gravity, all those miles above Earth, reconnecting with home means surmounting the death of a daughter. In David Bowie’s song “Space Oddity,” Major Tom is doomed to the sad revelations of one lonely sunrise after sunset, until 11 years later he returns in the sequel “Ashes to Ashes” a junkie strung out over the heavens, corrupted and less innocent for his survival. The protagonist of Robert Heinlein’s novel Stranger in a Strange Land (for which the counterculture ’60s had a disproportionate regard) is an earthling come to Earth after being born and raised on Mars, his sense of home so discombobulated that his only recourse, apparently, is to become a messiah. Walter Tevis’ The Man Who Fell to Earth finds an extraterrestrial (played by Bowie in the Nicolas Roeg film) tumbling to our planet searching for a new world where he might bring his family, who are dying of a drought back on the old world; when the alien winds up marooned, with everything about the new residence an excruciating reminder that his family has been deserted to die, he mulls his dwindling options. “Old pathways,” he calls them, “to ancient homes and new deaths.”

Exodus

More than in any other magnum opus, the themes of estrangement, memory, and exodus converge in last decade’s television series, Battlestar Galactica. The crew and inhabitants of several starships led by the Galactica are the sole human beings stranded by the obliteration of their world at the hands of an artificial race that humans created to be slaves. When the Galactica sets off in search of a new home, the promised land is a place called “Earth,” so much a part of lore that no one can be certain it’s real, and the quest becomes all consuming—disrupting loyalties, rending relationships, testing democratic principles, calling into question faith and, most fundamentally, leaving people to suspect that not only their neighbors aren’t human but they aren’t either. Some, consequently, take their own lives.

As in the fiction of Dick, Lem, and Ballard, in Battlestar Galactica science and technology constantly mutate our perception of reality, home, and ourselves. Reading Ballard’s science-fiction novels, the British novelist Will Self writes that “we feel simultaneously several different forms of the Unheimliche; that uncanny sensation which Freud (a major influence on Ballard) defined as drawing its potency from its very closeness to what is familiar—or, literally, ‘homelike.’ ” Galactica’s paradox is that the nearer its travelers draw to their new home, the less familiar everything becomes, until their own bodies are least familiar of all; by the time they find home at the series’ conclusion, they’re the strangest of strangers in the strangest of lands. Galactica lays waste to not just spatial or even temporal latitudes but those intuitive: “There must be some kind of way out of here,” characters quote to each other, by the end of season three, from a well known Bob Dylan song that won’t be written for thousands of years. Over the course of a newly redefined eternity, Earth finally looms before them so huge it would seem they couldn’t miss it, if they could somehow just fall from their shattered selves.

Steve Erickson is the author of 10 novels including Shadowbahn, Zeroville, These Dreams of You and Our Ecstatic Days—as well as two works of literary non-fiction about politics and culture—that have been translated into 11 languages. Erickson is the recipient of a Guggenheim Fellowship, the American Academy of Arts and Letters prize and the Lannan Lifetime Achievement Award.

A Brain Built From Atomic Switches Can Learn

A tiny self-organized mesh full of artificial synapses recalls its experiences and can solve simple problems. Its inventors hope it points the way to devices that match the brain’s energy-efficient computing prowess. Posted February 24th 2021

Quanta Magazine

  • Andreas von Bubnoff

Read when you’ve got time to spare.

Credit: Eric Nyquist for Quanta Magazine.

Brains, beyond their signature achievements in thinking and problem solving, are paragons of energy efficiency. The human brain’s power consumption resembles that of a 20-watt incandescent lightbulb. In contrast, one of the world’s largest and fastest supercomputers, the K computer in Kobe, Japan, consumes as much as 9.89 megawatts — an amount of power roughly equivalent to the usage of 10,000 households. Yet in 2013, even with that much power, it took the machine 40 minutes to simulate just a single second’s worth of 1 percent of human brain activity.

Now engineering researchers at the California NanoSystems Institute at the University of California, Los Angeles, are hoping to match some of the brain’s computational and energy efficiency with systems that mirror the brain’s structure. They are building a device, perhaps the first one, that is “inspired by the brain to generate the properties that enable the brain to do what it does,” according to Adam Stieg, a research scientist and associate director of the institute, who leads the project with Jim Gimzewski, a professor of chemistry at UCLA.

The device is a far cry from conventional computers, which are based on minute wires imprinted on silicon chips in highly ordered patterns. The current pilot version is a 2-millimeter-by-2-millimeter mesh of silver nanowires connected by artificial synapses. Unlike silicon circuitry, with its geometric precision, this device is messy, like “a highly interconnected plate of noodles,” Stieg said. And instead of being designed, the fine structure of the UCLA device essentially organized itself out of random chemical and electrical processes.

Yet in its complexity, this silver mesh network resembles the brain. The mesh boasts 1 billion artificial synapses per square centimeter, which is within a couple of orders of magnitude of the real thing. The network’s electrical activity also displays a property unique to complex systems like the brain: “criticality,” a state between order and chaos indicative of maximum efficiency.

Moreover, preliminary experiments suggest that this neuromorphic (brainlike) silver wire mesh has great functional potential. It can already perform simple learning and logic operations. It can clean the unwanted noise from received signals, a capability that’s important for voice recognition and similar tasks that challenge conventional computers. And its existence proves the principle that it might be possible one day to build devices that can compute with an energy efficiency close to that of the brain.

These advantages look especially appealing as the limits of miniaturization and efficiency for silicon microprocessors now loom. “Moore’s law is dead, transistors are no longer getting smaller, and [people] are going, ‘Oh, my God, what do we do now?’” said Alex Nugent, CEO of the Santa Fe-based neuromorphic computing company Knowm, who was not involved in the UCLA project. “I’m very excited about the idea, the direction of their work,” Nugent said. “Traditional computing platforms are a billion times less efficient.”

Switches That Act Like Synapses

Energy efficiency wasn’t Gimzewski’s motivation when he started the silver wire project 10 years ago. Rather, it was boredom. After using scanning tunneling microscopes to look at electronics at the atomic scale for 20 years, he said, “I was tired of perfection and precise control [and] got a little bored with reductionism.”

In 2007, he accepted an invitation to study single atomic switches developed by a group that Masakazu Aono led at the International Center for Materials Nanoarchitectonics in Tsukuba, Japan. The switches contain the same ingredient that turns a silver spoon black when it touches an egg: silver sulfide, sandwiched between solid metallic silver.

Applying voltage to the devices pushes positively charged silver ions out of the silver sulfide and toward the silver cathode layer, where they are reduced to metallic silver. Atom-wide filaments of silver grow, eventually closing the gap between the metallic silver sides. As a result, the switch is on and current can flow. Reversing the current flow has the opposite effect: The silver bridges shrink, and the switch turns off.

Soon after developing the switch, however, Aono’s group started to see irregular behavior. The more often the switch was used, the more easily it would turn on. If it went unused for a while, it would slowly turn off by itself. In effect, the switch remembered its history. Aono and his colleagues also found that the switches seemed to interact with each other, such that turning on one switch would sometimes inhibit or turn off others nearby.

Most of Aono’s group wanted to engineer these odd properties out of the switches. But Gimzewski and Stieg (who had just finished his doctorate in Gimzewski’s group) were reminded of synapses, the switches between nerve cells in the human brain, which also change their responses with experience and interact with each other. During one of their many visits to Japan, they had an idea. “We thought: Why don’t we try to embed them in a structure reminiscent of the cortex in a mammalian brain [and study that]?” Stieg said.

Building such an intricate structure was a challenge, but Stieg and Audrius Avizienis, who had just joined the group as a graduate student, developed a protocol to do it. By pouring silver nitrate onto tiny copper spheres, they could induce a network of microscopically thin intersecting silver wires to grow. They could then expose the mesh to sulfur gas to create a silver sulfide layer between the silver wires, as in the Aono team’s original atomic switch.

Self-Organized Criticality

When Gimzewski and Stieg told others about their project, almost nobody thought it would work. Some said the device would show one type of static activity and then sit there, Stieg recalled. Others guessed the opposite: “They said the switching would cascade and the whole thing would just burn out,” Gimzewski said.

But the device did not melt. Rather, as Gimzewski and Stieg observed through an infrared camera, the input current kept changing the paths it followed through the device — proof that activity in the network was not localized but rather distributed, as it is in the brain.

Then, one fall day in 2010, while Avizienis and his fellow graduate student Henry Sillin were increasing the input voltage to the device, they suddenly saw the output voltage start to fluctuate, seemingly at random, as if the mesh of wires had come alive. “We just sat and watched it, fascinated,” Sillin said.

They knew they were on to something. When Avizienis analyzed several days’ worth of monitoring data, he found that the network stayed at the same activity level for short periods more often than for long periods. They later found that smaller areas of activity were more common than larger ones.

“That was really jaw-dropping,” Avizienis said, describing it as “the first [time] we pulled a power law out of this.” Power laws describe mathematical relationships in which one variable changes as a power of the other. They apply to systems in which larger scale, longer events are much less common than smaller scale, shorter ones — but are also still far more common than one would expect from a chance distribution. Per Bak, the Danish physicist who died in 2002, first proposed power laws as hallmarks of all kinds of complex dynamical systems that can organize over large timescales and long distances. Power-law behavior, he said, indicates that a complex system operates at a dynamical sweet spot between order and chaos, a state of “criticality” in which all parts are interacting and connected for maximum efficiency.
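As a side note on the statistics: a standard way to estimate a power-law exponent from data such as cascade sizes is the maximum-likelihood estimator of Clauset, Shalizi and Newman, alpha_hat = 1 + n / sum(ln(x_i / x_min)). This is not necessarily the analysis the UCLA group used, just the textbook tool; a minimal sketch:

```python
import numpy as np

def power_law_alpha(data, x_min):
    """MLE exponent for a continuous power law p(x) ~ x**-alpha, x >= x_min
    (Clauset, Shalizi & Newman, 2009)."""
    x = np.asarray([v for v in data if v >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Sanity check on synthetic data drawn from a power law with alpha = 2.5,
# via inverse-CDF sampling: x = x_min * (1 - u) ** (-1 / (alpha - 1)).
rng = np.random.default_rng(0)
samples = (1 - rng.random(100_000)) ** (-1 / 1.5)
print(power_law_alpha(samples, x_min=1.0))   # close to 2.5
```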

As Bak predicted, power-law behavior has been observed in the human brain: In 2003, Dietmar Plenz, a neuroscientist with the National Institutes of Health, observed that groups of nerve cells activated others, which in turn activated others, often forming systemwide activation cascades. Plenz found that the sizes of these cascades fell along a power-law distribution, and that the brain was indeed operating in a way that maximized activity propagation without risking runaway activity.

The fact that the UCLA device also shows power-law behavior is a big deal, Plenz said, because it suggests that, as in the brain, a delicate balance between activation and inhibition keeps all of its parts interacting with one another. The activity doesn’t overwhelm the network, but it also doesn’t die out.

Gimzewski and Stieg later found an additional similarity between the silver network and the brain: Just as a sleeping human brain shows fewer short activation cascades than a brain that’s awake, brief activation states in the silver network become less common at lower energy inputs. In a way, then, reducing the energy input into the device can generate a state that resembles the sleeping state of the human brain.

Training and Reservoir Computing

But even if the silver wire network has brainlike properties, can it solve computing tasks? Preliminary experiments suggest the answer is yes, although the device is far from resembling a traditional computer.

For one thing, there is no software. Instead, the researchers exploit the fact that the network can distort an input signal in many different ways, depending on where the output is measured. This suggests possible uses for voice or image recognition, because the device should be able to clean a noisy input signal.

But it also suggests that the device could be used for a process called reservoir computing. Because one input could in principle generate many, perhaps millions, of different outputs (the “reservoir”), users can choose or combine outputs in such a way that the result is a desired computation of the inputs. For example, if you stimulate the device at two different places at the same time, chances are that one of the millions of different outputs will represent the sum of the two inputs.

The challenge is to find the right outputs and decode them and to find out how best to encode information so that the network can understand it. The way to do this is by training the device: by running a task hundreds or perhaps thousands of times, first with one type of input and then with another, and comparing which output best solves a task. “We don’t program the device but we select the best way to encode the information such that the [network behaves] in an interesting and useful manner,” Gimzewski said.
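In software this recipe is known as an echo-state network. The numpy sketch below is a conventional stand-in for the silver mesh, with made-up sizes and scalings: two input streams drive a fixed random reservoir that is never trained, and only a linear readout is fitted, by least squares, to pick the combination of internal signals that reproduces the target (here, the sum of the inputs):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 2000                            # reservoir size, time steps

W_in = 0.1 * rng.normal(size=(N, 2))        # fixed random input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale for stable echoes

u = rng.uniform(-1, 1, size=(T, 2))         # two input streams
y = u.sum(axis=1)                           # target: their sum

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])        # the reservoir itself is never trained
    states[t] = x

# Training means choosing the readout: the best linear combination of outputs.
w_out, *_ = np.linalg.lstsq(states, y, rcond=None)
rms = np.sqrt(np.mean((states @ w_out - y) ** 2))
print(rms)   # small: some combination of reservoir signals tracks the sum
```

The analogy to the device is loose (there, the reservoir is the physical tangle of nanowires, and training means choosing where and how to tap and stimulate it), but the division of labor is the same: fixed, messy dynamics plus a trained linear readout.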

In work that’s soon to be published, the researchers trained the wire network to execute simple logic operations. And in unpublished experiments, they trained the network to solve the equivalent of a simple memory task taught to lab rats called a T-maze test. In the test, a rat in a T-shaped maze is rewarded when it learns to make the correct turn in response to a light. With its own version of training, the network could make the correct response 94 percent of the time.

So far, these results aren’t much more than a proof of principle, Nugent said. “A little rat making a decision in a T-maze is nowhere close to what somebody in machine learning does to evaluate their systems” on a traditional computer, he said. He doubts the device will lead to a chip that does much that’s useful in the next few years.

But the potential, he emphasized, is huge. That’s because the network, like the brain, doesn’t separate processing and memory. Traditional computers need to shuttle information between different areas that handle the two functions. “All that extra communication adds up because it takes energy to charge wires,” Nugent said. With traditional machines, he said, “literally, you could run France on the electricity that it would take to simulate a full human brain at moderate resolution.” If devices like the silver wire network can eventually solve tasks as effectively as machine-learning algorithms running on traditional computers, they could do so using only one-billionth as much power. “As soon as they do that, they’re going to win in power efficiency, hands down,” Nugent said.

The UCLA findings also lend support to the view that under the right circumstances, intelligent systems can form by self-organization, without the need for any template or process to design them. The silver network “emerged spontaneously,” said Todd Hylton, the former manager of the Defense Advanced Research Projects Agency program that supported early stages of the project. “As energy flows through [it], it’s this big dance because every time one new structure forms, the energy doesn’t go somewhere else. People have built computer models of networks that achieve some critical state. But this one just sort of did it all by itself.”

Gimzewski believes that the silver wire network or devices like it might be better than traditional computers at making predictions about complex processes. Traditional computers model the world with equations that often only approximate complex phenomena. Neuromorphic atomic switch networks align their own innate structural complexity with that of the phenomenon they are modeling. They are also inherently fast — the state of the network can fluctuate at upward of tens of thousands of changes per second. “We are using a complex system to understand complex phenomena,” Gimzewski said.

In 2017, at a meeting of the American Chemical Society in San Francisco, Gimzewski, Stieg and their colleagues presented the results of an experiment in which they fed the device the first three years of a six-year data set of car traffic in Los Angeles, in the form of a series of pulses that indicated the number of cars passing by per hour. After hundreds of training runs, the output eventually predicted the statistical trend of the second half of the data set quite well, even though the device had never seen it.

Perhaps one day, Gimzewski jokes, he might be able to use the network to predict the stock market. “I’d like that,” he said, adding that this was why he was trying to get his students to study atomic switch networks — “before they catch me making a fortune.”

Andreas von Bubnoff is an award-winning science journalist and multimedia producer based in New York City.

Machines Are Inventing New Math We’ve Never Seen Posted February 20th 2021

Pushing the boundaries of math requires great minds to pose fascinating problems. What if a machine could do it? Now, scientists have created one that can. By Mordechai Rorvig, February 10, 2021

A good conjecture has something like a magnetic pull for the mind of a mathematician. At its best, a mathematical conjecture states something extremely profound in an extremely precise and succinct way, crying out for proof or disproof.

But posing a good conjecture is difficult. It must be deep enough to provoke curiosity and investigation, but not so obscure as to be impossible to glimpse in the first place. Many of the most famous problems in mathematics are conjectures rather than solutions, such as Fermat’s Last Theorem.

Now, a group of researchers from the Technion in Israel and Google in Tel Aviv has presented an automated conjecturing system that they call the Ramanujan Machine, named after the mathematician Srinivasa Ramanujan, who developed thousands of innovative formulas in number theory with almost no formal training. The software system has already conjectured several original and important formulas for universal constants that show up in mathematics. The work was published last week in Nature.

One of the formulas created by the Machine can be used to compute the value of a universal constant called Catalan’s constant more efficiently than any previously known, human-discovered formula. But the Ramanujan Machine is imagined not as a replacement for mathematicians so much as a feeding line for them.

As the researchers explain in the paper, the entire discipline of mathematics can be broken down into two processes, crudely speaking: conjecturing things and proving things. Given more conjectures, there is more grist for the mill of the mathematical mind, more for mathematicians to prove and explain.

That’s not to say their system is unambitious. As the researchers put it, the Ramanujan Machine is “trying to replace the mathematical intuition of great mathematicians and providing leads to further mathematical research.”

The researchers’ system is not, however, a universal mathematics machine. Rather, it conjectures formulas for how to compute the value of specific numbers called universal constants. The most famous of such constants, pi, gives the ratio between a circle’s circumference and diameter. Pi can be called universal because it shows up all across mathematics, and constant because it maintains the same value for every circle, no matter the size.

In particular, the researchers’ system produces conjectures for the value of universal constants (like pi), written in terms of elegant formulas called continued fractions. Continued fractions are essentially fractions, but more dizzying. The denominator in a continued fraction includes a sum of two terms, the second of which is itself a fraction, whose denominator itself contains a fraction, and so on, out to infinity.

Continued fractions have long compelled mathematicians with their peculiar combination of simplicity and profundity, with the total value of the fraction often equalling important constants. In addition to being “intrinsically fascinating” for their aesthetics, they are also useful for determining the fundamental properties of the constants, as Robert Dougherty-Bliss and Doron Zeilberger of Rutgers University wrote in a preprint from 2020.
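
To make that structure concrete, here is a short Python sketch that evaluates a continued fraction from the bottom up, using Euler’s classical expansion of e, whose partial quotients run 2, 1, 2, 1, 1, 4, 1, 1, 6, ... (a long-established identity, offered purely as an illustration, not one of the Ramanujan Machine’s discoveries):

import math

def e_partial_quotients(n):
    # Partial quotients of e's simple continued fraction: 2, 1, 2, 1, 1, 4, 1, 1, 6, ...
    q = [2]
    k = 2
    while len(q) < n:
        q += [1, k, 1]
        k += 2
    return q[:n]

def eval_continued_fraction(quotients):
    # Collapse a + 1/(b + 1/(c + ...)) from the innermost term outward.
    value = quotients[-1]
    for q in reversed(quotients[:-1]):
        value = q + 1 / value
    return value

print(eval_continued_fraction(e_partial_quotients(30)))  # 2.718281828459045...
print(math.e)                                            # for comparison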

The Ramanujan Machine is built off of two primary algorithms. These find continued fraction expressions that, with a high degree of confidence, seem to equal universal constants. That confidence is important, as otherwise, the conjectures would be easily discarded and provide little value. Advertisement

Each conjecture takes the form of an equation. The idea is that the quantity on the left side of the equals sign, a formula involving a universal constant, should be equal to the quantity on the right, a continued fraction. 

To get to these conjectures, the algorithm picks arbitrary universal constants for the left side and arbitrary continued fractions for the right, and then computes each side separately to a certain precision. If the two sides appear to align, the quantities are calculated to higher precision to make sure their alignment is not a coincidence of imprecision. Critically, formulas already exist to compute the value of universal constants like pi to an arbitrary precision, so that the only obstacle to verifying the sides match is computing time.
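
A toy version of that guess-and-check loop is sketched below, assuming the mpmath arbitrary-precision library and the textbook identity that the golden ratio (1 + √5)/2 equals the all-ones continued fraction 1 + 1/(1 + 1/(1 + ...)). A true identity keeps matching as the working precision is raised; a numerical coincidence falls apart.

from mpmath import mp, mpf, sqrt

def cf_all_ones(depth):
    # Evaluate 1 + 1/(1 + 1/(1 + ...)) truncated at the given depth.
    value = mpf(1)
    for _ in range(depth):
        value = 1 + 1 / value
    return value

for digits in (15, 50):             # escalate the precision, as the Machine does
    mp.dps = digits
    lhs = (1 + sqrt(5)) / 2         # the universal constant
    rhs = cf_all_ones(200)          # the conjectured continued fraction
    print(digits, "digits, difference:", abs(lhs - rhs))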

Prior to algorithms such as this, mathematicians would have needed existing mathematical knowledge and theorems to make such a conjecture. But mathematicians may be able to use the automated conjectures to reverse-engineer hidden theorems or more elegant results, as Dougherty-Bliss and Zeilberger have already shown.

But the researchers’ most notable discovery so far is not hidden knowledge, but a new conjecture of surprising importance. This conjecture allows for the computation of Catalan’s constant, a specialized universal constant whose value is needed for many mathematical problems. 

The continued fraction expression of the newly discovered conjecture allows for the most rapid computation yet of Catalan’s constant, beating out prior formulas, which took longer to crank through the computer. This appears to mark a new progress point for computing, somewhat like the first time computers beat the chess masters, but this time in the game of making conjectures.

The Brain’s ‘Background Noise’ May Be Meaningful After All

At a sleep research symposium in January 2020, Janna Lendner presented findings that hint at a way to look at people’s brain activity for signs of the boundary between wakefulness and unconsciousness. For patients who are comatose or under anesthesia, it can be all-important that physicians make that distinction correctly. Doing so is trickier than it might sound, however, because when someone is in the dreaming state of rapid-eye movement (REM) sleep, their brain produces the same familiar, smoothly oscillating brain waves as when they are awake.

Lendner argued, though, that the answer isn’t in the regular brain waves, but rather in an aspect of neural activity that scientists might normally ignore: the erratic background noise.

Some researchers seemed incredulous. “They said, ‘So, you’re telling me that there’s, like, information in the noise?’” said Lendner, an anesthesiology resident at the University Medical Center in Tübingen, Germany, who recently completed a postdoc at the University of California, Berkeley. “I said, ‘Yes. Someone’s noise is another one’s signal.’”

Lendner is one of a growing number of neuroscientists energized by the idea that noise in the brain’s electrical activity could hold new clues to its inner workings. What was once seen as the neurological equivalent of annoying television static may have profound implications for how scientists study the brain.

Skeptics used to tell the neuroscientist Bradley Voytek that there was nothing worth studying in these noisy features of brain activity. But his own studies of changes in electrical noise as people age, as well as previous literature on statistical trends in irregular brain activity, convinced him that they were missing something. So he spent years working on a way to help scientists rethink their data.

“It’s insufficient to go up in front of a group of scientists and say, ‘Hey, I think we’ve been doing things wrong,’” said Voytek, an associate professor of cognitive science and data science at the University of California, San Diego. “You’ve got to give them a new tool to do things” differently or better.

Photo of Bradley Voytek, an associate professor of cognitive science and data science at the University of California, San Diego.
Bradley Voytek, an associate professor of cognitive science and data science at the University of California, San Diego, helped to draw attention to the significance of aperiodic activity in the brain by developing software to study it. Jessica Voytek

In collaboration with neuroscientists at UC San Diego and Berkeley, Voytek developed software that isolates regular oscillations — like alpha waves, which are studied heavily in both sleeping and waking subjects — hiding in the aperiodic parts of brain activity. This gives neuroscientists a new tool to dissect both the regular waves and the aperiodic activity in order to disentangle their roles in behavior, cognition and disease.

The phenomenon that Voytek and other scientists are investigating in a variety of ways goes by many names. Some call it “the 1/f slope” or “scale-free activity”; Voytek has pushed to rebrand it “the aperiodic signal” or “aperiodic activity.”

It’s not just a quirk of the brain. The patterns that Lendner, Voytek and others look for are related to a phenomenon that scientists first noticed in 1925 and have since found in complex systems throughout the natural world and technology. The statistical structure crops up mysteriously in so many different contexts that some scientists even think it represents an undiscovered law of nature.

Although published studies have looked at arrhythmic brain activity for more than 20 years, no one has been able to establish what it really means. Now, however, scientists have better tools for isolating aperiodic signals in new experiments and for looking more deeply into older data, too. Thanks to Voytek’s algorithm and other methods, a flurry of studies published in the last few years have run with the idea that aperiodic activity contains hidden treasures that may advance the study of aging, sleep, childhood development and more.

What Is Aperiodic Activity?

Our bodies groove to the familiar rhythms of heartbeats and breaths — persistent cycles essential to survival. But there are equally vital drumbeats in the brain that don’t seem to have a pattern, and they may contain new clues to the underpinnings of behavior and cognition.

When a neuron sends a chemical called glutamate to another neuron, it makes the recipient more likely to fire; this scenario is called excitation. Conversely, if a neuron spits out the neurotransmitter gamma-aminobutyric acid, or GABA, the recipient neuron becomes less likely to fire; that’s inhibition. Too much of either has consequences: Excitation gone haywire leads to seizures, while inhibition characterizes sleep and, in more extreme cases, coma.

To study the delicate balance between excitation and inhibition, scientists measure the brain’s electrical activity with electroencephalography, or EEG. Cycles in excitation and inhibition form waves that have been linked to different mental states. Brain emissions at around 8 to 12 hertz, for example, form the alpha wave pattern associated with sleep.

But the brain’s electrical output doesn’t produce perfectly smooth curves. Instead, the lines jitter as they slope up toward peaks and down toward troughs. Sometimes brain activity has no regularity and instead looks more like electrical noise. The “white noise” component of this is truly random like static, but some of it has a more interesting statistical structure.

It’s those imperfections in the smoothness, and in the noise, that interest neuroscientists like Voytek. “It’s random, but there’s different kinds of random,” he said.

Examples of images of white noise, pink noise and brown noise.
Not all noise is equal, as seen in these spectrograms in which low frequencies are at the bottom, high frequencies are at the top, and the brighter colors represent more intensity. In “white noise” (left) the intensity of signals is roughly equal at all frequencies. In 1/f noise, often called “pink noise” (center), the intensity drops at higher frequencies at a certain rate. In “brown noise” (right), the drop-off in intensity is much steeper. Courtesy of Thomas Donoghue

To quantify this aperiodic activity, scientists break down the raw EEG data, much as a prism can decompose a sunbeam into a rainbow of different colors. They first employ a technique called Fourier analysis. Any set of data plotted over time can be expressed as a sum of trigonometric functions like sine waves, which can be expressed in terms of their frequency and amplitude. Scientists can plot the wave amplitudes at different frequencies in a graph called a power spectrum.

The amplitudes for power spectra are usually plotted in logarithmic coordinates because of the wide range in their values. For purely random white noise, the power spectrum curve is relatively flat and horizontal, with a slope of zero, because it’s about the same at all frequencies. But neural data produces curves with a negative slope, such that lower frequencies have higher amplitudes and the intensity drops off as a power law at higher frequencies. This shape is called 1/f, referring to that inverse relationship between the frequency and the amplitude. Neuroscientists are interested in what the flatness or steepness of the slope might indicate about the brain’s inner workings.
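
In code, that analysis takes only a few lines. Here is a minimal Python sketch (with simulated noise standing in for EEG data, and the numpy and scipy libraries assumed) that estimates the power spectrum of white noise and of Brownian “brown” noise, then fits each slope in log-log coordinates:

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
white = rng.normal(size=100_000)   # flat spectrum: slope near 0
brown = np.cumsum(white)           # power falls as 1/f^2: slope near -2

for name, signal in [("white", white), ("brown", brown)]:
    freqs, power = welch(signal, fs=1000, nperseg=4096)
    keep = freqs > 0               # the log of zero frequency is undefined
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    print(name, "noise: log-log slope of about", round(slope, 2))

The fitted slopes come out near 0 and -2, matching the white and brown panels of the spectrograms above.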

Analyzing EEG data in this way is analogous to looking at the sound waves from an audio recording made on a bridge over a highway, explains Lawrence Ward, a cognitive neuroscientist at the University of British Columbia. The hum of the tires from random passing cars would produce aperiodic background features, but nearby trains that sound a whistle every 10 minutes would generate a periodic signal with peaks in the data louder than the background. A sudden one-time event like a long horn honk or a vehicle collision would produce a noticeable spike in the sound wave, contributing to the overall 1/f slope.

Awareness of the 1/f phenomenon dates back to a 1925 paper by J.B. Johnson of Bell Telephone Laboratories, who was looking at noise in vacuum tubes. The German scientist Hans Berger published the first human EEG study just four years later. Neuroscience research in subsequent decades focused heavily on the prominent periodic waves in brain activity. Yet 1/f fluctuations were found in all kinds of electrical noise, stock market activity, biological rhythms, and even pieces of music — and no one knew why.

A figure that shows how Fourier analysis can be used to decompose an aperiodic signal to plot its power spectrum.

Perhaps because it seemed so universal, many biologists dismissed the idea that looking at noise through the lens of 1/f characteristics could yield useful signals; they thought it might be a form of noise from the scientific instruments used, wrote Biyu J. He, an assistant professor of neurology, neuroscience and physiology at New York University Grossman School of Medicine, in a 2014 review in Trends in Cognitive Sciences.

But He and others debunked that idea through experiments controlling for instrument noise, which turned out to be much smaller in magnitude than aperiodic brain activity. In a 2010 paper in Neuron, He and her colleagues also found that while EEG readouts, seismic waves in the ground, and stock market fluctuations all exhibit 1/f trends, the data from these sources exhibits different higher-order statistical structures. That insight put a dent in the idea that a single law of nature generates aperiodic signals in everything.

However, it isn’t a completely settled question. Ward has found mathematical commonalities in different contexts and believes something fundamental could be going on behind the scenes.

Either way, both Ward and He agree it’s worth probing deeper in the brain.

“For decades, brain activity contained in the ‘1/f’ slope has been deemed unimportant and was often removed from analyses in order to emphasize brain oscillations,” He wrote in the 2014 paper. “However, in recent years, increasing evidence suggests that scale-free brain activity contributes actively to brain functioning.”

New Signals From Noise

Voytek fell into the topic of aperiodic signals somewhat accidentally: He originally wanted to model and remove white noise from EEG data. But as he hacked away at code to pull out the noise, he started paying more attention to what was interesting within it.

The brains of older adults seem to have more aperiodic activity than those of younger adults, Voytek found in a 2015 study with his doctoral adviser Robert Knight, a professor of neuroscience at Berkeley. Voytek and Knight observed that as the brain ages, it is dominated more by white noise. They also found correlations between this noise and age-related working memory decline.

Janna Lendner, an anesthesia resident at the University Medical Center in Tübingen, Germany, has studied how the aperiodic noise in the brain of sleeping patients may hold clues to how conscious they might be. Hannes Schramm, University Medical Center Tübingen

Voytek wanted neuroscientists to have software that could more easily and automatically isolate the periodic and aperiodic features in any data set, including old ones, and help researchers look for meaningful 1/f trends. So he and his team wrote a program for an algorithm that could do that.
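
Stripped to its core, the idea behind such a tool can be sketched in a few lines of Python; this illustrates the general approach only, not the team’s published algorithm. Fit the aperiodic 1/f trend as a straight line in log-log coordinates, then flag frequencies that rise well above that fit as candidate oscillations:

import numpy as np
from scipy.signal import welch

# Simulated signal: a brown-noise aperiodic background plus a 10 Hz "alpha" rhythm.
rng = np.random.default_rng(2)
t = np.arange(60_000) / 1000.0
signal = 0.05 * np.cumsum(rng.normal(size=t.size)) + np.sin(2 * np.pi * 10 * t)

freqs, power = welch(signal, fs=1000, nperseg=4096)
keep = (freqs > 1) & (freqs < 100)
logf, logp = np.log10(freqs[keep]), np.log10(power[keep])

# Aperiodic component: a straight-line fit in log-log space.
slope, intercept = np.polyfit(logf, logp, 1)
aperiodic = slope * logf + intercept

# Periodic component: whatever rises well above the 1/f fit.
peak = freqs[keep][np.argmax(logp - aperiodic)]
print(f"aperiodic slope: {slope:.2f}; oscillatory peak near {peak:.1f} Hz")

A production tool is more careful, modeling the peaks and the aperiodic background jointly so that a strong oscillation does not bias the slope estimate, but the division of labor is the same.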

The demand for a tool like this became clear immediately. After Voytek and colleagues posted their code to the website biorxiv.org on April 11, 2018, it received nearly 2,000 downloads within the month — a big hit for a niche neuroscience computational tool. In November of that year, Voytek moderated a standing-room-only talk at the Society for Neuroscience conference on how to use it. Because of its popularity, he organized a last-minute follow-up session, where his lab team provided tech support to dozens of interested scientists. The tutorial and e-mail exchanges led to new collaborations.

One of those collaborations was Lendner’s study of markers for arousal during sleep, published in the online journal eLife in July 2020. With Voytek’s software, Lendner and her colleagues found that in the aperiodic noise of test subjects’ EEGs, the high-frequency activity dropped off faster during REM sleep than when they were awake. In other words, the slope of the power spectrum was steeper.

Spectrogram of brain activity collected over a full night of sleep.
This spectrogram shows the brain activity collected overnight from a sleeping patient by Lendner and her colleagues. The white line tracks changes in the slope of the spectrum, which relate to the patient’s state of wakefulness. Courtesy of Janna Lendner

In their paper, Lendner and her co-authors argue that aperiodic signals can serve as a unique signature to measure a person’s state of consciousness. A new objective marker like this could help to improve the practice of anesthesia and treatments for coma patients.

Other published studies that used Voytek’s code included investigations of ADHD medication efficacy and studies of sex-based differences in brain activity in people with autism. The code was published in a peer-reviewed journal for the first time — Nature Neuroscience — in November 2020; Thomas Donoghue of UC San Diego and Matar Haller (then at Berkeley) were co-first authors of the paper, with Avgusta Shestyuk of Berkeley serving as co-senior author with Voytek. They and other members of the team demonstrated the code’s performance on simulated data and its potential to reveal new findings.

Natalie Schaworonkow, a postdoctoral fellow in Voytek’s lab, usually researches regular oscillations like alpha waves, “which are more beautiful than the aperiodic signal,” she said, making Voytek laugh in our shared Zoom call. But when her interests recently turned to the infant brain and the electrical patterns that are signatures of its cognitive development, she was faced with a problem, because infants do not produce these elegant alpha waves; exactly when and how the waves start to appear is an open question.

She used the algorithm to analyze an open EEG data set of infant brain activity. In a new paper published in Developmental Cognitive Neuroscience, Schaworonkow and Voytek found large changes in aperiodic activity during the first seven months of life. More research is needed, however, to figure out whether this activity reflects greater engagement in tasks as children grow up or just increases in gray matter density.

Voytek’s code has driven a lot of recent research, but it isn’t the only game in town for aperiodic noise analysis. In 2015, when Haiguang Wen of the tech company Nvidia and Zhongming Liu of the University of Michigan both worked at Purdue University (Wen was a research assistant and Liu was an associate professor), they published a different approach to isolating the periodic from the aperiodic components in brain activity, called irregular-resampling auto-spectral analysis (IRASA). Meanwhile, Biyu He has been working on the topic since before either of these tools arrived on the scene, as did the late neuroscientist Walter J. Freeman, whose work inspired Voytek. It’s possible to do this kind of work by hand, though it is far more time-consuming.

Having a tool that allows neuroscientists to easily examine their data in terms of periodic and aperiodic signals is important because the data itself is just a set of numbers gathered over a specific period of time. A graph of points by itself doesn’t say anything about brain functioning or malfunctioning.

“Interpretation is what matters in neuroscience, right? Because that’s what we make clinical decision-making off of and drug development and all of this kind of stuff,” Voytek said. A huge wealth of data sets in the literature has the potential to yield new insights when reexamined in this way, he said, and “we haven’t been interpreting them as richly as we should.”

What Does It Mean?

A big limitation in scientists’ exploration of these aperiodic features is that no one knows exactly what causes them physiologically. More research is needed to clarify the respective contributions of different neurotransmitters, neural circuits and large-scale network interactions, said Sylvain Baillet, a professor of neurology and neurosurgery, biomedical engineering, and computer science at McGill University.

“The causes and the sources are still not identified,” Baillet said. “But we have to do this research to accumulate knowledge and observations.”

One theory is that aperiodic signals somehow reflect the delicate balance between excitation and inhibition that the brain needs to keep itself healthy and active. Too much excitation may overload the brain, while too much inhibition may put it to sleep, Lendner said.

Knight thinks that explanation is on the right track. “I wouldn’t want to say I’m positive it’s an inhibition-excitation ratio change, but I think it’s the most parsimonious explanation,” he said.

An alternative idea is that the aperiodic signals simply reflect the brain’s physical organization.

Based on how other physical systems reflect 1/f behaviors, Ward thinks there could be some kind of structural, hierarchical relationship in the brain that gives rise to the aperiodic activity. For example, this might arise from the way that huge numbers of neurons organize themselves into groups, which then form larger regions that work together.

Brain activity related to 1/f trends may be ideally suited to processing sensory input in the natural environment, since that often exhibits 1/f-type fluctuations, He said. Her 2018 study in The Journal of Neuroscience explores how the brain is able to make predictions about sounds that also have 1/f properties, suggesting that aperiodic activity “is involved in processing and predicting naturalistic stimuli,” she said in an email. It isn’t surprising to her that music, from jazz to Bach, can also have 1/f properties — after all, music is a creation of the human brain.

To test hypotheses about where aperiodic signals come from, Voytek said, researchers need to look more closely at what kinds of neural circuitry could give rise to them. Neuroscientists can then try to link sites with those circuits to the brain’s overall physiology for a better idea of which neural mechanisms generate specific activity patterns, and to predict how the aperiodic and periodic signals would look in different brain disorders.

Voytek is also hoping to do more large-scale studies that apply the code to existing data sets to tease out untapped signals.

Lendner and Knight are currently analyzing data on coma patients at the University of Alabama to see if aperiodic activity correlates with how a coma evolves. Their prediction is that if a person is coming out of a coma, a rise in high-frequency activity in the brain will show up as a change in the 1/f slope. The preliminary results are promising, Lendner said.

For Baillet, the aperiodic signals in the brain are a bit like dark matter, the invisible scaffolding of the universe that interacts with normal matter only through gravity. We don’t understand what it’s made of or what its properties are, but it’s out there in the celestial background, furtively holding the Milky Way together.

Scientists haven’t figured out what causes these aperiodic signals yet, but they too may reflect an essential support structure for the universe in our heads. Something mysterious may help tip our minds from waking life into slumber.

The next act for messenger RNA could be bigger than covid vaccines Posted February 14th 2021

New messenger RNA vaccines to fight the coronavirus are based on a technology that could transform medicine. Next up: sickle cell and HIV.

February 5, 2021

Illustration: Selman Design

On December 23, as part of a publicity push to encourage people to get vaccinated against covid-19, the University of Pennsylvania released footage of two researchers who developed the science behind the shots, Katalin Karikó and Drew Weissman, getting their inoculations. The vaccines, icy concoctions of fatty spheres and genetic instructions, used a previously unproven technology based on messenger RNA and had been built and tested in under a year, thanks to discoveries the pair made starting 20 years earlier.

In the silent promotional clip, neither one speaks or smiles as a nurse inserts the hypodermic into their arms. I later asked Weissman, who has been a physician and working scientist since 1987, what he was thinking in that moment. “I always wanted to develop something that helps people,” he told me. “When they stuck that needle in my arm, I said, ‘I think I’ve finally done it.’”

The infection has killed more than 2 million people globally, including some of Weissman’s childhood friends. So far, the US vaccine campaign has relied entirely on shots developed by Moderna Therapeutics of Cambridge, Massachusetts, and BioNTech in Mainz, Germany, in partnership with Pfizer. Both employ Weissman’s discoveries. (Weissman’s lab gets funding from BioNTech, and Karikó now works at the company.)

Unlike traditional vaccines, which use live viruses, dead ones, or bits of the shells that viruses come cloaked in to train the body’s immune system, the new shots use messenger RNA—the short-lived middleman molecule that, in our cells, conveys copies of genes to where they can guide the making of proteins.

The message the mRNA vaccine adds to people’s cells is borrowed from the coronavirus itself—the instructions for the crown-like protein, called spike, that it uses to enter cells. This protein alone can’t make a person sick; instead, it prompts a strong immune response that, in large studies concluded in December, prevented about 95% of covid-19 cases.

Drew Weissman
Drew Weissman’s work with messenger RNA led to successful covid-19 vaccines.

Beyond potentially ending the pandemic, the vaccine breakthrough is showing how messenger RNA may offer a new approach to building drugs.

In the near future, researchers believe, shots that deliver temporary instructions into cells could lead to vaccines against herpes and malaria, better flu vaccines, and, if the covid-19 germ keeps mutating, updated coronavirus vaccinations, too.

But researchers also see a future well beyond vaccines. They think the technology will permit cheap gene fixes for cancer, sickle-cell disease, and maybe even HIV.

For Weissman, the success of covid vaccines isn’t a surprise but a welcome validation of his life’s work. “We have been working on this for over 20 years,” he says. “We always knew RNA would be a significant therapeutic tool.”

Perfect timing

Despite those two decades of research, though, messenger RNA had never been used in any marketed drug before last year.

Then, in December 2019, the first reports emerged from Wuhan, China, about a scary transmissible pneumonia, most likely some kind of bat virus. Chinese government censors at first sought to cover up the outbreak, but on January 10, 2020, a Shanghai scientist posted the germ’s genetic code online through a contact in Australia. The virus was already moving quickly, jumping onto airplanes and popping up in Hong Kong and Thailand. But the genetic information moved even faster. It arrived in Mainz at the headquarters of BioNTech, and in Cambridge at Moderna, where some researchers got the readout as a Microsoft Word file.

Scientists at Moderna, a biotech specializing in messenger RNA, were able to design a vaccine on paper in 48 hours, 11 days before the US even had its first recorded case. Inside of six weeks, Moderna had chilled doses ready for tests in animals.

Unlike most biotech drugs, RNA is not made in fermenters or living cells—it’s produced inside plastic bags of chemicals and enzymes. Because there’s never been a messenger RNA drug on the market before, there was no factory to commandeer and no supply chain to call on.

When I spoke to Moderna CEO Stéphane Bancel in December, just before the US Food and Drug Administration authorized his company’s vaccine, he was feeling confident about the shot but worried about making enough of it. Moderna had promised to make up to a billion doses during 2021. Imagine, he said, that Henry Ford was rolling the first Model T off the production line, only to be told the world needed a billion of them.

Bancel calls the way covid-19 arrived just as messenger RNA technology was ready an “aberration of history.”

In other words, we got lucky.

Human bioreactors

The first attempt to use synthetic messenger RNA to make an animal produce a protein was in 1990. It worked, but a big problem soon arose. The injections made mice sick. “Their fur gets ruffled. They lose weight, stop running around,” says Weissman. Give them a large dose, and they’d die within hours. “We quickly realized that messenger RNA was not usable,” he says.

The culprit was inflammation. Over a few billion years, bacteria, plants, and mammals have all evolved to spot the genetic material from viruses and react to it. Weissman and Karikó’s next step, which “took years,” he says, was to identify how cells were recognizing the foreign RNA.

As they found, cells are packed with sensing molecules that distinguish your RNA from that of a virus. If these molecules see viral genes, they launch a storm of immune molecules called cytokines that hold the virus at bay while your body learns to cope with it. “It takes a week to make an antibody response; what keeps you alive for those seven days is these sensors,” Weissman says. But too strong a flood of cytokines can kill you.

The eureka moment was when the two scientists determined they could avoid the immune reaction by using chemically modified building blocks to make the RNA. It worked. Soon after, in Cambridge, a group of entrepreneurs began setting up Moderna Therapeutics to build on Weissman’s insight.

Vaccines were not their focus. At the company’s founding in 2010, its leaders imagined they might be able to use RNA to replace the injected proteins that make up most of the biotech pharmacopoeia, essentially producing drugs inside the patient’s own cells from an RNA blueprint. “We were asking, could we turn a human into a bioreactor?” says Noubar Afeyan, the company’s cofounder and chairman and the head of Flagship Pioneering, a firm that starts biotech companies.

If so, the company could easily name 20, 30, or even 40 drugs that would be worth replacing. But Moderna was struggling with how to get the messenger RNA to the right cells in the body, and without too many side effects. Its scientists were also learning that administering repeat doses, which would be necessary to replace biotech blockbusters like a clotting factor that’s given monthly, was going to be a problem. “We would find it worked once, then the second time less, and then the third time even lower,” says Afeyan. “That was a problem.”

Moderna pivoted. What kind of drug could you give once and still have a big impact? The answer eventually became obvious: a vaccine. With a vaccine, the initial supply of protein would be enough to train the immune system in ways that could last years, or a lifetime.

A second major question was how to package the delicate RNA molecules, which last for only a couple of minutes if exposed. Weissman says he tried 40 different carriers, including water droplets, sugar, and proteins from salmon sperm. It was like Edison looking for the right filament to make an electric lamp. “Almost anything people published, we tried,” he says. Most promising were nanoparticles made from a mixture of fats. But these were secret commercial inventions and are still the basis of patent disputes. Weissman didn’t get his hands on them until 2014, after half a decade of attempts.

When he finally did, he loved what he saw. “They were better than anything else we had tried,” he says. “It had what you wanted in a drug. High potency, no adverse events.” By 2017, Weissman’s lab had shown how to vaccinate mice and monkeys against the Zika virus using messenger RNA, an effort that soon won funding from BioNTech. Moderna was neck and neck. It quickly published results of an early human test of a new mRNA influenza vaccine and went on to initiate a large series of clinical studies involving diseases including Zika.

Pivoting to vaccines did have a drawback for Moderna. Andrew Lo, a professor at MIT’s Laboratory for Financial Engineering, says that most vaccines lose money. The reason is that many shots sell for a “fraction of their economic value.” Governments will pay $100,000 for a cancer drug that adds a month to a person’s life but only want to pay $5 for a vaccine that can protect against an infectious disease for good. Lo calculated that vaccine programs for emerging threats like Zika or Ebola, where outbreaks come and go, would deliver a -66% return on average. “The economic model for vaccines is broken,” he says.

On the other hand, vaccines are more predictable. When Lo’s team analyzed thousands of clinical trials, they found that vaccine programs frequently succeed. Around 40% of vaccine candidates in efficacy tests, called phase 2 clinical trials, proved successful, a rate 10 times that of cancer drugs.

Adding to mRNA vaccines’ chance of success was a lucky break. Injected into the arm, the nanoparticles holding the critical instructions seemed to home in on dendritic cells, the exact cell type whose job is to train the immune system to recognize a virus. What’s more, something about the particles put the immune system on alert. It wasn’t planned, but they were working as what’s called a vaccine adjuvant. “We couldn’t believe the effect,” says Weissman.

Vaccines offered Moderna’s CEO, Bancel, a chance to advance a phalanx of new products. Since every vaccine would use the same nanoparticle carrier, they could be rapidly reprogrammed, as if they were software. (Moderna had even trademarked the name “mRNA OS,” for operating system.) “The way we make mRNA for one vaccine is exactly the same as for another,” he says. “Because mRNA is an information molecule, the difference between our covid vaccine, Zika vaccine, and flu vaccine is only the order of the nucleotides.”

95% effective

Back in March 2020, when the vaccine programs were getting under way, skeptics said messenger RNA was still an unproven technology. Even this magazine said a vaccine would take 18 months, at a minimum—a projection that proved off by a full nine months. “Sometimes things take a long time just because people think it does,” says Afeyan. “That weighs on you as a scientific team. People are saying, ‘Don’t go any faster!’”

The shots from Moderna and BioNTech proved effective by December and were authorized that month in the US. But the record speed was not due only to the novel technology. Another reason was the prevalence of infection. Because so many people were catching covid-19, the studies were able to amass evidence quickly.

Is messenger RNA really a better vaccine? The answer seems to be a resounding yes. There are some side effects, but both shots are about 95% effective (that is, they stop 95 out of 100 cases), a record so far unmatched by other covid-19 vaccines and far better than the performance of flu vaccines. Another injection, made by AstraZeneca using an engineered cold virus, is around 75% effective. A shot developed in China using inactivated covid-19 germs protected only half the people who got it, although it did stop severe disease.
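
The efficacy arithmetic itself is straightforward. A back-of-the-envelope sketch in Python using the publicly reported topline figures from the Pfizer-BioNTech trial (roughly 8 cases among the vaccinated versus 162 in a placebo group of about equal size; the numbers are rounded for illustration):

def vaccine_efficacy(cases_vaccinated, cases_placebo):
    # Efficacy = 1 - (attack rate in vaccinated / attack rate in placebo),
    # assuming the two trial arms are the same size.
    return 1 - cases_vaccinated / cases_placebo

print(f"{vaccine_efficacy(8, 162):.0%}")   # about 95%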

“This could change how we make vaccines from here on out,” says Ron Renaud, the CEO of Translate Bio, a company working with the technology.

The potency of the shots, and the ease with which they can be reprogrammed, mean researchers are already preparing to go after HIV, herpes, infant respiratory virus, and malaria—all diseases for which there’s no successful vaccine. Also on the drawing board: “universal” flu vaccines and what Weissman calls a “pan-coronavirus” shot that could offer basic protection against thousands of pathogens in that category, which have led not only to covid-19 but, before that, to the infection SARS and probably other pandemics throughout history.

“You have to assume we’re going to have more,” Weissman says. “So instead of shutting down the world for a year while you make a new vaccine, we’ll have a vaccine ready to go.”

Facilities of the biopharmaceutical company Lonza in Switzerland and New Hampshire, which are helping to manufacture Moderna’s vaccine.

Last spring, Bancel began petitioning the government to pay for vast manufacturing centers to make messenger RNA. He imagined a megafactory that “companies could use in peacetime” but that could be quickly reoriented to churn out shots during the next pandemic. That would be insurance, he says, against a nightmare scenario of a germ that spreads as fast as covid but has the 50% fatality rate of Ebola. If “governments spend billions on nuclear weapons they hope to never use,” Bancel argued in April, then “we should equip ourselves so this never happens again.”

Later that month, as part of Operation Warp Speed, the US effort to produce the vaccines, Moderna was effectively picked as a national champion to build such centers. The government handed it nearly $500 million to develop its vaccine and expand manufacturing.

Beyond vaccines

After the covid vaccines, some researchers expect Moderna and BioNTech to return to their original plans for the technology, like treating more conventional ailments such as heart attacks, cancer, or rare inherited diseases. But there’s no guarantee of success in that arena.

“Although there are a lot of potential therapeutic applications for synthetic mRNA in principle, in practice the problem of delivering sufficient amounts of mRNA to the right place in the body is going to be a huge and possibly insurmountable challenge in most cases,” says Luigi Warren, a biotech entrepreneur whose research as a postdoc formed the nucleus of Moderna.

There is one application in addition to vaccines, however, where brief exposure to messenger RNA could have effects lasting years, or even a lifetime.

In late 2019, before covid-19, the US National Institutes of Health and the Bill and Melinda Gates Foundation announced they would spend $200 million developing affordable gene therapies for use in sub-Saharan Africa. The top targets: HIV and sickle-cell disease, which are widespread there.

Gates and the NIH didn’t say how they would make such cutting-edge treatments cheap and easy to use, but Weissman told me that the plan may depend on using messenger RNA to add instructions for gene-editing tools like CRISPR to a person’s body, making permanent changes to the genome. Think of mass vaccination campaigns, says Weissman, except with gene editing to correct inherited disease.

Right now, gene therapy is complex and expensive. Since 2017, several types have been approved in the US and Europe. One, a treatment for blindness, in which viruses carry a new gene to the retina, costs $425,000 per eye.

A startup called Intellia Therapeutics is testing a treatment that packages CRISPR into RNA and then into a nanoparticle, with which it hopes to cure a painful inherited liver disease. The aim is to make the gene scissors appear in a person’s cells, cut out the problem gene, and then fade away. The company tested the drug on a patient for the first time in 2020.

It’s not a coincidence that Intellia is treating a liver disease. When dripped into the bloodstream through an IV, lipid nanoparticles tend to all end up in the liver—the body’s house-cleaning organ. “If you want to treat a liver disease, great—anything else, you have a problem,” says Weissman.

But Weissman says he’s figured out how to target the nanoparticles so that they wind up inside bone marrow, which constantly manufactures all red blood cells and immune cells. That would be a hugely valuable trick—so valuable that Weissman wouldn’t tell me how he does it. It’s a secret, he says, “until we get the patents filed.”

He intends to use this technique to try to cure sickle-cell disease by sending new instructions into the cells of the body’s blood factory. He’s also working with researchers who are ready to test on monkeys whether immune cells called T cells can be engineered to go on a seek-and-destroy mission after HIV and cure that infection, once and for all.

What all this means is that the fatty particles of messenger RNA may become a way to edit genomes at massive scales, and on the cheap. A drip drug that allows engineering of the blood system could become a public health boon as significant as vaccines. The burden of sickle-cell, an inherited disease that shortens lives by decades (or, in poor regions, kills during childhood), falls most heavily on Black people in equatorial Africa, Brazil, and the US. HIV has also become a lingering scourge: about two-thirds of people living with the virus, or dying from it, are in Africa.

Moderna and BioNTech have been selling their covid-19 vaccine shots for $20 to $40 a dose. What if that were the cost of genetic modification, too? “We could correct sickle-cell with a single shot,” Weissman says. “We think that is groundbreaking new therapy.”

There are fantastic fortunes to be made in mRNA technology. At least five people connected to Moderna and BioNTech are now billionaires, including Bancel. Weissman is not one of them, though he stands to get patent royalties. He says he prefers academia, where people are less likely to tell him what to research—or, just as important, what not to. He’s always looking for the next great scientific challenge: “It’s not that the vaccine is old news, but it was obvious they were going to work.” Messenger RNA, he says, “has an incredible future.”

Physicists Finally Nail the Proton’s Size, and Hope Dies Posted February 12th 2021

A new measurement appears to have eliminated an anomaly that had captivated physicists for nearly a decade.

An illustration of a chaotic scene of spheres representing quarks and gluons.
A proton is made of a swarm of quarks and gluons, as imagined in this illustration. ©️ CERN

Natalie Wolchover, Senior Writer/Editor


September 11, 2019


In 2010, physicists in Germany reported that they had made an exceptionally precise measurement of the size of the proton, the positively charged building block of atomic nuclei. The result was very puzzling.

Randolf Pohl of the Max Planck Institute of Quantum Optics and collaborators had measured the proton using special hydrogen atoms in which the electron that normally orbits the proton was replaced by a muon, a particle that’s identical to the electron but 207 times heavier. Pohl’s team found the muon-orbited protons to be 0.84 femtometers in radius — 4% smaller than those in regular hydrogen, according to the average of more than two dozen earlier measurements.

If the discrepancy was real, meaning protons really shrink in the presence of muons, this would imply unknown physical interactions between protons and muons — a fundamental discovery. Hundreds of papers speculating about the possibility have been written in the near-decade since.

But hopes that the “proton radius puzzle” would upend particle physics and reveal new laws of nature have now been dashed by a new measurement reported on Sept. 6 in Science.

After Pohl’s muonic hydrogen result nine years ago, a team of physicists led by Eric Hessels of York University in Toronto set out to remeasure the proton in regular, “electronic” hydrogen. Finally, the results are in: Hessels and company have pegged the proton’s radius at 0.833 femtometers, give or take 0.01, a measurement exactly consistent with Pohl’s value. Both measurements are more precise than earlier attempts, and they suggest that the proton does not change size depending on context; rather, the old measurements using electronic hydrogen were wrong.

Pohl, who first heard about Hessels’ preliminary finding at a workshop in the summer of 2018, called it “a fantastic result,” albeit one that “points to the most mundane explanation” of the proton radius puzzle.

Similarly, Hessels said he and his colleagues were very pleased that their measurement “agreed with the very accurate measurement in muonic hydrogen,” even if the result is somewhat bittersweet. “We know that we don’t understand all the laws of physics yet,” he said, “so we have to chase down all of these things that might give us hints.”

The proton’s radius was not trivial to chase down. To deduce its value, Hessels and colleagues had to measure the Lamb shift: the difference between hydrogen’s first and second excited energy levels, called the 2S and 2P states. Hessels said he has wanted to measure the Lamb shift since he was an undergraduate in the 1980s, but the proton radius puzzle finally gave him the impetus to do so. “It’s an extremely difficult measurement,” he said. “I needed a good reason.”

The orbital structure of the 2S and 2P states of hydrogen.
The 2S and 2P states of hydrogen show where the electron could be found at any given time. These images show the possible locations of the electron in each state; the proton, unmarked, is at the center of each image. In the 2S state, the electron overlaps the proton, and for a non-zero amount of time, the electron is inside of the proton itself. In the 2P state, the electron and the proton never overlap. PoorLeno

The Lamb shift, named for the American physicist Willis Lamb, who first attempted to measure it in 1947, reveals the proton’s radius in the following way: When an electron orbits the proton in the 2S state, it spends part of its time inside the proton (which is a constellation of elementary particles called quarks and gluons, with a lot of empty space). When the electron is inside the proton, the proton’s charge pulls the electron in opposing directions, partly canceling itself out. As a result, the amount of electrical attraction between the two decreases, reducing the energy that binds the atom together. The larger the proton, the more time the electron spends inside it, the less strongly bound the electron is, and the more easily it can hop away.

By firing a laser into a cloud of hydrogen gas, Hessels and his team caused electrons to jump from the 2S state to the 2P state, where the electron never overlaps the proton. Pinpointing the energy required for the electron to make this jump revealed how weakly bound it was in the 2S state, when residing partly inside the proton. This directly revealed the proton’s size.
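
The energies involved are minuscule. As a rough illustration, the photon energy for such a transition follows from E = hf; the sketch below uses the textbook value of roughly 1,058 MHz for hydrogen’s 2S-2P Lamb shift, an approximation rather than the exact interval Hessels’ team measured:

# Photon energy E = h * f for the roughly 1,058 MHz Lamb-shift transition.
h = 6.62607015e-34            # Planck constant, in joule-seconds
f = 1.058e9                   # approximate 2S-2P Lamb shift, in hertz
energy_joules = h * f
print(energy_joules / 1.602176634e-19, "eV")   # about 4.4e-6 eV

Pinning an interval that tiny down to parts-per-million accuracy is part of what makes the measurement so demanding.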

Pohl followed the same logic to deduce the proton radius from the Lamb shift of muonic hydrogen in 2010. But because muons are heavier, they huddle around protons more tightly in the 2S state than electrons do. This means they spend more time inside the proton, making the Lamb shift in muonic hydrogen several million times more sensitive to the proton’s radius than it is in normal hydrogen.

To deduce a precise value for the proton’s radius from ordinary hydrogen, then, Hessels had to measure the energy difference between 2S and 2P to parts-per-million accuracy.

The new result implies that earlier attempts to measure the proton’s radius in electronic hydrogen tended to overshoot the true value. It’s unclear why this would be so. Some researchers may continue to improve and verify measurements of the proton’s size in order to put the puzzle to rest, but Hessels’ work is done. “We are dismantling our apparatus,” he said.

Closed Loophole Confirms the Unreality of the Quantum World Posted February 12th 2021

When a loophole was found in a famous experiment designed to prove that quantum objects don’t have intrinsic properties, it was quickly sewed shut, closing the door on many “hidden variable” theories.

By Anil Ananthaswamy, Quanta Magazine

Credit: Olena Shmahalo / Quanta Magazine.

The theoretical physicist John Wheeler once used the phrase “great smoky dragon” to describe a particle of light going from a source to a photon counter. “The mouth of the dragon is sharp, where it bites the counter. The tail of the dragon is sharp, where the photon starts,” Wheeler wrote. The photon, in other words, has definite reality at the beginning and end. But its state in the middle — the dragon’s body — is nebulous. “What the dragon does or looks like in between we have no right to speak.”

Wheeler was espousing the view that elementary quantum phenomena are not real until observed, a philosophical position called anti-realism. He even designed an experiment to show that if you hold on to realism — in which quantum objects such as photons always have definite, intrinsic properties, a position that encapsulates a more classical view of reality — then you are forced to concede that the future can influence the past. Given the absurdity of backward time-travel, Wheeler’s experiment became an argument for anti-realism at the level of the quantum.

But in May 2018, Rafael Chaves and colleagues at the International Institute of Physics in Natal, Brazil, found a loophole. They showed that Wheeler’s experiment, given certain assumptions, can be explained using a classical model that attributes to a photon an intrinsic nature. They gave the dragon a well-defined body, but one that is hidden from the mathematical formalism of standard quantum mechanics.

Chaves’s team then proposed a twist to Wheeler’s experiment to test the loophole. With unusual alacrity, three teams raced to do the modified experiment. Their results, reported in early June 2018, have shown that a class of classical models that advocate realism cannot make sense of the results. Quantum mechanics may be weird, but it’s still, oddly, the simplest explanation around.

Dragon Trap

Wheeler devised his experiment in 1983 to highlight one of the dominant conceptual conundrums in quantum mechanics: wave-particle duality. Quantum objects seem to act either like particles or waves, but never both at the same time. This feature of quantum mechanics seems to imply that objects have no inherent reality until observed. “Physicists have had to grapple with wave-particle duality as an essential, strange feature of quantum theory for a century,” said David Kaiser, a physicist and historian of science at the Massachusetts Institute of Technology. “The idea pre-dates other quintessentially strange features of quantum theory, such as Heisenberg’s uncertainty principle and Schrödinger’s cat.”

The phenomenon is underscored by a variant of the famous double-slit experiment called the Mach-Zehnder interferometer.

In the experiment, a single photon is fired at a half-silvered mirror, or beam splitter. The photon is either reflected or transmitted with equal probability — and thus can take one of two paths. In this case, the photon will take either path 1 or path 2, and then go on to hit either detector D1 or D2 with equal probability. The photon acts like an indivisible whole, showing us its particle-like nature.

Credit: Lucy Reading-Ikkanda / Quanta Magazine.

But there’s a twist. At the point where path 1 and path 2 cross, one can add a second beam splitter, which changes things. In this setup, quantum mechanics says that the photon seems to take both paths at once, as a wave would. The two waves come back together at the second beam splitter. The experiment can be set up so that the waves combine constructively — peak to peak, trough to trough — only when they move toward D1. The path toward D2, by contrast, represents destructive interference. In such a setup, the photon will always be found at D1 and never at D2. Here, the photon displays its wavelike nature.
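The arithmetic behind both behaviors fits in a few lines. In the sketch below (a minimal illustration with one conventional choice of beam-splitter matrix, not any lab's actual apparatus), the photon is a two-component amplitude vector over the paths: one splitter gives 50/50 clicks, while two in a row send every photon to a single detector.

```python
import numpy as np

# Minimal sketch of the interferometer logic above. The photon is a
# 2-component amplitude vector over the two paths; a lossless 50/50
# beam splitter acts as the unitary matrix B (one common convention).

B = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)  # photon enters on path 1

one_splitter = B @ photon        # interferometer without the second splitter
two_splitters = B @ B @ photon   # second beam splitter inserted

print(np.abs(one_splitter) ** 2)   # [0.5 0.5]: particle-like 50/50 clicks
print(np.abs(two_splitters) ** 2)  # [0. 1.]: wave-like, one detector always
```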

Wheeler’s genius lay in asking: what if we delay the choice of whether to add the second beam splitter? Let’s assume the photon enters the interferometer without the second beam splitter in place. It should act like a particle. One can, however, add the second beam splitter at the very last nanosecond. Both theory and experiment show that the photon, which until then was presumably acting like a particle and would have gone to either D1 or D2, now acts like a wave and goes only to D1. To do so, it had to seemingly be in both paths simultaneously, not one path or the other. In the classical way of thinking, it’s as if the photon went back in time and changed its character from particle to wave.

One way to avoid such retro-causality is to deny the photon any intrinsic reality and argue that the photon becomes real only upon measurement. That way, there is nothing to undo.

Such anti-realism, which is often associated with the Copenhagen interpretation of quantum mechanics, took a theoretical knock with Chaves’s work, at least in the context of this experiment. His team wanted to explain counterintuitive aspects of quantum mechanics using a new set of ideas called causal modeling, which has grown in popularity in the past decade, advocated by computer scientist Judea Pearl and others. Causal modeling involves establishing cause-and-effect relationships between various elements of an experiment. Often when studying correlated events — call them A and B — if one cannot conclusively say that A causes B, or that B causes A, there exists a possibility that a previously unsuspected or “hidden” third event, C, causes both. In such cases, causal modeling can help uncover C.

Chaves and his colleagues Gabriela Lemos and Jacques Pienaar focused on Wheeler’s delayed choice experiment, fully expecting to fail at finding a model with a hidden process that both grants a photon intrinsic reality and also explains its behavior without having to invoke retro-causality. They thought they would prove that the delayed-choice experiment is “super counterintuitive, in the sense that there is no causal model that is able to explain it,” Chaves said.

But they were in for a surprise. The task proved relatively easy. They began by assuming that the photon, immediately after it has crossed the first beam splitter, has an intrinsic state denoted by a “hidden variable.” A hidden variable, in this context, is something that’s absent from standard quantum mechanics but that influences the photon’s behavior in some way. The experimenter then chooses to add or remove the second beam splitter. Causal modeling, which prohibits backward time travel, ensures that the experimenter’s choice cannot influence the past intrinsic state of the photon.

Given the hidden variable, which implies realism, the team then showed that it’s possible to write down rules that use the variable’s value and the presence or absence of the second beam splitter to guide the photon to D1 or D2 in a manner that mimics the predictions of quantum mechanics. Here was a classical, causal, realistic explanation. They had found a new loophole.

This surprised some physicists, said Tim Byrnes, a theoretical quantum physicist at New York University, Shanghai. “What people didn’t really appreciate is that this kind of experiment is susceptible to a classical version that perfectly mimics the experimental results,” Byrnes said. “You could construct a hidden variable theory that didn’t involve quantum mechanics.”

“This was the step zero,” Chaves said. The next step was to figure out how to modify Wheeler’s experiment in such a way that it could distinguish between this classical hidden variable theory and quantum mechanics.

In their modified thought experiment, the full Mach-Zehnder interferometer is intact; the second beam splitter is always present. Instead, two “phase shifts” — one near the beginning of the experiment, one toward the end — serve the role of experimental dials that the researcher can adjust at will.

The net effect of the two phase shifts is to change the relative lengths of the paths. This changes the interference pattern, and with it, the presumed “wavelike” or “particle-like” behavior of the photon. For example, the value of the first phase shift could be such that the photon acts like a particle inside the interferometer, but the second phase shift could force it to act like a wave. The researchers require that the second phase shift is set after the first.

With this setup in place, Chaves’s team came up with a way to distinguish between a classical causal model and quantum mechanics. Say the first phase shift can take one of three values, and the second one of two values. That makes six possible experimental settings in total. They calculated what they expected to see for each of these six settings. Here, the predictions of a classical hidden variable model and standard quantum mechanics differ. They then constructed a formula. The formula takes as its input probabilities calculated from the number of times that photons land on particular detectors (based on the setting of the two phase shifts). If the formula equals zero, the classical causal model can explain the statistics. But if the equation spits out a number greater than zero, then, subject to some constraints on the hidden variable, there’s no classical explanation for the experiment’s outcome.
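The "six settings" bookkeeping can be illustrated by threading phase shifts between the two beam splitters and computing the quantum prediction for each setting. The phase values below are placeholders, and the causal-inequality formula the teams actually evaluated is in their papers; this sketch only shows where the input statistics come from.

```python
import numpy as np

# Quantum predictions for a 3 x 2 grid of phase settings in a full
# Mach-Zehnder interferometer. The specific phase values are hypothetical
# placeholders; the formula the teams tested combines detector statistics
# like these into a single causal-model witness.

B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def phase(phi):
    """Phase shift applied to one arm between the beam splitters."""
    return np.array([[1, 0], [0, np.exp(1j * phi)]])

def probs(phi1, phi2):
    """Detector probabilities for one choice of the two phase dials."""
    psi = B @ phase(phi2) @ phase(phi1) @ B @ np.array([1, 0], dtype=complex)
    return np.abs(psi) ** 2

for phi1 in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):  # three first-dial values
    for phi2 in (0.0, np.pi / 2):                 # two second-dial values
        p1, p2 = probs(phi1, phi2)
        print(f"phi1={phi1:.2f}, phi2={phi2:.2f} -> P(D1)={p1:.3f}")
```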

Chaves teamed up with Fabio Sciarrino, a quantum physicist at the University of Rome La Sapienza, and his colleagues to test the inequality. Simultaneously, two teams in China — one led by Jian-Wei Pan, an experimental physicist at the University of Science and Technology of China (USTC) in Hefei, China, and another by Guang-Can Guo, also at USTC — carried out the experiment.

Each team implemented the scheme slightly differently. Guo’s group stuck to the basics, using an actual Mach-Zehnder interferometer. “It is the one that I would say is actually the closest to Wheeler’s original proposal,” said Howard Wiseman, a theoretical physicist at Griffith University in Brisbane, Australia, who was not part of any team.

But all three showed that the formula is greater than zero with irrefutable statistical significance. They ruled out the classical causal models of the kind that can explain Wheeler’s delayed-choice experiment. The loophole has been closed. “Our experiment has salvaged Wheeler’s famous thought experiment,” Pan said.

Hidden Variables That Remain

Kaiser is impressed by Chaves’s “elegant” theoretical work and the experiments that ensued. “The fact that each of the recent experiments has found clear violations of the new inequality … provides compelling evidence that ‘classical’ models of such systems really do not capture how the world works, even as quantum-mechanical predictions match the latest results beautifully,” he said.

The formula comes with certain assumptions. The biggest one is that the classical hidden variable used in the causal model can take one of two values, encoded in one bit of information. Chaves thinks this is reasonable, since the quantum system — the photon — can also only encode one bit of information. (It either goes in one arm of the interferometer or the other.) “It’s very natural to say that the hidden variable model should also have dimension two,” Chaves said.

But a hidden variable with additional information-carrying capacity can restore the classical causal model’s ability to explain the statistics observed in the modified delayed-choice experiment.

In addition, the most popular hidden variable theory remains unaffected by these experiments. The de Broglie-Bohm theory, a deterministic and realistic alternative to standard quantum mechanics, is perfectly capable of explaining the delayed-choice experiment. In this theory, particles always have positions (which are the hidden variables), and hence have objective reality, but they are guided by a wave. So reality is both wave and particle. The wave goes through both paths, the particle through one or the other. The presence or absence of the second beam splitter affects the wave, which then guides the particle to the detectors — with exactly the same results as standard quantum mechanics.

For Wiseman, the debate over Copenhagen versus de Broglie-Bohm in the context of the delayed-choice experiment is far from settled. “So in Copenhagen, there is no strange inversion of time precisely because we have no right to say anything about the photon’s past,” he wrote in an email. “In de Broglie-Bohm there is a reality independent of our knowledge, but there is no problem as there is no inversion — there is a unique causal (forward in time) description of everything.”

Kaiser, even as he lauds the efforts so far, wants to take things further. In current experiments, the choice of whether or not to add the second phase shift or the second beam splitter in the classic delayed-choice experiment is made by a quantum random-number generator. But what’s being tested in these experiments is quantum mechanics itself, so there’s a whiff of circularity. “It would be helpful to check whether the experimental results remain consistent, even under complementary experimental designs that relied on entirely different sources of randomness,” Kaiser said.

To this end, Kaiser and his colleagues have built such a source of randomness using photons coming from distant quasars, some from more than halfway across the universe. The photons were collected with a one-meter telescope at the Table Mountain Observatory in California. If a photon had a wavelength less than a certain threshold value, the random number generator spit out a 0, otherwise a 1. In principle, this bit can be used to randomly choose the experimental settings. If the results continue to support Wheeler’s original argument, then “it gives us yet another reason to say that wave-particle duality is not going to be explained away by some classical physics explanation,” Kaiser said. “The range of conceptual alternatives to quantum mechanics has again been shrunk, been pushed back into a corner. That’s really what we are after.”
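The thresholding rule itself is simple. A minimal sketch, with a placeholder cutoff since the team's exact threshold isn't quoted here:

```python
def quasar_bit(wavelength_nm: float, threshold_nm: float = 700.0) -> int:
    """One random bit per detected quasar photon, per the rule described
    above. The 700 nm cutoff is a placeholder, not the experiment's value."""
    return 0 if wavelength_nm < threshold_nm else 1

setting = quasar_bit(612.5)  # e.g., use the bit to pick a delayed choice
```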

For now, the dragon’s body, which for a brief few weeks had come into focus, has gone back to being smoky and indistinct.

Anil Ananthaswamy is a journalist and author. His latest book is “Through Two Doors at Once.”

Decades-Old Graph Problem Yields to Amateur Mathematician Posted February 4th 2021

By making the first progress on the “chromatic number of the plane” problem in over 60 years, an anti-aging pundit has achieved mathematical immortality.

Quanta Magazine

  • Evelyn Lamb


This 826-vertex graph requires at least five colors to ensure that no two connected vertices are the same shade. Credit: Olena Shmahalo/Quanta Magazine; Source: Marijn Heule.

In 1950 Edward Nelson, then a student at the University of Chicago, asked the kind of deceptively simple question that can give mathematicians fits for decades. Imagine, he said, a graph — a collection of points connected by lines. Ensure that all of the lines are exactly the same length, and that everything lies on the plane. Now color all the points, ensuring that no two connected points have the same color. Nelson asked: What is the smallest number of colors that you’d need to color any such graph, even one formed by linking an infinite number of vertices?

The problem, now known as the Hadwiger-Nelson problem or the problem of finding the chromatic number of the plane, has piqued the interest of many mathematicians, including the famously prolific Paul Erdős. Researchers quickly narrowed the possibilities down, finding that the infinite graph can be colored by no fewer than four and no more than seven colors. Other researchers went on to prove a few partial results in the decades that followed, but no one was able to change these bounds.

Then in April 2018, Aubrey de Grey, a biologist known for his claims that people alive today will live to the age of 1,000, posted a paper to the scientific preprint site arxiv.org with the title “The Chromatic Number of the Plane Is at Least 5.” In it, he describes the construction of a unit-distance graph that can’t be colored with only four colors. The finding represents the first major advance in solving the problem since shortly after it was introduced. “I got extraordinarily lucky,” de Grey said. “It’s not every day that somebody comes up with the solution to a 60-year-old problem.”

De Grey appears to be an unlikely mathematical trailblazer. He is the co-founder and chief science officer of an organization that aims to develop technologies for “reversing the negative effects of aging.” He found his way to the chromatic number of the plane problem through a board game. Decades ago, de Grey was a competitive Othello player, and he fell in with some mathematicians who were also enthusiasts of the game. They introduced him to graph theory, and he comes back to it now and then. “Occasionally, when I need a rest from my real job, I’ll think about math,” he said. Over the Christmas break at the end of 2017, he had a chance to do that.

It is unusual, but not unheard of, for an amateur mathematician to make significant progress on a long-standing open problem. In the 1970s, Marjorie Rice, a homemaker with no mathematical background, ran across a Scientific American column about pentagons that tile the plane. She eventually added four new pentagons to the list. Gil Kalai, a mathematician at the Hebrew University of Jerusalem, said it is gratifying to see a nonprofessional mathematician make a major breakthrough. “It really adds to the many facets of the mathematical experience,” he said.

Perhaps the most famous graph coloring question is the four-color theorem. It states that, assuming every country is one continuous lump, any map can be colored using only four colors so that no two adjacent countries have the same color. The exact sizes and shapes of the countries don’t matter, so mathematicians can translate the problem into the world of graph theory by representing every country as a vertex and connecting two vertices with an edge if the corresponding countries share a border.

Credit: Lucy Reading-Ikkanda / Quanta Magazine.

The Hadwiger-Nelson problem is a bit different. Instead of considering a finite number of vertices, as there would be on a map, it considers infinitely many vertices, one for each point in the plane. Two points are connected by an edge if they are exactly one unit apart. To find a lower bound for the chromatic number, it suffices to create a graph with a finite number of vertices that requires a particular number of colors. That’s what de Grey did.

De Grey based his graph on a gadget called the Moser spindle, named after mathematical brothers Leo and William Moser. It is a configuration of just seven points and 11 edges that has a chromatic number of four. Through a delicate process, and with minimal computer assistance, de Grey fused copies of the Moser spindle and another small assembly of points into a 20,425-vertex monstrosity that could not be colored using four colors. He was later able to shrink the graph to 1,581 vertices and do a computer check to verify that it was not four-colorable.

De Grey’s 1,581-vertex graph. Credit: Olena Shmahalo / Quanta Magazine; Source: Aubrey de Grey.
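The Moser spindle is small enough that its four-color requirement can be verified by exhaustive search. The sketch below encodes it as an abstract edge list (two rhombi of equilateral triangles sharing vertex 0, with the far tips 3 and 6 joined) and brute-forces all colorings. The unit-distance embedding in the plane is what matters for the Hadwiger-Nelson problem, but the coloring check itself is purely combinatorial.

```python
from itertools import product

# The Moser spindle as an abstract graph: 7 vertices, 11 edges.
# Vertices 1,2,3 and 4,5,6 form two rhombi hanging off the shared
# vertex 0; the extra edge (3, 6) joins the far tips.
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),
         (3, 6)]

def colorable(n_colors, n_vertices=7):
    """Try every coloring; succeed if some coloring has no clashing edge."""
    return any(
        all(coloring[u] != coloring[v] for u, v in EDGES)
        for coloring in product(range(n_colors), repeat=n_vertices)
    )

print(colorable(3))  # False: three colors never suffice
print(colorable(4))  # True: so the spindle's chromatic number is four
```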

The discovery of any graph that requires five colors was a major accomplishment, but mathematicians wanted to see if they could find a smaller graph that would do the same. Perhaps finding a smaller five-color graph — or the smallest possible five-color graph — would give researchers further insight into the Hadwiger-Nelson problem, allowing them to prove that exactly five shades (or six, or seven) are enough to color a graph made from all the points of the plane.

De Grey pitched the problem of finding the minimal five-color graph to Terence Tao, a mathematician at the University of California, Los Angeles, as a potential Polymath problem. Polymath began in 2009, when Timothy Gowers, a mathematician at the University of Cambridge, wanted to find a way to facilitate massive online collaborations in mathematics. Work on Polymath problems is done publicly, and anyone can contribute. Recently, de Grey was involved with a Polymath collaboration that led to significant progress on the twin prime problem.

Tao says not every math problem is a good fit for Polymath, but de Grey’s has a few things going for it. The problem is easy to understand and start working on, and there is a clear measure of success: lowering the number of vertices in a non-four-colorable graph. Soon enough, Dustin Mixon, a mathematician at Ohio State University, and his collaborator Boris Alexeev found a graph with 1,577 vertices. Days later, Marijn Heule, a computer scientist at the University of Texas, Austin, found one with just 874 vertices, and then lowered this number to 826 vertices.

Such work has sparked hope that the six-decade-old Hadwiger-Nelson problem is worth another look. “For a problem like this, the final solution might be some incredibly deep mathematics,” said Gordon Royle, a mathematician at the University of Western Australia. “Or it could just be somebody’s ingenuity finding a graph that requires many colors.”

Evelyn Lamb is a science and mathematics writer as well as a mathematician at the University of Utah.

A giant black hole keeps evading detection and scientists can’t explain it Posted January 13th 2021

By Mike Wall

Scientists are stumped by this black hole mystery.

This composite image of the galaxy cluster Abell 2261 contains optical data from NASA’s Hubble Space Telescope and Japan’s Subaru Telescope showing galaxies in the cluster and in the background, and data from NASA’s Chandra X-ray Observatory showing hot gas (colored pink) pervading the cluster. The middle of the image shows the large elliptical galaxy in the center of the cluster. (Image: © X-ray: NASA/CXC/Univ of Michigan/K. Gültekin; Optical: NASA/STScI/NAOJ/Subaru; Infrared: NSF/NOAO/KPNO)

An enormous black hole keeps slipping through astronomers’ nets.

Supermassive black holes are thought to lurk at the hearts of most, if not all, galaxies. Our own Milky Way has one as massive as 4 million suns, for example, and M87’s — the only black hole ever imaged directly — tips the scales at a whopping 6.5 billion solar masses.

The big galaxy at the core of the cluster Abell 2261, which lies about 2.7 billion light-years from Earth, should have an even larger central black hole — a light-gobbling monster that weighs as much as 3 billion to 100 billion suns, astronomers estimate from the galaxy’s mass. But the exotic object has evaded detection so far.


For instance, researchers previously looked for X-rays streaming from the galaxy’s center, using data gathered by NASA’s Chandra X-ray Observatory in 1999 and 2004. X-rays are a potential black-hole signature: As material falls into a black hole’s maw, it accelerates and heats up tremendously, emitting lots of high-energy X-ray light. But that hunt turned up nothing.

Now, a new study has conducted an even deeper search for X-rays in the same galaxy, using Chandra observations from 2018. And this new effort didn’t just look in the galaxy’s center; it also considered the possibility that the black hole was knocked toward the hinterlands after a monster galactic merger.

When black holes and other massive objects collide, they throw off ripples in space-time known as gravitational waves. If the emitted waves aren’t symmetrical in all directions, they could end up pushing the merged supermassive black hole away from the center of the newly enlarged galaxy, scientists say.

Such “recoiling” black holes are purely hypothetical creatures; nobody has definitively spotted one to date. Indeed, “it is not known whether supermassive black holes even get close enough to each other to produce gravitational waves and merge; so far, astronomers have only verified the mergers of much smaller black holes,” NASA officials wrote in a statement about the new study.

“The detection of recoiling supermassive black holes would embolden scientists using and developing observatories to look for gravitational waves from merging supermassive black holes,” they added. 

Abell 2261’s central galaxy is a good place to hunt for such a unicorn, researchers said, for it bears several possible signs of a dramatic merger. For example, observations by the Hubble Space Telescope and ground-based Subaru Telescope show that its core, the region of highest star density, is much larger than expected for a galaxy of its size. And the densest stellar patch is about 2,000 light-years away from the galaxy’s center — “strikingly distant,” NASA officials wrote.

In the new study, a team led by Kayhan Gultekin from the University of Michigan found that the densest concentrations of hot gas were not in the galaxy’s central regions. But the Chandra data didn’t reveal any significant X-ray sources, either in the galactic core or in big clumps of stars farther afield. So the mystery of the missing supermassive black hole persists.

That mystery could be solved by Hubble’s successor — NASA’s big, powerful James Webb Space Telescope, which is scheduled to launch in October 2021. 

If James Webb doesn’t spot a black hole in the galaxy’s heart or in one of its bigger stellar clumps, “then the best explanation is that the black hole has recoiled well out of the center of the galaxy,” NASA officials wrote.

The new study has been accepted for publication in a journal of the American Astronomical Society. You can read it for free at the online preprint site arXiv.org.

Mike Wall is the author of “Out There” (Grand Central Publishing, 2018; illustrated by Karl Tate), a book about the search for alien life.


Quantum Leaps, Long Assumed to Be Instantaneous, Take Time

An experiment caught a quantum system in the middle of a jump — something the originators of quantum mechanics assumed was impossible. Posted January 12th 2021

Quanta Magazine

  • Philip Ball


A quantum leap is a rapidly gradual process. Credit: Quanta Magazine; source: qoncha.

When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

A 2019 experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published in Nature in June 2019. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”

But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.

All Too Random

The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are “quantized”: Only certain energies are available to them, while all those in between are forbidden. He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow.
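Bohr's rule can be made concrete with the textbook Rydberg formula for hydrogen, which turns a pair of allowed levels into the wavelength of the emitted photon. This is standard physics included purely as illustration, not anything specific to the experiment in this story.

```python
# Wavelengths of hydrogen's visible (Balmer) emission lines from the
# Rydberg formula: 1/lambda = R_H * (1/n_lower^2 - 1/n_upper^2).

R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def emitted_wavelength_nm(n_upper, n_lower=2):
    """Photon wavelength for an electron dropping between two levels."""
    inverse_wavelength = R_H * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inverse_wavelength

for n in (3, 4, 5):
    print(f"{n} -> 2: {emitted_wavelength_nm(n):.0f} nm")
# ~656 nm (red), ~486 nm (blue-green), ~434 nm (violet)
```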

Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg’s quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. “The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation,” historian of science Mara Beller has written.

Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg’s math of discrete quantum states and instant jumps between them. Schrödinger’s theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don’t switch suddenly, in zero time, Schrödinger thought — discontinuous “quantum jumps” were just a figment of the mind. In a 1952 paper called “Are there quantum jumps?,” Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.”

The argument wasn’t just about Schrödinger’s discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore “not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general.” In other words, there’s a lot riding on the reality (or not) of quantum jumps.

Seeing Without Looking

To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported them happening in individual atoms suspended in space by electromagnetic fields. The atoms flipped at random moments between a “bright” state, in which they could emit a photon of light, and a “dark” state that did not emit, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported jumps that correspond to what they called “the birth, life and death of individual photons.”

In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team’s setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it. The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail.

The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called “artificial atoms” because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms.

Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn’t monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior  — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they’d “collapse” the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger’s cat, which is allegedly placed in a coherent quantum “superposition” of live and dead states but becomes only one or the other when observed.

To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second “bright” state, so named because it’s the one that can be seen. The state to and from which the researchers are actually looking for quantum jumps is, meanwhile, the “dark” state — because it remains hidden from direct view.

The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter’s “click.”

The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it’s necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it.

The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. “Absence of an event can bring as much information as its presence,” said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the “curious incident” in which a dog did not do anything in the night. Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it “Baskerville’s Hound meets Schrödinger’s Cat.”

To Catch a Jump

The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted approximately every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that’s the only thing that can prevent flipping back and forth between the ground and bright states.

So here in these switches from “click” to “no-click” states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new.

Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. “As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur,” said Devoret.
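The warning rule lends itself to a toy simulation. In the sketch below, the rates and threshold are invented for illustration (the Yale team's actual timescales are the ones quoted above): clicks arrive as a random stream, and any silence much longer than the typical gap is flagged. With genuinely random gaps, occasional false alarms are expected, and the scheme tolerates them.

```python
import random

# Toy model of the no-click warning: exponential waiting times between
# bright-state clicks; a gap far longer than typical flags a likely jump.
# All numbers here are illustrative, not the experiment's parameters.

random.seed(1)
MEAN_GAP_US = 3.0       # assumed typical microseconds between clicks
WARNING_FACTOR = 7      # assumed multiple of the mean that triggers a flag

gaps = [random.expovariate(1 / MEAN_GAP_US) for _ in range(200)]
gaps.append(45.0)       # one long silence standing in for a real jump

for i, gap in enumerate(gaps):
    if gap > WARNING_FACTOR * MEAN_GAP_US:
        print(f"gap {i}: {gap:.1f} us of silence -> jump likely underway")
```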

That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach.

With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how?

The team found that jumps are in fact gradual. That’s because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It’s a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it’s a binary choice — but as the evening wears on and you get tired, the question “Are you staying or leaving?” becomes increasingly likely to get the answer “I’m leaving.”

The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That’s pretty fast, but it’s certainly not instantaneous.

What’s more, this electronic system is so fast that the researchers could “catch” the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all.

Flash of Insight

The experiment shows that quantum jumps “are indeed not instantaneous if we look closely enough,” said Oliver, “but are coherent processes”: real physical events that unfold over time.

The gradualness of the “jump” is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. “It is reassuring that the theory matches perfectly with what is seen,” said David DiVincenzo, an expert in quantum information at Aachen University in Germany, “but it’s a subtle theory, and we are far from having gotten our heads completely around it.”

The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. “To the best of our knowledge, this precursory signal [to a quantum jump] has not been proposed or measured before,” he said.

Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, “in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference,” he said. But if you can detect right at the start if the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as “quite far down the line.” To achieve the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

The real value of the result is not, though, in any practical benefits; it’s a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.

Philip Ball is a science writer and author based in London who contributes frequently to Nature, New Scientist, Prospect, Nautilus and The Atlantic, among other publications.

One Hundred Years Ago, Einstein’s Theory of General Relativity Baffled the Press and the Public

Few people claimed to fully understand it, but the esoteric theory still managed to spark the public’s imagination. Posted January 10th 2021

Smithsonian Magazine

  • Dan Falk


After two eclipse expeditions confirmed Einstein’s theory of general relativity, the scientist became an international celebrity. Pictured above in his home, circa 1925. Photo from General Photographic Agency / Getty Images.

When the year 1919 began, Albert Einstein was virtually unknown beyond the world of professional physicists. By year’s end, however, he was a household name around the globe. November 1919 was the month that made Einstein into “Einstein,” the beginning of the former patent clerk’s transformation into an international celebrity.

On November 6, scientists at a joint meeting of the Royal Society of London and the Royal Astronomical Society announced that measurements taken during a total solar eclipse earlier that year supported Einstein’s bold new theory of gravity, known as general relativity. Newspapers enthusiastically picked up the story. “Revolution in Science,” blared the Times of London; “Newtonian Ideas Overthrown.” A few days later, the New York Times weighed in with a six-tiered headline—rare indeed for a science story. “Lights All Askew in the Heavens,” trumpeted the main headline. A bit further down: “Einstein’s Theory Triumphs” and “Stars Not Where They Seemed, or Were Calculated to Be, But Nobody Need Worry.”

The spotlight would remain on Einstein and his seemingly impenetrable theory for the rest of his life. As he remarked to a friend in 1920: “At present every coachman and every waiter argues about whether or not the relativity theory is correct.” In Berlin, members of the public crowded into the classroom where Einstein was teaching, to the dismay of tuition-paying students. And then he conquered the United States. In 1921, when the steamship Rotterdam arrived in Hoboken, New Jersey, with Einstein on board, it was met by some 5,000 cheering New Yorkers. Reporters in small boats pulled alongside the ship even before it had docked. An even more over-the-top episode played out a decade later, when Einstein arrived in San Diego, en route to the California Institute of Technology where he had been offered a temporary position. Einstein was met at the pier not only by the usual throng of reporters, but by rows of cheering students chanting the scientist’s name.

The intense public reaction to Einstein has long intrigued historians. Movie stars have always attracted adulation, of course, and 40 years later the world would find itself immersed in Beatlemania—but a physicist? Nothing like it had ever been seen before, and—with the exception of Stephen Hawking, who experienced a milder form of celebrity—it hasn’t been seen since, either.

Over the years, a standard, if incomplete, explanation emerged for why the world went mad over a physicist and his work: In the wake of a horrific global war—a conflict that drove the downfall of empires and left millions dead—people were desperate for something uplifting, something that rose above nationalism and politics. Einstein, born in Germany, was a Swiss citizen living in Berlin, Jewish as well as a pacifist, and a theorist whose work had been confirmed by British astronomers. And it wasn’t just any theory, but one which moved, or seemed to move, the stars. After years of trench warfare and the chaos of revolution, Einstein’s theory arrived like a bolt of lightning, jolting the world back to life.

Mythological as this story sounds, it contains a grain of truth, says Diana Kormos-Buchwald, a historian of science at Caltech and director and general editor of the Einstein Papers Project. In the immediate aftermath of the war, the idea of a German scientist—a German anything—receiving acclaim from the British was astonishing.

“German scientists were in limbo,” Kormos-Buchwald says. “They weren’t invited to international conferences; they weren’t allowed to publish in international journals. And it’s remarkable how Einstein steps in to fix this problem. He uses his fame to repair contact between scientists from former enemy countries.”

At that time, Kormos-Buchwald adds, the idea of a famous scientist was unusual. Marie Curie was one of the few widely known names. (She already had two Nobel Prizes by 1911; Einstein wouldn’t receive his until 1922, when he was retroactively awarded the 1921 prize.) However, Britain also had something of a celebrity-scientist in the form of Sir Arthur Eddington, the astronomer who organized the eclipse expeditions to test general relativity. Eddington was a Quaker and, like Einstein, had been opposed to the war. Even more crucially, he was one of the few people in England who understood Einstein’s theory, and he recognized the importance of putting it to the test.

“Eddington was the great popularizer of science in Great Britain. He was the Carl Sagan of his time,” says Marcia Bartusiak, science author and professor in MIT’s graduate Science Writing program. “He played a key role in getting the media’s attention focused on Einstein.”

It also helped Einstein’s fame that his new theory was presented as a kind of cage match between himself and Isaac Newton, whose portrait hung in the very room at the Royal Society where the triumph of Einstein’s theory was announced.

“Everyone knows the trope of the apple supposedly falling on Newton’s head,” Bartusiak says. “And here was a German scientist who was said to be overturning Newton, and making a prediction that was actually tested—that was an astounding moment.”

Much was made of the supposed incomprehensibility of the new theory. In the New York Times story of November 10, 1919—the “Lights All Askew” edition—the reporter paraphrases J.J. Thomson, president of the Royal Society, as stating that the details of Einstein’s theory “are purely mathematical and can only be expressed in strictly scientific terms” and that it was “useless to endeavor to detail them for the man in the street.” The same article quotes an astronomer, W.J.S. Lockyer, as saying that the new theory’s equations, “while very important,” do not “affect anything on this earth. They do not personally concern ordinary human beings; only astronomers are affected.” (If Lockyer could have time travelled to the present day, he would discover a world in which millions of ordinary people routinely navigate with the help of GPS satellites, which depend directly on both special and general relativity.)
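The scale of Lockyer's miss is easy to quantify with the usual back-of-envelope figures (standard textbook values, not anything from the 1919 coverage): GPS satellite clocks run fast by roughly 45 microseconds a day because of weaker gravity, and slow by roughly 7 because of orbital speed, a net drift that would corrupt positions by kilometers per day if left uncorrected.

```python
# Back-of-envelope relativistic clock drift for a GPS satellite, using
# standard constants; purely illustrative of the point above.

G_M = 3.986004e14       # Earth's gravitational parameter GM, m^3/s^2
C = 2.99792458e8        # speed of light, m/s
R_EARTH = 6.371e6       # mean Earth radius, m
R_ORBIT = 2.6571e7      # GPS orbit radius (~20,200 km altitude), m
DAY = 86400.0           # seconds per day

grav = G_M / C**2 * (1 / R_EARTH - 1 / R_ORBIT)  # general-relativistic gain
speed = (G_M / R_ORBIT) / (2 * C**2)             # special-relativistic loss

print(f"gravity:  +{grav * DAY * 1e6:.1f} us/day")            # ~ +45.7
print(f"velocity: -{speed * DAY * 1e6:.1f} us/day")           # ~ -7.2
print(f"net:      +{(grav - speed) * DAY * 1e6:.1f} us/day")  # ~ +38.5
```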

The idea that a handful of clever scientists might understand Einstein’s theory, but that such comprehension was off limits to mere mortals, did not sit well with everyone—including the New York Times’ own staff. The day after the “Lights All Askew” article ran, an editorial asked what “common folk” ought to make of Einstein’s theory, a set of ideas that “cannot be put in language comprehensible to them.” They conclude with a mix of frustration and sarcasm: “If we gave it up, no harm would be done, for we are used to that, but to have the giving up done for us is—well, just a little irritating.”

Things were not going any smoother in London, where the editors of the Times confessed their own ignorance but also placed some of the blame on the scientists themselves. “We cannot profess to follow the details and implications of the new theory with complete certainty,” they wrote on November 28, “but we are consoled by the reflection that the protagonists of the debate, including even Dr. Einstein himself, find no little difficulty in making their meaning clear.”

Readers of that day’s Times were treated to Einstein’s own explanation, translated from German. It ran under the headline, “Einstein on his Theory.” The most comprehensible paragraph was the final one, in which Einstein jokes about his own “relative” identity: “Today in Germany I am called a German man of science, and in England I am represented as a Swiss Jew. If I come to be regarded as a bête noire, the descriptions will be reversed, and I shall become a Swiss Jew for the Germans, and a German man of science for the English.”

Not to be outdone, the New York Times sent a correspondent to pay a visit to Einstein himself, in Berlin, finding him “on the top floor of a fashionable apartment house.” Again they try—both the reporter and Einstein—to illuminate the theory. Asked why it’s called “relativity,” Einstein explains how Galileo and Newton envisioned the workings of the universe and how a new vision is required, one in which time and space are seen as relative. But the best part was once again the ending, in which the reporter lays down a now-clichéd anecdote which would have been fresh in 1919: “Just then an old grandfather’s clock in the library chimed the mid-day hour, reminding Dr. Einstein of some appointment in another part of Berlin, and old-fashioned time and space enforced their wonted absolute tyranny over him who had spoken so contemptuously of their existence, thus terminating the interview.”

Efforts to “explain Einstein” continued. Eddington wrote about relativity in the Illustrated London News and, eventually, in popular books. So too did luminaries like Max Planck, Wolfgang Pauli and Bertrand Russell. Einstein wrote a book too, and it remains in print to this day. But in the popular imagination, relativity remained deeply mysterious. A decade after the first flurry of media interest, an editorial in the New York Times lamented: “Countless textbooks on relativity have made a brave try at explaining and have succeeded at most in conveying a vague sense of analogy or metaphor, dimly perceptible while one follows the argument painfully word by word and lost when one lifts his mind from the text.”

Eventually, the alleged incomprehensibility of Einstein’s theory became a selling point, a feature rather than a bug. Crowds continued to follow Einstein, not, presumably, to gain an understanding of curved space-time, but rather to be in the presence of someone who apparently did understand such lofty matters. This reverence explains, perhaps, why so many people showed up to hear Einstein deliver a series of lectures in Princeton in 1921. The classroom was filled to overflowing—at least at the beginning, Kormos-Buchwald says. “The first day there were 400 people there, including ladies with fur collars in the front row. And on the second day there were 200, and on the third day there were 50, and on the fourth day the room was almost empty.”

Original caption: From the report of Sir Arthur Eddington on the expedition to verify Albert Einstein’s prediction of the bending of light around the sun. Photo from Wikimedia Commons / Public Domain.

If the average citizen couldn’t understand what Einstein was saying, why were so many people keen on hearing him say it? Bartusiak suggests that Einstein can be seen as the modern equivalent of the ancient shaman who would have mesmerized our Paleolithic ancestors. The shaman “supposedly had an inside track on the purpose and nature of the universe,” she says. “Through the ages, there has been this fascination with people that you think have this secret knowledge of how the world works. And Einstein was the ultimate symbol of that.”

The physicist and science historian Abraham Pais has described Einstein similarly. To many people, Einstein appeared as “a new Moses come down from the mountain to bring the law and a new Joshua controlling the motion of the heavenly bodies.” He was the “divine man” of the 20th century.

Einstein’s appearance and personality helped. Here was a jovial, mild-mannered man with deep-set eyes, who spoke just a little English. (He did not yet have the wild hair of his later years, though that would come soon enough.) With his violin case and sandals—he famously shunned socks—Einstein was just eccentric enough to delight American journalists. (He would later joke that his profession was “photographer’s model.”) According to Walter Isaacson’s 2007 biography, Einstein: His Life and Universe, the reporters who caught up with the scientist “were thrilled that the newly discovered genius was not a drab or reserved academic” but rather “a charming 40-year-old, just passing from handsome to distinctive, with a wild burst of hair, rumpled informality, twinkling eyes, and a willingness to dispense wisdom in bite-sized quips and quotes.”

The timing of Einstein’s new theory helped heighten his fame as well. Newspapers were flourishing in the early 20th century, and the advent of black-and-white newsreels had just begun to make it possible to be an international celebrity. As Thomas Levenson notes in his 2004 book Einstein in Berlin, Einstein knew how to play to the cameras. “Even better, and usefully in the silent film era, he was not expected to be intelligible. … He was the first scientist (and in many ways the last as well) to achieve truly iconic status, at least in part because for the first time the means existed to create such idols.”

Einstein, like many celebrities, had a love-hate relationship with fame, which he once described as “dazzling misery.” The constant intrusions into his private life were an annoyance, but he was happy to use his fame to draw attention to a variety of causes that he supported, including Zionism, pacifism, nuclear disarmament and racial equality.

Not everyone loved Einstein, of course. Various groups had their own distinctive reasons for objecting to Einstein and his work, John Stachel, the founding editor of the Einstein Papers Project and a professor at Boston University, told me in a 2004 interview. Some American philosophers rejected relativity for being too abstract and metaphysical, while some Russian thinkers felt it was too idealistic. Some simply hated Einstein because he was a Jew.

“Many of those who opposed Einstein on philosophical grounds were also anti-Semites, and later on, adherents of what the Nazis called Deutsche Physik—‘German physics’—which was ‘good’ Aryan physics, as opposed to this Jüdische Spitzfindigkeit—‘Jewish subtlety,’” Stachel says. “So one gets complicated mixtures, but the myth that everybody loved Einstein is certainly not true. He was hated as a Jew, as a pacifist, as a socialist [and] as a relativist, at least.” As the 1920s wore on, with anti-Semitism on the rise, death threats against Einstein became routine. Fortunately he was on a working holiday in the United States when Hitler came to power. He would never return to the country where he had done his greatest work.

For the rest of his life, Einstein remained mystified by the relentless attention paid to him. As he wrote in 1942, “I never understood why the theory of relativity with its concepts and problems so far removed from practical life should for so long have met with a lively, or indeed passionate, resonance among broad circles of the public. … What could have produced this great and persistent psychological effect? I never yet heard a truly convincing answer to this question.”

Today, a full century after his ascent to superstardom, the Einstein phenomenon continues to resist a complete explanation. The theoretical physicist burst onto the world stage in 1919, expounding a theory that was, as the newspapers put it, “dimly perceptible.” Yet in spite of the theory’s opacity—or, very likely, because of it—Einstein was hoisted onto the lofty pedestal where he remains to this day. The public may not have understood the equations, but those equations were said to reveal a new truth about the universe, and that, it seems, was enough.

Dan Falk is a science journalist based in Toronto. His books include “The Science of Shakespeare” and “In Search of Time.”

A Physicist’s Physicist Ponders the Nature of Reality

Edward Witten reflects on the meaning of dualities in physics and math, emergent space-time, and the pursuit of a complete description of nature. Posted January 8th 2021

Quanta Magazine

  • Natalie Wolchover


Edward Witten in his office at the Institute for Advanced Study in Princeton, New Jersey. All photos by Jean Sweep for Quanta Magazine.

Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the leading candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

During a visit in fall 2017, I spotted Witten on the Institute’s central lawn and requested an interview; in his quick, alto voice, he said he couldn’t promise to be able to answer my questions but would try. Later, when I passed him on the stone paths, he often didn’t seem to see me.

Physics luminaries since Albert Einstein, who lived out his days in the same intellectual haven, have sought to unify gravity with the other forces of nature by finding a more fundamental quantum theory to replace Einstein’s approximate picture of gravity as curves in the geometry of space-time. M-theory, which Witten proposed in 1995, could conceivably offer this deeper description, but only some aspects of the theory are known. M-theory incorporates within a single mathematical structure all five versions of string theory, which renders the elements of nature as minuscule vibrating strings. These five string theories connect to each other through “dualities,” or mathematical equivalences. Over the past 30 years, Witten and others have learned that the string theories are also mathematically dual to quantum field theories — descriptions of particles moving through electromagnetic and other fields that serve as the language of the reigning “Standard Model” of particle physics. While he’s best known as a string theorist, Witten has discovered many new quantum field theories and explored how all these different descriptions are connected. His physical insights have led time and again to deep mathematical discoveries.

Researchers pore over his work and hope he’ll take an interest in theirs. But for all his scholarly influence, Witten, who is 66, does not often broadcast his views on the implications of modern theoretical discoveries. Even his close colleagues eagerly suggested questions they wanted me to ask him.

When I arrived at his office at the appointed hour on a summery Thursday last month, Witten wasn’t there. His door was ajar. Papers covered his coffee table and desk — not stacks, but floods: text oriented every which way, some pages close to spilling onto the floor. (Research papers get lost in the maelstrom as he finishes with them, he later explained, and every so often he throws the heaps away.) Two girls smiled out from a framed photo on a shelf; children’s artwork decorated the walls, one celebrating Grandparents’ Day. When Witten arrived minutes later, we spoke for an hour and a half about the meaning of dualities in physics and math, the current prospects of M-theory, what he’s reading, what he’s looking for, and the nature of reality. The interview has been condensed and edited for clarity.


Physicists are talking more than ever lately about dualities, but you’ve been studying them for decades. Why does the subject interest you?

People keep finding new facets of dualities. Dualities are interesting because they frequently answer questions that are otherwise out of reach. For example, you might have spent years pondering a quantum theory and you understand what happens when the quantum effects are small, but textbooks don’t tell you what you do if the quantum effects are big; you’re generally in trouble if you want to know that. Frequently dualities answer such questions. They give you another description, and the questions you can answer in one description are different than the questions you can answer in a different description.

What are some of these newfound facets of dualities?

It’s open-ended because there are so many different kinds of dualities. There are dualities between a gauge theory [a theory, such as a quantum field theory, that respects certain symmetries] and another gauge theory, or between a string theory for weak coupling [describing strings that move almost independently from one another] and a string theory for strong coupling. Then there’s AdS/CFT duality, between a gauge theory and a gravitational description. That duality was discovered 20 years ago, and it’s amazing to what extent it’s still fruitful. And that’s largely because around 10 years ago, new ideas were introduced that rejuvenated it. People had new insights about entropy in quantum field theory — the whole story about “it from qubit.”
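[To make the weak/strong language concrete, an editorial gloss rather than Witten’s own wording: in a strong–weak duality such as S-duality, the coupling constant g of one description maps to the inverse coupling of its dual,

$$ g \;\longleftrightarrow\; \frac{1}{g}, $$

so a perturbative calculation at small g on one side captures strong-coupling physics, large g, on the other.]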

The AdS/CFT duality connects a theory of gravity in a space-time region called anti-de Sitter space (which curves differently than our universe) to an equivalent quantum field theory describing that region’s gravity-free boundary. Everything there is to know about AdS space — often called the “bulk” since it’s the higher-dimensional region — is encoded, like in a hologram, in quantum interactions between particles on the lower-dimensional boundary. Thus, AdS/CFT gives physicists a “holographic” understanding of the quantum nature of gravity.
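[Schematically, again as an editorial shorthand: the standard dictionary, due to Gubser, Klebanov, Polyakov and Witten, equates the bulk gravitational partition function, with a field φ approaching boundary value φ₀, to a generating functional of the boundary theory,

$$ Z_{\text{gravity}}\left[\phi \to \phi_0\right] \;=\; \left\langle \exp\!\int_{\partial \mathrm{AdS}} \phi_0\, \mathcal{O} \right\rangle_{\text{CFT}}, $$

where 𝒪 is the boundary operator dual to φ.]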

That’s the idea that space-time and everything in it emerges like a hologram out of information stored in the entangled quantum states of particles.

Yes. Then there are dualities in math, which can sometimes be interpreted physically as consequences of dualities between two quantum field theories. There are so many ways these things are interconnected that any simple statement I try to make on the fly, as soon as I’ve said it I realize it didn’t capture the whole reality. You have to imagine a web of different relationships, where the same physics has different descriptions, revealing different properties. In the simplest case, there are only two important descriptions, and that might be enough. If you ask me about a more complicated example, there might be many, many different ones.

I’m not certain what we should hope for. Traditionally, quantum field theory was constructed by starting with the classical picture [of a smooth field] and then quantizing it. Now we’ve learned that there are a lot of things that happen that that description doesn’t do justice to. And the same quantum theory can come from different classical theories. Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

There’s another curious fact that you might want to consider, which is that quantum field theory is very central to physics, and it’s actually also clearly very important for math. But it’s extremely difficult for mathematicians to study; the way physicists define it is very hard for mathematicians to follow with a rigorous theory. That’s extremely strange, that the world is based so much on a mathematical structure that’s so difficult.

What do you see as the relationship between math and physics?

I prefer not to give you a cosmic answer but to comment on where we are now. Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way. Physicists are able to come up with things that surprise the mathematicians. Because it’s hard to describe mathematically in the known formulation, the things you learn about quantum field theory you have to learn from physics.

I find it hard to believe there’s a new formulation that’s universal. I think it’s too much to hope for. I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

You can’t imagine it at all? 

No, I can’t. Traditionally it was thought that interacting quantum field theory couldn’t exist above four dimensions, and there was the interesting fact that that’s the dimension we live in. But one of the offshoots of the string dualities of the 1990s was that it was discovered that quantum field theories actually exist in five and six dimensions. And it’s amazing how much is known about their properties.

I’ve heard about the mysterious (2,0) theory, a quantum field theory describing particles in six dimensions, which is dual to M-theory describing strings and gravity in seven-dimensional AdS space. Does this (2,0) theory play an important role in the web of dualities?

Yes, that’s the pinnacle. In terms of conventional quantum field theory without gravity, there is nothing quite like it above six dimensions. From the (2,0) theory’s existence and main properties, you can deduce an incredible amount about what happens in lower dimensions. An awful lot of important dualities in four and fewer dimensions follow from this six-dimensional theory and its properties. However, whereas what we know about quantum field theory is normally from quantizing a classical field theory, there’s no reasonable classical starting point of the (2,0) theory. The (2,0) theory has properties [such as combinations of symmetries] that sound impossible when you first hear about them. So you can ask why dualities exist, but you can also ask why is there a 6-D theory with such and such properties? This seems to me a more fundamental restatement.

Dualities sometimes make it hard to maintain a sense of what’s real in the world, given that there are radically different ways you can describe a single system. How would you describe what’s real or fundamental?

What aspect of what’s real are you interested in? What does it mean that we exist? Or how do we fit into our mathematical descriptions?

The latter.

Well, one thing I’ll tell you is that in general, when you have dualities, things that are easy to see in one description can be hard to see in the other description. So you and I, for example, are fairly simple to describe in the usual approach to physics as developed by Newton and his successors. But if there’s a radically different dual description of the real world, maybe some things physicists worry about would be clearer, but the dual description might be one in which everyday life would be hard to describe.

What would you say about the prospect of an even more optimistic idea that there could be one single quantum gravity description that really does help you in every case in the real world?

Well, unfortunately, even if it’s correct I can’t guarantee it would help. Part of what makes it difficult to help is that the description we have now, even though it’s not complete, does explain an awful lot. And so it’s a little hard to say, even if you had a truly better description or a more complete description, whether it would help in practice.

Are you speaking of M-theory?

M-theory is the candidate for the better description.

You proposed M-theory 22 years ago. What are its prospects today?

Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.


What’s an example of something else we might need?

Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

Are you willing to speculate about how it would be different?

I really doubt I can say anything useful. I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. I tend to think that there isn’t a precise quantum description of space-time — except in the types of situations where we know that there is, such as in AdS space. I tend to think, otherwise, things are a little bit murkier than an exact quantum description. But I can’t say anything useful.

The other night I was reading an old essay by the 20th-century Princeton physicist John Wheeler. He was a visionary, certainly. If you take what he says literally, it’s hopelessly vague. And therefore, if I had read this essay when it came out 30 years ago, which I may have done, I would have rejected it as being so vague that you couldn’t work on it, even if he was on the right track.

I’m trying to learn about what people are trying to say with the phrase “it from qubit.” Wheeler talked about “it from bit,” but you have to remember that this essay was written probably before the term “qubit” was coined and certainly before it was in wide currency. Reading it, I really think he was talking about qubits, not bits, so “it from qubit” is actually just a modern translation.

Don’t expect me to be able to tell you anything useful about it — about whether he was right. When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.

Observing a quantum system irreversibly changes it, creating a distinction between past and future. So the observer issue seems possibly related to the question of time, which we also don’t understand. With the AdS/CFT duality, we’ve learned that new spatial dimensions can pop up like a hologram from quantum information on the boundary. Do you think time is also emergent — that it arises from a timeless complete description?

I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay. As you’ll read, he thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.

Do you consider Wheeler a hero?

I wouldn’t call him a hero, necessarily, no. Really I just became curious what he meant by “it from bit,” and what he was saying. He definitely had visionary ideas, but they were too far ahead of their time. I think I was more patient in reading a vague but inspirational essay than I might have been 20 years ago. He’s also got roughly 100 interesting-sounding references in that essay. If you decided to read them all, you’d have to spend weeks doing it. I might decide to look at a few of them.

Why do you have more patience for such things now?

I think when I was younger I always thought the next thing I did might be the best thing in my life. But at this point in life I’m less persuaded of that. If I waste a little time reading somebody’s essay, it doesn’t seem that bad.

Do you ever take your mind off physics and math?

My favorite pastime is tennis. I am a very average but enthusiastic tennis player.

In my career I’ve only been able to take small jumps. Relatively small jumps. What Wheeler was talking about was an enormous jump. And he does say at the beginning of the essay that he has no idea if this will take 10, 100 or 1,000 years.

And he was talking about explaining how physics arises from information.

Yes. The way he phrases it is broader: He wants to explain the meaning of existence. That was actually why I thought you were asking if I wanted to explain the meaning of existence.

I see. Does he have any hypotheses?

No. He only talks about things you shouldn’t do and things you should do in trying to arrive at a more fundamental description of physics.

Do you have any ideas about the meaning of existence?

No. [Laughs.]

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.

Vodka made out of thin air: toasting the planet’s good health Posted January 3rd 2021


Drink to that: carbon negative vodka

The Air Company, based in New York, makes vodka from two ingredients: carbon dioxide and water. Each bottle that’s produced takes carbon dioxide out of the air. It has been chosen as one of the finalists in the $20m NRG COSIA Carbon XPRIZE, which aims to incentivise innovation in the field of carbon capture, utilisation and storage.

The company’s chief technology officer, Stafford Sheehan, hit upon the idea while trying to create artificial photosynthesis as a chemistry PhD student at Yale. Photosynthesis, you may remember from chemistry at school, is the process by which plants use sunlight to convert CO2 and water into energy-rich sugars. For 2bn years, plants were equal to the task of balancing the carbon in the atmosphere – but now we are emitting it at a rate beyond what nature can restore with photosynthesis. Hence the interest in carbon capture. “Our aim is to take CO2 that would otherwise be emitted into the atmosphere and transform it into things that are better for the planet,” says Sheehan.
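For reference, the overall photosynthesis reaction from school chemistry (background, not part of the original article) is

$$ 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\ \text{light}\ }\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}. $$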

A nice cold martini is undoubtedly better for the planet than global warming. Unfortunately, however, it would require 11 quadrillion Air Vodka Martinis to make any kind of significant impact. But Sheehan hopes to make alcohol for a variety of different applications. “Ethanol, methanol and propanol are three of the most-produced chemicals in the world, all alcohols,” he says. “Plastics, resins, fragrances, cleaners, sanitisers, bio-jet fuel… almost all start from alcohol. If we can make the base alcohol for all of those from carbon that would otherwise be emitted, that would make a major impact.”

Air Company currently captures its CO2 from old-fashioned alcohol production: concentrated CO2 rising from a standard fuel-alcohol fermentation stack is transformed into vodka. That’s a fairly boutique product. However, power stations are much more plentiful sources. “You can burn natural gas, then capture the CO2 you’re emitting, and that feeds you the carbon dioxide,” says Sheehan. “That’s what we’d like to do and that’s where you can do it at scale.” Richard Godwin
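The article does not spell out the chemistry, but one textbook route from the two stated ingredients (an illustrative assumption, not a claim about Air Company’s proprietary catalyst) is to electrolyse water for hydrogen and then hydrogenate the captured CO2 to ethanol:

$$ 2\,\mathrm{CO_2} + 6\,\mathrm{H_2} \;\longrightarrow\; \mathrm{C_2H_5OH} + 3\,\mathrm{H_2O}. $$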

Nasa’s Mars rover and the ‘seven minutes of terror’ Posted December 31st 2020

By Jonathan Amos
BBC Science Correspondent. Published 23 December.

The US space agency (Nasa) has released an animation showing how its one-tonne Perseverance rover will land on Mars on 18 February.

The robot is being sent to a crater called Jezero where it will search for evidence of past life. But to undertake this science, it must first touch down softly.

The sequence of manoeuvres needed to land on Mars is often referred to as the “seven minutes of terror” – and with good reason.

So much has to go right in a frighteningly short space of time or the arriving mission will dig a very big and very expensive new hole in the Red Planet.

What’s more, it’s all autonomous.

With a distance on the day of 209 million km (130 million miles) between Earth and Mars, every moment and every movement you see in the animation has to be commanded by onboard computers.

It starts more than 100km above Mars where the Perseverance rover will encounter the first wisps of atmosphere.

Artwork: The “skycrane” lowers the rover on a series of nylon cords

At this point, the vehicle, in its protective capsule, is travelling at 20,000km/h (12,000mph).

In little more than 400 seconds, the descent system has to reduce this velocity to less than 1m/s at the surface.
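Those two figures imply a brutal average deceleration. A quick back-of-envelope check (my arithmetic, assuming constant deceleration over the quoted 400 seconds, which the real, drag-dominated profile is not):

    # Rough average deceleration during entry, descent and landing (EDL).
    # Assumes constant deceleration over the quoted ~400 s; this is only an
    # order-of-magnitude check, not the real drag-dominated profile.
    v_entry = 20_000 / 3.6   # 20,000 km/h in m/s (~5,556 m/s)
    v_touchdown = 1.0        # less than 1 m/s at the surface
    t = 400.0                # seconds, per the article

    a = (v_entry - v_touchdown) / t   # average deceleration in m/s^2
    print(f"average deceleration: {a:.1f} m/s^2 (~{a / 9.81:.1f} g)")
    # -> roughly 13.9 m/s^2, about 1.4 times Earth gravity

The peak loads during the heat-shield phase are considerably higher than this average.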

Most of the work is done by a heat shield.

As the capsule plunges deeper into the Martian air, it gets super-hot at more than 1,000C – but at the same time, the drag slows the fall dramatically.

By the time the supersonic parachute deploys from the backshell of the capsule, the velocity has already been reduced to 1,200km/h.

Perseverance will ride the 21.5m-wide parachute for just over a minute, further scrubbing that entry speed.

The most complex phases are still to come, however.

One tonne of high technology: seven instruments, 23 cameras, two microphones and a drill

At an altitude of 2km, and while moving at 100m/s, the Perseverance rover and its “skycrane” separate from the backshell and fall away.

Eight rockets then ignite on the cradle to bring the rover into a hovering position just above the surface. Nylon cords are used to lower the multi-billion-dollar wheeled vehicle to the ground.

But that’s still not quite it.

When Perseverance senses contact, it must immediately sever the cables or it will be dragged behind the crane as the cradle flies away to dispose of itself at a safe distance.

The sequence looks much the same as was used to put Nasa’s last rover, Curiosity, on the surface of Mars eight years ago. However, the navigation tools have been improved to put Perseverance down in an even more precisely defined landing zone.

Touchdown is expected in the late afternoon, local time, on Mars – just before 21:00 GMT on Earth.

It’s worth remembering that on the day of landing, the time it takes for a radio signal to reach Earth from Mars will be roughly 700 seconds.
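That 700-second figure follows directly from the Earth-Mars distance quoted above and the speed of light (a quick check, not from the article):

    # One-way light travel time from Mars at the quoted Earth-Mars distance.
    distance_km = 209e6      # 209 million km, per the article
    c_km_s = 299_792.458     # speed of light in km/s
    print(f"signal delay: {distance_km / c_km_s:.0f} s")   # -> ~697 s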

This means that when Nasa receives the message from Perseverance that it has engaged the top of the atmosphere, the mission will already have been dead or alive on the planet’s surface for several minutes.

The robot will be recording its descent on camera and with microphones. The media files will be sent back to Earth after landing – assuming Perseverance survives.


Read our guides to Perseverance (also known as the Mars 2020 mission) – where it’s going and what it will be doing.

The Perseverance rover will deploy a helicopter
Perseverance will target a crater that once held a lake

Why does COVID-19 kill some people and spare others? Study finds certain patients – particularly men – have ‘autoantibodies’ that cause immune system proteins to attack the body’s own cells and tissues Posted December 31st 2020

  • A new study found 10% of around 1,000 severely ill coronavirus patients have so-called autoantibodies
  • They disable immune system proteins that prevent the virus from multiplying itself and also attack the body’s cells and tissues
  • Researchers say the patients didn’t develop autoantibodies in response to the virus but had them before the pandemic began
  • Of the 101 patients with autoantibodies, 94% were male, which may suggest why men are more likely to die from COVID-19 than women

By Mary Kekatos Senior Health Reporter For Dailymail.com

Published: 18:27, 13 November 2020 | Updated: 20:53, 13 November 2020


Researchers believe they are one step closer to understanding why the novel coronavirus kills some people and spares others.

In a new study, they found that some critically ill patients have antibodies that deactivate certain immune system proteins.

These antibodies, which are known as autoantibodies, allow the virus to replicate and spread through the body, and also attack our cells and tissues.

What’s more, nearly all of the patients with these autoantibodies were men.

The international team, led by the Institut Imagine in Paris and Rockefeller University in New York City, says the findings suggest that doctors should screen patients for these autoantibodies so they can provide treatment to patients before they fall too ill.

A new study found some severely ill coronavirus patients have so-called autoantibodies, which allow the virus to replicate and spread throughout the body, and also attack the body’s cells and tissues. Pictured: medics transfer a patient on a stretcher from an ambulance outside of Emergency at Coral Gables Hospital in Florida, July 2020. Nearly 10% of severely ill patients had autoantibodies, compared with those with mild or asymptomatic cases and healthy controls.

Of the 101 patients with autoantibodies, 94% were male, which may suggest why men are more likely to die than women (above)

For the study, published in the journal Science, the team looked at nearly 1,000 patients with life-threatening cases of COVID-19.

These patients were compared with more than 600 patients who had mild or asymptomatic cases and around 1,200 healthy controls. 

Results showed nearly 10 percent of those with critical cases had autoantibodies that disable immune system proteins called interferons.

Interferons are signaling proteins released by infected cells, named for their capacity to ‘interfere’ with the virus’s ability to multiply itself. 

They are known as ‘call-to-arms’ immune cells because they control viral replication until more immune cells have time to arrive and attack the pathogen. 

Autoantibodies block interferons and attack the body’s cells, tissues and organs.

None of the 663 people with asymptomatic or mild cases of COVID-19 had autoantibodies and just four of the healthy controls did.

However, the team said these critically ill patients didn’t make autoantibodies in response to being infected. 

Rather, they always had them and the autoantibodies didn’t cause any problems until patients contracted the virus.  

‘Before COVID, their condition was silent,’ lead author Paul Bastard, a PhD candidate at Institut Imagine and a researcher at Rockefeller University, told Kaiser Health News.

‘Most of them hadn’t gotten sick before.’

The team also found that 94 percent of the 101 coronavirus patients who had autoantibodies were men.  

Researchers say it may explain why, around the world, men have accounted for about 60 percent of deaths from COVID-19.

‘You see significantly more men dying in their 30s, not just in their 80s,’ Dr Sabra Klein, a professor of molecular microbiology and immunology at the Johns Hopkins Bloomberg School of Public Health, told Kaiser Health News.

Bastard, the lead author, says screening coronavirus patients might help predict which ones will become severely ill and allow doctors to treat them sooner. 

‘This is one of the most important things we’ve learned about the immune system since the start of the pandemic,’ Dr Eric Topol, executive vice president for research at Scripps Research in San Diego, who was not involved in the study, told Kaiser Health News. 

‘This is a breakthrough finding.’

A Test for the Leading Big Bang Theory

Cosmologists have predicted the existence of an oscillating signal that could distinguish between cosmic inflation and alternative theories of the universe’s birth.

Quanta Magazine

  • Natalie Wolchover

The leading hypothesis about the universe’s birth — that a quantum speck of space became energized and inflated in a split second, creating a baby cosmos — solves many puzzles and fits all observations to date. Yet this “cosmic inflation” hypothesis lacks definitive proof. Telltale ripples that should have formed in the inflating spatial fabric, known as primordial gravitational waves, haven’t been detected in the geometry of the universe by the world’s most sensitive telescopes. Their absence has fueled underdog theories of cosmogenesis in recent years. And yet cosmic inflation is wriggly. In many variants of the idea, the sought-after ripples would simply be too weak to observe.

“The question is whether one can test the entire [inflation] scenario, not just specific models,” said Avi Loeb, an astrophysicist and cosmologist at Harvard University. “If there is no guillotine that can kill off some theories, then what’s the point?”

In a paper that appeared on the physics preprint site, arxiv.org, in February 2019, Loeb and two Harvard colleagues, Xingang Chen and Zhong-Zhi Xianyu, suggested such a guillotine. The researchers predicted an oscillatory pattern in the distribution of matter throughout the cosmos that, if detected, could distinguish between inflation and alternative scenarios — particularly the hypothesis that the Big Bang was actually a bounce preceded by a long period of contraction.

The paper has yet to be peer-reviewed, but Will Kinney, an inflationary cosmologist at the University at Buffalo and a visiting professor at Stockholm University, said “the analysis seems correct to me.” He called the proposal “a very elegant idea.”

“If the signal is real and observable, it would be very interesting,” Sean Carroll of the California Institute of Technology said in an email.

If it does exist, the signal would appear in density variations across the universe. Imagine taking a giant ice cream scoop to the sky and counting how many galaxies wind up inside. Do this many times all over the cosmos, and you’ll find that the number of scooped-up galaxies will vary above or below some average. Now increase the size of your scoop. When scooping larger volumes of universe, you might find that the number of captured galaxies now varies more extremely than before. As you use progressively larger scoops, according to Chen, Loeb and Xianyu’s calculations, the amplitude of matter density variations should oscillate between more and less extreme as you move up the scales. “What we showed,” Loeb explained, is that from the form of these oscillations, “you can tell if the universe was expanding or contracting when the density perturbations were produced” — reflecting an inflationary or bounce cosmology, respectively.
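A toy numerical illustration of this “ice cream scoop” statistic (my own one-dimensional sketch, not the authors’ calculation): scatter galaxies along a line whose mean density carries a single sinusoidal modulation, then watch the fractional variance of the counts change non-monotonically as the scoop widens.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy 1-D "sky": a Poisson galaxy field whose mean density carries one
    # sinusoidal modulation of wavelength 40, a stand-in for a primordial feature.
    L, n_bar, amp = 201_600.0, 2.0, 0.5
    k0 = 2 * np.pi / 40.0
    pos = rng.uniform(0.0, L, rng.poisson(n_bar * L))
    keep = rng.uniform(0.0, 1.0, pos.size) < (1.0 + amp * np.sin(k0 * pos)) / (1.0 + amp)
    pos = pos[keep]

    # The "scoop": count galaxies in windows of width W, then track how the
    # fractional count variance (shot noise subtracted) changes with W.
    for W in (10, 30, 60, 100, 140, 180):
        counts, _ = np.histogram(pos, bins=np.arange(0.0, L + 0.5 * W, W))
        frac_var = counts.var() / counts.mean() ** 2 - 1.0 / counts.mean()
        print(f"scoop width {W:4d}: fractional variance {frac_var:.5f}")

In this toy the variance rises and falls with W because the counting window resonates with the built-in wavelength; the signal Chen, Loeb and Xianyu predict is the three-dimensional analogue, with the spacing of the oscillations encoding expansion versus contraction.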

Regardless of which theory of cosmogenesis is correct, cosmologists believe that the density variations observed throughout the cosmos today were almost certainly seeded by random ripples in quantum fields that existed long ago.

Because of quantum uncertainty, any quantum field that filled the primordial universe would have fluctuated with ripples of all different wavelengths. Periodically, waves of a certain wavelength would have constructively interfered, forming peaks — or equivalently, concentrations of particles. These concentrations later grew into the matter density variations seen on different scales in the cosmos today.

But what caused the peaks at a particular wavelength to get frozen into the universe when they did? According to the new paper, the timing depended on whether the peaks formed while the universe was exponentially expanding, as in inflation models, or while it was slowly contracting, as in bounce models.

If the universe contracted in the lead-up to a bounce, ripples in the quantum fields would have been squeezed. At some point the observable universe would have contracted to a size smaller than ripples of a certain wavelength, like a violin whose resonant cavity is too small to produce the sounds of a cello. When the too-large ripples disappeared, whatever peaks, or concentrations of particles, existed at that scale at that moment would have been “frozen” into the universe. As the observable universe shrank further, ripples at progressively smaller and smaller scales would have vanished, freezing in as density variations. Ripples of some sizes might have been constructively interfering at the critical moment, producing peak density variations on that scale, whereas slightly shorter ripples that disappeared a moment later might have frozen out of phase. These are the oscillations between high and low density variations that Chen, Loeb and Xianyu argue should theoretically show up as you change the size of your galaxy ice cream scoop.

The authors argue that a qualitative difference between the forms of oscillations in the two scenarios will reveal which one occurred. In both cases, it was as if the quantum field put tick marks on a piece of tape as it rushed past — representing the expanding or contracting universe. If space were expanding exponentially, as in inflation, the tick marks imprinted on the universe by the field would have grown farther and farther apart. If the universe contracted, the tick marks should have become closer and closer together as a function of scale. Thus Chen, Loeb and Xianyu argue that the changing separation between the peaks in density variations as a function of scale should reveal the universe’s evolutionary history. “We can finally see whether the primordial universe was actually expanding or contracting, and whether it did it inflationarily fast or extremely slowly,” Chen said.

Exactly what the oscillatory signal might look like, and how strong it might be, depend on the unknown nature of the quantum fields that might have created it. Discovering such a signal would tell us about those primordial cosmic ingredients. As for whether the putative signal will show up at all in future galaxy surveys, “the good news,” according to Kinney, is that the signal is probably “much, much easier to detect” than other searched-for signals called “non-gaussianities”: triangles and other geometric arrangements of matter in the sky that would also verify and reveal details of inflation. The bad news, though, “is that the strength and the form of the signal depend on a lot of things you don’t know,” Kinney said, such as constants whose values might be zero, and it’s entirely possible that “there will be no detectable signal.”

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.


Physicists Debate Hawking’s Idea That the Universe Had No Beginning

A recent challenge to Stephen Hawking’s biggest idea—about how the universe might have come from nothing—has cosmologists choosing sides. Posted December 26th 2020

Quanta Magazine

  • Natalie Wolchover


Credit: Mike Zeng for Quanta Magazine.

In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.

Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?

The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?

Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”

Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”

But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”

The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”

After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.

Garden of Cosmic Delights

Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.

In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.

Credit: 5W Infographics for Quanta Magazine.

Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.

Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.
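In symbols (a standard schematic form, not Hartle and Hawking’s full expression), the no-boundary wave function assigns an amplitude to each three-geometry h by summing over all compact four-geometries g whose only boundary is h:

$$ \Psi[h] \;=\; \int_{\substack{\text{compact } g \\ \partial g\, =\, h}} \mathcal{D}g \;\, e^{\,iS[g]/\hbar}, $$

with the Euclidean version weighting histories by $e^{-S_E[g]/\hbar}$ instead.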

The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)

Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.

The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.

One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.

The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.

The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.

In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.

“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Imaginary Universes

Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.

He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.

It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.

According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.
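In its usual schematic form (standard notation, not from the article), the Wheeler-DeWitt equation is a constraint rather than an evolution law: the total Hamiltonian operator for gravity and matter annihilates the wave function,

$$ \hat{\mathcal{H}}\,\Psi[h_{ij}, \phi] \;=\; 0, $$

so Ψ carries no explicit time dependence, matching the zero-total-energy statement above.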

In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.

That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.

For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.

There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.

Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.

No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.

Blinding with science December 21st 2020

As a teacher and college lecturer for over 18 years, I originally intended to spend more time developing the Junius Education page, because good education is essential to a healthy society and to that society’s claim to be democratic.

In Britain, education for the masses has always left much to be desired, with a class-biased basis and class-bound outcomes. Strangely, maybe due to Britain’s appalling general education, I have been called upon to teach most subjects, including maths and science.

Biology was my least favourite science, but my ex wife was brilliant at the subject. Dissections and statistics were among her specialities. She could also have been a good teacher, because she made an unpleasant subject interesting to me. That doesn’t make me an expert, but there is an old saying: ‘If you want to know something, ask an expert and they will never stop talking.’ Until I met her, I had never heard of microbiology, and statistics on epidemics were the least of my concerns.

In my 1950s childhood, there were killer diseases and I had most of them because my father did not trust vaccinations. As I have said elsewhere, he died when I was 11, but that was enough for him to make an impression. As a postgraduate student of psychology, I learned the Jesuit concept of ‘Give me the child until it is seven and I will give you the adult.’ I have substituted modern words for boy and man.

My father’s scepticism, coupled with distrust of the society that sent him to war, left him struggling in civvy street, cycling 12 miles to work and back to drive a lorry – a job that killed him. That is why I don’t trust the official line on Covid 19, as I have written elsewhere.

Herd immunity comes from catching things, not lockdown or mask wearing. Scientists are blinkered; tests and studies are not the real world. Not being allowed to discuss age, BAME, lifestyle, mass immigration and old age, along with the Brexit diversion, should raise question marks over the alleged new big killer strain which authorities fear will overwhelm a failing NHS (National Health Service). The idea that this mutant, known about back in September, spreads faster than the other one is a mix of guesswork and timely propaganda.

As for a vaccine which they want us to take, without removing lockdown because they say the virus will keep mutating – why bother? Many fear that because it is RNA based, and designed therefore to plug in and modify DNA – think hard drives and RAM – it is going to alter our minds even more than lockdown. Maybe it will. Our DNA is modifying all the time. That is a major reason for ageing.

The problem is that when we start talking about these acids and viruses, the scientists have the edge. To some they are gods, to others devils, and some just don’t believe them because they speak for politicians. Politicians have earned mistrust and even contempt. So it is what it is.

More people are going mad and becoming destitute, with relationships crumbling and children suffering as unemployment soars along with collapsing businesses, because of lockdown for a disease which kills less than 1% of those affected. The average age of Covid related death – it never kills on its own – is 82, and for that we sacrifice the young!

The death stats are deliberately misleading because it is always about Covid related deaths within 28 days of a positive test. Its key impact is on high density BAME communities, certain related lifestyles and old age, especially in privatised care homes.

Sturgeon, speaking for Scotland, echoed by London Mayor Sadiq Khan, is leading the charge for a longer, tougher lockdown because they see it as a way of stalling and ultimately ending Brexit.

There are many good reasons for scrapping Brexit and joining the fight to save Europe from the real fascists running France, Britain and Germany, but scaremongering and causing harm with this ridiculous new variant, just to help this type of useless posturing politician, is not one of them.

Lockdown is about adjusting the dangerous global economy to stifle the waves of working class protest across Europe and the U.S. Blinding people with science that few British MPs understand – in spite of their expensive educations in humanities and money grabbing law – is the British Parliament and Government’s speciality.

Epidemiologists are not virologists. NHS and care home workers – the latter often casuals going from one home to another – are drawn heavily from BAME communities because of low pay.

Epidemiologists are politically correct, omitting key variables from practice and debate. They take a model of the virus and relate it to highly selective sample populations, using a computer to predict its spread at a rate calculated in a laboratory. In short, there is too much guesswork.

Below is a fatalistic fait accompli extract of more feel-good, pseudo-Dunkirk-spirit journalism (Dunkirk was a great British disaster, in case you didn’t know; my dad was wounded there) from the politically correct, MI5-friendly Guardian. R.J Cook

Has a year of living with Covid-19 rewired our brains? Paula Cocozza Posted December 21st 2020


The loss of the connecting power of touch can ‘trigger factors that contribute to depression – sadness, lower energy levels, lethargy’. Illustration: Nathalie Lees/The Guardian

The pandemic is expected to precipitate a mental health crisis, but perhaps also a chance to approach life with new clarity

When the bubonic plague spread through England in the 17th century, Sir Isaac Newton fled Cambridge, where he was studying, for the safety of his family home in Lincolnshire. The Newtons did not live in a cramped apartment; they enjoyed a large garden with many fruit trees. In these uncertain times, out of step with ordinary life, his mind roamed free of routines and social distractions. And it was in this context that a single apple falling from a tree struck him as more intriguing than any of the apples he had previously seen fall. Gravity was a gift of the plague. So, how is this pandemic going for you?

In different ways, this is likely a question we are all asking ourselves. Whether you have experienced illness, relocated, lost a loved one or a job, got a kitten or got divorced, eaten more or exercised more, spent longer showering each morning or reached every day for the same clothes, it is an inescapable truth that the pandemic alters us all. But how? And when will we have answers to these questions – because surely there will be a time when we can scan our personal balance sheets and see in the credit column something more than grey hairs, a thicker waist and a kitten? (Actually, the kitten is pretty rewarding.) What might be the psychological impact of living through a pandemic? Will it change us for ever?

“People talk about the return to normality, and I don’t think that is going to happen,” says Frank Snowden, a historian of pandemics at Yale, and the author of Epidemics and Society: From the Black Death to the Present. Snowden has spent 40 years studying pandemics. Then last spring, just as his phone was going crazy with people wanting to know if history could shed light on Covid-19, his life’s work landed in his lap. He caught the coronavirus.

Snowden believes that Covid-19 was not a random event. All pandemics “afflict societies through the specific vulnerabilities people have created by their relationships with the environment, other species, and each other,” he says. Each pandemic has its own properties, and this one – a bit like the bubonic plague – affects mental health. Snowden sees a second pandemic coming “in the train of the Covid-19 first pandemic … [a] psychological pandemic”.


A man embraces his aunt through a plastic curtain at a home for the elderly in Spain in June, for the first time in three months. Photograph: Biel Aliño/EPA

Aoife O’Donovan, an associate professor of psychiatry at the UCSF Weill Institute for Neurosciences in California, who specialises in trauma, agrees. “We are dealing with so many layers of uncertainty,” she says. “Truly horrible things have happened and they will happen to others and we don’t know when or to whom or how and it is really demanding cognitively and physiologically.”

The impact is experienced throughout the body, she says, because when people perceive a threat, abstract or actual, they activate a biological stress response. Cortisol mobilises glucose. The immune system is triggered, increasing levels of inflammation. This affects the function of the brain, making people more sensitive to threats and less sensitive to rewards.

In practice, this means that your immune system may be activated simply by hearing someone next to you cough, or by the sight of all those face masks and the proliferation of a colour that surely Pantone should rename “surgical blue”, or by a stranger walking towards you, or even, as O’Donovan found, seeing a friend’s cleaner in the background of a Zoom call, maskless. And because, O’Donovan points out, government regulations are by necessity broad and changeable, “as individuals we have to make lots of choices. This is uncertainty on a really intense scale.”

The unique characteristics of Covid-19 play into this sense of uncertainty. The illness “is much more complex than anyone imagined in the beginning”, Snowden says, a sort of shapeshifting adversary. In some it is a respiratory disease, in others gastrointestinal, in others it can cause delirium and cognitive impairment, in some it has a very long tail, while many experience it as asymptomatic. Most of us will never know if we have had it, and not knowing spurs a constant self-scrutiny. Symptom checkers raise questions more than they allay fears: when does tiredness become fatigue? When does a cough become “continuous”?

O’Donovan sighs. She sounds tired; this is a busy time to be a threat researcher and her whole life is work now. She finds the body’s response to uncertainty “beautiful” – its ability to mobilise to see off danger – but she’s concerned that it is ill-suited to frequent and prolonged threats. “This chronic activation can be harmful in the long term. It accelerates biological ageing and increases risk for diseases of ageing,” she says.

In daily life, uncertainty has played out in countless tiny ways as we try to reorient ourselves in a crisis, in the absence of the usual landmarks – schools, families, friendships, routines and rituals. Previously habitual rhythms, of time alone and time with others, the commute and even postal deliveries, are askew.


Philippa Perry: ‘We are becoming a sort of non-person.’ Photograph: Pål Hansen/The Observer

There is no new normal – just an evolving estrangement. Even a simple “how are you?” is heavy with hidden questions (are you infectious?), and rarely brings a straightforward answer; more likely a hypervigilant account of a mysterious high temperature experienced back in February.

Thomas Dixon, a historian of emotions at Queen Mary University of London, says that when the pandemic hit, he stopped opening his emails with the phrase “I hope this finds you well.”

The old “social dances” – as the psychotherapist Philippa Perry calls them – of finding a seat in a cafe or on the bus have not only vanished, taking with them opportunities to experience a sense of belonging, but have been replaced with dances of rejection. Perry thinks that’s why she misses the Pret a Manger queue. “We were all waiting to pay for our sandwiches that we were all taking back to our desks. It was a sort of group activity even if I didn’t know the other people in the group.”

In contrast, pandemic queues are not organic; they are a series of regularly spaced people being processed by a wayfinding system. Further rejection occurs if a pedestrian steps into the gutter to avoid you, or when the delivery person you used to enjoy greeting sees you at the door and lunges backwards. It provides no consolation, Perry says, to understand cognitively why we repel others. The sense of rejection remains.

The word “contagion” comes from the Latin for “with” and “touch”, so it is no wonder that social touch is demonised in a pandemic. But at what cost? The neuroscientists Francis McGlone and Merle Fairhurst study nerve fibres called C-tactile afferents, which are concentrated in hard-to-reach places such as the back and shoulders. They wire social touch into a complex reward system, so that when we are stroked, touched, hugged or patted, oxytocin is released, lowering the heart rate and inhibiting the production of cortisol. “Very subtle requirements,” says McGlone, “to keep you on an even plane.”

But McGlone is worried. “Everywhere I look at changes of behaviour during the pandemic, this little flag is flying, this nerve fibre – touch, touch, touch!” While some people – especially those locked down with young children – might be experiencing more touch, others are going entirely without. Fairhurst is examining the data collected from a large survey she and McGlone launched in May, and she is finding those most at risk from the negative emotional impact of loss of touch are young people. “Age is a significant indicator of loneliness and depression,” she says. The loss of the connecting power of touch triggers “factors that contribute to depression – sadness, lower energy levels, lethargy”.

“We are becoming a sort of non-person,” says Perry. Masks render us mostly faceless. Hand sanitiser is a physical screen. Fairhurst sees it as “a barrier, like not speaking somebody’s language”. And Perry is not the only one to favour the “non-person clothes” of pyjamas and tracksuits. Somehow, the repeat-wearing of clothes makes all clothing feel like fatigues. They suit our weariness, and add an extra layer to it.

Cultural losses feed this sense of dehumanisation. Eric Clarke, a professor at Wadham College, Oxford, with a research interest in the psychology of music, led street singing in his cul-de-sac during the first lockdown, which “felt almost like a lifeline”, but he has missed going to live music events. “The impact on me has been one of a feeling of degradation or erosion of my aesthetic self,” he says. “I feel less excited by the world around me than I do when I’m going to music.” And the street music, like the street clapping, stopped months ago. Now “we are all living like boil-in-a-bag rice, closed off from the world in a plastic envelope of one sort or another.”

No element of Covid-19 has dehumanised people more than the way it has led us to experience death. Individuals become single units in a very long and horribly growing number, of course. But before they become statistics, the dying are condemned to isolation. “They are literally depersonalised,” Snowden says. He lost his sister during the pandemic. “I didn’t see her, and nor was she with her family … It breaks bonds and estranges people.”

The Rise and Fall of Nikola Tesla and his Tower

The inventor’s vision of a global wireless-transmission tower proved to be his undoing. Posted December 20th 2020

Smithsonian Magazine

  • Gilbert King

Read when you’ve got time to spare.

Wardenclyffe Tower in 1904. Photo from Wikimedia Commons / Public Domain.

By the end of his brilliant and tortured life, the Serbian physicist, engineer and inventor Nikola Tesla was penniless and living in a small New York City hotel room. He spent days in a park surrounded by the creatures that mattered most to him—pigeons—and his sleepless nights working over mathematical equations and scientific problems in his head. That habit would confound scientists and scholars for decades after he died, in 1943. His inventions were designed and perfected in his imagination.

Tesla believed his mind to be without equal, and he wasn’t above chiding his contemporaries, such as Thomas Edison, who once hired him. “If Edison had a needle to find in a haystack,” Tesla once wrote, “he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. I was a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety percent of his labor.”

But what his contemporaries may have been lacking in scientific talent (by Tesla’s estimation), men like Edison and George Westinghouse clearly possessed the one trait that Tesla did not—a mind for business. And in the last days of America’s Gilded Age, Nikola Tesla made a dramatic attempt to change the future of communications and power transmission around the world.  He managed to convince J.P. Morgan that he was on the verge of a breakthrough, and the financier gave Tesla more than $150,000 to fund what would become a gigantic, futuristic and startling tower in the middle of Long Island, New York. In 1898, as Tesla’s plans to create a worldwide wireless transmission system became known, Wardenclyffe Tower would be Tesla’s last chance to claim the recognition and wealth that had always escaped him.

Nikola Tesla was born in modern-day Croatia in 1856; his father, Milutin, was a priest of the Serbian Orthodox Church. From an early age, he demonstrated the obsessiveness that would puzzle and amuse those around him. He could memorize entire books and store logarithmic tables in his brain. He picked up languages easily, and he could work through days and nights on only a few hours sleep.

At the age of 19, he was studying electrical engineering at the Polytechnic Institute at Graz in Austria, where he quickly established himself as a star student. He found himself in an ongoing debate with a professor over perceived design flaws in the direct-current (DC) motors that were being demonstrated in class. “In attacking the problem again I almost regretted that the struggle was soon to end,” Tesla later wrote. “I had so much energy to spare. When I undertook the task it was not with a resolve such as men often make. With me it was a sacred vow, a question of life and death. I knew that I would perish if I failed. Now I felt that the battle was won. Back in the deep recesses of the brain was the solution, but I could not yet give it outward expression.”

He would spend the next six years of his life “thinking” about electromagnetic fields and a hypothetical motor powered by alternating current that would and should work. The thoughts obsessed him, and he was unable to focus on his schoolwork. Professors at the university warned Tesla’s father that the young scholar’s working and sleeping habits were killing him. But rather than finish his studies, Tesla became a gambling addict, lost all his tuition money, dropped out of school and suffered a nervous breakdown. It would not be his last.

In 1881, Tesla moved to Budapest, after recovering from his breakdown, and he was walking through a park with a friend, reciting poetry, when a vision came to him. There in the park, with a stick, Tesla drew a crude diagram in the dirt—a motor using the principle of rotating magnetic fields created by two or more alternating currents. While AC electrification had been employed before, there would never be a practical, working motor run on alternating current until he invented his induction motor several years later.

In June 1884, Tesla sailed for New York City and arrived with four cents in his pocket and a letter of recommendation from Charles Batchelor—a former employer—to Thomas Edison, which was purported to say, “My Dear Edison: I know two great men and you are one of them. The other is this young man!”

A meeting was arranged, and once Tesla described the engineering work he was doing, Edison, though skeptical, hired him. According to Tesla, Edison offered him $50,000 if he could improve upon the DC generation plants Edison favored. Within a few months, Tesla informed the American inventor that he had indeed improved upon Edison’s motors. Edison, Tesla noted, refused to pay up. “When you become a full-fledged American, you will appreciate an American joke,” Edison told him.

Tesla promptly quit and took a job digging ditches. But it wasn’t long before word got out that Tesla’s AC motor was worth investing in, and the Western Union Company put Tesla to work in a lab not far from Edison’s office, where he designed AC power systems that are still used around the world. “The motors I built there,” Tesla said, “were exactly as I imagined them. I made no attempt to improve the design, but merely reproduced the pictures as they appeared to my vision, and the operation was always as I expected.”

Tesla patented his AC motors and power systems, which were said to be the most valuable inventions since the telephone. Soon, George Westinghouse, recognizing that Tesla’s designs might be just what he needed in his efforts to unseat Edison’s DC current, licensed his patents for $60,000 in stocks and cash and royalties based on how much electricity Westinghouse could sell. Ultimately, he won the “War of the Currents,” but at a steep cost in litigation and competition for both Westinghouse and Edison’s General Electric Company.

Nikola Tesla. Photo from Napoleon Sarony / Wikimedia Commons / Public Domain.

Fearing ruin, Westinghouse begged Tesla for relief from the royalties Westinghouse agreed to. “Your decision determines the fate of the Westinghouse Company,” he said. Tesla, grateful to the man who had never tried to swindle him, tore up the royalty contract, walking away from millions in royalties that he was already owed and billions that would have accrued in the future. He would have been one of the wealthiest men in the world—a titan of the Gilded Age.

His work with electricity reflected just one facet of his fertile mind. Before the turn of the 20th century, Tesla had invented a powerful coil that was capable of generating high voltages and frequencies, leading to new forms of light, such as neon and fluorescent, as well as X-rays. Tesla also discovered that these coils, soon to be called “Tesla Coils,” made it possible to send and receive radio signals. He quickly filed for American patents in 1897, beating the Italian inventor Guglielmo Marconi to the punch.

Tesla continued to work on his ideas for wireless transmissions when he proposed to J.P. Morgan his idea of a wireless globe. After Morgan put up the $150,000 to build the giant transmission tower, Tesla promptly hired the noted architect Stanford White of McKim, Mead, and White in New York. White, too, was smitten with Tesla’s idea. After all, Tesla was the highly acclaimed man behind Westinghouse’s success with alternating current, and when Tesla talked, he was persuasive.

“As soon as completed, it will be possible for a business man in New York to dictate instructions, and have them instantly appear in type at his office in London or elsewhere,” Tesla said at the time. “He will be able to call up, from his desk, and talk to any telephone subscriber on the globe, without any change whatever in the existing equipment. An inexpensive instrument, not bigger than a watch, will enable its bearer to hear anywhere, on sea or land, music or song, the speech of a political leader, the address of an eminent man of science, or the sermon of an eloquent clergyman, delivered in some other place, however distant. In the same manner any picture, character, drawing or print can be transferred from one to another place. Millions of such instruments can be operated from but one plant of this kind.”

White quickly got to work designing Wardenclyffe Tower in 1901, but soon after construction began it became apparent that Tesla was going to run out of money before it was finished. An appeal to Morgan for more money proved fruitless, and in the meantime investors were rushing to throw their money behind Marconi. In December 1901, Marconi successfully sent a signal from England to Newfoundland. Tesla grumbled that the Italian was using 17 of his patents, but litigation eventually favored Marconi and the commercial damage was done. (The U.S. Supreme Court ultimately upheld Tesla’s claims, clarifying Tesla’s role in the invention of the radio—but not until 1943, after he died.) Thus the Italian inventor was credited as the inventor of radio and became rich. Wardenclyffe Tower became a 186-foot-tall relic (it would be razed in 1917), and the defeat—Tesla’s worst—led to another of his breakdowns. “It is not a dream,” Tesla said, “it is a simple feat of scientific electrical engineering, only expensive—blind, faint-hearted, doubting world!”

Guglielmo Marconi in 1901. Photo from LIFE / Wikimedia Commons / Public Domain.

By 1912, Tesla began to withdraw from that doubting world. He was clearly showing signs of obsessive-compulsive disorder, and was potentially a high-functioning autistic. He became obsessed with cleanliness and fixated on the number three; he began shaking hands with people and washing his hands—all done in sets of three. He had to have 18 napkins on his table during meals, and would count his steps whenever he walked anywhere. He claimed to have an abnormal sensitivity to sounds, as well as an acute sense of sight, and he later wrote that he had “a violent aversion against the earrings of women,” and “the sight of a pearl would almost give me a fit.”

Near the end of his life, Tesla became fixated on pigeons, especially a specific white female, which he claimed to love almost as one would love a human being. One night, Tesla claimed the white pigeon visited him through an open window at his hotel, and he believed the bird had come to tell him she was dying. He saw “two powerful beams of light” in the bird’s eyes, he later said. “Yes, it was a real light, a powerful, dazzling, blinding light, a light more intense than I had ever produced by the most powerful lamps in my laboratory.” The pigeon died in his arms, and the inventor claimed that in that moment, he knew that he had finished his life’s work.

Nikola Tesla would go on to make news from time to time while living on the 33rd floor of the New Yorker Hotel. In 1931 he made the cover of Time magazine, which featured his inventions on his 75th birthday. And in 1934, the New York Times reported that Tesla was working on a “Death Beam” capable of knocking 10,000 enemy airplanes out of the sky. He hoped to fund a prototypical defensive weapon in the interest of world peace, but his appeals to J.P. Morgan Jr. and British Prime Minister Neville Chamberlain went nowhere. Tesla did, however, receive a $25,000 check from the Soviet Union, but the project languished.  He died in 1943, in debt, although Westinghouse had been paying his room and board at the hotel for years.

Gilbert King is a contributing writer in history for Smithsonian.com. His book Devil in the Grove: Thurgood Marshall, the Groveland Boys, and the Dawn of a New America won the Pulitzer Prize in 2013.

Sources

Books: Nikola Tesla, My Inventions: The Autobiography of Nikola Tesla, Hart Brothers, Pub., 1982. Margaret Cheney, Tesla: Man Out of Time, Touchstone, 1981.

Why Black Hole Interiors Grow (Almost) Forever

The renowned physicist Leonard Susskind has identified a possible quantum origin for the ever-growing volume of black holes. December 17th 2020

Quanta Magazine

  • Natalie Wolchover

Read when you’ve got time to spare.

Credit: koto_feja / Getty Images.

Leonard Susskind, a pioneer of string theory, the holographic principle and other big physics ideas spanning the past half-century, has proposed a solution to an important puzzle about black holes. The problem is that even though these mysterious, invisible spheres appear to stay a constant size as viewed from the outside, their interiors keep growing in volume essentially forever. How is this possible?

In a series of recent papers and talks, the 78-year-old Stanford University professor and his collaborators conjecture that black holes grow in volume because they are steadily increasing in complexity — an idea that, while unproven, is fueling new thinking about the quantum nature of gravity inside black holes.

Black holes are spherical regions of such extreme gravity that not even light can escape. First discovered a century ago as shocking solutions to the equations of Albert Einstein’s general theory of relativity, they’ve since been detected throughout the universe. (They typically form from the inward gravitational collapse of dead stars.) Einstein’s theory equates the force of gravity with curves in space-time, the four-dimensional fabric of the universe, but gravity becomes so strong in black holes that the space-time fabric bends toward its breaking point — the infinitely dense “singularity” at the black hole’s center.

According to general relativity, the inward gravitational collapse never stops. Even though, from the outside, the black hole appears to stay a constant size, expanding slightly only when new things fall into it, its interior volume grows bigger and bigger all the time as space stretches toward the center point. For a simplified picture of this eternal growth, imagine a black hole as a funnel extending downward from a two-dimensional sheet representing the fabric of space-time. The funnel gets deeper and deeper, so that infalling things never quite reach the mysterious singularity at the bottom. In reality, a black hole is a funnel that stretches inward from all three spatial directions. A spherical boundary surrounds it called the “event horizon,” marking the point of no return.

Since at least the 1970s, physicists have recognized that black holes must really be quantum systems of some kind — just like everything else in the universe. What Einstein’s theory describes as warped space-time in the interior is presumably really a collective state of vast numbers of gravity particles called “gravitons,” described by the true quantum theory of gravity. In that case, all the known properties of a black hole should trace to properties of this quantum system.

Indeed, in 1972, the Israeli physicist Jacob Bekenstein figured out that the area of the spherical event horizon of a black hole corresponds to its “entropy.” This is the number of different possible microscopic arrangements of all the particles inside the black hole, or, as modern theorists would describe it, the black hole’s storage capacity for information.

Bekenstein’s insight led Stephen Hawking to realize two years later that black holes have temperatures, and that they therefore radiate heat. This radiation causes black holes to slowly evaporate away, giving rise to the much-discussed “black hole information paradox,” which asks what happens to information that falls into black holes. Quantum mechanics says the universe preserves all information about the past. But how does information about infalling stuff, which seems to slide forever toward the central singularity, also evaporate out?
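
For readers who want numbers, the standard Bekenstein–Hawking formulas are easy to evaluate. Here is a minimal Python sketch for a one-solar-mass black hole, using textbook formulas and SI constants; the printed values are order-of-magnitude checks, not new results:

import math

# Physical constants (SI units)
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8      # speed of light, m/s
hbar = 1.055e-34    # reduced Planck constant, J s
k_B  = 1.381e-23    # Boltzmann constant, J/K

M_SUN = 1.989e30    # solar mass, kg

def schwarzschild_radius(M):
    """Radius of the event horizon of a non-rotating black hole."""
    return 2 * G * M / c**2

def horizon_area(M):
    """Area of the spherical event horizon."""
    return 4 * math.pi * schwarzschild_radius(M)**2

def bh_entropy(M):
    """Bekenstein-Hawking entropy in units of k_B:
    one quarter of the horizon area, measured in Planck areas."""
    planck_area = hbar * G / c**3
    return horizon_area(M) / (4 * planck_area)

def hawking_temperature(M):
    """Hawking temperature of the horizon, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(f"r_s  = {schwarzschild_radius(M_SUN):.0f} m")   # ~2,950 m
print(f"S/kB = {bh_entropy(M_SUN):.1e}")               # ~1e77
print(f"T_H  = {hawking_temperature(M_SUN):.1e} K")    # ~6e-8 K

The enormous entropy and the absurdly low temperature are why a stellar-mass black hole evaporates only over timescales vastly longer than the age of the universe.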

The relationship between a black hole’s surface area and its information content has kept quantum gravity researchers busy for decades. But one might also ask: What does the growing volume of its interior correspond to, in quantum terms? “For whatever reason, nobody, including myself for a number of years, really thought very much about what that means,” said Susskind. “What is the thing which is growing? That should have been one of the leading puzzles of black hole physics.”

In recent years, with the rise of quantum computing, physicists have been gaining new insights about physical systems like black holes by studying their information-processing abilities — as if they were quantum computers. This angle led Susskind and his collaborators to identify a candidate for the evolving quantum property of black holes that underlies their growing volume. What’s changing, the theorists say, is the “complexity” of the black hole — roughly a measure of the number of computations that would be needed to recover the black hole’s initial quantum state, at the moment it formed. After its formation, as particles inside the black hole interact with one another, the information about their initial state becomes ever more scrambled. Consequently, their complexity continuously grows.

Using toy models that represent black holes as holograms, Susskind and his collaborators have shown that the complexity and volume of black holes both grow at the same rate, supporting the idea that the one might underlie the other. And, whereas Bekenstein calculated that black holes store the maximum possible amount of information given their surface area, Susskind’s findings suggest that they also grow in complexity at the fastest possible rate allowed by physical laws.
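
The “fastest possible rate allowed by physical laws” is usually read as a Lloyd-type bound on computation: a system of total energy E can perform at most 2E/(πħ) elementary operations per second. Assuming that reading (the article itself does not spell the bound out), the figure for a solar-mass black hole is a one-line Python estimate:

import math

hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

# Lloyd-type bound: at most 2E / (pi * hbar) operations per second
# for a system whose total energy is E = M c^2.
E = M_SUN * c**2
max_rate = 2 * E / (math.pi * hbar)
print(f"{max_rate:.1e} ops/s")   # ~1.1e81 operations per second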

John Preskill, a theoretical physicist at the California Institute of Technology who also studies black holes using quantum information theory, finds Susskind’s idea very interesting. “That’s really cool that this notion of computational complexity, which is very much something that a computer scientist might think of and is not part of the usual physicist’s bag of tricks,” Preskill said, “could correspond to something which is very natural for someone who knows general relativity to think about,” namely the growth of black hole interiors.

Researchers are still puzzling over the implications of Susskind’s thesis. Aron Wall, a theorist at Stanford (soon moving to the University of Cambridge), said, “The proposal, while exciting, is still rather speculative and may not be correct.” One challenge is defining complexity in the context of black holes, Wall said, in order to clarify how the complexity of quantum interactions might give rise to spatial volume.

A potential lesson, according to Douglas Stanford, a black hole specialist at the Institute for Advanced Study in Princeton, New Jersey, “is that black holes have a type of internal clock that keeps time for a very long time. For an ordinary quantum system,” he said, “this is the complexity of the state. For a black hole, it is the size of the region behind the horizon.”

If complexity does underlie spatial volume in black holes, Susskind envisions consequences for our understanding of cosmology in general. “It’s not only black hole interiors that grow with time. The space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.”

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.

End of Ageing and Cancer? Scientists Unveil Structure of the ‘Immortality’ Enzyme Telomerase

Detailed images of the anti-ageing enzyme telomerase are a drug designer’s dream. Posted December 8th 2020

Read when you’ve got time to spare.

Telomeres on a chromosome. Credit: AJC1 / Flickr, CC BY-NC-ND.

Making a drug is like trying to pick a lock at the molecular level. There are two ways in which you can proceed. You can try thousands of different keys at random, hopefully finding one that fits. The pharmaceutical industry does this all the time – sometimes screening hundreds of thousands of compounds to see if they interact with a certain enzyme or protein. But unfortunately it’s not always efficient – there are more drug molecule shapes than seconds have passed since the beginning of the universe.

Alternatively, like a safe cracker, you can x-ray the lock you want to open and work out the probable shape of the key from the pictures you get. This is much more effective for discovering drugs, as you can use computer models to identify promising compounds before researchers go into the lab to find the best one. A 2018 study, published in Nature, presents detailed images of a crucial anti-ageing enzyme known as telomerase – raising hopes that we can soon slow ageing and cure cancer.

Every organism packages its DNA into chromosomes. In simple bacteria like E. coli this is a single small circle. More complex organisms have far more DNA and multiple linear chromosomes (22 pairs plus sex chromosomes in humans). These probably appeared because they provided an evolutionary advantage, but they also come with a downside.

We may all soon live to be centenarians. Credit: Dan Negureanu / Shutterstock.

At the end of each chromosome is a protective cap called a telomere. However, most human cells can’t copy them – meaning that every time they divide, their telomeres become shorter. When telomeres become too short, the cell enters a toxic state called “senescence”. If these senescent cells are not cleared by the immune system, they begin to compromise the function of the tissues in which they reside. For millennia, humans have perceived this gradual compromise in tissue function over time without understanding what caused it. We simply called it ageing.

Enter telomerase, a specialised telomere repair enzyme in two parts – able to add DNA to the chromosome tips. The first part is a protein called TERT that does the copying. The second component is called TR, a small piece of RNA which acts as a template. Together, these form telomerase, which trundles up and down on the ends of chromosomes, copying the template. At the bottom, a human telomere is roughly 3,000 copies of the DNA sequence “TTAGGG” – laid down and maintained by telomerase. But sadly, production of TERT is repressed in human tissues with the exception of sperm, eggs and some immune cells.
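
As a toy illustration of why this shortening imposes a division budget on a cell, here is a minimal Python sketch. Only the 3,000-repeat starting length comes from the article; the loss per division and the senescence threshold are illustrative assumptions, not measured values:

# Toy model of telomere erosion: each division removes a fixed number
# of "TTAGGG" repeats until the telomere drops below a threshold and
# the cell becomes senescent. All parameters are illustrative.
INITIAL_REPEATS    = 3000   # ~3,000 TTAGGG copies, per the article
LOSS_PER_DIVISION  = 30     # repeats lost per division (assumed)
SENESCENCE_REPEATS = 1500   # senescence threshold (assumed)

def divisions_until_senescence(repeats=INITIAL_REPEATS,
                               loss=LOSS_PER_DIVISION,
                               threshold=SENESCENCE_REPEATS):
    """Count divisions before the telomere falls below the threshold."""
    n = 0
    while repeats - loss >= threshold:
        repeats -= loss
        n += 1
    return n

print(divisions_until_senescence())   # 50 divisions with these numbers

With these made-up numbers the budget happens to land near the famous Hayflick limit of roughly 50 divisions; telomerase, by restoring repeats, is what lifts the budget.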

Ageing Versus Cancer

Organisms regulate their telomere maintenance in this way because they are walking a biological tightrope. On the one hand, they need to replace the cells they lose in the course of their ordinary daily lives by cell division. However, any cell with an unlimited capacity to divide is the seed of a tumour. And it turns out that the majority of human cancers have active telomerase and shorter telomeres than the cells surrounding them.

This indicates that the cell from which they came divided as normal but then picked up a mutation which turned TERT back on. Cancer and ageing are flip sides of the same coin, and telomerase, by and large, is doing the flipping. Inhibit telomerase and you have a treatment for cancer; activate it and you prevent senescence. That, at least, is the theory.

The researchers behind the new study were not just able to obtain the structure of a portion of the enzyme, but of the entire molecule as it was working. This was a tour de force involving the use of cryo-electron microscopy – a technique using a beam of electrons (rather than light) to take thousands of detailed images of individual molecules from different angles and combine them computationally.

Prior to the development of this method, for which scientists won the Nobel Prize last year, it was necessary to crystallise proteins to image them. This typically requires thousands of attempts and many years of trying, if it works at all.

Elixir of Youth?

TERT itself is a large molecule, and although it has been shown to lengthen lifespan when introduced into normal mice using gene therapy, this approach is technically challenging and fraught with difficulties. Drugs that can switch the enzyme back on are far better: easier to deliver and cheaper to make.

We already know of a few compounds to inhibit and activate telomerase – discovered through the cumbersome process of randomly screening for drugs. Sadly, they are not very efficient.

Some of the most provocative studies involve the compound TA-65 (cycloastragenol) – a natural product which lengthens telomeres experimentally and has been claimed to show benefit in early stage macular degeneration (vision loss). As a result, TA-65 has been sold over the internet and has prompted at least one (subsequently dismissed) lawsuit over claims that it caused cancer in a user. This sad story illustrates an important public health message, best summarised simply as “don’t try this at home, folks”.

The telomerase inhibitors we know of so far, however, have genuine clinical benefit in various cancers, particularly in combination with other drugs. However, the doses required are relatively high.

The new study is extremely promising because, by knowing the structure of telomerase, we can use computer models to identify the most promising activators and inhibitors and then test them to find which ones are most effective. This is a much quicker process than randomly trying different molecules to see if they work.

So how far could we go? In terms of cancer, it is hard to tell. The body can easily become resistant to cancer drugs, including telomerase inhibitors. Prospects for slowing ageing where there is no cancer are somewhat easier to estimate. In mice, deleting senescent cells or dosing with telomerase (gene therapy) both give increases in lifespan of the order of 20 percent – despite being inefficient techniques. It may be that at some point other ageing mechanisms, such as the accumulation of damaged proteins, start to come into play.

But if we did manage to stop the kind of ageing caused by senescent cells using telomerase activation, we could start devoting all our efforts into tackling these additional ageing processes. There’s every reason to be optimistic that we may soon live much longer, healthier lives than we do today.

Comment: We can see the problems of allegedly trying to save the aged whilst ruining the lives of the young by lockdown tyranny.

Overpopulation is already a major issue which the politically correct and elites don’t want to discuss.

From a science point of view this is fascinating, and treating cancer is good (my best friend has just died of it, but he was 74 years old, worked as a tile maker bathed in clay dust for years, and was a chain smoker and drinker for years too). Cancer patients like my friend Mike, who worked with me on the building sites, were much neglected because of ludicrous lockdown, but there is a cycle of life and a need for renewal. I also suspect it will only be the pampered super rich, like the Royals, who will be able to afford such medicine.

Getting old folk to step back, rather than staying in charge and corrupting the young into corrupt careers, is a big enough problem already. Bigotry and mental health issues won’t be resolved by elixirs of youth – though such elixirs obviously have a sex selling point, and sex and escapism are priorities for the rich. Sex is, after all, the field of youth and the ultimate delusional drug.

But what about the old Third World? They breed so fast that they can’t cope with the numbers, so there are all manner of health, crime and migratory issues.

Scientists are nerds. They do things because they can. Then the corrupt dark matter of the military-industrial complex, fake democrats and dictators take hold. There will be the rich preserved in luxury, and a swarming mass of healthy bodies with brains more addled than the rich old politicians’. How will it all pan out, as if it isn’t bad enough already? I remind readers that I am not ageist, but at 70, well past my own sell-by date.

R.J Cook and his old and best friend Mick Birrell, at home near Liverpool. Mick died two weeks ago in a hospice. Father Daly gave the last rites, then after the funeral and cremation his ashes were taken back to Ireland. Michael used to flatter me as the best Irish folk singer this side of the Irish Sea.

I was up at his home looking after his sister on the weekend of October 4th/5th 2008, having arrived in my motorhome with my son Kieran. The poor girl had dementia. For years she would phone me so I could sing her old songs like ‘The Rare Old Times’ over the phone. That day she stood sadly in front of the wood stove as I started to whistle an old favourite, ‘Danny Boy’.

She couldn’t speak, but still had that rare beauty and soul of the Irish – my mother’s father was Irish. As I whistled, her big sad blue eyes started to stream with tears. Then I sang ‘The Rare Old Times’.

R.J Cook


Dublin In The Rare Old Times – Luke Kelly – YouTube

www.youtube.com › watch

Luke Kelly was more than a folk singer. I thought he was unique. Very special. He drank a lot and died young. I met him once in 1971 at a bar in Norwich: a modest man and a hero to me, but no fan of the then-current I.R.A. He told me he was a Marxist.

Would an elixir of youth have made him a better man? I don’t think so. Beauty, loss and longing are comforted by the right songs. Those songs come from instinct.

They employ the science of acoustics, but to be better than pop there needs to be a deeper, painful place for the sound, tone and texture to be born. This was Anne’s favourite because she loved Dublin as it used to be. Anne died very soon after. As for me and Irish folk songs, I was beaten up on a pavement after singing my song about how police thugs killed innocent Ian Tomlinson at the G20 protests. That’s another story. The pub was full of off-duty cops. I quit the scene, deciding folkies are fakies.

R.J Cook

False-positive COVID-19 results: hidden problems and costs Posted December 8th 2020

Published: September 29, 2020. DOI: https://doi.org/10.1016/S2213-2600(20)30453-7


RT-PCR tests to detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA are the operational gold standard for detecting COVID-19 disease in clinical practice. RT-PCR assays in the UK have analytical sensitivity and specificity of greater than 95%, but no single gold standard assay exists.1,2 New assays are verified across panels of material, confirmed as COVID-19 by multiple testing with other assays, together with a consistent clinical and radiological picture. These new assays are often tested under idealised conditions with hospital samples containing higher viral loads than those from asymptomatic individuals living in the community. As such, diagnostic or operational performance of swab tests in the real world might differ substantially from the analytical sensitivity and specificity.2

Although testing capacity and therefore the rate of testing in the UK and worldwide has continued to increase, more and more asymptomatic individuals have undergone testing. This growing inclusion of asymptomatic people affects the other key parameter of testing, the pretest probability, which underpins the veracity of the testing strategy. In March and early April, 2020, most people tested in the UK were severely ill patients admitted to hospitals with a high probability of infection. Since then, the number of COVID-19-related hospital admissions has decreased markedly from more than 3000 per day at the peak of the first wave, to just more than 100 in August, while the number of daily tests jumped from 11 896 on April 1, 2020, to 190 220 on Aug 1, 2020. In other words, the pretest probability will have steadily decreased as the proportion of asymptomatic cases screened increased against a background of physical distancing, lockdown, cleaning, and masks, which have reduced viral transmission to the general population. At present, only about a third of swab tests are done in those with clinical needs or in health-care workers (defined as the pillar 1 community in the UK), while the majority are done in wider community settings (pillar 2). At the end of July, 2020, the positivity rate of swab tests within both pillar 1 (1·7%) and pillar 2 (0·5%) remained significantly lower than those in early April, when positivity rates reached 50%.3

Globally, most effort so far has been invested in turnaround times and low test sensitivity (ie, false negatives); one systematic review reported false-negative rates of between 2% and 33% in repeat sample testing.4 Although false-negative tests have until now had priority due to the devastating consequences of undetected cases in health-care and social care settings, and the propagation of the epidemic especially by asymptomatic or mildly symptomatic patients,1 the consequences of a false-positive result are not benign from various perspectives (panel), in particular among health-care workers.

Panel: Potential consequences of false-positive COVID-19 swab test results

Individual perspective

Health-related

  • For swab tests taken for screening purposes before elective procedures or surgeries: unnecessary treatment cancellation or postponement
  • For swab tests taken for screening purposes during urgent hospital admissions: potential exposure to infection following a wrong pathway in hospital settings as an in-patient

Financial

  • Financial losses related to self-isolation, income losses, and cancelled travel, among other factors

Psychological

  • Psychological damage due to misdiagnosis or fear of infecting others, isolation, or stigmatisation

Global perspective

Financial

  • Misspent funding (often originating from taxpayers) and human resources for test and trace
  • Unnecessary testing
  • Funding replacements in the workplace
  • Various business losses

Epidemiological and diagnostic performance

  • Overestimating COVID-19 incidence and the extent of asymptomatic infection
  • Misleading diagnostic performance, potentially leading to mistaken purchasing or investment decisions if a new test shows high performance by identification of negative reference samples as positive (ie, is it a false positive or does the test show higher sensitivity than the other comparator tests used to establish the negativity of the test sample?)

Societal

  • Misdirection of policies regarding lockdowns and school closures
  • Increased depression and domestic violence (eg, due to lockdown, isolation, and loss of earnings after a positive test).

Technical problems including contamination during sampling (eg, a swab accidentally touches a contaminated glove or surface), contamination by PCR amplicons, contamination of reagents, sample cross-contamination, and cross-reactions with other viruses or genetic material could also be responsible for false-positive results.2 These problems are not only theoretical; the US Centers for Disease Control and Prevention had to withdraw testing kits in March, 2020, when they were shown to have a high rate of false positives due to reagent contamination.5

The current rate of operational false-positive swab tests in the UK is unknown; preliminary estimates show it could be somewhere between 0·8% and 4·0%.2,6 This rate could translate into a significant proportion of false-positive results daily due to the current low prevalence of the virus in the UK population, adversely affecting the positive predictive value of the test.2 Considering that the UK National Health Service employs 1·1 million health-care workers, many of whom have been exposed to COVID-19 at the peak of the first wave, the potential disruption to health and social services due to false positives could be considerable.

Any diagnostic test result should be interpreted in the context of the pretest probability of disease. For COVID-19, the pretest probability assessment includes symptoms, previous medical history of COVID-19 or presence of antibodies, any potential exposure to COVID-19, and likelihood of an alternative diagnosis.1 When low pretest probability exists, positive results should be interpreted with caution and a second specimen tested for confirmation. Notably, current policies in the UK and globally do not include special provisions for those who test positive despite being asymptomatic and having laboratory confirmed COVID-19 in the past (by RT-PCR swab test or antibodies). Prolonged viral RNA shedding, which is known to last for weeks after recovery, can be a potential reason for positive swab tests in those previously exposed to SARS-CoV-2. However, importantly, no data suggest that detection of low levels of viral RNA by RT-PCR equates with infectivity unless infectious virus particles have been confirmed with laboratory culture-based methods.7 If viral load is low, it might need to be taken into account when assessing the validity of the result.8

To summarise, false-positive COVID-19 swab test results might be increasingly likely in the current epidemiological climate in the UK, with substantial consequences at the personal, health system, and societal levels (panel). Several measures might help to minimise false-positive results and mitigate possible consequences. Firstly, stricter standards should be imposed in laboratory testing. This includes the development and implementation of external quality assessment schemes and internal quality systems, such as automatic blinded replication of a small number of tests for performance monitoring to ensure false-positive and false-negative rates remain low, and to permit withdrawal of a malfunctioning test at the earliest possibility. Secondly, pretest probability assessments should be considered, and clear evidence-based guidelines on interpretation of test results developed. Thirdly, policies regarding the testing and prevention of virus transmission in health-care workers might need adjustments, with an immediate second test implemented for any health-care worker testing positive. Finally, research is urgently required into the clinical and epidemiological significance of prolonged virus shedding and the role of people recovering from COVID-19 in disease transmission.
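
The pretest-probability point can be made concrete with Bayes’ theorem. Below is a minimal Python sketch of positive predictive value; the false-positive rate and positivity figures come from the ranges quoted above, while the 90% sensitivity is an assumption for illustration:

def ppv(prevalence, sensitivity, false_positive_rate):
    """P(truly infected | positive test), by Bayes' theorem."""
    true_pos  = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Community (pillar 2) scenario: ~0.5% pretest probability, 0.8% false positives.
print(f"{ppv(0.005, 0.90, 0.008):.2f}")   # ~0.36: most positives are false
# First-wave hospital scenario: ~50% pretest probability.
print(f"{ppv(0.50, 0.90, 0.008):.2f}")    # ~0.99: positives almost certainly real

The same test, with the same error rates, flips from nearly useless to nearly definitive as pretest probability rises; that is the case for confirming low-probability positives with a second specimen.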

Dark matter holds our universe together. No one knows what it is. Posted December 5th 2020

Dark matter, unexplained. By Brian Resnick (@B_resnick, brian@vox.com). Nov 25, 2020, 8:30am EST. Additional reporting by Noam Hassenfeld and Byrd Pinkerton.

If you go outside on a dark night, in the darkest places on Earth, you can see as many as 9,000 stars. They appear as tiny points of light, but they are massive infernos. And while these stars seem astonishingly numerous to our eyes, they represent just the tiniest fraction of all the stars in our galaxy, let alone the universe.

The beautiful challenge of stargazing is keeping this all in mind: Every small thing we see in the night sky is immense, but what’s even more immense is the unseen, the unknown.

I’ve been thinking about this feeling — the awesome, terrifying feeling of smallness, of the extreme contrast of the big and small — while reporting on one of the greatest mysteries in science for Unexplainable, a new Vox podcast pilot you can listen to below.

It turns out all the stars in all the galaxies, in all the universe, barely even begin to account for all the stuff of the universe. Most of the matter in the universe is actually unseeable, untouchable, and, to this day, undiscovered.

Scientists call this unexplained stuff “dark matter,” and they believe there’s five times more of it in the universe than normal matter — the stuff that makes up you and me, stars, planets, black holes, and everything we can see in the night sky or touch here on Earth. It’s strange even calling all that “normal” matter, because in the grand scheme of the cosmos, normal matter is the rare stuff. But to this day, no one knows what dark matter actually is.

“I think it gives you intellectual and kind of epistemic humility — that we are simultaneously, super insignificant, a tiny, tiny speck of the universe,” Priya Natarajan, a Yale physicist and dark matter expert, said on a recent phone call. “But on the other hand, we have brains in our skulls that are like these tiny, gelatinous cantaloupes, and we have figured all of this out.”

The story of dark matter is a reminder that whatever we know, whatever truth about the universe we have acquired as individuals or as a society, is insignificant compared to what we have not yet explained.

It’s also a reminder that, often, in order to discover something true, the first thing we need to do is account for what we don’t know.

This accounting of the unknown is not often a thing that’s celebrated in science. It doesn’t win Nobel Prizes. But, at least, we can know the size of our ignorance. And that’s a start.

But how does it end? Though physicists have been trying for decades to figure out what dark matter is, the detectors they built to find it have gone silent year after year. It makes some wonder: Have they been chasing a ghost? Dark matter might not be real. Instead, there could be something more deeply flawed in physicists’ understanding of gravity that would explain it away. Still, the search, fueled by faith in scientific observations, continues, despite the possibility that dark matter may never be found.

To learn about dark matter is to grapple with, and embrace, the unknown.

The woman who told us how much we don’t know

Scientists are, to this day, searching for dark matter because they believe it is there to find. And they believe so largely because of Vera Rubin, an astronomer who died in 2016 at age 88.

Growing up in Washington, DC, in the 1930s, like so many young people getting started in science, Rubin fell in love with the night sky.

Rubin shared a bedroom and bed with her sister Ruth. Ruth was older and got to pick her favorite side of the bed, the one that faced the bedroom windows and the night sky.

“But the windows captivated Vera’s attention,” Ashley Yeager, a journalist writing a forthcoming biography on Rubin, says. “Ruth remembers Vera constantly crawling over her at night, to be able to open the windows and look out at the night sky and start to track the stars.” Ruth just wanted to sleep, and “there Vera was tinkering and trying to take pictures of the stars and trying to track their motions.”

Not everyone gets to turn their childlike wonder and captivation with the unknown into a career, but Rubin did.

Flash-forward to the late 1960s, and she’s at the Kitt Peak National Observatory near Tucson, Arizona, doing exactly what she did in that childhood bedroom: tracking the motion of stars.

This time, though, she has a cutting-edge telescope and is looking at stars in motion at the edge of the Andromeda Galaxy. Just 40 years prior, Edwin Hubble had determined, for the first time, that Andromeda lay outside our own galaxy, and indeed that other galaxies even existed. With one observation, Hubble doubled the size of the known universe.

In the 1960s, scientists were still asking basic questions in the wake of this discovery. Like: How do galaxies move?

Rubin and her colleague Kent Ford were at the observatory doing this basic science, charting how stars are moving at the edge of Andromeda. “I guess I wanted to confirm Newton’s laws,” Rubin said in an archival interview with science historian David DeVorkin.

The Andromeda Galaxy.

Per Newton’s equations, the stars in the galaxy ought to move like the planets in our solar system do. Mercury, the closest planet to the sun, orbits very quickly, propelled by the sun’s gravity to a speed of around 106,000 mph. Neptune, far from the sun, and less influenced by its gravity, moves much slower, at around 12,000 mph.

The same thing ought to happen in galaxies too: Stars near the dense, gravity-rich centers of galaxies ought to move faster than the stars along the edges.
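
To make the expectation concrete, here is a minimal sketch of the Keplerian calculation, assuming a single dominant central mass; the constants are standard astronomical values, not figures from Rubin’s data.

```python
# Circular orbital speed around a central mass M at radius r: v = sqrt(G*M/r),
# so speed should fall off as 1 / sqrt(r) as you move outward.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the Sun, kg

def circular_speed_mph(radius_m: float, central_mass_kg: float = M_SUN) -> float:
    """Circular orbital speed in miles per hour."""
    v_ms = math.sqrt(G * central_mass_kg / radius_m)
    return v_ms * 2.23694  # convert m/s to mph

print(f"Mercury: {circular_speed_mph(5.79e10):,.0f} mph")   # ~106,000 mph
print(f"Neptune: {circular_speed_mph(4.50e12):,.0f} mph")   # ~12,000 mph
```

Quadrupling the distance should halve the speed. Rubin’s stars refused to follow that curve.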

But that wasn’t what Rubin and Ford observed. Instead, they saw that the stars along the edge of Andromeda were going the same speed as the stars in the interior. “I think it was kind of like a ‘what the fuck’ moment,” Yeager says. “It was just so different than what everyone had expected.”

On the left, what Rubin expected to see: stars orbiting the outskirts of a galaxy moving slower than those near the center. On the right, what was observed: the stars on the outside moving at the same speed as the center.

The data pointed to an enormous problem: The stars couldn’t just be moving that fast on their own.

At those speeds, the galaxy should be ripping itself apart like an accelerating merry-go-round with the brake turned off. To explain why this wasn’t happening, these stars needed some kind of extra gravity out there acting like an engine. There had to be a source of mass for all that extra gravity. (For a refresher: Physicists consider gravity to be a consequence of mass. The more mass in an area, the stronger the gravitational pull.)

The data suggested that there was a staggering amount of mass in the galaxy that astronomers simply couldn’t see. “As they’re looking out there, they just can’t seem to find any kind of evidence that it’s some normal type of matter,” Yeager says. It wasn’t black holes; it wasn’t dead stars. It was something else generating the gravity needed to both hold the galaxy together and propel those outer stars to such fast speeds.
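
The size of the discrepancy can be estimated with one line of algebra: balancing gravity against circular motion gives an enclosed mass of M(r) = v²r/G. Below is a hedged sketch using illustrative round numbers for an Andromeda-like galaxy, not Rubin’s actual measurements.

```python
# If stars at radius r still orbit at speed v, the mass enclosed within r
# must be at least M(r) = v^2 * r / G.
G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
KPC = 3.086e19           # metres per kiloparsec

v = 230e3                # flat rotation speed, m/s (assumed round number)
r = 30 * KPC             # radius of the outermost measured stars (assumed)

m_enclosed = v**2 * r / G
print(f"Implied enclosed mass: {m_enclosed / M_SUN:.1e} solar masses")
# ~4e11 solar masses, several times what the visible stars and gas supply;
# that shortfall is what gets attributed to dark matter.
```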

“I mean, when you first see it, I think you’re afraid of being … you’re afraid of making a dumb mistake, you know, that there’s just some simple explanation,” Rubin later recounted. Other scientists might have immediately announced a dramatic conclusion based on this limited data. But not Rubin. She and her collaborators dug in and decided to do a systematic review of the star speeds in galaxies.

Rubin and Ford weren’t the first group to make an observation of stars moving fast at the edge of a galaxy. But what Rubin and her collaborators are famous for is verifying the finding across the universe. “She [studied] 20 galaxies, and then 40 and then 60, and they all show this bizarre behavior of stars out far in the galaxy, moving way, way too fast,” Yeager explains.

This is why people say Rubin ought to have won a Nobel Prize (the prizes are only awarded to living recipients, so she will never win one). She didn’t “discover” dark matter. But the data she collected over her career made it so the astronomy community had to reckon with the idea that most of the mass in the universe is unknown.

By 1985, Rubin was confident enough in her observations to declare something of an anti-eureka: announcing not a discovery, but a huge absence in our collective knowledge. “Nature has played a trick on astronomers,” she’s paraphrased as saying at an International Astronomical Union conference in 1985, “who thought we were studying the universe. We now know that we were studying only a small fraction of it.”

To this day, no one has “discovered” dark matter. But Rubin did something incredibly important: She told the scientific world about what they were missing.

In the decades since this anti-eureka, other scientists have been trying to fill in the void Rubin pointed to. Their work isn’t complete. But what they’ve been learning about dark matter is that it’s incredibly important to the very structure of our universe, and that it’s deeply, deeply weird.

Dark matter isn’t just enormous. It’s also strange.

Since Rubin’s WTF moment in the Arizona desert, more and more evidence has accumulated that dark matter is real, and weird, and accounts for most of the mass in the universe.

“Even though we can’t see it, we can still infer that dark matter is there,” Kathryn Zurek, a Caltech astrophysicist, explains. “Even if we couldn’t see the moon with our eyes, we would still know that it was there because it pulls the oceans in different directions — and it’s really very similar with dark matter.”

Scientists can’t see dark matter directly. But they can see its influence on the space and light around it. The biggest piece of indirect evidence: Dark matter, like all matter that accumulates in large quantities, has the ability to warp the very fabric of space.

“You can visualize dark matter as these lumps of matter that create little potholes in space-time,” Natarajan says. “All the matter in the universe is pockmarked with dark matter.”

When light falls into one of these potholes, it bends like light does in a lens. In this way, we can’t “see” dark matter, but we can “see” the distortions it produces in astronomers’ views of the cosmos. From this, we know dark matter forms a spherical cocoon around galaxies, lending them more mass, which allows their stars to move faster than what Newton’s laws would otherwise suggest.
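
The lensing signal itself is straightforward to estimate: in general relativity, light passing a mass M at impact parameter b is deflected by an angle of roughly 4GM/(c²b). The cluster mass and impact parameter in this sketch are illustrative assumptions, not values from the article.

```python
# Deflection angle for light grazing a mass M at distance b: theta = 4GM/(c^2 b).
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # kg
MPC = 3.086e22       # metres per megaparsec

m_cluster = 1e15 * M_SUN      # a massive galaxy cluster (assumed)
b = 1.0 * MPC                 # ray passing 1 Mpc from the cluster centre (assumed)

theta_rad = 4 * G * m_cluster / (C**2 * b)
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"Deflection: {theta_arcsec:.0f} arcseconds")   # tens of arcseconds
# Comparing measured deflections with the mass visible in stars and gas is
# how the dark matter maps of galaxy clusters are built.
```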

This is a NASA/ESA Hubble Space Telescope image of the galaxy cluster MACS J0717.5+3745. Shown in blue on the image is a map of the dark matter found within the cluster.

These are indirect observations, but they have given scientists some clues about the intrinsic nature of dark matter. It’s not called dark matter because of its color. It has no color. It’s called “dark” because it neither reflects nor emits light, nor any sort of electromagnetic radiation. So we can’t see it directly even with the most powerful telescopes.

Not only can we not see it, we couldn’t touch it if we tried: If some sentient alien tossed a piece of dark matter at you, it would pass right through you. If it were going fast enough, it would pass right through the entire Earth. Dark matter is like a ghost.

Here’s one reason physicists are confident in that weird fact. Astronomers have made observations of galaxy clusters that have slammed into one another like a head-on collision between two cars on the highway.

Astronomers deduced that in the collision, much of the normal matter in the galaxy clusters slowed down and mixed together (like two cars in a head-on collision would stop one another and crumple together). But the dark matter in the cluster didn’t slow down in the collision. It kept going, as if the collision didn’t even happen.

The event is recreated in this animation. The red represents normal matter in the galaxy clusters, and the blue represents dark matter. During the collision, the blue dark matter acts like a ghost, just passing through the normal colliding matter as if it weren’t there.

(A note: These two weird aspects of dark matter — its invisibility and its untouchability — are connected: Dark matter simply does not interact with the electromagnetic force of nature. The electromagnetic force lights up our universe with light and radiation, but it also makes the world feel solid.)

A final big piece of evidence for dark matter is that it helps physicists make sense of how galaxies formed in the early universe. “We know that dark matter had to be present to be part of that process,” astrophysicist Katie Mack explains. It’s believed dark matter coalesced together in the early universe before normal matter did, creating gravitational wells for normal matter to fall into. Those gravitational wells formed by dark matter became the seeds of galaxies.

So dark matter not only holds galaxies together, as Rubin’s work implied — it’s why galaxies are there in the first place.

So: What is it?

To this day, no one really knows what dark matter is.

Scientists’ best guess is that it’s a particle. Particles are the smallest building blocks of reality — they’re so small, they make up atoms. It’s thought that dark matter is just another one of these building blocks, but one we haven’t seen up close for ourselves. (There are a lot of different proposed particles that may be good dark matter candidates. Scientists still aren’t sure exactly which one it will be.)

You might be wondering: Why can’t we find the most common source of matter in all the universe? Well, our scientific equipment is made out of normal matter. So if dark matter passes right through normal matter, trying to find dark matter is like trying to catch a ghost baseball with a normal glove.

Plus, while dark matter is bountiful in the universe, it’s really diffuse. There just aren’t massive boulders of it passing near Earth. It’s more like we’re swimming in a fine mist of it. “If you add up all the dark matter inside humans, all humans on the planet at any given moment, it’s one nanogram,” Natarajan says — teeny-tiny.
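
That one-nanogram figure survives a back-of-the-envelope check, assuming the standard estimate of the local dark matter density near the Sun (about 0.3 GeV/cm³) and a rough human body volume; both inputs are assumptions for illustration, not numbers from the article.

```python
# Mass of dark matter occupying the combined volume of all human bodies.
GEV_TO_KG = 1.783e-27          # mass equivalent of 1 GeV/c^2, in kg

rho = 0.3 * GEV_TO_KG * 1e6    # local dark matter density, kg per m^3
people = 8e9                   # world population, rounded
body_volume = 0.065            # m^3 per person (assumed average)

total_kg = rho * people * body_volume
print(f"Dark matter inside all humans: ~{total_kg * 1e12:.1f} nanograms")
# ~0.3 ng, the same order of magnitude as the quoted figure.
```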

Dark matter may never be “discovered,” and that’s okay

Some physicists favor a different interpretation for what Rubin observed, and for what other scientists have observed since: that it’s not that there’s some invisible mass of dark matter dominating the universe, but that scientists’ fundamental understanding of gravity is flawed and needs to be reworked.

While “that’s a definite possibility,” Natarajan says, currently, there’s a lot more evidence on the side of dark matter being real and not just a mirage based on a misunderstanding of gravity. “We would need a new theory [of gravity] that can explain everything that we see already,” she explains. “There is no such theory that is currently available.”

On the left, a Hubble Space Telescope image of a galaxy cluster. On the right, a blue shading has been added to indicate where the dark matter ought to be.

It’s not hard to believe in something invisible, Mack says, if all the right evidence is there. We do it all the time.

“It’s similar to if you’re walking down the street,” she says. “And as you’re walking, you see that some trees are kind of bending over, and you hear some leaves rustling and maybe you see a plastic bag sort of floating past you and you feel a little cold on one side. You can pretty much figure out there’s wind. Right? And that wind explains all of these different phenomena. … There are many, many different pieces of evidence for dark matter. And for each of them, you might be able to find some other explanation that works just as well. But when taken together, it’s really good evidence.”

Meanwhile, experiments around the world are trying to directly detect dark matter. Physicists at the Large Hadron Collider are hoping their particle collisions may one day produce some detectable dark matter. Astronomers are looking out in space for more clues, hoping one day dark matter will reveal itself through an explosion of gamma rays. Elsewhere, scientists have burrowed deep underground, shielding labs from noise and radiation, hoping that dark matter will one day pass through a detector they’ve carefully designed and make itself known.

But it hasn’t happened yet. It may never happen: Scientists hope that dark matter isn’t a complete ghost to normal matter. They hope that every once in a while, when it collides with normal matter, it does something really, really subtle, like shove one single atom to the side, and set off a delicately constructed alarm.

But that day may never come. It could be dark matter just never prods normal matter, that it remains a ghost.

“I really did get into this business because I thought I would be detecting this within five years,” Prisca Cushman, a University of Minnesota physicist who works on a dark matter detector, says. She’s been trying to find dark matter for 20 years. She still believes it exists, that it’s out there to be discovered. But maybe it’s just not the particular candidate particle her detector was initially set up to find.

That failure isn’t a reason to give up, she says. “By not seeing [dark matter] yet with a particular detector, we’re saying, ‘Oh, so it’s not this particular model that we thought it might be.’ And that is an extremely interesting statement. Because all of a sudden an army of theorists go out and say, ‘Hey, what else could it be?’”

But even if the dark matter particle is never found, that won’t discount all science has learned about it. “It’s like you’re on a beach,” Natarajan explains. “You have a lot of sand dunes. And so we are in a situation where we are able to understand how these sand dunes form, but we don’t actually know what a grain of sand is made of.”

Embracing the unknown

Natarajan and the other physicists I spoke to for this story are comfortable with the unknown nature of dark matter. They’re not satisfied; they want to know more. But they accept it’s real. They accept it because that’s the state of the evidence. And if new evidence comes along to disprove it, they’ll have to accept that too.

“Inherent to the nature of science is the fact that whatever we know is provisional,” Natarajan says. “It is apt to change. So I think what motivates people like me to continue doing science is the fact that it keeps opening up more and more questions. Nothing is ultimately resolved.”

That’s true when it comes to the biggest questions, like “what is the universe made of?”

It’s true in so many other areas of science, too: Despite the endless headlines proclaiming new research findings daily, there are many more unanswered questions than answered ones. Scientists don’t really understand how bicycles stay upright, or know the root cause of Alzheimer’s disease or how to treat it. Similarly, at the beginning of the Covid-19 pandemic, we craved answers: Why do some people get much sicker than others? What does immunity to the virus look like? The truth was we couldn’t yet know (and still don’t, for sure). But that didn’t mean the scientific process was broken.

The truth is, when it comes to a lot of fields of scientific progress, we’re in the middle of the story, not the end. The lesson is that truth and knowledge are hard-won.

In the case of dark matter, it wasn’t that everything we knew about matter was wrong. It was that everything we knew about normal matter was insignificant compared to our ignorance about dark matter. The story of dark matter fits with a narrative of scientific progress that makes us humans seem smaller and smaller at each turn. First, we learned that Earth wasn’t the center of the universe. Now dark matter teaches us that the very stuff we’re made of — matter — is just a fraction of all reality.

If dark matter is one day discovered, it will only open up more questions. Dark matter could be more than one particle, more than one thing. There could be a richness and diversity in dark matter that’s a little like the richness and diversity we see in normal matter. It’s possible, and this is speculation, that there’s a kind of shadow universe that we don’t have access to — scientists label it the “dark sector” — that is made up of different components and exists, as a ghost, enveloping our galaxies.

It’s a little scary to learn how little we know, to learn we don’t even know what most of the universe is made of. But there’s a sense of optimism in a question, right? It makes you feel like we can know the answer.

There’s so much about our world that’s arrogant: from politicians who only believe in what’s convenient for them to Silicon Valley companies that claim they’re helping the world while fracturing it, and so many more examples. If only everyone could see a bit of what Vera Rubin saw — a fundamental truth not just about the universe, but about humanity.

“In a spiral galaxy, the ratio of dark-to-light matter is about a factor of 10,” Rubin said in a 2000 interview. “That’s probably a good number for the ratio of our ignorance to knowledge. We’re out of kindergarten, but only in about third grade.”


Science & Truth November 27th 2020

As previously written here, science is a methodology for discovering, manipulating and understanding nature. Good science requires pure and good inspiration. Unfortunately it also requires money, which is in the hands of a tiny political, military and business complex.

Hitler and the Nazis demonstrated how science can be used for evil purposes. World War Two’s victors were quick to employ the discoveries and inventions of the Nazi devils, who among other things gave us the ICBM. The British made a major contribution to nuclear, chemical and germ warfare.

Education in the dominant Anglo-United States bloc favours rich kids. As someone who taught maths and science in a British secondary modern school, my view is that the whole curriculum is the opposite of inspiring. I have previously quoted the following punk song, which I was inspired to write while teaching at Aylesbury’s Grange School, after watching the headmaster berate a young boy for wearing trainers.

“They limit our horizons and fit us for a task

Dressed in blue overalls and carrying a flask

Left Right, left right the headmaster roars

And mind you don’t stand talking, blocking up the doors

He’s tellin’ them they’re wearing the wrong kind of shoes

Their little faces quiver, do they know they’re goin’ to lose

All runnin’ ’round in circles and marchin’ up and down

Preparing for a life driftin’ round the town.”

That was back in 1980. The country and the world have got worse. People’s lives are boxed in – even if only a cardboard box on a cold dirty pavement.

Yesterday’s headlines warned British people that unemployment due to Covid restrictions will soon hit 2.6 million. The people dying because of this lockdown, not Covid, do not count for the well-paid mainstream media, politicians and other authorities. We are warned of higher taxes to come, but there is no mention of higher taxes for the rich, whose global economy created this situation and continues to make them ever more obscenely rich and powerful. BLM, which these people back and fund, is just another divide-and-fool smokescreen to keep them safe, justifying more arrests and sentences for dissidents.

So we read below about a wonderful vaccine coming out of so much scientific uncertainty – but we must still have social distancing. The good old Nazi abuse of science and pseudoscience is alive and well. So-called scientists care only for the pay, the grant money and status. Science created World War One’s mustard gas, which blinded people. Science has long been the elite’s weapon of choice for blinding people. Education is very much part of that process. An old tutor of mine, Basil Bernstein, from my days as a postgraduate at London University, produced a set of studies called ‘Knowledge & Control’. He hit the nail on the head. R.J Cook

The End of the Pandemic Is Now in Sight

A year of scientific uncertainty is over. Two vaccines look like they will work, and more should follow.

Sarah Zhang, November 18, 2020

For all that scientists have done to tame the biological world, there are still things that lie outside the realm of human knowledge. The coronavirus was one such alarming reminder, when it emerged with murky origins in late 2019 and found naive, unwitting hosts in the human body. Even as science began to unravel many of the virus’s mysteries—how it spreads, how it tricks its way into cells, how it kills—a fundamental unknown about vaccines hung over the pandemic and our collective human fate: Vaccines can stop many, but not all, viruses. Could they stop this one?

The answer, we now know, is yes. A resounding yes. Pfizer and Moderna have separately released preliminary data that suggest their vaccines are both more than 90 percent effective, far more than many scientists expected. Neither company has publicly shared the full scope of their data, but independent clinical-trial monitoring boards have reviewed the results, and the FDA will soon scrutinize the vaccines for emergency use authorization. Unless the data take an unexpected turn, initial doses should be available in December.
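
For context on what “more than 90 percent effective” means: trial efficacy is the relative reduction in attack rate between the vaccinated and placebo arms. The sketch below uses illustrative case counts broadly in line with the trials’ public summaries, not exact figures from this article.

```python
# Vaccine efficacy point estimate from a two-arm trial:
# VE = 1 - (attack rate among vaccinated) / (attack rate among placebo).
def vaccine_efficacy(cases_vax: int, n_vax: int,
                     cases_placebo: int, n_placebo: int) -> float:
    risk_vax = cases_vax / n_vax
    risk_placebo = cases_placebo / n_placebo
    return 1 - risk_vax / risk_placebo

# Equal-sized arms and ~170 total cases split 8 vs 162 (assumed for illustration):
ve = vaccine_efficacy(cases_vax=8, n_vax=21_700,
                      cases_placebo=162, n_placebo=21_700)
print(f"Estimated efficacy: {ve:.0%}")   # ~95%
```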

The tasks that lie ahead—manufacturing vaccines at scale, distributing them via a cold or even ultracold chain, and persuading wary Americans to take them—are not trivial, but they are all within the realm of human knowledge. The most tenuous moment is over: The scientific uncertainty at the heart of COVID-19 vaccines is resolved. Vaccines work. And for that, we can breathe a collective sigh of relief. “It makes it now clear that vaccines will be our way out of this pandemic,” says Kanta Subbarao, a virologist at the Doherty Institute, who has studied emerging viruses.

The invention of vaccines against a virus identified only 10 months ago is an extraordinary scientific achievement. They are the fastest vaccines ever developed, by a margin of years. From virtually the day Chinese scientists shared the genetic sequence of a new coronavirus in January, researchers began designing vaccines that might train the immune system to recognize the still-unnamed virus. They needed to identify a suitable piece of the virus to turn into a vaccine, and one promising target was the spike-shaped proteins that decorate the new virus’s outer shell. Pfizer and Moderna’s vaccines both rely on the spike protein, as do many vaccine candidates still in development. These initial successes suggest this strategy works; several more COVID-19 vaccines may soon cross the finish line. To vaccinate billions of people across the globe and bring the pandemic to a timely end, we will need all the vaccines we can get.

But it is no accident or surprise that Moderna and Pfizer are first out of the gate. They both bet on a new and hitherto unproven idea of using mRNA, which has the long-promised advantage of speed. This idea has now survived a trial by pandemic and emerged likely triumphant. If mRNA vaccines help end the pandemic and restore normal life, they may also usher in a new era for vaccine development.


The human immune system is awesome in its power, but an untrained one does not know how to aim its fire. That’s where vaccines come in. They present a harmless snapshot of a pathogen, a “wanted” poster, if you will, that primes the immune system to recognize the real virus when it comes along. Traditionally, this snapshot could be in the form of a weakened virus or an inactivated virus or a particularly distinctive viral molecule. But those approaches require vaccine makers to manufacture viruses and their molecules, which takes time and expertise. Both are lacking during a pandemic caused by a novel virus.

mRNA vaccines offer a clever shortcut. We humans don’t need to intellectually work out how to make viruses; our bodies are already very, very good at incubating them. When the coronavirus infects us, it hijacks our cellular machinery, turning our cells into miniature factories that churn out infectious viruses. The mRNA vaccine makes this vulnerability into a strength. What if we can trick our own cells into making just one individually harmless, though very recognizable, viral protein? The coronavirus’s spike protein fits this description, and the instructions for making it can be encoded into genetic material called mRNA.

Both vaccines, from Moderna and from Pfizer’s collaboration with the smaller German company BioNTech, package slightly modified spike-protein mRNA inside a tiny protective bubble of fat. Human cells take up this bubble and simply follow the directions to make spike protein. The cells then display these spike proteins, presenting them as strange baubles to the immune system. Recognizing these viral proteins as foreign, the immune system begins building an arsenal to prepare for the moment a virus bearing this spike protein appears.

This overall process mimics the steps of infection better than some traditional vaccines, which suggests that mRNA vaccines may provoke a better immune response for certain diseases. When you inject vaccines made of inactivated viruses or viral pieces, they can’t get inside the cell, and the cell can’t present those viral pieces to the immune system. Those vaccines can still elicit proteins called antibodies, which neutralize the virus, but they have a harder time stimulating T cells, which make up another important part of the immune response. (Weakened viruses used in vaccines can get inside cells, but risk causing an actual infection if something goes awry. mRNA vaccines cannot cause infection because they do not contain the whole virus.) Moreover, inactivated viruses or viral pieces tend to disappear from the body within a day, but mRNA vaccines can continue to produce spike protein for two weeks, says Drew Weissman, an immunologist at the University of Pennsylvania, whose mRNA vaccine research has been licensed by both BioNTech and Moderna. The longer the spike protein is around, the better for an immune response.

All of this is how mRNA vaccines should work in theory. But no one on Earth, until last week, knew whether mRNA vaccines actually do work in humans for COVID-19. Although scientists had prototyped other mRNA vaccines before the pandemic, the technology was still new. None had been put through the paces of a large clinical trial. And the human immune system is notoriously complicated and unpredictable. Immunology is, as my colleague Ed Yong has written, where intuition goes to die. Vaccines can even make diseases more severe, rather than less. The data from these large clinical trials from Pfizer/BioNTech and Moderna are the first, real-world proof that mRNA vaccines protect against disease as expected. The hope, in the many years when mRNA vaccine research flew under the radar, was that the technology would deliver results quickly in a pandemic. And now it has.

“What a relief,” says Barney Graham, a virologist at the National Institutes of Health, who helped design the spike protein for the Moderna vaccine. “You can make thousands of decisions, and thousands of things have to go right for this to actually come out and work. You’re just worried that you have made some wrong turns along the way.” For Graham, this vaccine is a culmination of years of such decisions, long predating the discovery of the coronavirus that causes COVID-19. He and his collaborators had homed in on the importance of spike protein in another virus, called respiratory syncytial virus, and figured out how to make the protein more stable and thus suitable for vaccines. This modification appears in both Pfizer/BioNTech’s and Moderna’s vaccines, as well as other leading vaccine candidates.

The spectacular efficacy of these vaccines, should the preliminary data hold, likely also has to do with the choice of spike protein as vaccine target. On one hand, scientists were prepared for the spike protein, thanks to research like Graham’s. On the other hand, the coronavirus’s spike protein offered an opening. Three separate components of the immune system—antibodies, helper cells, and killer T cells—all respond to the spike protein, which isn’t the case with most viruses.

In this, we were lucky. “It’s the three punches,” says Alessandro Sette. Working with Shane Crotty, his fellow immunologist at the La Jolla Institute, Sette found that COVID-19 patients whose immune systems can marshal all three responses against the spike protein tend to fare the best. The fact that most people can recover from COVID-19 was always encouraging news; it meant a vaccine simply needed to jump-start the immune system, which could then take on the virus itself. But no definitive piece of evidence existed that proved COVID-19 vaccines would be a slam dunk. “There’s nothing like a Phase 3 clinical trial,” Crotty says. “You don’t know what’s gonna happen with a vaccine until it happens, because the virus is complicated and the immune system is complicated.”

Experts anticipate that the ongoing trials will clarify still-unanswered questions about the COVID-19 vaccines. For example, Ruth Karron, the director of the Center for Immunization Research at Johns Hopkins University, asks, does the vaccine prevent only a patient’s symptoms? Or does it keep them from spreading the virus? How long will immunity last? How well does it protect the elderly, many of whom have a weaker response to the flu vaccine? So far, Pfizer has noted that its vaccine seems to protect the elderly just as well, which is good news because they are especially vulnerable to COVID-19.

Several more vaccines using the spike protein are in clinical trials too. They rely on a suite of different vaccine technologies, including weakened viruses, inactivated viruses, viral proteins, and another fairly new concept called DNA vaccines. Never before have companies tested so many different types of vaccines against the same virus, which might end up revealing something new about vaccines in general. You now have the same spike protein delivered in many different ways, Sette points out. How will the vaccines behave differently? Will they each stimulate different parts of the immune system? And which parts are best for protecting against the coronavirus? The pandemic is an opportunity to compare different types of vaccines head-on.

If the two mRNA vaccines continue to be as good as they initially seem, their success will likely crack open a whole new world of mRNA vaccines. Scientists are already testing them against currently un-vaccinable viruses such as Zika and cytomegalovirus and trying to make improved versions of existing vaccines, such as for the flu. Another possibility lies in personalized mRNA vaccines that can stimulate the immune system to fight cancer.

But the next few months will be a test of one potential downside of mRNA vaccines: their extreme fragility. mRNA is an inherently unstable molecule, which is why it needs that protective bubble of fat, called a lipid nanoparticle. But the lipid nanoparticle itself is exquisitely sensitive to temperature. For longer-term storage, Pfizer/BioNTech’s vaccine has to be stored at –70 degrees Celsius and Moderna’s at –20 Celsius, though they can be kept at higher temperatures for a shorter amount of time. Pfizer/BioNTech and Moderna have said they can collectively supply enough doses for 22.5 million people in the United States by the end of the year.

Distributing the limited vaccines fairly and smoothly will be a massive political and logistical challenge, especially as it begins during a bitter transition of power in Washington. The vaccine is a scientific triumph, but the past eight months have made clear how much pandemic preparedness is not only about scientific research. Ensuring adequate supplies of tests and personal protective equipment, providing economic relief, and communicating the known risks of COVID-19 transmission are all well within the realm of human knowledge, yet the U.S. government has failed at all of that.

The vaccine by itself cannot slow the dangerous trajectory of COVID-19 hospitalizations this fall or save the many people who may die by Christmas. But it can give us hope that the pandemic will end. Every infection we prevent now—through masking and social distancing—is an infection that can, eventually, be prevented forever through vaccines.

Sarah Zhang is a staff writer at The Atlantic.

The Science Behind Your Cheap Wine Posted November 17th 2020

How advances in bottling, fermenting and taste-testing are democratizing a once-opaque liquid.

Smithsonian Magazine

By Ben Panko

To develop the next big mass-market wine, winemakers first hone flavor using focus groups, then add approved flavoring and coloring additives to make the drink match up with what consumers want. Credit: Zero Creatives / Getty Images.

We live in a golden age of wine, thanks in part to thirsty millennials and Americans seemingly intent on out-drinking the French. Yet for all its popularity, the sommelier’s world is largely a mysterious one. Bottles on grocery store shelves come adorned with whimsical images and proudly proclaim their region of origin, but rarely list ingredients other than grapes. Meanwhile, ordering wine at a restaurant can often mean pretending to understand terms like “mouthfeel,” “legs” or “bouquet.”

“I liked wine the same way I liked Tibetan hand puppetry or theoretical particle physics,” writes journalist Bianca Bosker in the introduction to her 2017 book Cork Dork, “which is to say I had no idea what was going on but was content to smile and nod.”

Curious about what exactly happened in this shrouded world, Bosker took off a year and a half from writing to train to become a sommelier, and talk her way into wine production facilities across the country. In the end, Bosker learned that most wine is nowhere near as “natural” as many people think—and that scientific advances have helped make cheap wine nearly as good as the expensive stuff.

“There’s an incredible amount we don’t understand about what makes wine—this thing that shakes some people to the core,” Bosker says. In particular, most people don’t realize how much chemistry goes into making a product that is supposedly just grapes and yeast, she says. Part of the reason is that, unlike food and medicines, alcoholic beverages in the U.S. aren’t covered by the Food and Drug Administration. That means winemakers aren’t required to disclose exactly what is in each bottle; all they have to reveal is the alcohol content and whether the wine has sulfites or certain food coloring additives.

In Cork Dork, published by Penguin Books, Bosker immerses herself in the world of wine and interviews winemakers and scientists to distill for the average drinking person what goes into your bottle of pinot. “One of the things that I did was to go into this wine conglomerate [Treasury Wine Estates] that produces millions of bottles of wine per year,” Bosker says. “People are there developing wine the way flavor scientists develop the new Oreo or Doritos flavor.” 

For Treasury Wine Estates, the process of developing a mass-market wine starts in a kind of “sensory insights lab,” Bosker found. There, focus groups of professional tasters blind-sample a variety of Treasury’s wine products. The best ones are then sampled by average consumers to help winemakers get a sense of which “sensory profiles” would do best in stores and restaurants, whether it be “purplish wines with blackberry aromas, or low-alcohol wines in a pink shade,” she writes.

From these baseline preferences, the winemakers take on the role of the scientist, adding a dash of acidity or a hint of red to bring their wines in line with what consumers want. Winemakers can draw on a list of more than 60 government-approved additives that can be used to tweak everything from color to acidity to even thickness. 

Then the wines can be mass-produced in huge steel vats, which hold hundreds of gallons and are often infused with oak chips to impart the flavor of real oaken barrels. Every step of this fermentation process is closely monitored, and can be altered by changing temperature or adding more nutrients for the yeast. Eventually, the wine is packaged on huge assembly lines, churning out thousands of bottles an hour that will make their way to your grocery store aisle and can sometimes sell for essentially the same price as bottled water.

“This idea of massaging grapes with the help of science is not new,” Bosker points out. The Romans, for example, added lead to their wine to make it thicker. In the Middle Ages, winemakers began adding sulfur to make wines stay fresh for longer.

However, starting in the 1970s, enologists (wine scientists) at the University of California at Davis took the science of winemaking to new heights, Bosker says. These entrepreneurial wine wizards pioneered new forms of fermentation to help prevent wine from spoiling and produce it more efficiently. Along with the wide range of additives, winemakers today can custom order yeast that will produce wine with certain flavors or characteristics. Someday soon, scientists might even build yeast from scratch.

Consumers most commonly associate these kinds of additives with cheap, mass-produced wines like Charles Shaw (aka “Two Buck Chuck”) or Barefoot. But even the most expensive red wines often have their color boosted with the use of “mega-red” or “mega-purple” juice from other grape varieties, says Davis enologist Andrew Waterhouse. Other common manipulations include adding acidity with tartaric acid to compensate for the less acidic grapes grown in warmer climates, or adding sugar to compensate for the more acidic grapes grown in cooler climates.

Tannins, a substance found in grape skins, can be added to make a wine taste “drier” (less sweet) and polysaccharides can even be used to give the wine a “thicker mouthfeel,” meaning the taste will linger more on the tongue.

When asked if there was any truth to the oft-repeated legend that cheap wine is bound to give more headaches and worse hangovers, Waterhouse was skeptical. “There’s no particular reason that I can think of that expensive wine is better than cheap wine,” Waterhouse says. He adds, however, that there isn’t good data on the topic. “As you might suspect, the [National Institutes of Health] can’t make wine headaches a high priority,” he says.

Instead, Waterhouse suggests, there may be a simpler explanation: “It’s just possible that people tend to drink more wine when it’s cheap.”

While this widespread use of additives may make some natural-foods consumers cringe, Bosker found no safety or health issues to worry about in her research. Instead, she credits advancements in wine science with improving the experience of wine for most people by “democratizing quality.” “The technological revolution that has taken place in the winery has actually elevated the quality of really low-end wines,” Bosker says.

The main issue she has with the modern wine industry is that winemakers aren’t usually transparent with all of their ingredients—because they don’t have to be. “I find it outrageous that most people don’t realize that their fancy Cabernet Sauvignon has actually been treated with all kinds of chemicals,” Bosker says. 

Yet behind those fancy labels and bottles and newfangled chemical manipulation, the biggest factor influencing the price of wine is an old one: terroir, or the qualities a wine draws from the region where it was grown. Land in famous winemaking areas such as Bordeaux, France, or Napa Valley, California, can still command prices 10 times higher than equally productive grape-growing land in other areas, says Waterhouse. Many of these winemakers grow grape varieties that yield smaller quantities but are considered far higher quality.

“Combine the low yield and the high cost of the land, and there’s a real structural difference in the pricing of those wines,” Waterhouse says. Yet as winemakers continue to advance the science of making, cultivating and bottling this endlessly desirable product, that may soon change. After all, as Bosker says, “wine and science have always gone hand in hand.”

Ben Panko is a staff writer for Smithsonian.com

Colour television in Britain

By Iain Baird on 15 May 2011

Beginning in the late 1960s, British households began the rather expensive process of investing in their first colour televisions, causing the act of viewing to change dramatically.

Larger screens, sharper images and, of course, colour meant that the television audience experienced a feeling of greater realism while viewing—an enhanced sense of actually “being there”. Programmers sought to attract their new audience with brightly-coloured fare such as The Avengers, Z Cars, Dad’s Army, and The Prisoner. This important change was difficult to recognise because it was so gradual: many households did not buy colour sets right away, and for several years many programmes were still available only in black-and-white.

Invention

Colour television was first demonstrated publicly by John Logie Baird on 3 July 1928 in his laboratory at 133 Long Acre in London. The technology used was electro-mechanical, and the early test subject was a basket of strawberries “which proved popular with the staff”. The following month, the same demonstration was given to a mostly academic audience attending a British Association for the Advancement of Science meeting in Glasgow.

In the mid-to-late 1930s, Baird returned to his colour television research and developed some of the world’s first colour television systems, most of which used cathode-ray tubes. World War II, during which the BBC television service was suspended, put his company out of business and ended his salary. Nonetheless, he continued his colour research, financing it from his personal savings, including cashing in his life insurance policy. He gave the world’s first demonstration of a fully integrated electronic colour picture tube on 16 August 1944. Baird’s untimely death only two years later marked the end of his pioneering colour research in Britain.

The lead in colour television research transferred to the USA with demonstrations given by CBS Laboratories. Soon after, the Radio Corporation of America (RCA) channelled some of its massive resources towards colour television development.

Broadcast

The world’s first proper colour television service began in the USA. Colour television was available in select cities beginning in 1954, using the NTSC (National Television System Committee)-compatible colour system championed by RCA. A fledgling colour service introduced briefly by CBS in 1951 was stopped after RCA complained to the FCC (Federal Communications Commission) that it was not compatible with existing NTSC black-and-white television sets.

Meanwhile in Britain, several successful colour television tests were carried out, but it would take many more years for a public service to become viable here due to post-war austerity and uncertainty about what kind of colour television system would be the best one for Britain to adopt—and when.

On 1 July 1967, BBC2 launched Europe’s first colour service with the Wimbledon tennis championships, presented by David Vine. This was broadcast using the Phase Alternating Line (PAL) system, based on the work of the German television engineer Walter Bruch. PAL seemed the obvious choice, and its adoption was the signal to the British television industry that the time for a public colour television service had finally arrived. PAL was a marked improvement on the American NTSC system from which it was derived; NTSC was soon dubbed “never twice the same colour” by comparison.
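
Why alternating the phase helps can be shown in a few lines. Both systems carry colour as the phase of a subcarrier, so a phase error in transmission shifts every hue on screen. PAL inverts one colour component on alternate lines, which lets the receiver cancel the error by averaging successive lines (real sets used a delay line for this). The numbers below are made up for illustration.

```python
# Model chroma as a complex phasor: magnitude = saturation, angle = hue.
import cmath, math

chroma = cmath.rect(1.0, math.radians(60))   # true colour: full saturation, 60-degree hue
error = cmath.rect(1.0, math.radians(15))    # 15-degree transmission phase error

# Line A is sent normally; line B has its V (imaginary) component inverted,
# i.e. conjugated. Both pick up the same phase error in transit, and the
# receiver re-inverts line B on arrival.
line_a = chroma * error
line_b = (chroma.conjugate() * error).conjugate()

recovered = (line_a + line_b) / 2
print(f"true hue   : {math.degrees(cmath.phase(chroma)):.1f} deg")
print(f"NTSC-style : {math.degrees(cmath.phase(line_a)):.1f} deg (hue shifted)")
print(f"PAL average: {math.degrees(cmath.phase(recovered)):.1f} deg "
      f"(hue restored; saturation {abs(recovered):.2f} of original)")
```

The error that NTSC displayed as a wrong hue becomes, after PAL’s averaging, a slight loss of saturation, which the eye tolerates far better.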

On 15 November 1969, colour broadcasting went live on the remaining two channels, BBC1 and ITV, which were in fact more popular than BBC2. Only about half of the national population was brought within the range of colour signals by 15 November 1969. Colour could be received in the London Weekend Television/Thames region, ATV (Midlands), Granada (North-West) and Yorkshire TV regions. ITV’s first colour programmes in Scotland appeared on 13 December 1969 in Central Scotland; in Wales on 6 April 1970 in South Wales; and in Northern Ireland on 14 September 1970 in the eastern parts.

Colour TV licences were introduced on 1 January 1968, costing £10—twice the price of the standard £5 black and white TV licence.

Programmes

The BBC and ITV sought programmes that could exploit this new medium of colour television. Major sporting events were linked to colour television from the very start of the technology being made available. Snooker, with its rainbow of different-coloured balls, was ideal. On 23 July 1969, BBC2’s Pot Black was born, a series of non-ranking snooker tournaments. It would run until 1986, with one-off programmes continuing up to the present day.

The first official colour programme on BBC1 was a concert by Petula Clark from the Royal Albert Hall, London, broadcast at midnight on 14/15 November 1969. This might seem an odd hour to launch a colour service, but is explained by the fact that the Postmaster General’s colour broadcasting licence began at exactly this time.

The first official colour programme on ITV was a Royal Auto Club Road Report at 09.30 am, followed at 09.35 by The Growing Summer, London Weekend Television’s first colour production for children, starring Wendy Hiller. This was followed at 11.00 by Thunderbirds. The episode was ‘City of Fire’, which also became the first programme to feature a colour advertisement, for Birds Eye peas.

The 9th World Cup finals in Mexico, 1970, were not only the very first to be televised in colour, but also the first that viewers in Europe were able to watch live via trans-Atlantic satellite.

Colour TV sets

Colour TV sets did not outnumber black-and-white sets until 1976, mainly due to the high price of the early colour sets. Colour receivers were almost as expensive in real terms as the early black-and-white sets had been; the monthly rental for a large-screen receiver was £8. In March 1969 there were only 100,000 colour TV sets in use; by the end of 1969 this had doubled to 200,000; and by 1972 there were 1.6 million.

Iain Baird

Iain was Associate Curator of Television and Radio at the National Science and Media Museum until 2016.

Color motion picture film

Still from test film made by Edward Turner in 1902

Color motion picture film refers both to unexposed color photographic film in a format suitable for use in a motion picture camera, and to finished motion picture film, ready for use in a projector, which bears images in color.

The first color cinematography was by additive color systems such as the one patented by Edward Raymond Turner in 1899 and tested in 1902.[1] A simplified additive system was successfully commercialized in 1909 as Kinemacolor. These early systems used black-and-white film to photograph and project two or more component images through different color filters.

During the 1920s, the first practical subtractive color processes were introduced. These also used black-and-white film to photograph multiple color-filtered source images, but the final product was a multicolored print that did not require special projection equipment. Before 1932, when three-strip Technicolor was introduced, commercialized subtractive processes used only two color components and could reproduce only a limited range of color.

In 1935, Kodachrome was introduced, followed by Agfacolor in 1936. They were intended primarily for amateur home movies and “slides”. These were the first films of the “integral tripack” type, coated with three layers of differently color-sensitive emulsion, which is usually what is meant by the words “color film” as commonly used. The few color photographic films still being made in the 2010s are of this type. The first color negative films and corresponding print films were modified versions of these films. They were introduced around 1940 but only came into wide use for commercial motion picture production in the early 1950s. In the US, Eastman Kodak’s Eastmancolor was the usual choice, but it was often re-branded with another trade name, such as “WarnerColor”, by the studio or the film processor.

Later color films were standardized into two distinct processes: Eastman Color Negative 2 chemistry (camera negative stocks, duplicating interpositive and internegative stocks) and Eastman Color Positive 2 chemistry (positive prints for direct projection), usually abbreviated as ECN-2 and ECP-2. Fuji’s products are compatible with ECN-2 and ECP-2.

Film was the dominant form of cinematography until the 2010s, when it was largely replaced by digital cinematography.[2]

Overview

The first motion pictures were photographed using a simple homogeneous photographic emulsion that yielded a black-and-white image—that is, an image in shades of gray, ranging from black to white, corresponding to the luminous intensity of each point on the photographed subject. Light, shade, form and movement were captured, but not color.

With color motion picture film, information about the color of the light at each image point is also captured. This is done by analyzing the visible spectrum of color into several regions (normally three, commonly referred to by their dominant colors: red, green and blue) and recording each region separately.

Current color films do this with three layers of differently color-sensitive photographic emulsion coated on one strip of film base. Early processes used color filters to photograph the color components as completely separate images (e.g., three-strip Technicolor) or adjacent microscopic image fragments (e.g., Dufaycolor) in a one-layer black-and-white emulsion.

Each photographed color component, initially just a colorless record of the luminous intensities in the part of the spectrum that it captured, is processed to produce a transparent dye image in the color complementary to the color of the light that it recorded. The superimposed dye images combine to synthesize the original colors by the subtractive color method. In some early color processes (e.g., Kinemacolor), the component images remained in black-and-white form and were projected through color filters to synthesize the original colors by the additive color method.
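
The subtractive logic lends itself to a toy model: each color record is developed into a dye of the complementary color, and white projection light passing through the stacked dyes loses exactly what each record lacked. The sketch below assumes ideal dyes; real film dyes absorb imperfectly, which is one reason early two-color processes reproduced such a limited range.

```python
# Idealized subtractive synthesis: records -> complementary dyes -> projection.
def project(red_record: float, green_record: float, blue_record: float):
    """Color records in [0, 1] -> dye amounts -> projected color."""
    # Develop each record into its complementary dye:
    cyan, magenta, yellow = 1 - red_record, 1 - green_record, 1 - blue_record
    # Each ideal dye attenuates only its complementary band of white light:
    red_out = 1 - cyan        # cyan dye absorbs red
    green_out = 1 - magenta   # magenta dye absorbs green
    blue_out = 1 - yellow     # yellow dye absorbs blue
    return red_out, green_out, blue_out

print(project(0.9, 0.5, 0.1))   # -> (0.9, 0.5, 0.1): the scene is recovered
```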

Tinting and hand coloring

See also: List of early color feature films

The earliest motion picture stocks were orthochromatic, and recorded blue and green light, but not red. Recording all three spectral regions required making film stock panchromatic to some degree. Since orthochromatic film stock hindered color photography in its beginnings, the first films with color in them used aniline dyes to create artificial color. Hand-colored films appeared in 1895 with Thomas Edison‘s hand-painted Annabelle’s Dance for his Kinetoscope viewers.

Many early filmmakers from the first ten years of film also used this method to some degree. George Méliès offered hand-painted prints of his own films at an additional cost over the black-and-white versions, including the visual-effects pioneering A Trip to the Moon (1902). Various parts of the film were painted frame by frame by twenty-one women in Montreuil[3] using a production-line method.[4]

The first commercially successful stencil color process was introduced in 1905 by Segundo de Chomón working for Pathé Frères. Pathé Color, renamed Pathéchrome in 1929, became one of the most accurate and reliable stencil coloring systems. Sections of an original print of a film were cut by pantograph in the appropriate areas for up to six colors,[3] and dye was applied by a coloring machine with dye-soaked velvet rollers.[5] After a stencil had been made for the whole film, it was placed into contact with the print to be colored and run at high speed (60 feet per minute) through the coloring (staining) machine. The process was repeated for each set of stencils corresponding to a different color. By 1910, Pathé had over 400 women employed as stencilers in their Vincennes factory. Pathéchrome continued production through the 1930s.[3]

A more common technique emerged in the early 1910s known as film tinting, a process in which either the emulsion or the film base is dyed, giving the image a uniform monochromatic color. This process was popular during the silent era, with specific colors employed for certain narrative effects (red for scenes with fire or firelight, blue for night, etc.).[4]

A complementary process, called toning, replaces the silver particles in the film with metallic salts or mordanted dyes. This creates a color effect in which the dark parts of the image are replaced with a color (e.g., blue and white rather than black and white). Tinting and toning were sometimes applied together.[4]
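
The difference between the two effects is easy to state numerically if a frame is treated as grayscale values between 0 and 1: tinting scales every pixel by the dye color, while toning replaces the dark, silver-dense areas with the color and leaves highlights light. The dye color and strength below are made-up values for illustration.

```python
import numpy as np

frame = np.linspace(0.0, 1.0, 5)       # a black-to-white test gradient
blue = np.array([0.2, 0.3, 1.0])       # dye color (assumed)

# Tinting: the whole image takes on a uniform color cast.
tinted = frame[:, None] * blue

# Toning: shadows are replaced by the color; highlights stay near white.
toned = np.clip(frame[:, None] + (1 - frame)[:, None] * blue * 0.8, 0, 1)

print("tinted:\n", tinted.round(2))
print("toned:\n", toned.round(2))
```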

In the United States, St. Louis engraver Max Handschiegl and cinematographer Alvin Wyckoff created the Handschiegl Color Process, a dye-transfer equivalent of the stencil process, first used in Joan the Woman (1917) directed by Cecil B. DeMille, and used in special effects sequences for films such as The Phantom of the Opera (1925).[3]

Eastman Kodak introduced its own system of pre-tinted black-and-white film stocks called Sonochrome in 1929. The Sonochrome line featured films tinted in seventeen different colors including Peachblow, Inferno, Candle Flame, Sunshine, Purple Haze, Firelight, Azure, Nocturne, Verdante, Aquagreen,[6] Caprice, Fleur de Lis, Rose Doree, and the neutral-density Argent, which kept the screen from becoming excessively bright when switching to a black-and-white scene.[3]

Tinting and toning continued to be used well into the sound era. In the 1930s and 1940s, some western films were processed in a sepia-toning solution to evoke the feeling of old photographs of the day. Tinting was used as late as 1951 for Sam Newfield‘s sci-fi film Lost Continent for the green lost-world sequences. Alfred Hitchcock used a form of hand-coloring for the orange-red gun-blast at the audience in Spellbound (1945).[3] Kodak’s Sonochrome and similar pre-tinted stocks were still in production until the 1970s and were used commonly for custom theatrical trailers and snipes.

In the last half of the 20th century, Norman McLaren, one of the pioneers of animated film, made several animated films in which he directly hand-painted the images, and in some cases also the soundtrack, on each frame of the film. This approach had been employed earlier, in the late 19th and early 20th centuries. Among the precursors of frame-by-frame color hand-painting were the Aragonese Segundo de Chomón and his French wife Julienne Mathieu, Méliès’ close competitors.

Tinting was gradually replaced by natural color techniques.

Physics of light and color

The principles on which color photography is based were first proposed by Scottish physicist James Clerk Maxwell in 1855 and presented at the Royal Society in London in 1861.[3] By that time, it was known that light comprises a spectrum of different wavelengths that are perceived as different colors as they are absorbed and reflected by natural objects. Maxwell discovered that all natural colors in this spectrum as perceived by the human eye may be reproduced with additive combinations of three primary colors (red, green, and blue) which, when mixed equally, produce white light.[3]

Between 1900 and 1935, dozens of natural color systems were introduced, although only a few were successful.[6]

Additive color

The first color systems that appeared in motion pictures were additive color systems. Additive color was practical because no special color stock was necessary. Black-and-white film could be processed and used in both filming and projection. The various additive systems entailed the use of color filters on both the movie camera and projector. Additive color adds lights of the primary colors in various proportions to the projected image. Because of the limited amount of space to record images on film, and later because of the lack of a camera that could record more than two strips of film at once, most early motion-picture color systems consisted of two colors, often red and green or red and blue.[4]

A pioneering three-color additive system was patented in England by Edward Raymond Turner in 1899.[7] It used a rotating set of red, green and blue filters to photograph the three color components one after the other on three successive frames of panchromatic black-and-white film. The finished film was projected through similar filters to reconstitute the color. In 1902, Turner shot test footage to demonstrate his system, but projecting it proved problematic because acceptable results required accurate registration (alignment) of the three separate color elements. Turner died a year later without having satisfactorily projected the footage. In 2012, curators at the National Media Museum in Bradford, UK, had the original custom-format nitrate film copied to black-and-white 35 mm film, which was then scanned into a digital video format by telecine. Finally, digital image processing was used to align and combine each group of three frames into one color image.[8] As a result, these films from 1902 became viewable in full color.[9]
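A rough sense of how the 2012 digital combination step worked can be given in code. The sketch below is a simplified illustration, not the National Media Museum’s actual pipeline: it assumes the scanned frames arrive as grayscale arrays in repeating red-green-blue filter order, and it omits the registration (alignment) step that made Turner’s system so difficult in practice.

```python
import numpy as np

def combine_turner_frames(frames):
    """Combine successive R, G, B filter exposures into color images.

    `frames` is assumed to be a list of 2-D grayscale arrays (floats
    in [0, 1]) in repeating red, green, blue order. A real restoration
    must also register (align) the three frames before combining them,
    which is omitted here.
    """
    color_frames = []
    for i in range(0, len(frames) - 2, 3):
        r, g, b = frames[i], frames[i + 1], frames[i + 2]
        color_frames.append(np.dstack([r, g, b]))  # stack into H x W x 3
    return color_frames
```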

A Visit to the Seaside (1908), the first motion picture in Kinemacolor.
With Our King and Queen Through India (extract).

Practical color in the motion picture business began with Kinemacolor, first demonstrated in 1906.[5] This was a two-color system created in England by George Albert Smith, and promoted by film pioneer Charles Urban’s Charles Urban Trading Company in 1908. It was used for a series of films including the documentary With Our King and Queen Through India, depicting the Delhi Durbar (also known as The Durbar at Delhi, 1912), which was filmed in December 1911. The Kinemacolor process consisted of alternating frames of specially sensitized black-and-white film exposed at 32 frames per second through a rotating filter with alternating red and green areas. The printed film was projected through similar alternating red and green filters at the same speed. A perceived range of colors resulted from the blending of the separate red and green alternating images by the viewer’s persistence of vision.[4][10]

William Friese-Greene invented another additive color system called Biocolour, which was developed by his son Claude Friese-Greene after William’s death in 1921. William sued George Albert Smith, alleging that the Kinemacolor process infringed on the patents for his Bioschemes, Ltd.; as a result, Smith’s patent was revoked in 1914.[3] Both Kinemacolor and Biocolour had problems with “fringing” or “haloing” of the image, due to the separate red and green images not fully matching up.[3]

By their nature, these additive systems were very wasteful of light. Absorption by the color filters meant that only a minor fraction of the projection light actually reached the screen, resulting in an image dimmer than a typical black-and-white image; the larger the screen, the dimmer the picture. For this and other reasons, the use of additive processes for theatrical motion pictures had been almost completely abandoned by the early 1940s, though additive color methods are employed by all the color video and computer display systems in common use today.[4]
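A back-of-envelope calculation shows why the losses were so severe. The figure below is an idealization chosen only for illustration; real filters and shutter mechanisms lost even more light.

```python
# Idealized estimate of additive-system light loss.
# Assumption (for illustration only): a perfect primary filter
# passes one-third of the lamp's white light.
lamp_output = 1.0            # brightness of an unfiltered black-and-white frame
ideal_filter_pass = 1 / 3    # fraction of white light passed by one primary filter

filtered_frame = lamp_output * ideal_filter_pass
print(f"Filtered frame: {filtered_frame:.0%} of black-and-white brightness")
# The same light is then spread over the screen area, so the larger
# the screen, the dimmer the resulting picture.
```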

Subtractive color

The first practical subtractive color process was introduced by Kodak as “Kodachrome”, a name recycled twenty years later for a very different and far better-known product. Filter-photographed red and blue-green records were printed onto the front and back of one strip of black-and-white duplitized film. After development, the resulting silver images were bleached away and replaced with color dyes, red on one side and cyan on the other. The pairs of superimposed dye images reproduced a useful but limited range of color. Kodak’s first narrative film with the process was a short subject entitled Concerning $1000 (1916). Though their duplitized film provided the basis for several commercialized two-color printing processes, the image origination and color-toning methods constituting Kodak’s own process were little-used.

The first truly successful subtractive color process was William van Doren Kelley’s Prizma,[11] an early color process that was first introduced at the American Museum of Natural History in New York City on 8 February 1917.[12][13] Prizma began in 1916 as an additive system similar to Kinemacolor.

However, after 1917, Kelley reinvented Prizma as a subtractive process, releasing several years of short films and travelogues, such as Everywhere With Prizma (1919) and A Prizma Color Visit to Catalina (1919), before features such as the documentary Bali the Unknown (1921), The Glorious Adventure (1922), and Venus of the South Seas (1924). A Prizma promotional short filmed for Del Monte Foods, titled Sunshine Gatherers (1921), is available on DVD in Treasures 5: The West 1898–1938 from the National Film Preservation Foundation.

The invention of Prizma led to a series of similarly printed color processes. These bipack color systems used two strips of film running through the camera together, one recording red and the other recording blue-green light. The black-and-white negatives were printed onto the two sides of duplitized film, and the images were then toned red and blue respectively, effectively creating a subtractive color print.

Leon Forrest Douglass (1869–1940), a founder of Victor Records, developed a system he called Naturalcolor, and first showed a short test film made in the process on 15 May 1917 at his home in San Rafael, California. The only feature film known to have been made in this process, Cupid Angling (1918) — starring Ruth Roland and with cameo appearances by Mary Pickford and Douglas Fairbanks — was filmed in the Lake Lagunitas area of Marin County, California.[14]

After experimenting with additive systems (including a camera with two apertures, one with a red filter, one with a green filter) from 1915 to 1921, Dr. Herbert Kalmus, Dr. Daniel Comstock, and mechanic W. Burton Wescott developed a subtractive color system for Technicolor. The system used a beam splitter in a specially modified camera to send red and green light to adjacent frames of one strip of black-and-white film. From this negative, skip-printing was used to print each color’s frames contiguously onto film stock with half the normal base thickness. The two prints were chemically toned to roughly complementary hues of red and green,[5] then cemented together, back to back, into a single strip of film. The first film to use this process was The Toll of the Sea (1922) starring Anna May Wong. Perhaps the most ambitious film to use it was The Black Pirate (1926), starring and produced by Douglas Fairbanks.

The process was later refined through the incorporation of dye imbibition, which allowed for the transferring of dyes from both color matrices into a single print, avoiding several problems that had become evident with the cemented prints and allowing multiple prints to be created from a single pair of matrices.[4]

Technicolor’s early systems were in use for several years, but the process was very expensive: shooting cost three times that of black-and-white photography, and printing costs were no cheaper. By 1932, color photography in general had nearly been abandoned by major studios, until Technicolor developed a new process to record all three primary colors. Utilizing a special dichroic beam splitter equipped with two 45-degree prisms in the form of a cube, light from the lens was deflected by the prisms and split into two paths to expose three black-and-white negatives, one each recording the densities for red, green, and blue.[15]

The three negatives were then printed onto gelatin matrices; processing bleached away the silver entirely, leaving only a gelatin relief record of the image. A receiver print, consisting of a 50% density print of the black-and-white negative for the green record strip, and including the soundtrack, was struck and treated with dye mordants to aid the imbibition process (this “black” layer was discontinued in the early 1940s). The matrices for each strip were coated with their complementary dye (yellow, cyan, or magenta), and each was then successively brought into high-pressure contact with the receiver, which imbibed and held the dyes; together these rendered a wider spectrum of color than previous technologies.[16] The first animated film with the three-color (also called three-strip) system was Walt Disney‘s Flowers and Trees (1932), the first short live-action film was La Cucaracha (1934), and the first feature was Becky Sharp (1935).[5]
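The subtractive logic of the imbibition print can be mimicked numerically. The following is a toy model under stated assumptions (exposure records normalized to [0, 1]; ideal block dyes), not Technicolor’s actual densitometry: each record’s matrix deposits its complementary dye, and each dye subtracts its complement from white projection light.

```python
import numpy as np

def dye_transfer_print(red_rec, green_rec, blue_rec):
    """Toy model of three-strip dye-transfer (imbibition) printing.

    Each record is a 2-D array of exposures in [0, 1]. A record's
    matrix deposits its complementary dye: cyan for red, magenta for
    green, yellow for blue. The stacked dyes then subtract light from
    a white projection beam. Real dye densities are far more complex.
    """
    cyan    = 1.0 - red_rec     # cyan dye absorbs red
    magenta = 1.0 - green_rec   # magenta dye absorbs green
    yellow  = 1.0 - blue_rec    # yellow dye absorbs blue

    white = 1.0
    projected = np.dstack([white - cyan,      # red light that survives
                           white - magenta,   # green light that survives
                           white - yellow])   # blue light that survives
    return projected
```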

Gasparcolor, a single-strip 3-color system, was developed in 1933 by the Hungarian chemist Dr. Bela Gaspar.[17]

The real push for color films, and the nearly immediate changeover from black-and-white to almost all-color production, came from the prevalence of television in the early 1950s. In 1947, only 12 percent of American films were made in color; by 1954, that number had risen to over 50 percent.[3] The rise in color films was also aided by the breakup of Technicolor’s near monopoly on the medium.

In 1947, the United States Justice Department filed an antitrust suit against Technicolor for monopolization of color cinematography (even though rival processes such as Cinecolor and Trucolor were in general use). In 1950, a federal court ordered Technicolor to allot a number of its three-strip cameras for use by independent studios and filmmakers. Although this certainly affected Technicolor, its real undoing was the invention of Eastmancolor that same year.[3]

Monopack color film

A strip of undeveloped 35 mm color negative.

In the field of motion pictures, the many-layered type of color film normally called an integral tripack in broader contexts has long been known by the less tongue-twisting term monopack. For many years, Monopack (capitalized) was a proprietary product of Technicolor Corp, whereas monopack (not capitalized) referred generically to any of several single-strip color film products, including various Eastman Kodak products. Technicolor apparently made no attempt to register Monopack as a trademark with the US Patent and Trademark Office, yet it asserted the term as if it were registered, and a legal agreement with Eastman Kodak backed up that assertion. Monopack was also a solely-sourced product: under the so-called “Monopack Agreement”, Eastman Kodak was legally prevented from marketing any color motion picture film wider than 16mm (35mm in particular) until the agreement expired in 1950. This was so notwithstanding the fact that Technicolor never had the capability to manufacture sensitized motion picture film of any kind, nor single-strip color films based upon its so-called “Troland Patent” (which Technicolor maintained covered all monopack-type films in general, and which Eastman Kodak elected not to contest, as Technicolor was then one of its largest customers, if not its largest). After 1950, Eastman Kodak was free to make and market color films of any kind, notably including monopack color motion picture films in 65/70mm, 35mm, 16mm and 8mm. The “Monopack Agreement” had no effect on color still films.

Monopack color films are based on the subtractive color system, which filters colors from white light by using superimposed cyan, magenta and yellow dye images. Those images are created from records of the amounts of red, green and blue light present at each point of the image formed by the camera lens. A subtractive primary color (cyan, magenta, yellow) is what remains when one of the additive primary colors (red, green, blue) has been removed from the spectrum. Eastman Kodak’s monopack color films incorporated three separate layers of differently color-sensitive emulsion into one strip of film. Each layer recorded one of the additive primaries and was processed to produce a dye image in the complementary subtractive primary.

Kodachrome was the first commercially successful application of monopack multilayer film, introduced in 1935.[18] For professional motion picture photography, Kodachrome Commercial, on a 35mm BH-perforated base, was available exclusively from Technicolor, as its so-called “Technicolor Monopack” product. Similarly, for sub-professional motion picture photography, Kodachrome Commercial, on a 16mm base, was available exclusively from Eastman Kodak. In both cases, Eastman Kodak was the sole manufacturer and the sole processor. In the 35mm case, Technicolor dye-transfer printing was a “tie-in” product;[19] in the 16mm case, Eastman Kodak offered duplicating and printing stocks and associated chemistry, which was not the same as a “tie-in” product. In exceptional cases, Technicolor offered 16mm dye-transfer printing, but this necessitated the exceptionally wasteful process of printing on a 35mm base, only thereafter re-perforated and re-slit to 16mm, discarding slightly more than one-half of the end product.

A late modification to the “Monopack Agreement”, the “Imbibition Agreement”, finally allowed Technicolor to economically manufacture 16mm dye-transfer prints as so-called “double-rank” 35/32mm prints (two 16mm prints on a 35mm base that was originally perforated at the 16mm specification for both halves, and was later re-slit into two 16mm wide prints without the need for re-perforation). This modification also facilitated the early experiments by Eastman Kodak with its negative-positive monopack film, which eventually became Eastmancolor. Essentially, the “Imbibition Agreement” lifted a portion of the “Monopack Agreement’s” restrictions on Technicolor (which prevented it from making motion picture products less than 35mm wide) and somewhat related restrictions on Eastman Kodak (which prevented it from experimenting and developing monopack products greater than 16mm wide).

Eastmancolor, introduced in 1950,[20] was Kodak’s first economical single-strip 35 mm negative-positive color process. It eventually rendered Three-Strip color photography obsolete, even though, for the first few years of Eastmancolor, Technicolor continued to offer Three-Strip origination combined with dye-transfer printing (150 titles produced in 1953, 100 in 1954 and 50 in 1955, the last year for Three-Strip as a camera negative stock). The first commercial feature film to use Eastmancolor was the documentary Royal Journey, released in December 1951.[20] Hollywood studios waited until an improved version of Eastmancolor negative came out in 1952 before using it; This is Cinerama was an early film that employed three separate and interlocked strips of Eastmancolor negative. This is Cinerama was initially printed on Eastmancolor positive, but its significant success eventually resulted in it being reprinted by Technicolor, using dye transfer.

By 1953, and especially with the introduction of the anamorphic wide-screen CinemaScope, Eastmancolor became a marketing imperative, as CinemaScope was incompatible with Technicolor’s Three-Strip camera and lenses. Technicolor Corp became one of the best processors of Eastmancolor negative, if not the best, especially for so-called “wide gauge” negatives (5-perf 65mm, 6-perf 35mm). Yet it far preferred its own 35mm dye-transfer printing process for Eastmancolor-originated films with print runs exceeding 500 prints,[21] notwithstanding the significant “loss of register” that occurred in prints expanded by CinemaScope’s 2X horizontal factor and, to a lesser extent, in so-called “flat wide screen” (variously 1.66:1 or 1.85:1, but spherical, not anamorphic). This nearly fatal flaw was not corrected until 1955 and caused numerous features initially printed by Technicolor to be scrapped and reprinted by DeLuxe Labs. (These features are often billed as “Color by Technicolor-DeLuxe”.) Indeed, some Eastmancolor-originated films billed as “Color by Technicolor” were never actually printed with the dye-transfer process, due in part to the throughput limitations of Technicolor’s dye-transfer printing line and competitor DeLuxe’s superior throughput. DeLuxe once had a license to install a Technicolor-type dye-transfer printing line, but as the “loss of register” problems became apparent in Fox’s CinemaScope features printed by Technicolor, after Fox had become an all-CinemaScope producer, Fox-owned DeLuxe Labs abandoned its plans for dye-transfer printing and became, and remained, an all-Eastmancolor shop, as Technicolor itself later did.

Technicolor continued to offer its proprietary imbibition dye-transfer printing process for projection prints until 1975, and even briefly revived it in 1998. As an archival format, Technicolor prints are among the most stable color print processes yet created, and prints properly cared for are estimated to retain their color for centuries.[22] With the introduction of Eastmancolor low-fade positive print (LPP) films, properly stored (at 45 °F or 7 °C and 25 percent relative humidity) monopack color film is expected to last, with no fading, a comparable amount of time. Improperly stored monopack color film from before 1983 can incur a 30 percent image loss in as little as 25 years.[23]
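The quoted storage figures imply a rough fading rate. As a quick worked example, assuming purely for illustration that fading compounds at a constant yearly rate, a 30 percent loss over 25 years works out to roughly 1.4 percent per year:

```python
# Rough annual fading rate implied by "30 percent image loss in 25 years",
# assuming (purely for illustration) a constant compounding rate.
retained_after_25_years = 0.70
annual_retention = retained_after_25_years ** (1 / 25)
annual_fade = 1 - annual_retention
print(f"Implied fading rate: about {annual_fade:.1%} per year")  # ~1.4%
```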

Functionality

A representation of the layers within a piece of developed color 35 mm negative film. When developed, the dye couplers in the blue-, green-, and red-sensitive layers turn the exposed silver halide crystals to their complementary colors (yellow, magenta, and cyan). The film is made up of (A) Clear protective topcoat, (B) UV filter, (C) “Fast” blue layer, (D) “Slow” blue layer, (E) Yellow filter to cut all blue light from passing through to (F) “Fast” green layer, (G) “Slow” green layer, (H) Inter (subbing) layer, (I) “Fast” red layer, (J) “Slow” red layer, (K) Clear triacetate base, and (L) Antihalation (rem-jet) backing.

A color film is made up of many different layers that work together to create the color image. Color negative films provide three main color layers: the blue record, green record, and red record; each made up of two separate layers containing silver halide crystals and dye-couplers. A cross-sectional representation of a piece of developed color negative film is shown in the figure at the right. Each layer of the film is so thin that the composite of all layers, in addition to the triacetate base and antihalation backing, is less than 0.0003″ (8 µm) thick.[24]

The three color records are stacked as shown at right, with a UV filter on top to keep non-visible ultraviolet radiation from exposing the silver-halide crystals, which are naturally sensitive to UV light. Next are the fast and slow blue-sensitive layers, which record the blue component of the latent image when exposed. When an exposed silver-halide crystal is developed, it is coupled with a dye grain of its complementary color. This forms a dye “cloud” (like a drop of water on a paper towel) whose growth is limited by development-inhibitor-releasing (DIR) couplers, which also serve to refine the sharpness of the processed image by limiting the size of the dye clouds. The dye clouds formed in the blue layer are actually yellow (the opposite, or complementary, color to blue).[25] There are two layers for each color: a “fast” and a “slow.” The fast layer features larger grains that are more sensitive to light than those of the slow layer, which has finer grain and is less sensitive to light. Silver-halide crystals are naturally sensitive to blue light, so the blue layers sit at the top of the film and are followed immediately by a yellow filter, which stops any remaining blue light from passing through to the green and red layers and biasing those crystals with extra blue exposure. Next is the green-sensitive record (which forms magenta dyes when developed) and, at the bottom, the red-sensitive record, which forms cyan dyes when developed, matching the layer order in the figure above. Each color record is separated by a gelatin layer that prevents silver development in one record from causing unwanted dye formation in another. On the back of the film base is an anti-halation layer that absorbs light which would otherwise be weakly reflected back through the film by that surface, creating halos around bright features in the image. In color film, this backing is “rem-jet”, a black-pigmented, non-gelatin layer that is removed during processing.[24]
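The purpose of the yellow filter layer can be made concrete with a small model. The sensitivities below are hypothetical numbers invented for illustration; the point is only that every silver-halide layer retains some native blue sensitivity, so without the filter the green and red records would be contaminated by blue light.

```python
import numpy as np

# Hypothetical spectral sensitivities of the three records to
# (red, green, blue) light. Every silver-halide layer keeps some
# native blue sensitivity; the numbers are illustrative only.
sensitivity = {
    "blue record":  np.array([0.0, 0.0, 1.0]),
    "green record": np.array([0.0, 1.0, 0.4]),  # 0.4 = unwanted blue response
    "red record":   np.array([1.0, 0.0, 0.4]),
}

scene_light = np.array([0.2, 0.3, 0.9])  # a blue-heavy scene (R, G, B)

for yellow_filter in (False, True):
    light = scene_light.copy()
    # The blue record sits above the filter, so it sees the full light.
    exposure = {"blue record": sensitivity["blue record"] @ light}
    if yellow_filter:
        light[2] = 0.0  # the yellow filter absorbs blue before the lower layers
    for layer in ("green record", "red record"):
        exposure[layer] = sensitivity[layer] @ light
    label = "with yellow filter:" if yellow_filter else "without filter:"
    print(label, exposure)
```

Running the sketch shows the green and red exposures dropping to their true values once the filter removes the blue light, which is exactly the bias the real filter layer exists to prevent.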

Eastman Kodak manufactures film in 54-inch (1,372 mm) wide rolls. These rolls are then slit into various sizes (70 mm, 65 mm, 35 mm, 16 mm) as needed.

Manufacturers of color film for motion picture use

See also: List of motion picture film stocks

Motion picture film, primarily because of the rem-jet backing, requires different processing than standard C-41 process color film. The process necessary is ECN-2, which has an initial step using an alkaline bath to remove the backing layer. There are also minor differences in the remainder of the process. If motion picture negative is run through a standard C-41 color film developer bath, the rem-jet backing partially dissolves and destroys the integrity of the developer and, potentially, ruins the film.

Kodak color motion picture films

In the late 1980s, Kodak introduced the T-Grain emulsion, a technological advancement in the shape and make-up of the silver halide grains in its films. T-Grain is a tabular silver halide grain that allows for greater overall surface area, resulting in greater light sensitivity with a relatively small grain, and a more uniform shape that produces less overall graininess in the film. This made for sharper and more sensitive films. The T-Grain technology was first employed in Kodak’s EXR line of motion picture color negative stocks.[26] This was further refined in 1996 with the Vision line of emulsions, followed by Vision2 in the early 2000s and Vision3 in 2007.

Fuji color motion picture films

Fuji films also integrate tabular grains, in their SUFG (Super Unified Fine Grain) films. In their case, the SUFG grain is not only tabular but also hexagonal and consistent in shape throughout the emulsion layers. Like the T-Grain, it offers a larger surface area in a smaller grain (about one-third the size of a traditional grain) for the same light sensitivity. In 2005, Fuji unveiled its Eterna 500T stock, the first in a new line of advanced emulsions featuring Super Nano-structure Σ Grain Technology.