Science

January 28th 2023

Quantum Leaps, Long Assumed to Be Instantaneous, Take Time

An experiment caught a quantum system in the middle of a jump — something the originators of quantum mechanics assumed was impossible.

Quanta Magazine


A quantum leap is a rapidly gradual process. Credit: Quanta Magazine; source: qoncha.

When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

A 2019 experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published in Nature in June 2019. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”

But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.

All Too Random

The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are “quantized”: Only certain energies are available to them, while all those in between are forbidden. He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow.
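To make the rule concrete (standard textbook physics, not something the article spells out), the photon’s energy must equal the gap between the two permitted states:

E_{\mathrm{photon}} = h\nu = E_2 - E_1,

where h is Planck’s constant and \nu is the light’s frequency. Only light at frequencies satisfying this relation gets absorbed or emitted, which is why each element displays its own characteristic spectral lines.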

Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg’s quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. “The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation,” historian of science Mara Beller has written.

Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg’s math of discrete quantum states and instant jumps between them. Schrödinger’s theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don’t switch suddenly, in zero time, Schrödinger thought — discontinuous “quantum jumps” were just a figment of the mind. In a 1952 paper called “Are there quantum jumps?,” Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.”

The argument wasn’t just about Schrödinger’s discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore “not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general.” In other words, there’s a lot riding on the reality (or not) of quantum jumps.

Seeing Without Looking

To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported them happening in individual atoms suspended in space by electromagnetic fields. The atoms flipped, at random moments, between a “bright” state, in which they could emit a photon of light, and a “dark” state in which they could not, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported jumps that correspond to what they called “the birth, life and death of individual photons.”

In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team’s setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it. The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail.

The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called “artificial atoms” because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms.

Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn’t monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior  — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they’d “collapse” the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger’s cat, which is allegedly placed in a coherent quantum “superposition” of live and dead states but becomes only one or the other when observed.

To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second “bright” state, so named because it’s the one that can be seen. Meanwhile, the state they are actually watching for quantum jumps to and from is the “dark” state, so called because it remains hidden from direct view.

The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter’s “click.”

The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it’s necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it.

The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. “Absence of an event can bring as much information as its presence,” said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the “curious incident” in which a dog did not do anything in the night. Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it “Baskerville’s Hound meets Schrödinger’s Cat.”

To Catch a Jump

The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted approximately every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that’s the only thing that can prevent flipping back and forth between the ground and bright states.

So here in these switches from “click” to “no-click” states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new.

Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. “As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur,” said Devoret.

That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach.

With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how?

The team found that jumps are in fact gradual. That’s because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It’s a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it’s a binary choice — but as the evening wears on and you get tired, the question “Are you staying or leaving?” becomes increasingly likely to get the answer “I’m leaving.”

The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That’s pretty fast, but it’s certainly not instantaneous.
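Written schematically (standard notation, not the team’s own presentation), the circuit’s state mid-jump is a superposition of the ground state and the dark state:

|\psi(t)\rangle = c_G(t)\,|G\rangle + c_D(t)\,|D\rangle, \qquad |c_G(t)|^2 + |c_D(t)|^2 = 1.

What the tomographic reconstruction tracks is the weight |c_D(t)|^2, which climbs smoothly from near 0 to near 1 over those few microseconds instead of snapping from one value to the other.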

What’s more, this electronic system is so fast that the researchers could “catch” the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all.

Flash of Insight

The experiment shows that quantum jumps “are indeed not instantaneous if we look closely enough,” said Oliver, “but are coherent processes”: real physical events that unfold over time.

The gradualness of the “jump” is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. “It is reassuring that the theory matches perfectly with what is seen,” said David DiVincenzo, an expert in quantum information at Aachen University in Germany, “but it’s a subtle theory, and we are far from having gotten our heads completely around it.”

The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. “To the best of our knowledge, this precursory signal [to a quantum jump] has not been proposed or measured before,” he said.

Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, “in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference,” he said. But if you can detect right at the start if the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as “quite far down the line.” To achieve the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

The real value of the result is not, though, in any practical benefits; it’s a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.

Philip Ball is a science writer and author based in London who contributes frequently to Nature, New Scientist, Prospect, Nautilus and The Atlantic, among other publications.

  • Philip Ball

January 14th 2023

The Remarkable Emptiness of Existence

Early scientists didn’t know it, but we do now: The void in the universe is alive.

  • By Paul M. Sutter
  • January 4, 2023

In 1654 a German scientist and politician named Otto von Guericke was supposed to be busy being the mayor of Magdeburg. But instead he was putting on a demonstration for lords of the Holy Roman Empire. With his newfangled invention, a vacuum pump, he sucked the air out of a copper sphere constructed of two hemispheres. He then had two teams of horses, 15 in each, attempt to pull the hemispheres apart. To the astonishment of the royal onlookers, the horses couldn’t separate the hemispheres because of the overwhelming pressure of the atmosphere around them.

Von Guericke became obsessed by the idea of a vacuum after learning about the recent and radical idea of a heliocentric universe: a cosmos with the sun at the center and the planets whipping around it. But for this idea to work, the space between the planets had to be filled with nothing. Otherwise friction would slow the planets down.

The vacuum is singing to us, a harmony underlying reality itself.

Scientists, philosophers, and theologians across the globe had debated the existence of the vacuum for millennia, and here was von Guericke and a bunch of horses showing that it was real. But the idea of the vacuum remained uncomfortable, and only begrudgingly acknowledged. We might be able to artificially create a vacuum with enough cleverness here on Earth, but nature abhorred the idea. Scientists produced a compromise: The space of space was filled with a fifth element, an aether, a substance that did not have much in the way of manifest properties, but it most definitely wasn’t nothing.

But when the quantum and cosmological revolutions of the 20th century arrived, scientists searching for this aether kept turning up empty-handed.

The more we looked, through increasingly powerful telescopes and microscopes, the more we discovered nothing. In the 1920s astronomer Edwin Hubble discovered that the Andromeda nebula was actually the Andromeda galaxy, an island home of billions of stars sitting a staggering 2.5 million light-years away. As far as we could tell, all those lonely light-years were filled with not much at all, just the occasional lost hydrogen atom or wandering photon. Compared to the relatively small size of galaxies themselves (our own Milky Way stretches a mere 100,000 light-years across), the universe seemed dominated by absence.

At subatomic scales, scientists were also discovering atoms to be surprisingly empty places. If you were to rescale a hydrogen atom so that its nucleus was the size of a basketball, the nearest electron would sit around two miles away. With not so much as a lonely subatomic tumbleweed in between.

WILD HORSES COULDN’T DRAG THEM APART: Inventor Otto von Guericke’s original vacuum pump and copper hemispheres are on display in the Deutsches Museum in Munich, Germany. When von Guericke pumped the air out of the joined hemispheres, teams of horses couldn’t pull them apart. Image by Wikimedia Commons.

Nothing. Absolutely nothing. Continued experiments and observations only served to confirm that at scales both large and small, we appeared to live in an empty world.

And then that nothingness cracked open. Within the emptiness that dominates the volume of an atom and the volume of the universe, physicists found something. Far from the sedate aether of yore, this something is strong enough to be tearing our universe apart. The void, it turns out, is alive.

In December 2022, an international team of astronomers released the results of their latest survey of galaxies, and their work has confirmed that the vacuum of spacetime is wreaking havoc across the cosmos. They found that matter makes up only a minority contribution to the energy budget of the universe. Instead, most of the energy within the cosmos is contained in the vacuum, and that energy is dominating the future evolution of the universe.

Their work is the latest in a string of discoveries stretching back over two decades. In the late 1990s, two independent teams of astronomers discovered that the expansion of the universe is accelerating, meaning that our universe grows larger and larger faster and faster every day. The exact present-day expansion rate is still a matter of some debate among cosmologists, but the reality is clear: Something is making the universe blow up. It appears as a repulsive gravitational force, and we’ve named it dark energy.

The trick here is that the vacuum, first demonstrated by von Guericke all those centuries ago, is not as empty as it seems. If you were to take a box (or, following von Guericke’s example, two hemispheres), and remove everything from it, including all the particles, all the light, all the everything, you would not be left with, strictly speaking, nothing. What you’d be left with is the vacuum of spacetime itself, which we’ve learned is an entity in its own right.

Nothing contains all things. It is more precious than gold.

We live in a quantum universe: a universe where you can never be quite sure about anything. At the tiniest of scales, subatomic particles fizz and pop into existence, briefly experiencing the world of the living before returning to wherever they came from, disappearing from reality before they have a chance to meaningfully interact with anything else.

This phenomenon has various names: the quantum foam, the spacetime foam, vacuum fluctuations. This foam represents a fundamental energy to the vacuum of spacetime itself, a bare ground level on which all other physical interactions take place. In the language of quantum field theory, the offspring of the marriage of quantum mechanics and special relativity, quantum fields representing every kind of particle soak the vacuum of spacetime like crusty bread dipped in oil and vinegar. Those fields can’t help but vibrate at a fundamental, quantum level. In this view, the vacuum is singing to us, a harmony underlying reality itself.

In our most advanced quantum theories, we can calculate the energy contained in the vacuum, and it’s infinite. As in, suffusing every cubic centimeter of space and time is an infinite amount of energy, the combined efforts of all those countless but effervescent particles. This isn’t necessarily a problem for the physics that we’re used to, because all the interactions of everyday experience sit “on top of” (for lack of a better term) that infinite tower of energy—it just makes the math a real pain to work with.

All this would be mathematically annoying but otherwise unremarkable except for the fact that in Einstein’s general theory of relativity, vacuum energy has the curious ability to generate a repulsive gravitational force. We typically never notice such effects because the vacuum energy is swamped by all the normal mass within it (in von Guericke’s case, the atmospheric pressure surrounding his hemispheres was the dominant force at play). But at the largest scales there’s so much raw nothingness to the universe that these effects become manifest as an accelerated expansion. Recent research suggests that around 5 billion years ago, the matter in the universe diluted to the point that dark energy could come to the fore. Today, it represents roughly 70 percent of the entire energy budget of the cosmos. Studies have shown that dark energy is presently in the act of ripping apart the large-scale structure of the universe, tearing apart superclusters of galaxies and disentangling the cosmic web before our eyes.

But the acceleration isn’t all that rapid. When we calculate how much vacuum energy is needed to create the dark energy effect, we only get a small number.

But our quantum understanding of vacuum energy says it should be infinite, or at least incredibly large. Definitely not small. This discrepancy between the theoretical energy of the vacuum and the observed value is one of the greatest mysteries in modern physics. And it leads to the question about what else might be lurking in the vast nothingness of our atoms and our universe. Perhaps von Guericke was right all along. “Nothing contains all things,” he wrote. “It is more precious than gold, without beginning and end, more joyous than the perception of bountiful light, more noble than the blood of kings, comparable to the heavens, higher than the stars, more powerful than a stroke of lightning, perfect and blessed in every way.”
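To put rough numbers on that discrepancy (standard ballpark figures, not drawn from this article): the observed dark-energy density is about

\rho_\Lambda \approx 0.7\,\rho_{\mathrm{crit}}\,c^2 \approx 6\times10^{-10}\ \mathrm{J/m^3},

while a naive quantum-field-theory estimate that cuts the vacuum fluctuations off at the Planck scale lands near 10^{113} J/m^3, a mismatch of roughly 120 orders of magnitude that is often called the worst prediction in the history of physics.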

Paul M. Sutter is a research professor in astrophysics at the Institute for Advanced Computational Science at Stony Brook University and a guest researcher at the Flatiron Institute in New York City. He is the author of Your Place in the Universe: Understanding our Big, Messy Existence.

Lead image: ANON MUENPROM / Shutterstock


Should People Live on the Moon?

One question for Joseph Silk, an astrophysicist at Johns Hopkins University.

  • By Brian Gallagher
  • January 9, 2023

One question for Joseph Silk, an astrophysicist at Johns Hopkins University and the author of Back to the Moon: The Next Giant Leap for Humankind.

Photo courtesy of Joseph Silk

Should people live on the moon?

Why not? We have to start somewhere if we ever want to leave Earth. And the only realistic place to start is the moon. It’s going to be for a minority, right? For explorers, for people exploiting the moon for commercial reasons, for scientists. They will be living on the moon within the next century. And it will be a starting point to go elsewhere. It’s a much easier environment, because of the low gravity, from which to send spacecraft farther afield.

If your interest is commercial, then you’ll probably focus on mining because we’re running out of certain rare-earth elements on Earth critical for our computer industry. There’s a hugely abundant supply on the moon, thanks to its history of bombardment by meteorites throughout billions of years. Beyond that there’s tourism. You can already buy tickets—not cheap, of course—for a trip around the moon, perhaps in the next five years, because of people like Elon Musk. The moon’s also got a huge abundance—this is not very widely known—of water ice. Inside deep, dark craters. Not only does that help you have a major resource for life, but maybe the most important use of water will be to break it down into oxygen, hydrogen. Liquefy these—you have abundant power from the sun to do this—and lo and behold, you have rocket fuel to take you throughout the inner solar system and beyond.
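The chemistry implied here is ordinary electrolysis, a detail the interview doesn’t spell out: solar power splits the water, and the two gases are then liquefied into propellant.

2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}

Recombining the hydrogen with the oxygen in a rocket engine releases that energy again; it’s the same propellant combination that fueled the Space Shuttle’s main engines.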

I’m very attracted to the moon as a place to build telescopes. We can see stars really sharply, with no obscuration. Water vapor in Earth’s atmosphere masks out much of the infrared light from the stars, where there’s a huge amount of information about what the stars are made of. Also, because of our ionosphere, Earth is a very difficult place to receive low-frequency radio waves from space. These are the waves we need, because we’ll be looking deep into the universe, where the wavelengths get stretched out by the expansion of the universe, and Earth’s ionosphere distorts them before they reach the ground. So, on the moon, we can view the universe as we never could before in radio waves.

A giant lava tube is large enough to house an entire city.

The only way to capture these really faint things is with a huge telescope. The James Webb is a small telescope—six meters. The far side of the moon is a unique place for doing that, and there are some futuristic schemes now to build a mega telescope inside a crater, a natural bowl. You can line the inside of that bowl with a number of small telescopes, maybe hundreds of them, and they would operate coherently, supported by wires from the crater edges. The size of the telescope determines how much light you can collect. You could have hundreds of times more light-gathering power on the moon.

Suddenly you have a telescope that’s kilometers across, and this would be the most amazing thing I could imagine, because then you could have not just light-collecting power, but resolving power. You could actually directly image the nearest exoplanets and look to see: “Do they have oceans? Do they have forests?” The first things you want to look for, if you want to explore the possibilities of distant, alien life. I’m rather doubtful that we’ll find life in our solar system. We have to look much farther away. Many light-years away. And that we can do with these giant telescopes.
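The two figures of merit Silk is describing scale differently with telescope diameter D (standard optics, not something stated in the interview): light-gathering power grows with collecting area, while resolving power, the finest angle the telescope can separate at wavelength \lambda, improves in direct proportion to D:

A \propto D^2, \qquad \theta_{\min} \approx 1.22\,\frac{\lambda}{D}

A kilometer-scale crater array would therefore beat the roughly six-meter James Webb by tens of thousands of times in collecting area and by more than a hundred times in angular resolution at the same wavelength.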

The people working on the moon will have to figure out how to make a biosphere. Could be inside an artificial construction on the lunar surface. But that will not be very large. A much more promising approach, again in the next decades, will be to use a giant lava tube. These are huge, natural caves, relics of ancient volcanic activity. A giant lava tube is large enough to house an entire city. You can imagine air proofing that and developing a local atmosphere inside that, which would be a great place to live and certainly do new things on the moon. Things you would never do on Earth. 

That is my most optimistic scenario for having large numbers of people. By no means millions, but maybe thousands of people living and working on the moon. One has to be optimistic that the international community will recognize that cooperation is the only way to go in the future, and establish lunar law that will control both real estate and also, I imagine, crime activity, if people start disputing territories. I’m hoping we have a legal framework. Right now, we seem very far away from this, but it’s got to happen. We have maybe one or two decades before the moon becomes a competitive place and exploration heats up.

Lead image: Nostalgia for Infinity / Shutterstock

  • Posted on January 9, 2023. Brian Gallagher is an associate editor at Nautilus. Follow him on Twitter @bsgallagher.


January 9th 2023

Study Shows How The Universe Would Look if You Broke The Speed of Light, And It’s Weird

Physics, 02 January 2023

By David Nield

Illuminated tunnel of light (Omar Jabri/EyeEm/Getty Images)

Nothing can go faster than light. It’s a rule of physics woven into the very fabric of Einstein’s special theory of relativity. The faster something goes, the closer its experience of time comes to freezing to a standstill.

Go faster still, and you run into issues of time reversing, messing with notions of causality.
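Both effects follow from the Lorentz factor of ordinary textbook special relativity (not the new paper’s extended framework):

\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad \Delta t_{\mathrm{observed}} = \gamma\,\Delta t_{\mathrm{proper}}

As v approaches c, \gamma blows up and a moving clock appears to grind toward a standstill; push v beyond c and the square root turns imaginary, a sign that the usual formulas stop applying without running into exactly the causality puzzles mentioned above.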

But researchers from the University of Warsaw in Poland and the National University of Singapore have now pushed the limits of relativity to come up with a system that doesn’t run afoul of existing physics, and might even point the way to new theories.

What they’ve come up with is an “extension of special relativity” that combines three time dimensions with a single space dimension (“1+3 space-time”), as opposed to the three spatial dimensions and one time dimension that we’re all used to.

Rather than creating any major logical inconsistencies, this new study adds more evidence to back up the idea that objects might well be able to go faster than light without completely breaking our current laws of physics.

“There is no fundamental reason why observers moving in relation to the described physical systems with speeds greater than the speed of light should not be subject to it,” says physicist Andrzej Dragan, from the University of Warsaw in Poland.

This new study builds on previous work by some of the same researchers which posits that superluminal perspectives could help tie together quantum mechanics with Einstein’s special theory of relativity – two branches of physics that currently can’t be reconciled into a single overarching theory that describes gravity in the same way we explain other forces.

Particles can no longer be modelled as point-like objects under this framework, as we might in the more mundane 3D (plus time) perspective of the Universe.

Instead, to make sense of what observers might see and how a superluminal particle might behave, we’d need to turn to the kinds of field theories that underpin quantum physics.

Based on this new model, a superluminal object would look like a particle expanding like a bubble through space – not unlike a wave through a field. The high-speed object, on the other hand, would ‘experience’ several different timelines.

Even so, the speed of light in a vacuum would remain constant even for those observers going faster than it, which preserves one of Einstein’s fundamental principles – a principle that has previously only been thought about in relation to observers going slower than the speed of light (like all of us).

“This new definition preserves Einstein’s postulate of constancy of the speed of light in vacuum even for superluminal observers,” says Dragan.

“Therefore, our extended special relativity does not seem like a particularly extravagant idea.”

However, the researchers acknowledge that switching to a 1+3 space-time model does raise some new questions, even while it answers others. They suggest that extending the theory of special relativity to incorporate faster-than-light frames of reference is needed.

That may well involve borrowing from quantum field theory: a combination of concepts from special relativity, quantum mechanics, and classical field theory (which aims to predict how physical fields are going to interact with each other).

If the physicists are right, the particles of the Universe would all have extraordinary properties in extended special relativity.

One of the questions raised by the research is whether or not we would ever be able to observe this extended behavior – but answering that is going to require a lot more time and a lot more scientists.

“The mere experimental discovery of a new fundamental particle is a feat worthy of the Nobel Prize and feasible in a large research team using the latest experimental techniques,” says physicist Krzysztof Turzyński, from the University of Warsaw.

“However, we hope to apply our results to a better understanding of the phenomenon of spontaneous symmetry breaking associated with the mass of the Higgs particle and other particles in the Standard Model, especially in the early Universe.”

The research has been published in Classical and Quantum Gravity.


January 7th 2023

We’ve Never Found Anything Like The Solar System. Is It a Freak in Space?

Space, 29 December 2022

By Michelle Starr

Bold Colour Image depicting the Solar System (NASA)

Since the landmark discovery in 1992 of two planets orbiting a star outside of our Solar System, thousands of new worlds have been added to a rapidly growing list of ‘exoplanets’ in the Milky Way galaxy.

We’ve learnt many things from this vast catalogue of alien worlds orbiting alien stars. But one small detail stands out like a sore thumb. We’ve found nothing else out there like our own Solar System.

This has led some to conclude that our home star and its brood could be outliers in some way – perhaps the only planetary system of its kind.

By extension, this could mean life itself is an outlier; that the conditions that formed Earth and its veneer of self-replicating chemistry are difficult to replicate.

If you’re just looking at the numbers, the outlook is grim. By a large margin, the most numerous exoplanets we’ve identified to date are of a type not known to be conducive to life: giants and subgiants, of the gas and maybe ice variety.

Most exoplanets we’ve seen so far orbit their stars very closely, practically hugging them; so close that their sizzling temperatures would be much higher than the known habitability range.

Artist’s impression of an ultra-hot Jupiter transiting its star. (ESO/M. Kornmesser)

It’s possible that as we continue searching, the statistics will balance out and we’ll see more places that remind us of our own backyard. But the issue is much more complex than just looking at numbers. Exoplanet science is limited by the capabilities of our technology. More than that, our impression of the true variety of alien worlds risks being limited by our own imagination.

What’s really out there in the Milky Way galaxy, and beyond, may be very different from what we actually see.

Expectations, and how to thwart them

Exoplanet science has a history of subverting expectations, right from the very beginning.

“If you go back to that world I grew up in when I was a kid, we only knew of one planetary system,” planetary scientist Jonti Horner of the University of Southern Queensland tells ScienceAlert.

“And so that was this kind of implicit assumption, and sometimes the explicit assumption, that all planetary systems would be like this. You know, you’d have rocky planets near the star that were quite small, you’d have gas giants a long way from the star that were quite big. And that’s how planetary systems would be.”

For this reason, it took scientists a while to identify an exoplanet orbiting a main sequence star, like our Sun. Assuming other solar systems were like ours, the tell-tale signs of heavyweight planets tugging on their stars would take years to observe, just as it takes our own gas giants years to complete an orbit.

Given such lengthy orbital periods, it didn’t seem worth the trouble of sifting through a relatively short history of observations of many stars in the hope of conclusively identifying a planetary system around another main-sequence star.

When they finally did look, the exoplanet they found was nothing like what they were expecting: a gas giant half the mass (and twice the size) of Jupiter orbiting so close to its host star, its year equals 4.2 days, and its atmosphere scorches at temperatures of around 1,000 degrees Celsius (1800 degrees Fahrenheit).

Since then, we’ve learnt these ‘Hot Jupiter’ type planets aren’t oddities at all. If anything, they seem relatively common.

We know now that there’s a lot more variety out there in the galaxy than what we see in our home system. However, it’s important not to assume that what we can currently detect is all that the Milky Way has to offer. If there’s anything out there like our own Solar System, it’s very possibly beyond our detection capabilities.

“Things like the Solar System are very hard for us to find, they’re a bit beyond us technologically at the minute,” Horner says.

“The terrestrial planets would be very unlikely to be picked up from any of the surveys we’ve done so far. You’re very unlikely to be able to find a Mercury, Venus, Earth and Mars around a star like the Sun.”

How to find a planet

Let’s be perfectly clear: the methods we use to detect exoplanets are incredibly clever. There are currently two that are the workhorses of the exoplanet detection toolkit: the transit method, and the radial velocity method.

In both cases, you need a telescope sensitive to very minute changes in the light of a star. The signals each are looking for, however, couldn’t be more different.

For the transit method you’ll need a telescope that can keep a star fixed in its view for a sustained period of time. That’s why an instrument such as NASA’s space-based Transiting Exoplanet Survey Satellite (TESS) is such a powerhouse, capable of locking onto a segment of the sky for over 27 days without being interrupted by Earth’s rotation.


The aim for these kinds of telescopes is to spot the signal of a transit – when an exoplanet passes between us and its host star, like a tiny cloud blotting out a few rays of sunshine. These dips in light are tiny, as you can imagine. And one blip is insufficient to confidently infer the presence of an exoplanet; there are many things that can dim a star’s light, many of which are one-off events. Multiple transits, especially ones that exhibit regular periodicity, are the gold standard.
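As a rough illustration of why large planets dominate the data (a back-of-the-envelope sketch, not part of the article), the fractional dip in starlight during a central transit is approximately the square of the planet-to-star radius ratio:

# Illustrative transit-depth estimate: depth ≈ (planet radius / star radius)^2.
# Radii below are standard approximate values in kilometres.

R_SUN = 696_340.0      # km, solar radius
R_JUPITER = 69_911.0   # km, Jupiter radius
R_EARTH = 6_371.0      # km, Earth radius

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional drop in a star's brightness during a central transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_JUPITER):.3%}")  # about 1%
print(f"Earth-size planet, Sun-like star: {transit_depth(R_EARTH):.4%}")      # about 0.008%

A Jupiter blots out about one percent of a Sun-like star’s light; an Earth blocks less than one ten-thousandth of it, which is one reason the smallest worlds are so much harder to catch.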

Therefore, larger exoplanets that are on short orbital periods, closer to their stars than Mercury is to the Sun (some much, much closer, on orbits of less than one Earth week), are favored in the data.


The radial velocity method detects the wobble of a star caused by the gravitational pull of the exoplanet as it swings around in its orbit. A planetary system, you see, doesn’t really orbit a star, so much as dance in a coordinated shuffle. The star and the planets orbit a mutual center of gravity, known as the barycenter. For the Solar System, that’s a point very, very close to the surface of the Sun, or just outside it, primarily due to the influence of Jupiter, which is more than twice the mass of all the rest of the planets combined.

Unlike a transit’s blink-and-you-miss-it event, the shift in the star’s position is an ongoing change that doesn’t require constant monitoring to notice. We can detect the motion of distant stars orbiting their barycenters because that motion changes their light due to something called the Doppler effect.

As the star moves towards us, the waves of light coming in our direction are squished slightly, towards the bluer end of the spectrum; as it moves away, the waves stretch towards the redder end. A regular ‘wobble’ in the star’s light suggests the presence of an orbital companion.
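The size of that wobble signal comes down to a simple ratio (standard figures, not from the article): the fractional wavelength shift equals the star’s line-of-sight speed divided by the speed of light.

\frac{\Delta\lambda}{\lambda} \approx \frac{v_r}{c}

Jupiter swings the Sun around their shared barycenter at roughly 12.5 meters per second, a shift of only a few parts in a hundred million; Earth’s tug is about 0.09 meters per second, a few parts in ten billion, which is why true Earth analogues sit at the very edge of what today’s spectrographs can detect.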

Again, the data tends to favor larger planets that exert a stronger gravitational influence, on shorter, closer orbits to their star.

Aside from these two prominent methods, it’s possible on occasion to directly image an exoplanet as it orbits its star. Though an extremely difficult thing to do, it may become more common in the JWST era.

According to astronomer Daniel Bayliss of the University of Warwick in the UK, this approach would uncover an almost opposite class of exoplanet to the short-orbit variety. In order to see an exoplanet without it being swamped by the glare of its parent star, the two bodies need to have a very wide separation. This means the direct imaging approach favors planets on relatively long orbits.

However, larger exoplanets would still be spotted more easily through this method, for obvious reasons.

“Each of the discovery methods has its own biases,” Bayliss explains.

Earth with its year-long loop around the Sun sits between the orbital extremes favored by different detection techniques, he adds, so “to find planets with a one year orbit is still very, very difficult.”

What’s out there?

By far, the most numerous group of exoplanets is a class that isn’t even represented in the Solar System. That’s the mini-Neptune – gas-enveloped exoplanets that are smaller than Neptune and larger than Earth in size.

Illustration of the mini-Neptune TOI 560.01 orbiting its solitary star. (W. M. Keck Observatory/Adam Makarenko)

Most of the confirmed exoplanets are on much shorter orbits than Earth; in fact, more than half have orbits of less than 20 days.

Most of the exoplanets we’ve found orbit solitary stars, much like our Sun. Fewer than 10 percent are in multi-star systems. Yet most of the stars in the Milky Way are members of multi-star systems, with estimates as high as 80 percent of stars seen in partnership with at least one other star.

Think about that for a moment, though. Does that mean that exoplanets are more common around single stars – or that exoplanets are harder to detect around multiple stars? The presence of more than one source of light can distort or obscure the very similar (but much smaller) signals we’re trying to detect from exoplanets, but it might also be reasoned that multi-star systems complicate planet formation in some way.

And this brings us back home again, back to our Solar System. As odd as home seems in the context of everything we’ve found, it might not be uncommon at all.

“I think it is fair enough to say that there’s actually some very common types of planets that are missing from our Solar System,” says Bayliss.

“Super Earths that look a little bit like Earth but have double the radius, we don’t have anything like that. We don’t have these mini-Neptunes. So I think it is fair enough to say that there are some very common planets that we don’t see in our own Solar System.

“Now, whether that makes our Solar System rare or not, I think I wouldn’t go that far. Because there could be a lot of other stars that have a Solar System-type set of planets that we just don’t see yet.”

This artist’s illustration gives an impression of how common planets are around the stars in the Milky Way. (ESO/M. Kornmesser)

On the brink of discovery

The first exoplanets were discovered just 30 years ago orbiting a pulsar, a star completely unlike our own. Since then, the technology has improved out of sight. Now that scientists know what to look for, they can devise better and better ways to find them around a greater diversity of stars.

And, as the technology advances, so too will our ability to find smaller and smaller worlds.

This means that exoplanet science could be on the brink of discovering thousands of worlds hidden from our current view. As Horner points out, in astronomy, there are way more small things than big things.

Red dwarf stars are a perfect example. They’re the most common type of star in the Milky Way – and they’re tiny, up to about half the mass of the Sun. They’re so small and dim that we can’t see them with the naked eye, yet they account for up to 75 percent of all stars in the galaxy.

Right now, when it comes to statistically understanding exoplanets, we’re operating with incomplete information, because there are types of worlds we just can’t see.

That is bound to change.

“I just have this nagging feeling that if you come back in 20 years time, you’ll look at those statements that mini-Neptunes are the most common kind of planets with about as much skepticism as you’d look back at statements from the early 1990s that said you’d only get rocky planets next to the star,” Horner tells ScienceAlert.

“Now, I could well be proved wrong. This is how science works. But my thinking is that when we get to the point that we can discover things that are Earth-sized and smaller, we’ll find that there are more things that are Earth-sized and smaller than there are things that are Neptune-sized.”

And maybe we’ll find that our oddball little planetary system, in all its quirks and wonders, isn’t so alone in the cosmos after all.

Will Neurotech Force Us to Update Human Rights?

One question for Nita Farahany, a philosopher at Duke University.

  • By Brian Gallagher
  • December 19, 2022

One question for Nita Farahany, a philosopher at Duke University who studies the implications of emerging neuroscience, genomics, and artificial intelligence for law and society.

Will neurotech force us to update human rights?

Yes. And that moment will pass us by quickly if we don’t seize on it, which would enable us to embrace and reap the benefits of the coming age of neural interface technology. That means recognizing the fundamental human right to cognitive liberty—our right to think freely, our self-determination over our brains and mental experiences. And then updating three existing rights: the right to privacy, freedom of thought, and self-determination.

Updating the right to privacy requires that we recognize explicitly a right to mental privacy. Freedom of thought is already recognized in international human rights law but has been focused on the right to free exercise of religion. We need to recognize a right against manipulation of thought, or having our thoughts used against us. And we’ve long recognized a collective right to self-determination of peoples or nations, but we need a right to self-determination over our own bodies, which will include, for example, a right to receiving information about ourselves. 

If a corporation or an employer wants to implement fatigue monitoring in the workplace, for example, the default would be that the employee has a right to mental privacy. That means, if I’m tracking my brain data, a right to receive information about what is being tracked. It’s recognizing that by default people have rights over their cognitive liberty, and the exceptions have to be legally carved out. There would have to be robust consent, and robust information given to consumers about what the data is that is being collected, how it’s being used, and whether it can be used or commodified. 

I’ve written a book that’s coming out in March, called The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. One of the chapters in the book explores the line between persuasion and manipulation. I go into the example from Facebook experimenting on people, changing their timelines to feature negative or positive content. It was deeply offensive, and part of it was the lack of informed consent, but a bigger part was that it felt as if people’s emotions were being toyed with just to see if they could make somebody unhappy in ways that you could measure.

In a world of neurotechnology you can measure the effect of those experiments much more precisely because you can see what’s happening to the brain as you make those changes. But also, these technologies aren’t just devices that read the brain. Many of them are writing devices—you can make changes to the brain. That definitely requires that we think about who controls the technology and what they can do with it, including things to intentionally manipulate your brain that might cause you harm. What rules are we going to put into place to safeguard people against that?

I’m optimistic we can get this done. There’s already momentum at the human rights level. The value of the human rights level is that there will be lots of specific laws that will have to be implemented to realize a right to cognitive liberty locally, nationally, and across the world. But if you start with a powerful legal and moral obligation that’s universal, it’s easier to get those laws updated. People recognize the unique sensitivity of their thoughts and emotions. It’s not just the right to keep people out of your thoughts, or the right to not be manipulated. It’s a positive right to make choices about what your mental experiences are going to be like, whether that’s enhancements, or access to technology, or the ability to use and read out information from that technology.

Lead image: AndryDj / Shutterstock

  • Posted on December 19, 2022. Brian Gallagher is an associate editor at Nautilus. Follow him on Twitter @bsgallagher.


January 4th 2023

INNOVATION

Could Getting Rid of Old Cells Help People Live Disease-Free for Longer?

Researchers are investigating medicines that selectively kill decrepit cells to promote healthy aging

Amber Dance, Knowable Magazine, December 29, 2022


A growing movement is underway to halt chronic disease by protecting brains and bodies from the biological fallout of aging. Malte Mueller/Getty Images

James Kirkland started his career in 1982 as a geriatrician, treating aging patients. But he found himself dissatisfied with what he could offer them.

“I got tired of prescribing wheelchairs, walkers and incontinence devices,” recalls Kirkland, now at the Mayo Clinic in Rochester, Minnesota. He knew that aging is considered the biggest risk factor for chronic illness, but he was frustrated by his inability to do anything about it. So Kirkland went back to school to learn the skills he’d need to tackle aging head-on, earning a PhD in biochemistry at the University of Toronto. Today, he and his colleague Tamar Tchkonia, a molecular biologist at the Mayo Clinic, are leaders in a growing movement to halt chronic disease by protecting brains and bodies from the biological fallout of aging.

If these researchers are successful, they’ll have no shortage of customers: People are living longer, and the number of Americans age 65 and older is expected to double, to 80 million, between 2000 and 2040. While researchers like Kirkland don’t expect to extend lifespan, they hope to lengthen “health span,” the time that a person lives free of disease.

One of their targets is decrepit cells that build up in tissues as people age. These “senescent” cells have reached a point—due to damage, stress or just time—when they stop dividing, but don’t die. While senescent cells typically make up only a small fraction of the overall cell population, they accounted for up to 36 percent of cells in some organs in aging mice, one study showed. And they don’t just sit there quietly. Senescent cells can release a slew of compounds that create a toxic, inflamed environment that primes tissues for chronic illness. Senescent cells have been linked to diabetes, stroke, osteoporosis and several other conditions of aging.

These noxious cells, along with the idea that getting rid of them could mitigate chronic illnesses and the discomforts of aging, are getting serious attention. The U.S. National Institutes of Health is investing $125 million in a new research effort, called SenNet, that aims to identify and map senescent cells in the human body as well as in mice over the natural lifespan. And the National Institute on Aging has put up more than $3 million over four years for the Translational Geroscience Network (TGN) multicenter team led by Kirkland that is running preliminary clinical trials of potential antiaging treatments. Drugs that kill senescent cells—called senolytics—are among the top candidates. Small-scale trials of these are already underway in people with conditions including Alzheimer’s, osteoarthritis and kidney disease.

“It’s an emerging and incredibly exciting—and maybe even game-changing—area,” says John Varga, chief of rheumatology at the University of Michigan Medical School in Ann Arbor, who isn’t part of the TGN.

But he and others sound a note of caution as well, and some scientists think the field’s potential has been overblown. “There’s a lot of hype,” says Varga. “I do have, I would say, a very healthy skepticism.” He warns his patients of the many unknowns and tells them that trying senolytic supplementation on their own could be dangerous.

Researchers are still untangling the biology of senescent cells, not only in aging animals but also in younger ones—even in embryos, where the aging out of certain cells is crucial for proper development. So far, evidence that destroying senescent cells helps to improve health span mostly comes from laboratory mice. Only a couple of preliminary human trials have been completed, with hints of promise but far from blockbuster results.

Even so, Kirkland and Tchkonia speculate that senolytics might eventually help not only with aging but also with conditions suffered by younger people due to injury or medical treatments such as chemotherapy. “There may be applications all over the place,” muses Kirkland.

Changes in older cells allow scientists to identify them. As lab-grown cells age (right), they grow larger than young cells (left). Senescent cells produce more of an enzyme, beta-galactosidase, that scientists can stain blue. N. SCHMID ET AL / SCIENTIFIC REPORTS 2019

Good cells gone bad

Biologists first noticed senescence when they began growing cells in lab dishes more than 60 years ago. After about 50 cycles of cells first growing, then dividing, the rate of cell division slows and ultimately ceases. When cells reach this state of senescence, they grow larger and start exhibiting a variety of genetic abnormalities. They also accumulate extra lysosomes, baglike organelles that destroy cellular waste. Scientists have found a handy way to identify many senescent cells by using stains that turn blue in the presence of a lysosome enzyme, called beta-galactosidase, that’s often overactive in these cells.

Scientists have also discovered hundreds of genes that senescent cells activate to shut down the cells’ replication cycle, change their biology and block natural self-destruct mechanisms. Some of these genes produce a suite of immune molecules, growth factors and other compounds. The fact that specific genes consistently turn on in senescent cells indicates there may be more to senescence than just cells running out of steam. It suggests that senescence is a cellular program that evolved for some purpose in healthy bodies. Hints at that purpose have emerged from studies of creatures far earlier in their lifespan—even before birth.

Cell biologist Bill Keyes was working on senescence in embryos back in the early 2000s. When he stained healthy mouse and chick embryos to look for beta-galactosidase, little blue spots lit up in certain tissues. He soon met up with Manuel Serrano, a cell biologist at the Institute for Research in Biomedicine in Barcelona, who’d noticed the same thing. Cells with signs of senescence turned up in the developing brain, ear and limbs, Keyes and Serrano reported in 2013.

Keyes, now at the Institute of Genetics and Molecular and Cellular Biology in Strasbourg, France, focused on mouse and chick embryonic limbs, where a thread of temporary tissue forms across the future toe-tips. Unlike most embryonic cells, the cells in this thread of tissue disappear before the animal is born. They release chemicals that help the limb develop, and once their work is done, they die. At a molecular level, they look a lot like senescent cells.

Serrano, meanwhile, looked at cells in an organ that exists only in embryos: a temporary kidney, called the mesonephros, that forms near the heart. Once the final kidneys develop, the mesonephros disappears. Here, too, beta-galactosidase and other compounds linked to senescence appeared in mouse embryos.

The cells in these temporary tissues probably disappear because they are senescent. Certain compounds made by senescent cells call out to the immune system to come in and destroy the cells once their work is done. Scientists think the short-term but crucial jobs these cells perform could be the reason senescence evolved in the first place.

Other studies suggest that senescent cells may also promote health in adult animals. Judith Campisi, a cell biologist at the Buck Institute for Research on Aging in Novato, California, and others have found senescent cells in adult mice, where they participate in wound healing. Connective-tissue cells called fibroblasts fill in a wound, but if they stick around, they form abnormal scar tissue. During normal wound healing, they turn senescent, releasing compounds that both promote repair of the tissue and call immune cells to come in and destroy them.

In other words, the emergence of senescent cells in aging people isn’t necessarily a problem in and of itself. The problem seems to be that they hang around for too long. Serrano suspects this happens because the immune system in aging individuals isn’t up to the task of eliminating them all. And when senescent cells stay put, the cocktail of molecules they produce and the ongoing immune response can damage surrounding tissues.

Senescence can also contribute to cancer, as Campisi has described in the Annual Review of Physiology, but the relationship is multifaceted. Senescence itself is a great defense against cancer—cells that don’t divide don’t form tumors. On the other hand, the molecules senescent cells emit can create an inflamed, cancer-promoting environment. So if a senescent cell arises near a cell that’s on its way to becoming cancerous, it might alter the locale enough to push that neighbor cell over the edge. In fact, Campisi reported in 2001 that injecting mice with senescent cells made tumors grow larger and faster.

Mighty mice

If senescent cells in an aging body are bad, removing them should be good. To test this idea, Darren Baker, a molecular cell biologist at the Mayo Clinic, devised a way to kill senescent cells in mice. Baker genetically engineered mice so that when their cells turned senescent, those cells became susceptible to a certain drug. The researchers began injecting the drug twice a week once the mice turned 1 year old—that’s about middle age for a lab mouse.

Treated mice maintained healthier kidney, heart, muscle and fat tissue compared with untreated mice, and though they were still susceptible to cancer, tumors appeared later in life, the researchers reported in studies in 2011 and 2016. The rodents also lived, on average, five or six months longer.

These results generated plenty of interest, Baker recalls, and set senescence biology on the path toward clinical research. “That was the boom—a new era for cellular senescence,” says Viviana Perez, former program officer for the SenNet consortium at the National Institute on Aging.

Baker followed up with a study of mice that had been genetically modified to develop characteristics of Alzheimer’s. Getting rid of senescent cells staved off the buildup of toxic proteins in the brain, he reported, and seemed to help the mice to retain mental acuity, as measured by their ability to remember a new smell.

Of course, geriatricians can’t go about genetically engineering retirees, so Kirkland, Tchkonia and colleagues went hunting for senolytic drugs that would kill senescent cells while leaving their healthy neighbors untouched. They reasoned that since senescent cells appear to be resistant to a process called apoptosis, or programmed cell death, medicines that unblock that process might have senolytic properties.

Senescent cells (purple) and the molecules they secrete (red) are beneficial when present for a short time in healthy tissues. These molecules affect the cells around them (pink) in ways that can guide development or promote healing, before the senescent cells are eliminated by immune cells (yellow). However, senescent cells can be harmful when chronically present in aged or damaged tissues. Removing them with senolytic drugs may be a strategy for restoring tissue health.

Some cancer drugs do this, and the researchers included several of these in a screen of 46 compounds they tested on senescent cells grown in lab dishes. The study turned up two major winners: One was the cancer drug dasatinib, an inhibitor of several natural enzymes that appears to make it possible for the senescent cells to self-destruct. The other was quercetin, a natural antioxidant that’s responsible for the bitter flavor of apple peels and that also inhibits several cellular enzymes. Each drug worked best on senescent cells from different tissues, the scientists found, so they decided to use them both, in a combo called D+Q, in studies with mice.

In one study, Tchkonia and Kirkland gave D+Q to 20-month-old mice and found that the combination improved the rodents’ walking speed and endurance in lab tests, as well as their grip strength. And treating 2-year-old mice—the equivalent of a 75- to 90-year-old human—with D+Q every other week extended their remaining lifespan by about 36 percent, compared with mice that didn’t receive senolytics, the researchers reported in 2018. Tchkonia, Kirkland and Baker all hold patents related to treating diseases by eliminating senescent cells.

To the clinic

Scientists have since discovered several other medications with senolytic effects, though D+Q remains a favorite pairing. Further studies from several research groups reported that senolytics appear to protect mice against a variety of conditions of aging, including the metabolic dysfunction associated with obesity, vascular problems associated with atherosclerosis and bone loss akin to osteoporosis.

“That’s a big deal, collectively,” says Laura Niedernhofer, a biochemist at the University of Minnesota Medical School in Minneapolis who is a collaborator on some of these studies and a member of the TGN clinical trials collaboration. “It would be a shame not to test them in humans.”

A few small human trials have been completed. The first, published in 2019, addressed idiopathic pulmonary fibrosis, a fatal condition in which the lungs fill up with thick scar tissue that interferes with breathing. It’s most common in people 60 or older, and there’s no cure. In a small pilot study, Kirkland, Tchkonia and collaborators administered D+Q to 14 people with the condition, three times a week for three weeks. They reported notable improvement in the ability of participants to stand up from a chair and to walk for six minutes. But the study had significant caveats: In addition to its small size and short duration, there was no control group, and every participant knew they’d received D+Q. Moreover, the patients’ lung function didn’t improve, nor did their frailty or overall health.

Niedernhofer, who wasn’t involved in the trial, calls the results a “soft landing”: There seemed to be something there, but no major benefits emerged. She says she would have been more impressed with the results if the treatment had reduced the scarring in the lungs.

The TGN is now running several small trials for conditions related to aging, and other diseases, too. Kirkland thinks that senescence may even be behind conditions that affect young people, such as osteoarthritis due to knee injuries and frailty in childhood cancer survivors.

Senolytics are being tested to treat a wide variety of conditions in people as part of the Translational Geroscience Network. Dasatinib is a cancer drug, and quercetin and fisetin are natural antioxidants.

Tchkonia and Kirkland are also investigating how space radiation affects indications of senescence in the blood and urine of astronauts, in conjunction with two companies, SpaceX and Axiom Space. They hypothesize that participants in future long-term missions to Mars might have to monitor their bodies for senescence or pack senolytics to stave off accelerated cellular aging caused by extended exposure to radiation.

Kirkland is also collaborating with researchers who are investigating the use of senolytics to expand the pool of available transplant organs. Despite desperate need, about 24,000 organs from older donors are left out of the system every year because the rate of rejection is higher for these than for younger organs, says Stefan Tullius, chief of transplant surgery at Brigham and Women’s Hospital in Boston. In heart transplant experiments with mice, he reported that pretreating older donor mice with D+Q before transplant into younger recipients resulted in the donor organs working “as well or slightly better” than hearts from young donors.

“That was huge,” says Tullius. He hopes to be doing clinical trials in people within three years.

Healthy skepticism

Numerous medical companies have jumped on the anti-senescence bandwagon, notes Paul Robbins, a molecular biologist at the University of Minnesota Medical School. But results have been mixed. One front-runner, Unity Biotechnology of South San Francisco, California, dropped a top program in 2020 after its senolytic medication failed to reduce pain in patients with knee osteoarthritis.

“I think we just don’t know enough about the right drug, the right delivery, the right patient, the right biomarker,” says the University of Michigan’s Varga, who is not involved with Unity. More recently, however, the company reported progress in slowing diabetic macular edema, a form of swelling in the back of the eye due to high blood sugar.

Despite the excitement, senolytic research remains in preliminary stages. Even if the data from TGN’s initial, small trials look good, they won’t be conclusive, says network member Robbins—who nonetheless thinks positive results would be a “big deal.” Success in a small study would suggest it’s worth investing in larger studies, and in the development of drugs that are more potent or specific for senescent cells.

“I’m urging extreme caution,” says Campisi—who is herself a cofounder of Unity and holds several patents related to anti-senescence treatments. She’s optimistic about the potential for research on aging to improve health, but she worries that moving senolytics quickly into human trials, as some groups are doing, could set the whole field back. That’s what happened with gene therapy in the late 1990s when an experimental treatment killed a study volunteer. “I hope they don’t kill anyone, seriously,” she says.

Side effects are an ongoing concern. For example, dasatinib (the D in D+Q) has a host of side effects ranging from nosebleeds to fainting to paralysis.

But Kirkland thinks that may not be an insurmountable problem. He notes that these side effects show up only in cancer patients taking the drug regularly for months at a time, whereas anti-senescence treatments might not need to be taken so often—once every two or three months might be enough to keep the population of senescent cells under control.

Another way to reduce the risks would be to make drugs that target senescent cells in specific tissues, Niedernhofer and Robbins note in the Annual Review of Pharmacology and Toxicology. For example, if a person has senescent cells in their heart, they could take a medicine that targets only those cells, leaving any other senescent cells in the body—which still might be doing some good—alone.

For that strategy to work, though, doctors would need better ways to map senescent cells in living people. While identifying such biomarkers is a major goal for SenNet, Campisi suspects it will be hard to find good ones. “It’s not a simple problem,” she says.

A lot of basic and clinical research must happen first, but if everything goes right, senolytics might someday be part of a personalized medicine plan: The right drugs, at the right time, could help keep aging bodies healthy and nimble. It may be a long shot, but to many researchers, the possibility of nixing walkers and wheelchairs for many patients makes it one worth taking.

Knowable Magazine is an independent journalistic endeavor from Annual Reviews.

January 1st 2023

The Year in Computer Science

Computer scientists this year learned how to transmit perfect secrets, why transformers seem so good at everything, and how to improve on decades-old algorithms (with a little help from AI).

Myriam Wares for Quanta Magazine

By Bill Andrews

Senior Editor


December 21, 2022


Introduction

As computer scientists tackle a greater range of problems, their work has grown increasingly interdisciplinary. This year, many of the most significant computer science results also involved other scientists and mathematicians. Perhaps the most practical involved the cryptographic questions underlying the security of the internet, which tend to be complicated mathematical problems. One such problem — the product of two elliptic curves and their relation to an abelian surface — ended up bringing down a promising new cryptography scheme that was thought to be strong enough to withstand an attack from a quantum computer. And a different set of mathematical relationships, in the form of one-way functions, will tell cryptographers if truly secure codes are even possible.

Computer science, and quantum computing in particular, also heavily overlaps with physics. In one of the biggest developments in theoretical computer science this year, researchers posted a proof of the NLTS conjecture, which (among other things) states that a ghostly connection between particles known as quantum entanglement is not as delicate as physicists once imagined. This has implications not just for our understanding of the physical world, but also for the myriad cryptographic possibilities that entanglement makes possible. 

And artificial intelligence has always flirted with biology — indeed, the field takes inspiration from the human brain as perhaps the ultimate computer. While understanding how the brain works and creating brainlike AI has long seemed like a pipe dream to computer scientists and neuroscientists, a new type of neural network known as a transformer seems to process information similarly to brains. As we learn more about how they both work, each tells us something about the other. Perhaps that’s why transformers excel at problems as varied as language processing and image classification. AI has even become better at helping us make better AI, with new “hypernetworks” helping researchers train neural networks faster and at a lower cost. So now the field is not only helping other scientists with their work, but also helping its own researchers achieve their goals.


Red particles with varying spins and some entanglement. Kristina Armitage for Quanta Magazine


Entangled Answers

When it came to quantum entanglement, a property that intimately connects even distant particles, physicists and computer scientists were at an impasse. Everyone agreed that a fully entangled system would be impossible to describe fully. But physicists thought it might be easier to describe systems that were merely close to being fully entangled. Computer scientists disagreed, saying that those would be just as impossible to calculate — a belief formalized into the “no low-energy trivial state” (NLTS) conjecture. In June a team of computer scientists posted a proof of it. Physicists were surprised, since it implied that entanglement is not necessarily as fragile as they thought, and computer scientists were happy to be one step closer to proving a seminal question known as the quantum probabilistically checkable proof theorem, which requires NLTS to be true. 

This news came on the heels of results from late last year showing that it’s possible to use quantum entanglement to achieve perfect secrecy in encrypted communications. And this October researchers successfully entangled three particles over considerable distances, strengthening the possibilities for quantum encryption. 

An illustration of an orange and blue network of lines focusing into a clear pyramid and emerging as white light traveling into a clear eye. Avalon Nuovo for Quanta Magazine


Transforming How AI Understands

For the past five years, transformers have been revolutionizing how AI processes information. Developed originally to understand and generate language, the transformer processes every element in its input data simultaneously, giving it a big-picture understanding that lends it improved speed and accuracy compared to other language networks, which take a piecemeal approach. This also makes it unusually versatile, and other AI researchers are putting it to work in their fields. They have discovered that the same principles can also enable them to upgrade tools for image classification and for processing multiple kinds of data at once. However, these benefits come at the cost of more training than non-transformer models need. Researchers studying how transformers work learned in March that part of their power comes from their ability to attach greater meaning to words, rather than simply memorize patterns. Transformers are so adaptable, in fact, that neuroscientists have begun modeling human brain functions with transformer-based networks, suggesting a fundamental similarity between artificial and human intelligence.
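
For readers who want to see the core mechanism in code, here is a minimal sketch of the self-attention step that lets a transformer look at every element of its input at once. It is not any particular published model; the function name, array sizes and random weights below are purely illustrative.

```python
# Minimal self-attention sketch in plain NumPy. Real transformers learn the
# projection matrices and stack many such layers with extra machinery.
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Each token's output is a weighted mix of information from ALL tokens."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # queries, keys, values
    scores = q @ k.T / np.sqrt(q.shape[-1])   # every token compared with every other, in one step
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax across the whole sequence
    return weights @ v

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))              # 5 tokens, 8-dimensional embeddings (toy values)
Wq, Wk, Wv = [rng.normal(size=(8, 8)) for _ in range(3)]
print(self_attention(tokens, Wq, Wk, Wv).shape)   # (5, 8): the full context feeds every output
```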

Kristina Armitage for Quanta Magazine


Breaking Down Cryptography

The safety of online communications is based on the difficulty of various math problems — the harder a problem is to solve, the harder a hacker must work to break it. And because today’s cryptography protocols would be easy work for a quantum computer, researchers have sought new problems to withstand them. But in July, one of the most promising leads fell after just an hour of computation on a laptop. “It’s a bit of a bummer,” said Christopher Peikert, a cryptographer at the University of Michigan. 

The failure highlights the difficulty of finding suitable questions. Researchers have shown that it’s only possible to create a provably secure code — one which could never fall — if you can prove the existence of “one-way functions,” problems that are easy to do but hard to reverse. We still don’t know if they exist (a finding that would help tell us what kind of cryptographic universe we live in), but a pair of researchers discovered that the question is equivalent to another problem called Kolmogorov complexity, which involves analyzing strings of numbers: One-way functions and real cryptography are possible only if a certain version of Kolmogorov complexity is hard to compute. 
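
To make the “easy to do but hard to reverse” idea concrete, here is a toy Python illustration built on a classic candidate: multiplying two primes is instant, while recovering them from the product by brute force is slow. The specific numbers are my own, and, as noted above, nobody has proved that genuinely one-way functions exist.

```python
# The asymmetry behind candidate one-way functions: the forward direction is
# trivial, the reverse direction (here, naive trial-division factoring) is not.

def forward(p, q):
    return p * q                      # easy: one multiplication

def invert(n):
    """Brute-force factoring: the 'hard' direction, which gets worse as n grows."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

n = forward(1000003, 1000033)         # two primes just above one million
print(invert(n))                      # takes visibly longer than the multiplication did
```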

Olivia Fields for Quanta Magazine


Machines Help Train Machines 

In recent years, the pattern recognition skills of artificial neural networks have supercharged the field of AI. But before a network can get to work, researchers must first train it, fine-tuning potentially billions of parameters in a process that can last for months and requires huge amounts of data. Or they could get a machine to do it for them. With a new kind of “hypernetwork” — a network that processes and spits out other networks — they may soon be able to. Named GHN-2, the hypernetwork analyzes any given network and provides a set of parameter values that were shown in a study to be generally at least as effective as those in networks trained the traditional way. Even when it didn’t provide the best possible parameters, GHN-2’s suggestions still offered a starting point that was closer to the ideal, cutting down the time and data required for full training. 
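
To give a flavor of what a hypernetwork is, here is a toy sketch in which one network’s output is reshaped into the weights of another network. This is not GHN-2; the hypernetwork here is untrained, and every size and name is invented purely for illustration.

```python
# Toy hypernetwork: a network whose OUTPUT becomes another network's WEIGHTS.
import numpy as np

rng = np.random.default_rng(1)

IN, OUT = 4, 3                                  # target network: one dense layer, 4 inputs -> 3 outputs
n_target_params = IN * OUT + OUT                # 12 weights + 3 biases

H = rng.normal(scale=0.1, size=(8, n_target_params))   # untrained hypernetwork weights (illustrative)

def hypernetwork(task_embedding):
    """Map a description of the task to a full set of target-network parameters."""
    flat = task_embedding @ H
    return flat[:IN * OUT].reshape(IN, OUT), flat[IN * OUT:]

def target_forward(x, W, b):
    return np.tanh(x @ W + b)                   # run the generated network on some data

W, b = hypernetwork(rng.normal(size=8))         # parameters produced without a training loop
print(target_forward(rng.normal(size=(2, IN)), W, b))
```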

This summer, Quanta also examined another new approach to helping machines learn. Known as embodied AI, it allows algorithms to learn from responsive three-dimensional environments, rather than static images or abstract data. Whether they’re agents exploring simulated worlds or robots in the real one, these systems learn fundamentally differently — and, in many cases, better — than ones trained using traditional approaches.

Mahmet Emin Güzel for Quanta Magazine


Improved Algorithms

This year, with the rise of more sophisticated neural networks, computers made further strides as a research tool. One such tool seemed particularly well suited to the problem of multiplying two-dimensional tables of numbers called matrices. There’s a standard way to do it, but it becomes cumbersome as matrices grow larger, so researchers are always looking for a faster algorithm that uses fewer steps. In October, researchers at DeepMind announced that their neural network had discovered faster algorithms for multiplying certain matrices. But experts cautioned that the breakthrough represented the arrival of a new tool for solving a problem, not an entirely new era of AI solving these problems on its own. As if on cue, a pair of researchers built on the new algorithms, using traditional tools and methods to improve them. 
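
The kind of trick these searches exploit is decades old. Strassen’s 1969 algorithm, sketched below, multiplies 2-by-2 blocks with seven multiplications instead of the usual eight. The algorithms DeepMind’s network found are different, but they trade multiplications for additions in the same spirit. The sketch assumes square matrices whose size is a power of two.

```python
# Strassen's algorithm: 7 block multiplications per step instead of 8.
import numpy as np

def strassen(A, B):
    n = A.shape[0]                    # assumes square matrices, size a power of two
    if n == 1:
        return A * B
    h = n // 2
    a, b, c, d = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    e, f, g, k = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    m1 = strassen(a + d, e + k)
    m2 = strassen(c + d, e)
    m3 = strassen(a, f - k)
    m4 = strassen(d, g - e)
    m5 = strassen(a + b, k)
    m6 = strassen(c - a, e + f)
    m7 = strassen(b - d, g + k)
    return np.vstack([np.hstack([m1 + m4 - m5 + m7, m3 + m5]),
                      np.hstack([m2 + m4, m1 - m2 + m3 + m6])])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
print(np.allclose(strassen(A, B), A @ B))   # True: same product, fewer scalar multiplications
```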

Researchers in March also published a faster algorithm to solve the problem of maximum flow, one of the oldest questions in computer science. By combining past approaches in novel ways, the team created an algorithm that can determine the maximum possible flow of material through a given network “absurdly fast,” according to Daniel Spielman of Yale University. “I was actually inclined to believe … algorithms this good for this problem would not exist.”
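
For readers unfamiliar with the problem, the sketch below computes the maximum flow of a tiny made-up network using the classic breadth-first-search (Edmonds-Karp) approach. It only shows what the question asks; the 2022 algorithm answers it far faster on enormous networks.

```python
# Classic Edmonds-Karp maximum flow on a tiny illustrative network.
from collections import deque

def max_flow(capacity, source, sink):
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n                     # BFS for a path that still has spare capacity
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total                      # no augmenting path left: this is the maximum
        v, bottleneck = sink, float("inf")    # find the tightest pipe along the path
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink                              # push that much flow along the path
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

cap = [[0, 3, 2, 0],                          # node 0 is the source, node 3 the sink;
       [0, 0, 1, 2],                          # entries are pipe capacities (made-up numbers)
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))                    # 5: the most material that can move from 0 to 3
```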

Mark Braverman, in an orange shirt, stands on a path lined with trees. Sasha Maslov for Quanta Magazine


New Avenues for Sharing Information

Mark Braverman, a theoretical computer scientist at Princeton University, has spent more than a quarter of his life working on a new theory of interactive communication. His work allows researchers to quantify terms like “information” and “knowledge,” not just allowing for a greater theoretical understanding of interactions, but also creating new techniques that enable more efficient and accurate communication. For this achievement and others, the International Mathematical Union this July awarded Braverman the IMU Abacus Medal, one of the highest honors in theoretical computer science. 


December 25th 2022

The paradox of light goes beyond wave-particle duality

Light carries with it the secrets of reality in ways we cannot completely understand.

Credit: Annelisa Leinbach

Key Takeaways

  • Light is the most mysterious of all things we know exist.
  • Light is not matter; it is both wave and particle — and it’s the fastest thing in the Universe.
  • We are only beginning to understand light’s secrets.

Marcelo Gleiser


This is the third in a series of articles exploring the birth of quantum physics.

Light is a paradox. It is associated with wisdom and knowledge, with the divine. The Enlightenment proposed the light of reason as the guiding path toward truth. We evolved to identify visual patterns with great accuracy — to distinguish the foliage from the tiger, or shadows from an enemy warrior. Many cultures identify the sun as a god-like entity, provider of light and warmth. Without sunlight, after all, we would not be here. 

Yet the nature of light is a mystery. Sure, we have learned a tremendous amount about light and its properties. Quantum physics has been essential along this path, changing the way we describe light. But light is weird. We cannot touch it the way we touch air or water. It is a thing that is not a thing, or at least it is not made of the stuff we associate with things.

If we traveled back to the 17th century, we could follow Isaac Newton’s disagreements with Christiaan Huygens on the nature of light. Newton would claim that light is made of tiny, indivisible atoms, while Huygens would counter that light is a wave that propagates on a medium that pervades all of space: the ether. They were both right, and they were both wrong. If light is made of particles, what particles are these? And if it is a wave propagating across space, what’s this weird ether?

Light magic

We now know that we can think of light in both ways — as a particle, and as a wave. But during the 19th century the particle theory of light was mostly forgotten, because the wave theory was so successful, and something could not be two things. In the early 1800s Thomas Young, who also helped decipher the Rosetta Stone, performed beautiful experiments showing how light diffracted as it passed through small slits, just like water waves were known to do. Light would move through the slit and the waves would interfere with one another, creating bright and dark fringes. Atoms couldn’t do that.
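
The pattern Young saw follows from a textbook relation that the article leaves implicit: bright fringes appear wherever the path difference between the two slits is a whole number of wavelengths. With slit spacing d, wavelength lambda and a screen a distance L away, the standard small-angle result is:

```latex
% Standard double-slit relations (textbook, not from the article):
\[
  d \sin\theta_m = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
\]
% For small angles, neighboring bright fringes on the screen are separated by
\[
  \Delta y \approx \frac{\lambda L}{d}.
\]
% Illustrative numbers: $\lambda = 500\,\mathrm{nm}$, $d = 0.1\,\mathrm{mm}$,
% $L = 1\,\mathrm{m}$ give $\Delta y \approx 5\,\mathrm{mm}$, fringes visible by eye.
```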

But then, what was the ether? All great physicists of the 19th century, including James Clerk Maxwell, who developed the beautiful theory of electromagnetism, believed the ether was there, even if it eluded us. After all, no decent wave could propagate in empty space. But this ether was quite bizarre. It was perfectly transparent, so we could see faraway stars. It had no mass, so it wouldn’t create friction and interfere with planetary orbits. Yet it was very rigid, to allow for the propagation of the ultra-fast light waves. Pretty magical, right? Maxwell had shown that if an electric charge oscillated up and down, it would generate an electromagnetic wave. This was the electric and magnetic fields tied up together, one bootstrapping the other as they traveled through space. And more amazingly, this electromagnetic wave would propagate at the speed of light, 186,282 miles per second. You blink your eyes and light goes seven and a half times around the Earth. 
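
A quick sanity check of that last figure, assuming an equatorial circumference for Earth of about 24,901 miles:

```python
# How many times light circles the Earth in one second (assumed circumference).
speed_of_light = 186_282          # miles per second
earth_circumference = 24_901      # miles, equatorial (assumed value)
print(speed_of_light / earth_circumference)   # ~7.5 trips around the Earth per second
```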

Maxwell concluded that light is an electromagnetic wave. The distance between two consecutive crests is a wavelength. Red light has a longer wavelength than violet light. But the speed of any color in empty space is always the same. Why is it about 186,000 miles per second? No one knows. The speed of light is one of the constants of nature, numbers we measure that describe how things behave.

Steady as a wave, hard as a bullet

A crisis started in 1887 when Albert Michelson and Edward Morley performed an experiment to demonstrate the existence of the ether. They couldn’t prove a thing. Their experiment failed to show that light propagated in an ether. It was chaos. Theoretical physicists came up with weird ideas, saying the experiment failed because the apparatus shrank in the direction of the motion. Anything was better than accepting that light actually can travel in empty space.

And then came Albert Einstein. In 1905, the 26-year-old patent officer wrote two papers that completely changed the way we picture light and all of reality. (Not too shabby.) Let’s start with the second paper, on the special theory of relativity. 


Einstein showed that if one takes the speed of light to be the fastest speed in nature, and assumes that this speed is always the same even if the light source is moving, then two observers moving with respect to each other at a constant speed must correct their distance and time measurements when comparing their results. So, if one is in a moving train while the other is standing at a station, the time intervals they measure for the same phenomenon will be different. Einstein provided a way for the two to compare their results so that they agree with each other. The corrections showed that light could and should propagate in empty space. It had no need for an ether.
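
The corrections themselves are not written out in the article, but their standard textbook form is the Lorentz factor below, where v is the relative speed of the two observers and c is the speed of light:

```latex
% Standard special-relativity corrections (textbook form):
\[
  \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
  \qquad
  \Delta t' = \gamma\,\Delta t,
  \qquad
  L' = \frac{L}{\gamma}.
\]
% At everyday speeds $v \ll c$, so $\gamma \approx 1$ and the train passenger
% and the person on the platform notice no disagreement.
```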

Einstein’s other paper explained the so-called photoelectric effect, which was measured in the lab in the 19th century but remained a total mystery. What happens if light is shined onto a metal plate? It depends on the light. Not on how bright it is, but on its color — or more appropriately stated, its wavelength. Yellow or red light does nothing. But shine a blue or violet light on the plate, and the plate actually acquires an electrical charge. (Hence the term photoelectric.) How could light electrify a piece of metal? Maxwell’s wave theory of light, so good at so many things, could not explain this. 

The young Einstein, bold and visionary, put forth an outrageous idea. Light can be a wave, sure. But it can also be made of particles. Depending on the circumstance, or on the type of experiment, one or the other description prevails. For the photoelectric effect, we could picture little “bullets” of light hitting the electrons on the metal plate and kicking them out like billiard balls flying off a table. Having lost electrons, the metal now holds a surplus positive charge. It’s that simple. Einstein even provided a formula for the energy of the flying electrons and equated it to the energy of the incoming light bullets, or photons. The energy for the photons is E = hc/L, where c is the speed of light, L its wavelength, and h is Planck’s constant. The formula tells us that smaller wavelengths mean more energy — more kick for the photons. 
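
Plugging numbers into that formula shows why the color matters. The constants are standard values, and the wavelengths below are typical choices for red and violet light rather than figures from the article:

```python
# Photon energy from E = hc/L (the article writes L for the wavelength).
h = 6.626e-34        # Planck's constant, joule-seconds
c = 2.998e8          # speed of light, meters per second

def photon_energy_ev(wavelength_m):
    return h * c / wavelength_m / 1.602e-19   # convert joules to electron volts

print(photon_energy_ev(700e-9))   # red, ~1.8 eV per photon: not enough for many metals
print(photon_energy_ev(400e-9))   # violet, ~3.1 eV per photon: enough to kick electrons out
```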

Einstein won the Nobel prize for this idea. He essentially suggested what we now call the wave-particle duality of light, showing that light can be both particle and wave and will manifest differently depending on the circumstance. The photons — our light bullets — are the quanta of light, the smallest light packets possible. Einstein thus brought quantum physics into the theory of light, showing that both behaviors are possible.

I imagine Newton and Huygens are both smiling in heaven. These are the photons that Bohr used in his model of the atom, which we discussed last week. Light is both particle and wave, and it is the fastest thing in the cosmos. It carries with it the secrets of reality in ways we cannot completely understand. But understanding its duality was an important step for our perplexed minds.

Why black holes spin at nearly the speed of light

Black holes aren’t just the densest masses in the Universe, but they also spin the fastest of all massive objects. Here’s why it must be so.

An illustration of an active black hole, one that accretes matter and accelerates a portion of it outward in two perpendicular jets. This picture of normal matter being accelerated describes how quasars work extremely well. All known, well-measured black holes have enormous rotation rates, and the laws of physics all but ensure that this is mandatory. (Credit: University of Warwick/Mark A. Garlick)

Key Takeaways

  • Black holes are some of the most enigmatic, extreme objects in the entire Universe, with more mass compressed into a tiny volume than any other object.
  • But black holes aren’t just extremely massive, they’re also incredibly fast rotators. Many black holes, from their measured spins, are spinning at more than 90% the speed of light.
  • This might seem like a puzzle, but physics not only has an explanation for why, but shows us that it’s very difficult to create black holes that spin slowly relative to the speed of light. Here’s why.

Ethan Siegel


Whenever you take a look out there at the vast abyss of the deep Universe, it’s the points of light that stand out the most: stars and galaxies. While the majority of the light that you’ll first notice does indeed come from stars, a deeper look, going far beyond the visible portion of the electromagnetic spectrum, shows that there’s much more out there. The brightest, most massive stars, by their very nature, have the shortest lifespans, as they burn through their fuel far more quickly than their lower-mass counterparts. Once they’ve reached their limits and can fuse elements no further, they reach the end of their lives and become stellar corpses.

These corpses come in multiple varieties: white dwarfs for the lowest-mass (e.g., Sun-like) stars, neutron stars for the next tier up, and black holes for the most massive stars of all. These compact objects give off electromagnetic emissions spanning wavelengths from radio to X-ray light, revealing properties that range from mundane to absolutely shocking. While most stars themselves may spin relatively slowly, black holes rotate at nearly the speed of light. This might seem counterintuitive, but under the laws of physics, it couldn’t be any other way. Here’s why.

(Credit: NASA/Solar Dynamics Observatory)

The closest analogue we have to one of those extreme objects in our own Solar System is the Sun. In another 7 billion years or so, after becoming a red giant and burning through the helium fuel that’s built up within its core, it will end its life by blowing off its outer layers while its core contracts down to a stellar remnant: the most gentle of all major types of stellar death.

The outer layers will create a sight known as a planetary nebula, which comes from the blown-off gases getting ionized and illuminated from the contracting central core. This nebula will glow for tens of thousands of years before cooling off and becoming neutral again, generally returning that material to the interstellar medium. When the opportunity then arises, those processed atoms will participate in future generations of star formation.

But the inner core, largely composed of carbon and oxygen, will contract down as far as it possibly can. In the end, gravitational collapse will only be stopped by the particles — atoms, ions, and electrons — that the remnant of our Sun will be made of.

planetary nebula
(Credit: Nordic Optical Telescope and Romano Corradi (Isaac Newton Group of Telescopes, Spain))

So long as you remain below a critical mass threshold, the Chandrasekhar mass limit, the quantum properties inherent to those particles will be sufficient to hold the stellar remnant up against gravitational collapse. The endgame for a Sun-like star’s core will be a degenerate state known as a white dwarf. It will possess a sizable fraction of the mass of its parent star, but crammed into a tiny fraction of the volume: approximately the size of Earth.

Astronomers now know enough about stars and stellar evolution to describe what happens during this process. For a star like our Sun, approximately 60% of its mass will get expelled in the outer layers, while the remaining 40% remains in the core. The more massive a star becomes, the more mass, percentage-wise, gets blown off in its outer layers, with less being retained in the core. For the most massive stars that suffer the same fate as our Sun, possessing about 7-8 times the Sun’s mass, the mass fraction remaining in the core comes all the way down to about 18% of the original star’s mass.

This has happened nearby relatively recently, as the brightest star in Earth’s sky, Sirius, has a white dwarf companion, visible in the Hubble image below.

(Credit: NASA, ESA, H. Bond (STScI) and M. Barstow (University of Leicester))

Sirius A is a little bit brighter and more massive than our Sun, and we believe that its binary companion, Sirius B, was once even more massive than Sirius A. Because the more massive stars burn through their nuclear fuel more quickly than lower-mass ones, Sirius B likely ran out of fuel some time ago. Today, Sirius A remains burning through its hydrogen fuel, and dominates that system in terms of mass and brightness. While Sirius A, today, weighs in at about twice the mass of our Sun, Sirius B is only approximately equal to our Sun’s mass.

However, based on observations of the white dwarfs that happen to pulse, we’ve learned a valuable lesson. Rather than taking multiple days or even (like our Sun) approximately a month to complete a full rotation, like normal stars tend to do, white dwarfs complete a full 360° rotation in as little as an hour. This might seem bizarre, but if you’ve ever seen a figure skating routine, the same principle that explains a spinning skater who pulls their arms in explains the white dwarf’s rotational speed: the law of conservation of angular momentum.

(Credit: Deerstop/Wikimedia Commons)

Angular momentum is simply a measure of “How much rotational and/or orbital motion does a mass have to it?” If you puff that massive object up so that its mass is farther from its rotational center, it has to slow down in its rotational speed in order to conserve angular momentum. Similarly, if you compress a massive object down, so that more of its mass is closer to the center of its axis-of-rotation, it will have to speed up in its rotational speed, making more revolutions-per-second, to keep angular momentum conserved.

What happens, then, if you were to take a star like our Sun — with the mass, volume, and rotation speed of the Sun — and compressed it down into a volume the size of the Earth: a typical size for a white dwarf?

Believe it or not, if you make the assumption that angular momentum is conserved, and that both the Sun and the compressed version of the Sun we’re imagining are spheres, this is a completely solvable problem with only one possible answer. If we go conservative, and assume the entirety of the Sun rotates once every 33 days (the longest amount of time it takes any part of the Sun’s photosphere to complete one 360° rotation) and that only the inner 40% of the Sun becomes a white dwarf, you get a remarkable answer: the Sun, as a white dwarf, will complete a rotation in just 25 minutes.

(Credit: David A. Aguilar / CfA)

By bringing all of that mass close in to the stellar remnant’s axis of rotation, we ensure that its rotational speed must rise. In general, if you halve the radius that an object has as it rotates, its rotational speed increases by a factor of four; rotational speed is inversely proportional to the square of a rotating mass’s radius. If you consider that it takes approximately 109 Earths to go across the diameter of the Sun, you can derive the same answer for yourself. (In reality, white dwarfs generally rotate a little more slowly, as the outermost layers get blown off, and only the interior “core” material contracts down to form a white dwarf.)
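
Here is that inverse-square scaling as a toy calculation, treating the star as a uniform sphere of fixed mass. Real stars are centrally concentrated and, as noted above, only the core contracts, so the article’s exact periods come out differently, but the scaling itself works the same way.

```python
# Toy spin-up from angular momentum conservation for a uniform sphere:
# L = (2/5) M R^2 * omega stays fixed, so omega scales as 1/R^2.
def spin_up_factor(r_initial, r_final):
    return (r_initial / r_final) ** 2

print(spin_up_factor(2.0, 1.0))      # halve the radius -> spins 4 times faster
print(spin_up_factor(109.2, 1.0))    # Sun-to-Earth radius ratio -> roughly 12,000 times faster
```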

Unsurprisingly, then, you might start to ask about neutron stars or black holes: even more extreme objects. A neutron star is typically the product of a much more massive star ending its life in a supernova, where the particles in the core get so compressed that it behaves as one giant atomic nucleus composed almost exclusively (90% or more) of neutrons. Neutron stars are typically twice the mass of our Sun, but just about 10-to-40 km across. They rotate far more rapidly than any known star or white dwarf ever could.

(Credit: NASA, NICER, GSFC’s CI Lab)

Even the most naïve estimate you could make for the rotational speed of a neutron star — again, in analogy with our Sun — illustrates just how rapidly we can expect a neutron star to spin. If you repeated the thought experiment of compressing the entire Sun down into a smaller volume, but this time used one that was merely 40 kilometers in diameter, you’d get a much, much more rapid rotation rate than you ever could for a white dwarf: about 10 milliseconds. That same principle that we previously applied to a figure skater, about the conservation of angular momentum, leads us to the conclusion that neutron stars could complete more than 100 full rotations in a single second.


In fact, this lines up perfectly with our actual observations. Some neutron stars emit radio pulses along Earth’s line-of-sight to them: pulsars. We can measure the pulse periods of these objects, and while some of them take approximately a full second to complete a rotation, some of them rotate in as little as 1.3 milliseconds, up to a maximum of 766 rotations-per-second.

(Credit: NASA’s Goddard Space Flight Center)

The fastest-spinning neutron stars known are called millisecond pulsars, and they really do rotate at incredibly fast speeds. At their surfaces, those rotation rates are indeed relativistic: they reach speeds that are a significant fraction of the speed of light. The most extreme examples exceed 50% the speed of light at their outer surfaces.

But that doesn’t even approach the true astrophysical limits found in the Universe. Neutron stars aren’t the densest objects in the Universe; that honor goes to black holes, which take all the mass you’d find in a neutron star — more, in fact — and compress it down into a region of space where even an object moving at the speed of light couldn’t escape from it.

If you compressed the Sun down into a volume just 3 kilometers in radius, that would force it to become a black hole. And yet, the conservation of angular momentum would mean that much of that internal region would experience frame-dragging so severe that space itself would get dragged at speeds approaching the speed of light, even outside of the Schwarzschild radius of the black hole. The more you compress that mass down, the faster the fabric of space itself gets dragged.
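
That 3-kilometer figure can be checked directly from the Schwarzschild radius, r = 2GM/c^2, using standard values for the constants and the Sun’s mass:

```python
# Schwarzschild radius of the Sun.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

print(2 * G * M_sun / c**2 / 1000)   # ~2.95 km: squeeze the Sun inside this and it becomes a black hole
```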

(Credit: ESO, ESA/Hubble, M. Kornmesser)

Realistically, we can’t measure the frame-dragging of space itself in the vicinity of a black hole. But we can measure the frame-dragging effects on the matter that happens to be present within that space. For black holes, that means looking at the accretion disks and accretion flows found around these black holes that exist in matter-rich environments. Perhaps paradoxically, the smallest mass black holes, which have the smallest event horizons, actually have the largest amounts of spatial curvature at and near their event horizons.

You might think, therefore, that they’d make the best laboratories for testing these frame dragging effects. But nature surprised us on that front: a supermassive black hole at the center of galaxy NGC 1365 — which also happens to be one of the first galaxies imaged by the James Webb Space Telescope — has had the radiation emitted from the volume outside of it detected and measured, revealing its speed. Even at these large distances, the material spins at 84% the speed of light. If you insist that angular momentum be conserved, it couldn’t have turned out any other way.

(Credit: Andrew Hamilton/JILA/University of Colorado)

Subsequently, using gravitational wave observatories such as LIGO and Virgo, we’ve inferred the spins of black holes that have merged together, and found that some black holes spin at the theoretical maximum: around 95% the speed of light. It’s a tremendously difficult thing to intuit: the notion that black holes should spin at almost the speed of light. After all, the stars that black holes are built from rotate extremely slowly, even by Earth’s standards of one rotation every 24 hours. Yet if you remember that most of the stars in our Universe also have enormous volumes, you’ll realize that they contain an enormous amount of angular momentum.

If you compress that volume down to be very small, those objects have no choice. If angular momentum has to be conserved, all they can do is spin up their rotational speeds until they almost reach the speed of light. At that point, gravitational waves will kick in, and some of that energy (and angular momentum) gets radiated away, bringing it back down to below the theoretical maximum value. If not for those processes, black holes might not be black after all, instead revealing naked singularities at their centers. In this Universe, black holes have no choice but to rotate at extraordinary speeds. Perhaps someday, we’ll be able to measure their rotation directly.

Why black holes spin at nearly the speed of light

Black holes aren’t just the densest masses in the Universe, but they also spin the fastest of all massive objects. Here’s why it must be so.

An illustration of an active black hole, one that accretes matter and accelerates a portion of it outward in two perpendicular jets. The normal matter undergoing an acceleration like this describes how quasars work extremely well. All known, well-measured black holes have enormous rotation rates, and the laws of physics all but ensure that this is mandatory. (Credit: University of Warwick/Mark A. Garlick)

Key Takeaways

  • Black holes are some of the most enigmatic, extreme objects in the entire Universe, with more mass compressed into a tiny volume than any other object.
  • But black holes aren’t just extremely massive, they’re also incredibly fast rotators. Many black holes, from their measured spins, are spinning at more than 90% the speed of light.
  • This might seem like a puzzle, but physics not only has an explanation for why, but shows us that it’s very difficult to create black holes that spin slowly relative to the speed of light. Here’s why.

Ethan Siegel

Share Why black holes spin at nearly the speed of light on Facebook

Share Why black holes spin at nearly the speed of light on Twitter

Share Why black holes spin at nearly the speed of light on LinkedIn

Whenever you take a look out there at the vast abyss of the deep Universe, it’s the points of light that stand out the most: stars and galaxies. While the majority of the light that you’ll first notice does indeed come from stars, a deeper look, going far beyond the visible portion of the electromagnetic spectrum, shows that there’s much more out there. The brightest, most massive stars, by their very nature, have the shortest lifespans, as they burn through their fuel far more quickly than their lower-mass counterparts. Once they’ve reached their limits and can fuse elements no further, they reach the end of their lives and become stellar corpses.

These corpses come in multiple varieties: white dwarfs for the lowest-mass (e.g., Sun-like) stars, neutron stars for the next tier up, and black holes for the most massive stars of all. These compact objects give off electromagnetic emissions spanning wavelengths from radio to X-ray light, revealing properties that range from mundane to absolutely shocking. While most stars themselves may spin relatively slowly, black holes rotate at nearly the speed of light. This might seem counterintuitive, but under the laws of physics, it couldn’t be any other way. Here’s why.

round
(Credit: NASA/Solar Dynamics Observatory)

The closest analogue we have to one of those extreme objects in our own Solar System is the Sun. In another 7 billion years or so, after becoming a red giant and burning through the helium fuel that’s built up within its core, it will end its life by blowing off its outer layers while its core contracts down to a stellar remnant: the most gentle of all major types of stellar death.

The outer layers will create a sight known as a planetary nebula, which comes from the blown-off gases getting ionized and illuminated from the contracting central core. This nebula will glow for tens of thousands of years before cooling off and becoming neutral again, generally returning that material to the interstellar medium. When the opportunity then arises, those processed atoms will participate in future generations of star formation.

But the inner core, largely composed of carbon and oxygen, will contract down as far as it possibly can. In the end, gravitational collapse will only be stopped by the particles ⁠ — atoms, ions, and electrons ⁠ — that the remnant of our Sun will be made of.

planetary nebula
(Credit: Nordic Optical Telescope and Romano Corradi (Isaac Newton Group of Telescopes, Spain))

So long as you remain below a critical mass threshold, the Chandrasekhar mass limit, the quantum properties inherent to those particles will be sufficient to hold the stellar remnant up against gravitational collapse. The endgame for a Sun-like star’s core will be a degenerate state known as a white dwarf. It will possess a sizable fraction of the mass of its parent star, but crammed into a tiny fraction of the volume: approximately the size of Earth.

Astronomers now know enough about stars and stellar evolution to describe what happens during this process. For a star like our Sun, approximately 60% of its mass will get expelled in the outer layers, while the remaining 40% remains in the core. The more massive a star becomes, the more mass, percentage-wise, gets blown off in its outer layers, with less being retained in the core. For the most massive stars that suffer the same fate as our Sun, possessing about 7-8 times the Sun’s mass, the mass fraction remaining in the core comes all the way down to about 18% of the original star’s mass.

This has happened nearby relatively recently, as the brightest star in Earth’s sky, Sirius, has a white dwarf companion, visible in the Hubble image below.

(Credit: NASA, ESA, H. Bond (STScI) and M. Barstow (University of Leicester))

Sirius A is a little bit brighter and more massive than our Sun, and we believe that its binary companion, Sirius B, was once even more massive than Sirius A. Because the more massive stars burn through their nuclear fuel more quickly than lower-mass ones, Sirius B likely ran out of fuel some time ago. Today, Sirius A remains burning through its hydrogen fuel, and dominates that system in terms of mass and brightness. While Sirius A, today, weighs in at about twice the mass of our Sun, Sirius B is only approximately equal to our Sun’s mass.

However, based on observations of the white dwarfs that happen to pulse, we’ve learned a valuable lesson. Rather than taking multiple days or even (like our Sun) approximately a month to complete a full rotation, like normal stars tend to do, white dwarfs complete a full 360° rotation in as little as an hour. This might seem bizarre, but if you’ve ever seen a figure skating routine, the same principle that explains a spinning skater who pulls their arms in explains the white dwarfs rotational speed: the law of conservation of angular momentum.

(Credit: Deerstop/Wikimedia Commons)

Angular momentum is simply a measure of “How much rotational and/or orbital motion does a mass have to it?” If you puff that massive object up so that its mass is farther from its rotational center, it has to slow down in its rotational speed in order to conserve angular momentum. Similarly, if you compress a massive object down, so that more of its mass is closer to the center of its axis-of-rotation, it will have to speed up in its rotational speed, making more revolutions-per-second, to keep angular momentum conserved.

What happens, then, if you were to take a star like our Sun — with the mass, volume, and rotation speed of the Sun — and compressed it down into a volume the size of the Earth: a typical size for a white dwarf?

Believe it or not, if you make the assumption that angular momentum is conserved, and that both the Sun and the compressed version of the Sun we’re imagining are spheres, this is a completely solvable problem with only one possible answer. If we go conservative, and assume the entirety of the Sun rotates once every 33 days (the longest amount of time it takes any part of the Sun’s photosphere to complete one 360° rotation) and that only the inner 40% of the Sun becomes a white dwarf, you get a remarkable answer: the Sun, as a white dwarf, will complete a rotation in just 25 minutes.

(Credit: David A. Aguilar / CfA)

By bringing all of that mass close in to the stellar remnant’s axis of rotation, we ensure that its rotational speed must rise. In general, if you halve the radius that an object has as it rotates, its rotational speed increases by a factor of four; rotational speed is inversely proportional to the square of a rotating mass’s radius. If you consider that it takes approximately 109 Earths to go across the diameter of the Sun, you can derive the same answer for yourself. (In reality, white dwarfs generally rotate a little more slowly, as the outermost layers get blown off, and only the interior “core” material contracts down to form a white dwarf.)

Unsurprisingly, then, you might start to ask about neutron stars or black holes: even more extreme objects. A neutron star is typically the product of a much more massive star ending its life in a supernova, where the particles in the core get so compressed that it behaves as one giant atomic nucleus composed almost exclusively (90% or more) of neutrons. Neutron stars are typically twice the mass of our Sun, but just about 10-to-40 km across. They rotate far more rapidly than any known star or white dwarf ever could.

(Credit: NASA, NICER, GSFC’s CI Lab)

Even the most naïve estimate you could make for the rotational speed of a neutron star — again, in analogy with our Sun — illustrates just how rapidly we can expect a neutron star to spin. If you repeated the thought experiment of compressing the entire Sun down into a smaller volume, but this time used one that was merely 40 kilometers in diameter, you’d get a much, much more rapid rotation rate than you ever could for a white dwarf: about 10 milliseconds. That same principle that we previously applied to a figure skater, about the conservation of angular momentum, leads us to the conclusion that neutron stars could complete more than 100 full rotations in a single second.

Travel the Universe with astrophysicist Ethan Siegel. Subscribers will get the newsletter every Saturday. All aboard!

Fields marked with an * are required

In fact, this lines up perfectly with our actual observations. Some neutron stars emit radio pulses along Earth’s line-of-sight to them: pulsars. We can measure the pulse periods of these objects, and while some of them take approximately a full second to complete a rotation, some of them rotate in as little as 1.3 milliseconds, up to a maximum of 766 rotations-per-second.

(Credit: NASA’s Goddard Space Flight Center)

The fastest-spinning neutron stars known are called millisecond pulsars, and they really do rotate at incredibly fast speeds. At their surfaces, those rotation rates are indeed relativistic: meaning they reach speeds that are a significant fraction of the speed of light. The most extreme examples of such neutron stars can reach speeds exceeding 50% the speed of light at the outer surface of these neutron stars.

But that doesn’t even approach the true astrophysical limits found in the Universe. Neutron stars aren’t the densest objects in the Universe; that honor goes to black holes, which take all the mass you’d find in a neutron star — more, in fact — and compress it down into a region of space where even an object moving at the speed of light couldn’t escape from it.

If you compressed the Sun down into a volume just 3 kilometers in radius, that would force it to become a black hole. And yet, the conservation of angular momentum would mean that much of that internal region would experience frame-dragging so severe that space itself would get dragged at speeds approaching the speed of light, even outside of the Schwarzschild radius of the black hole. The more you compress that mass down, the faster the fabric of space itself gets dragged.
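That 3-kilometer figure is the Sun's Schwarzschild radius, R_s = 2GM/c², and you can check it in a couple of lines of Python; the constants are standard values, and the snippet is just a sanity check rather than anything from the article.

# Schwarzschild radius of the Sun: R_s = 2 G M / c^2
G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8         # m/s, speed of light
M_sun = 1.989e30    # kg, solar mass

r_s = 2 * G * M_sun / c**2
print(f"Schwarzschild radius of the Sun: {r_s / 1000:.2f} km")  # about 2.95 km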

(Credit: ESO, ESA/Hubble, M. Kornmesser)

Realistically, we can’t measure the frame-dragging of space itself in the vicinity of a black hole. But we can measure the frame-dragging effects on the matter that happens to be present within that space. For black holes, that means looking at the accretion disks and accretion flows found around these black holes that exist in matter-rich environments. Perhaps paradoxically, the smallest mass black holes, which have the smallest event horizons, actually have the largest amounts of spatial curvature at and near their event horizons.

You might think, therefore, that they’d make the best laboratories for testing these frame-dragging effects. But nature surprised us on that front: a supermassive black hole at the center of the galaxy NGC 1365 — which also happens to be one of the first galaxies imaged by the James Webb Space Telescope — has had the radiation emitted from the material just outside its event horizon detected and measured, revealing how fast that material moves. Even at these large distances, the material spins at 84% the speed of light. If you insist that angular momentum be conserved, it couldn’t have turned out any other way.

(Credit: Andrew Hamilton/JILA/University of Colorado)

Subsequently, we’ve inferred the spins of black holes that have merged together with gravitational wave observatories such as LIGO and Virgo, and found that some black holes spin at the theoretical maximum: around 95% the speed of light. It’s a tremendously difficult thing to intuit: the notion that black holes should spin at almost the speed of light. After all, the stars that black holes are built from rotate extremely slowly, even by Earth’s standards of one rotation every 24 hours. Yet if you remember that most of the stars in our Universe also have enormous volumes, you’ll realize that they contain an enormous amount of angular momentum.

If you compress that volume down to be very small, those objects have no choice. If angular momentum has to be conserved, all they can do is spin up their rotational speeds until they almost reach the speed of light. At that point, gravitational waves will kick in, and some of that energy (and angular momentum) gets radiated away, bringing it back down to below the theoretical maximum value. If not for those processes, black holes might not be black after all, instead revealing naked singularities at their centers. In this Universe, black holes have no choice but to rotate at extraordinary speeds. Perhaps someday, we’ll be able to measure their rotation directly.

How the “Einstein shift” was predicted 8 years before General Relativity

The idea of gravitational redshift crossed Einstein’s mind years before General Relativity was complete. Here’s why it had to be there.

Instead of an empty, blank, three-dimensional grid, putting a mass down causes what would have been ‘straight’ lines to instead become curved by a specific amount. No matter how far away you get from a point mass, the curvature of space never reaches zero, but always remains, even at infinite range. (Credit: Christopher Vitale of Networkologies and the Pratt Institute)

Key Takeaways

  • One of the novel predictions that came along with Einstein’s novel theory of gravity was the idea of “the Einstein shift,” or as it’s known today, a gravitational redshift.
  • But even though it wouldn’t be experimentally confirmed until a 1959 experiment, Einstein himself recognized it was an absolute necessity way back in 1907: a full 8 years before General Relativity was completed.
  • Here’s the remarkable story of how, if you yourself had the same realizations that Einstein did more than 100 years ago, you could have predicted it too, even before General Relativity.

Ethan Siegel


It’s extremely rare for any individual to bring about a scientific revolution through their work: where practically everyone conceived of the Universe in one way before that critical work was completed, and then afterward, our conception of the Universe was entirely different. In the case of Albert Einstein, this happened not just once, but multiple times. In 1905, Einstein brought us:

  • the constancy of the speed of light,
  • mass-energy equivalence (via E = mc²),
  • and the special theory of relativity,

among other important advances. But arguably, Einstein’s biggest revolution came a decade later, in 1915, when he incorporated gravitation into relativity as well, leading to the general theory of relativity, or General Relativity, as it’s more commonly known.

With spacetime now understood as a dynamic entity, whose very fabric is curved by the presence and distribution of matter-and-energy, all sorts of new phenomena have been derived. Gravitational waves — ripples that travel through spacetime, carrying energy and moving at the speed of light — were predicted. The bending of starlight around massive, compact objects was an inevitable consequence, as were other gravitational effects like gravitational time dilation and additional orbital precessions.

But the first expected consequence ever predicted from General Relativity — the Einstein shift, also known as gravitational redshift — was predicted way back in 1907, by Einstein himself. Here’s not just how he did it, but how anyone with the same realization, including you, could have done it for themselves.

(Credit: Philip Ronan/Wikimedia Commons)

Imagine you have a photon — a single quantum of light — that’s propagating through space. Light isn’t just a quantum mechanical “energy packet,” but is also an electromagnetic wave. Each photon, or each electromagnetic wave, has a certain amount of energy inherent to it, and the precise amount of energy it possesses is related to its wavelength. Photons with shorter wavelengths have higher energies, with gamma rays, X-rays, and ultraviolet light all more energetic than visible light. Conversely, photons with longer wavelengths have lower amounts of energy inherent to them, with infrared, microwave, and radio waves all less energetic than visible light.
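Quantitatively, that relationship is E = hc/λ (equivalently E = hν): halve the wavelength and you double the photon's energy. A short sketch with standard constants and a few illustrative wavelengths (my own example, not taken from the article):

# Photon energy from wavelength: E = h * c / lambda
h = 6.626e-34   # J*s, Planck's constant
c = 2.998e8     # m/s, speed of light
eV = 1.602e-19  # joules per electron-volt

for name, wavelength_m in [("X-ray (1 nm)", 1e-9),
                           ("visible (500 nm)", 500e-9),
                           ("radio (21 cm)", 0.21)]:
    energy_eV = h * c / wavelength_m / eV
    print(f"{name}: {energy_eV:.3g} eV")  # shorter wavelength, higher energy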

Now, we’ve known for a long time that other types of waves, like sound waves, will appear to be “shortened” or “lengthened” relative to the wavelength at which they were emitted, depending on the relative motions of the source and the observer. It’s the reason why an emergency vehicle’s siren (or an ice cream truck) sounds higher-pitched when it moves toward you, and then lower-pitched when it moves away from you: it’s an example of the Doppler shift.

And if light is a wave in precisely the same fashion, then the Doppler shift, once special relativity came along in 1905, must also apply to light.

(Credit: TxAlien/Wikimedia Commons)

Light can have its wavelength either stretched or compressed due to relative motion, in a Doppler redshift or a Doppler blueshift, but that’s hardly revolutionary or even unexpected. However, it was two years after special relativity, in 1907, that Einstein had what he’d later refer to as his happiest thought: the equivalence principle.

The equivalence principle, in its most straightforward form, simply states that there is nothing special or remarkable about gravitation at all; it’s simply another example of acceleration. If you were being accelerated, and you had no way of observing the source of your acceleration, you couldn’t tell whether propulsion, an externally applied force, or a gravitational force was the cause of it.

With this realization — that gravitation was just another form of acceleration — Einstein recognized that it would be possible to make a more general theory of relativity that didn’t just incorporate all possible motions and changes in motions, but one that also included gravitation. Eight years later, his happiest thought would lead to General Relativity and a wholly new conception of how gravity worked.

(Credit: Markus Poessel/Wikimedia commons; retouched by Pbroks13)

But we wouldn’t have to wait until 1915 for the Einstein shift — what we now know as gravitational redshift (or gravitational blueshift) — to arise as a robust prediction. In fact, it was way back in 1907, when he first thought of the equivalence principle, that Einstein published his first prediction of this new type of redshift.

If you were in a spacecraft (or an elevator) that was accelerating upward, then a photon that was emitted from “beneath” you would have its wavelength stretched relative to its emitted wavelength by the time the photon caught up to your eyes. Similarly, if an identical photon were emitted from “above” you instead, its wavelength would appear compressed relative to the wavelength at which it was emitted. In the former case, you’d observe a Doppler redshift; in the latter, a Doppler blueshift.

By applying the equivalence principle, Einstein immediately recognized that the same shifts must apply if the acceleration were due to a gravitational field rather than a moving-and-accelerating spacecraft. If you’re seeing a photon rising up against a gravitational field, you’ll observe it to have a longer wavelength than when it was emitted, a gravitational redshift, and if you’re seeing a photon falling down into a gravitational field, you’ll observe that it has a shorter wavelength, or a gravitational blueshift.

(Credit: Vladi/Wikimedia Commons)

Once Einstein had developed the equivalence principle and the ideas that would become his general theory of relativity more comprehensively, in 1911 he was able to quantitatively predict the gravitational redshift of a photon rising out of the gravitational field of the Sun: a prediction that wouldn’t be verified until 1962, seven years after his death. The only robust astronomical observation that confirmed gravitational redshift during Einstein’s lifetime came in 1954, when astronomer Daniel Popper measured the gravitational redshift of the spectral lines coming from the white dwarf 40 Eridani B, and found strong agreement with the predictions of General Relativity.
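For a photon climbing out from the surface of a body of mass M and radius R, the weak-field approximation gives a fractional shift of roughly GM/(Rc²); that formula is standard physics rather than something spelled out in the article, but it shows why the solar measurement was so hard: for the Sun it amounts to only about two parts in a million.

# Approximate gravitational redshift from the Sun's surface: delta_lambda / lambda ~ G M / (R c^2)
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s
M_sun = 1.989e30    # kg
R_sun = 6.96e8      # m

z_grav = G * M_sun / (R_sun * c**2)
print(f"Fractional solar redshift: {z_grav:.2e}")           # about 2.1e-6
print(f"Equivalent Doppler velocity: {z_grav * c:.0f} m/s")  # roughly 640 m/s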

There was a direct laboratory experiment in 1959, however, that provided our best confirmation of gravitational redshift: the Pound-Rebka experiment. A particular isotope of iron, driven into an excited nuclear state, emits a gamma-ray photon of a specific wavelength when it de-excites. When that photon was sent upward, 22.5 meters, to an identical iron nucleus, the gravitational redshift changed the photon’s wavelength significantly enough that the higher nucleus couldn’t absorb it. Only if the emitting nucleus was “driven” with an additional velocity — i.e., if the photon received an energetic “kick” — would there be enough extra energy in the photon for it to get absorbed again. When the amount of energy that had to be added was compared with how much General Relativity predicted would be needed, the agreement was astounding: to within 1%.
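The size of the effect Pound and Rebka were chasing follows from the same weak-field logic: over a height h in Earth's gravity, the fractional frequency shift is roughly gh/c², which for a 22.5-meter tower is a few parts in 10^15; resolving something that tiny is why the recoil-free (Mössbauer) emission of iron-57 was essential. A quick check of the number, under that approximation (my own sketch, not the experiment's actual analysis):

# Fractional gravitational shift over a 22.5 m tower: delta_nu / nu ~ g * height / c^2
g = 9.81         # m/s^2, surface gravity
height = 22.5    # m, height of the tower used by Pound and Rebka
c = 2.998e8      # m/s

shift = g * height / c**2
print(f"Fractional frequency shift: {shift:.2e}")  # about 2.5e-15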

(Credit: Corbis Media/Harvard University)

Not everyone anticipated that this would be the case, however. Some thought that light wouldn’t respond to moving deeper into (or out of) a gravitational field by gaining (or losing) energy, as the equivalence principle demanded. After all, the equivalence principle’s main assertion was that objects that are accelerated by a gravitational force cannot be distinguished from objects that are accelerated by any other type of force: i.e., an inertial force. The tests we were able to perform of the equivalence principle, especially early on, only tested the equivalence of gravitational masses and inertial masses. Massless particles, like photons, had no such test for equivalence.

However, once the world knew about the existence of antimatter — which was indisputable by the early 1930s, both theoretically and experimentally — there was a simple thought experiment that one could have performed to show that it couldn’t be any other way. Matter and antimatter particles are the same in one way, as they have identical rest masses and, therefore (via E = mc²) identical rest mass energies. However, they have opposite electric charges (as well as other quantum numbers) from one another, and most spectacularly, if you collide a matter particle with its antimatter counterpart, they simply annihilate away into two photons of identical energy.

(Credit: Dmitri Pogosyan/University of Alberta)

So, let’s imagine that this is precisely what we have: two particles, one matter and one antimatter. Only, instead of having them here on the surface of Earth, we have them high up above the Earth’s surface: with lots of gravitational potential energy. If we hold that particle-antiparticle pair at rest and simply allow them to annihilate with one another, then the energy of each of the two photons produced will be given by the rest mass energy inherent to each member of the particle-antiparticle pair: E = mc². We’ll get two photons, and those photons will have well-defined energies.

But now, I’m going to set up two different scenarios for you, and you’re going to have to think about whether these two scenarios are permitted to have different outcomes or not.

  1. The particle-antiparticle pair annihilates up high in the gravitational field, producing two photons that eventually fall down, under the influence of gravity, to the surface of the Earth. Once there, we measure their combined energies.
  2. The particle-antiparticle pair are dropped from up high in the gravitational field, where they fall downward under the influence of gravity. Just before they hit the Earth’s surface, we allow them to annihilate, and then we measure their combined energies.
(Credit: Ray Shapp/Mike Luciuk; edits by E. Siegel)

Let’s think about what’s happening in the second situation first. As the two masses fall in a gravitational field, they pick up speed, and therefore gain kinetic energy. When they’re just about to reach the surface of the Earth and they annihilate, they will produce two photons.

Now, what will the energy, combined, of those two photons be?

Because energy is conserved, both the rest mass energy and the kinetic energy of that particle-antiparticle pair must go into the energy of those two photons. The total energy of the two photons will be given by the sum of the rest mass energies and the kinetic energies of the particle and the antiparticle.


It couldn’t be any other way, because energy must be conserved and there are no additional places for that energy to hide.

(Credit: NASA/Goddard Space Flight Center)

Now, let’s go back to the first situation: where the particle-antiparticle pair annihilates into two photons, and then the two photons fall deeper into the gravitational field until they reach the surface of the Earth.

Let’s once again ask: at the moment those two photons reach the surface of the Earth, what will their combined energy be?

Can you see, immediately, that a difference between these two situations isn’t allowed? If energy is conserved, then it cannot matter whether your matter-antimatter pair:

  • first annihilates into photons, and then those photons fall down deeper into a gravitational field, or
  • falls down deeper into the gravitational field, and then that pair annihilates into photons.

If the starting conditions of both scenarios are identical, if there are no processes that occur in one scenario that don’t also occur in the other, and if the two photons wind up in the same (or equivalent) state at the end of both scenarios, then they must also have identical energies. The only way for that to be true is if photons, when they fall deeper into a gravitational field, experience the Einstein shift due to gravitation: in this case, a gravitational blueshift.
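You can make the bookkeeping explicit with a toy numerical version of the two scenarios. In the weak-field (Newtonian) limit, each particle of mass m gains kinetic energy mgh when dropped through a height h, so scenario 2 delivers 2mc² + 2mgh to the photons; for scenario 1 to deliver the same total, each photon falling through that height must be blueshifted by a factor of (1 + gh/c²). The masses and the height below are arbitrary, illustrative choices:

# Toy check that the two annihilation scenarios force a gravitational blueshift.
c = 2.998e8      # m/s
g = 9.81         # m/s^2
m = 9.109e-31    # kg (electron/positron mass, an illustrative choice)
height = 100.0   # m (illustrative drop height)

# Scenario 2: drop first, then annihilate at the bottom.
energy_scenario_2 = 2 * m * c**2 + 2 * m * g * height

# Scenario 1: annihilate at the top, then let each photon fall and blueshift.
blueshift_factor = 1 + g * height / c**2
energy_scenario_1 = 2 * m * c**2 * blueshift_factor

print(f"Scenario 2 (drop, then annihilate): {energy_scenario_2:.6e} J")
print(f"Scenario 1 (annihilate, then blueshift): {energy_scenario_1:.6e} J")
print(f"Fractional difference: {abs(energy_scenario_1 - energy_scenario_2) / energy_scenario_2:.1e}")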

(Credit: A. Roura, Science, 2022)

In many ways, this thought experiment illustrates perhaps the greatest difference between the older, Newtonian conception of gravitation and the modern one, General Relativity, given to us by Albert Einstein. In Newtonian gravity, it’s only an object’s rest mass that leads to gravitation. The force of gravity is determined by each of the two masses that are exerting the force on one another, as well as the distance (squared) between them. But in General Relativity, all forms of energy matter for gravitation, and all objects — even massless ones — are subject to gravity’s effects.

Since photons carry energy, a photon falling deeper into a gravitational field must gain energy, and a photon climbing out of a gravitational field must lose energy in order to escape. While massive particles would gain or lose speed, a photon cannot; it must always move at the universal speed for all massless particles, the speed of light. As a result, the only way photons can gain or lose energy is by changing their wavelength: blueshifting as they gain energy, redshifting as they lose it.

If energy is to be conserved, then photons must experience not just Doppler shifts due to the relative motion between an emitting source and the observer, but gravitational redshifts and blueshifts — the Einstein shift — as well. Although it took approximately half a century to validate it observationally and experimentally, from a purely theoretical perspective, it never could have been any other way.


Not just light: Everything is a wave, including you

A concept known as “wave-particle duality” famously applies to light. But it also applies to all matter — including you.

Credit: Annelisa Leinbach, Claude Mellan

Key Takeaways

  • Quantum physics has redefined our understanding of matter.
  • In the 1920s, the wave-particle duality of light was extended to include all material objects, from electrons to you.
  • Cutting-edge experiments now explore how biological macromolecules can behave as both particle and wave.

Marcelo Gleiser


In 1905, the 26-year-old Albert Einstein proposed something quite outrageous: that light could be both wave and particle. This idea is just as weird as it sounds. How could something be two things that are so different? A particle is small and confined to a tiny space, while a wave is something that spreads out. Particles hit one another and scatter about. Waves refract and diffract. They add on or cancel each other out in superpositions. These are very different behaviors.

Hidden in translation

The problem with this wave-particle duality is that language has issues accommodating both behaviors coming from the same object. After all, language is built of our experiences and emotions, of the things we see and feel. We do not directly see or feel photons. We probe into their nature with experimental set-ups, collecting information through monitors, counters, and the like. 

The photons’ dual behavior emerges as a response to how we set up our experiment. If we have light passing through narrow slits, it will diffract like a wave. If it collides with electrons, it will scatter like a particle. So, in a way, it is our experiment, the question we are asking, that determines the physical nature of light. This introduces a new element into physics: the observer’s interaction with the observed. In more extreme interpretations, we could almost say that the intention of the experimenter determines the physical nature of what is being observed — that the mind determines physical reality. That’s really out there, but what we can say for sure is that light responds to the question we are asking in different ways. In a sense, light is both wave and particle, and it is neither.

This brings us to Bohr’s model of the atom, which we discussed a couple of weeks back. His model pins electrons orbiting the atomic nucleus to specific orbits. The electron can only be in one of these orbits, as if it were set on a train track. It can jump between orbits, but it cannot be in between them. How does that work, exactly? To Bohr, it was an open question. The answer came from a remarkable feat of physical intuition, and it sparked a revolution in our understanding of the world.

The wave nature of a baseball

In 1924, Louis de Broglie, a historian turned physicist, showed quite spectacularly that the electron’s step-like orbits in Bohr’s atomic model are easily understood if the electron is pictured as consisting of standing waves surrounding the nucleus. These are waves much like the ones we see when we shake a rope that is attached at the other end. In the case of the rope, the standing wave pattern appears due to the constructive and destructive interference between waves going and coming back along the rope. For the electron, the standing waves appear for the same reason, but now the electron wave closes on itself like an ouroboros, the mythic serpent that swallows its own tail. When we shake our rope more vigorously, the pattern of standing waves displays more peaks. An electron at higher orbits corresponds to a standing wave with more peaks.

With Einstein’s enthusiastic support, de Broglie boldly extended the notion of wave-particle duality from light to electrons and, by extension, to every moving material object. Not only light, but matter of any kind was associated with waves. 

De Broglie offered a formula, now known as the de Broglie wavelength, to compute the wavelength of any matter with mass m moving at velocity v. He associated a wavelength λ with m and v — and thus with momentum p = mv — according to the relation λ = h/p, where h is Planck’s constant. The formula can be refined for objects moving close to the speed of light.

As an example, a baseball moving at 70 km per hour has an associated de Broglie wavelength of about 22 billionths of a trillionth of a trillionth of a centimeter (or 2.2 x 10^-32 cm). Clearly, not much is waving there, and we are justified in picturing the baseball as a solid object. In contrast, an electron moving at one-tenth the speed of light has a wavelength about half the size of a hydrogen atom (more precisely, half the size of the most probable distance between an atomic nucleus and an electron at its lowest energy state).
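Here is a minimal sketch of those two numbers, using λ = h/(mv) with standard constants; the baseball mass is an assumption (a regulation ball is about 0.145 kg), and the result reproduces the roughly 10^-32 cm and roughly half-a-Bohr-radius figures quoted above:

# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34                # J*s, Planck's constant
m_baseball = 0.145           # kg (assumed regulation baseball mass)
v_baseball = 70e3 / 3600.0   # 70 km/h converted to m/s
m_electron = 9.109e-31       # kg
v_electron = 0.1 * 2.998e8   # one-tenth the speed of light
bohr_radius = 5.29e-11       # m

lam_ball = h / (m_baseball * v_baseball)
lam_electron = h / (m_electron * v_electron)

print(f"Baseball: {lam_ball * 100:.1e} cm")                      # ~2e-32 cm
print(f"Electron: {lam_electron / bohr_radius:.2f} Bohr radii")  # ~0.46, about half a hydrogen atom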


While the wave nature of a moving baseball is irrelevant to understanding its behavior, the wave nature of the electron is essential to understand its behavior in atoms. The crucial point, though, is that everything waves. An electron, a baseball, and you.

Quantum biology

De Broglie’s remarkable idea has been confirmed in countless experiments. In college physics classes we demonstrate how electrons passing through a crystal diffract like waves, with superpositions creating dark and bright spots due to destructive and constructive interference. Anton Zeilinger, who shared the physics Nobel prize this year, has championed diffracting ever-larger objects, from the soccer-ball-shaped C60 molecule (with 60 carbon atoms) to biological macromolecules.

The question is how life under such a diffraction experiment would behave at the quantum level. Quantum biology is a new frontier, one where the wave-particle duality plays a key role in the behavior of living beings. Can life survive quantum superposition? Can quantum physics tell us something about the nature of life?

December 24th 2022

About Antimicrobial Resistance


Antimicrobial resistance happens when germs like bacteria and fungi develop the ability to defeat the drugs designed to kill them. That means the germs are not killed and continue to grow. Resistant infections can be difficult, and sometimes impossible, to treat.

Antimicrobial resistance is an urgent global public health threat, killing at least 1.27 million people worldwide and associated with nearly 5 million deaths in 2019. In the U.S., more than 2.8 million antimicrobial-resistant infections occur each year. More than 35,000 people die as a result, according to CDC’s 2019 Antibiotic Resistance (AR) Threats Report. When Clostridioides difficile—a bacterium that is not typically resistant but can cause deadly diarrhea and is associated with antimicrobial use—is added to these, the U.S. toll of all the threats in the report exceeds 3 million infections and 48,000 deaths.

December 21st 2022

Why 21 cm is the magic length for the Universe

Photons come in every wavelength you can imagine. But one particular quantum transition makes light at precisely 21 cm, and it’s magical.

This map of the galaxy Messier 81, constructed from data taken with the Very Large Array, maps out this spiral-armed, star-forming galaxy in 21 centimeter emissions. The spin-flip transition of hydrogen, which emits light at precisely 21 centimeters in wavelength, is in many ways the most important length for radiation in the entire Universe. (Credit: NRAO/AUI/NSF)

Key Takeaways

  • Across the observable Universe, there are some 10^80 atoms, and most of them are simple hydrogen: made of just one proton and one electron each.
  • Every time a hydrogen atom forms, there’s a 50/50 shot that the proton and electron will have their spins aligned, which is a slightly higher-energy state than if they’re not aligned.
  • The quantum transition from the aligned state to the anti-aligned state is one of the most extreme transitions of all, and it produces light of precisely 21 cm in wavelength: arguably the most important length in the Universe.

Ethan Siegel

Share Why 21 cm is the magic length for the Universe on Facebook

Share Why 21 cm is the magic length for the Universe on Twitter

Share Why 21 cm is the magic length for the Universe on LinkedIn

In our Universe, quantum transitions are the governing rule behind every nuclear, atomic, and molecular phenomenon. Unlike the planets in our Solar System, which could stably orbit the Sun at any distance if they possessed the right speed, the protons, neutrons, and electrons that make up all the conventional matter we know of can only bind together in a specific set of configurations. These possibilities, although numerous, are finite in number, as the quantum rules that govern electromagnetism and the nuclear forces restrict how atomic nuclei and the electrons that orbit them can arrange themselves.

In all the Universe, the most common atom of all is hydrogen, with just one proton and one electron. Wherever new stars form, hydrogen atoms become ionized, becoming neutral again if those free electrons can find their way back to a free proton. Although the electrons will typically cascade down the allowed energy levels into the ground state, that normally produces only a specific set of infrared, visible, and ultraviolet light. But more importantly, a special transition occurs in hydrogen that produces light of about the size of your hand: 21 centimeters (about 8¼”) in wavelength. That’s a magic length, and it just might someday unlock the darkest secrets hiding out in the recesses of the Universe.

(Credit: Gianni Bernardi, via his AIMS talk)

When it comes to the light in the Universe, wavelength is the one property that you can count on to reveal how that light was created. Even though light comes to us in the form of photons — individual quanta that, collectively, make up the phenomenon we know as light — there are two very different classes of quantum process that create the light that surrounds us: continuous ones and discrete ones.

A continuous process is something like the light emitted by the photosphere of the Sun. It’s a dark object that’s been heated up to a certain temperature, and it radiates light of all different, continuous wavelengths as dictated by that temperature: what physicists know as blackbody radiation.
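One handy property of such a blackbody spectrum is that the temperature alone fixes where it peaks, via Wien's displacement law, λ_peak = b/T with b ≈ 2.898 × 10^-3 m·K. That law is standard physics rather than something the article states, but a quick sketch for a photosphere-like temperature shows why sunlight peaks in the visible range:

# Wien's displacement law: lambda_peak = b / T
b = 2.898e-3          # m*K, Wien's displacement constant
T_photosphere = 5772  # K, approximate solar photosphere temperature

lambda_peak = b / T_photosphere
print(f"Peak wavelength: {lambda_peak * 1e9:.0f} nm")  # about 502 nm, in the visible range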

A discrete process, however, doesn’t emit light of a continuous set of wavelengths, but rather only at extremely specific wavelengths. A good example of that is the light absorbed by the neutral atoms present within the extreme outer layers of the Sun. As the blackbody radiation strikes those neutral atoms, a few of those photons will have just the right wavelengths to be absorbed by the electrons within the neutral atoms they encounter. When we break sunlight up into its individual wavelengths, the various absorption lines present against the backdrop of continuous, blackbody radiation reveal both of these processes to us.

(Credit: N.A.Sharp, NOAO/NSO/Kitt Peak FTS/AURA/NSF)

Each individual atom has its properties primarily defined by its nucleus, made up of protons (which determine its charge) and neutrons (which, combined with protons, determine its mass). Atoms also have electrons, which orbit the nucleus and occupy a specific set of energy levels. In isolation, each atom will come to exist in the ground state: where the electrons cascade down until they occupy the lowest allowable energy levels, limited only by the quantum rules that determine the various properties that electrons are and aren’t allowed to possess.

Electrons can occupy the ground state — the 1s orbital — of an atom until it’s full; that orbital can hold two electrons. The next energy level up consists of spherical (the 2s) and perpendicular (the 2p) orbitals, which can hold two and six electrons, respectively, for a total of eight. The third energy level can hold 18 electrons: 3s (with two), 3p (with six), and 3d (with ten), and the pattern continues on upward. In general, the “upward” transitions rely on the absorption of a photon of a particular wavelength, while the “downward” transitions result in the emission of photons of the exact same wavelengths.
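Those capacities follow the simple pattern 2n²: the nth principal level contains n² orbitals, each holding two electrons of opposite spin. A two-line check (my own illustration):

# Electron capacity of the nth principal energy level: 2 * n^2
for n in range(1, 5):
    print(f"Level n={n}: holds {2 * n**2} electrons")  # 2, 8, 18, 32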

(Credit: OrangeDog and Szdori/Wikimedia Commons)

That’s the basic structure of an atom, sometimes referred to as “coarse structure.” When you transition from the third energy level to the second energy level in a hydrogen atom, for example, you produce a photon that’s red in color, with a wavelength of precisely 656.3 nanometers: right in the visible light range of human eyes.
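That number is the hydrogen Balmer-alpha line, and you can recover it from the Rydberg formula, 1/λ = R_H (1/n1² - 1/n2²), with n1 = 2 and n2 = 3; the formula is standard textbook physics rather than something derived in the article:

# Balmer-alpha (n=3 -> n=2) wavelength from the Rydberg formula
R_H = 1.0968e7   # 1/m, Rydberg constant for hydrogen

inv_lambda = R_H * (1 / 2**2 - 1 / 3**2)
wavelength_nm = 1e9 / inv_lambda
print(f"H-alpha wavelength: {wavelength_nm:.1f} nm")  # about 656.5 nm in vacuum (656.3 nm in air)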

But there are very, very slight differences between the exact, precise wavelength of a photon that gets emitted if you transition from:

  • the third energy level down to either the 2s or the 2p orbital,
  • an energy level where the spin angular momentum and the orbital angular momentum are aligned to one where they’re anti-aligned,
  • or one where the nuclear spin and the electron spin are aligned versus anti-aligned.

There are rules as to what’s allowed versus what’s forbidden in quantum mechanics as well, such as the fact that you can transition an electron between a d-orbital and a p-orbital, or between a p-orbital and an s-orbital, but not directly from an s-orbital to another s-orbital.

The slight differences in energy between different types of orbital within the same energy level are known as an atom’s fine structure, arising from the interaction between the spin of each particle within an atom and the orbital angular momentum of the electrons around the nucleus. It causes a shift in wavelength of less than 0.1%: small but measurable and significant.

(Credit: A. Fischer et al., Journal of the Acoustical Society of America, 2013)

But in quantum mechanics, even “forbidden” transitions can sometimes occur, owing to the phenomenon of quantum tunneling. Sure, you might not be able to transition from an s-orbital to another s-orbital directly, but if you can:

  • transition from an s-orbital to a p-orbital and then back to an s-orbital,
  • transition from an s-orbital to a d-orbital and then back to an s-orbital,
  • or, more generally, transition from an s-orbital to any other allowable state and then back to an s-orbital,

then that transition can occur. The only weird thing about quantum tunneling is that you don’t need enough energy for a “real” transition to the intermediate state to actually take place; it can happen virtually, so that you only see the final state emerge from the initial state: something that would be forbidden without the invocation of quantum tunneling.

This allows us to go beyond mere “fine structure” and on to hyperfine structure, where the spin of the atomic nucleus and the spin of the electron that orbits it can flip from an “aligned” state, in which the spins both point in the same direction even though the electron is in the lowest-energy, ground (1s) state, to an anti-aligned state, where the spins are reversed.

(Credit: SKA Organisation)

The most famous of these transitions occurs in the simplest type of atom of all: hydrogen. With just one proton and one electron, every time you form a neutral hydrogen atom and the electron cascades down to the ground (lowest-energy) state, there’s a 50% chance that the spins of the central proton and the electron will be aligned, with a 50% chance that the spins will be anti-aligned.

If the spins are anti-aligned, that’s truly the lowest-energy state; there’s nowhere to go via transition that will result in the emission of energy at all. But if the spins are aligned, it becomes possible to quantum tunnel to the anti-aligned state: even though the direct transition process is forbidden, tunneling allows you to go straight from the starting point to the ending point, emitting a photon in the process.

This transition, because of its “forbidden” nature, takes an extremely long time to occur: approximately 10 million years for the average atom. However, this long lifetime of the slightly excited, aligned case for a hydrogen atom has an upside to it: the photon that gets emitted, at 21 centimeters in wavelength and with a frequency of 1420 megahertz, is intrinsically extremely narrow. In fact, it’s the narrowest, most precise transition line known in all of atomic and nuclear physics!
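To put rough numbers on how extreme this line is: the photon carries only a few micro-electron-volts of energy, and a mean lifetime of roughly ten million years implies a natural linewidth of order 10^-16 Hz on a ~1.4 GHz line. A sketch using the ~10-million-year figure quoted above (the lifetime-to-linewidth relation is the standard one, not something worked out in the article):

# The hydrogen spin-flip line: wavelength, photon energy, and natural linewidth
import math

h = 6.626e-34         # J*s, Planck's constant
c = 2.998e8           # m/s, speed of light
eV = 1.602e-19        # joules per electron-volt
nu = 1420.4e6         # Hz, spin-flip frequency
tau = 10e6 * 3.156e7  # s, ~10-million-year mean lifetime (the article's figure)

print(f"Wavelength: {c / nu * 100:.2f} cm")                    # ~21.1 cm
print(f"Photon energy: {h * nu / eV * 1e6:.2f} micro-eV")      # ~5.9 micro-eV
print(f"Natural linewidth: {1 / (2 * math.pi * tau):.1e} Hz")  # ~5e-16 Hz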

(Credit: J.Dickey/NASA SkyView)

If you were to go all the way back to the early stages of the hot Big Bang, before any stars had formed, you’d discover that a whopping 92% of the atoms in the Universe were exactly this species of hydrogen: with one proton and one electron in them. As soon as neutral atoms stably form — just a few hundred thousand years after the Big Bang — these neutral hydrogen atoms form with a 50/50 chance of having aligned versus anti-aligned spins. The ones that form anti-aligned will remain so; the ones that form with their spins aligned will undergo this spin-flip transition, emitting radiation of 21 centimeters in wavelength.

Although it’s never yet been done, this gives us a tremendously provocative way to measure the early Universe: by finding a cloud of hydrogen-rich gas, even one that’s never formed stars, we could look for this spin-flip signal — accounting for the expansion of the Universe and the corresponding redshift of the light — to measure the atoms in the Universe from the earliest times ever seen. The only “broadening” to the line we’d expect to see would come from thermal and kinetic effects: from the non-zero temperature and the gravitationally-induced motion of the atoms that emit those 21 centimeter signals.

(Credit: Swinburne University of Technology)

In addition to those primordial signals, 21 centimeter radiation arises as a consequence whenever new stars are produced. Every time that a star-forming event occurs, the more massive newborn stars produce large amounts of ultraviolet radiation: radiation that’s energetic enough to ionize hydrogen atoms. All of a sudden, space that was once filled with neutral hydrogen atoms is now filled with free protons and free electrons.

But those electrons are going to eventually be captured, once again, by those protons, and when there’s no longer enough ultraviolet radiation to ionize them over and over again, the electrons will once again sink down to the ground state, where they’ll have a 50/50 chance of being aligned or anti-aligned with the spin of the atomic nucleus.

Again, that same radiation — of 21 centimeters in wavelength — gets produced, and every time we measure that 21 centimeter wavelength localized in a specific region of space, even if it gets redshifted by the expansion of the Universe, what we’re seeing is evidence of recent star-formation. Wherever star-formation occurs, hydrogen gets ionized, and whenever those atoms become neutral and de-excite again, this specific-wavelength radiation persists for tens of millions of years.

(Credit: Tiltec/Wikimedia Commons)

If we had the capability of sensitively mapping this 21 centimeter emission in all directions and at all redshifts (i.e., distances) in space, we could literally uncover the star-formation history of the entire Universe, as well as the de-excitation of the hydrogen atoms first formed in the aftermath of the hot Big Bang. With sensitive enough observations, we could answer questions like:


  • Are there stars present in dark voids in space below the threshold of what we can observe, waiting to be revealed by their de-exciting hydrogen atoms?
  • In galaxies where no new star-formation is observed, is star-formation truly over, or are there low-levels of new stars being born, just waiting to be discovered from this telltale signature of hydrogen atoms?
  • Are there any events that heat up and lead to hydrogen ionization prior to the formation of the first stars, and are there star-formation bursts that exist beyond the capabilities of even our most powerful infrared observatories to observe directly?

By measuring light of precisely the needed wavelength — 21.106114053 centimeters, plus whatever lengthening effects arise from the cosmic expansion of the Universe — we could reveal the answers to all of these questions and more. In fact, this is one of the main science goals of LOFAR: the low-frequency array, and it presents a strong science case for putting an upscaled version of this array on the radio-shielded far side of the Moon.
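Because cosmic expansion stretches that wavelength by a factor of (1 + z), the rest-frame 21 cm line from different epochs arrives at very different radio frequencies, which is what sets the frequency range an array like LOFAR has to cover. A small sketch; the redshifts chosen are purely illustrative:

# Observed wavelength and frequency of the redshifted 21 cm line
rest_wavelength_cm = 21.106
rest_frequency_mhz = 1420.4

for z in [0, 2, 6, 10, 20]:  # illustrative redshifts
    obs_wavelength = rest_wavelength_cm * (1 + z)
    obs_frequency = rest_frequency_mhz / (1 + z)
    print(f"z = {z:2d}: {obs_wavelength:7.1f} cm, {obs_frequency:7.1f} MHz")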

(Credit: Saptarshi Bandyopadhyay)

Of course, there’s another possibility that takes us far beyond astronomy when it comes to making use of this important length: creating and measuring enough spin-aligned hydrogen atoms in the lab to detect this spin-flip transition directly, in a controlled fashion. The transition takes about 10 million years to “flip” on average, which means we’d need around a quadrillion (10^15) prepared atoms, kept still and cooled to cryogenic temperatures, to measure not only the emission line, but the width of it. If there are phenomena that cause an intrinsic line-broadening, such as a primordial gravitational wave signal, such an experiment would, quite remarkably, be able to uncover its existence and magnitude.
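That "around a quadrillion" estimate follows directly from the lifetime: with N prepared atoms that each flip on a roughly 10-million-year timescale, you expect about N divided by the lifetime (in seconds) flips per second. A quick check under those assumptions:

# Expected spin-flip photons per second from N prepared, spin-aligned hydrogen atoms
N = 1e15              # a quadrillion atoms, as suggested in the text
tau = 10e6 * 3.156e7  # s, ~10-million-year mean lifetime

rate = N / tau
print(f"Expected flips per second: {rate:.1f}")  # roughly 3 per second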

In all the Universe, there are only a few known quantum transitions with the precision inherent to the hyperfine spin-flip transition of hydrogen, resulting in the emission of radiation that’s 21 centimeters in wavelength. If we want to identify ongoing and recent star-formation across the Universe, the first atomic signals even before the first stars were formed, or the relic strength of yet-undetected gravitational waves left over from cosmic inflation, it becomes clear that the 21 centimeter transition is the most important probe we have in all the cosmos. In many ways, it’s the “magic length” for uncovering some of nature’s greatest secrets.


Did the Milky Way lose its black hole?

At four million solar masses, the Milky Way’s supermassive black hole is quite small for a galaxy its size. Did we lose the original?

Today, the Milky Way galaxy possesses a supermassive black hole of 4.3 million solar masses. While this might seem tremendous, it’s unusually small for a galaxy as massive as our own. Is it possible that, like other galaxies before us, an earlier supermassive black hole was ejected from the galactic core, and all that remains is what nature has assembled in the aftermath? (Credit: Tim Jones/McDonald Observatory)

Key Takeaways

  • While many Milky Way-sized galaxies have supermassive black holes that are a hundred million solar masses or more, ours weighs in at just 4 million Suns.
  • At the same time, we have some very good evidence that the Milky Way wasn’t a newcomer, but is more than 13 billion years old: almost as ancient as the Universe itself.
  • Rather than being on the unlucky side, our supermassive black hole might be the second of its kind: only growing up after the original was ejected. It’s a wild idea, but science may yet validate it.

Ethan Siegel


At the heart of the Milky Way, a supermassive behemoth lurks. Formed over billions of years from a combination of merging black holes and the infalling matter that grew it, this mammoth black hole now weighs in at four million solar masses. It’s the largest black hole in the entire galaxy, and we’d have to travel millions of light-years away to find one that was more massive. From its perch in the galactic center, Sagittarius A* possesses the largest event horizon of any black hole we can observe from our current position in space.

And yet, despite how large, massive, and impressive this central black hole is, that’s only in comparison to the other black holes within our galaxy. When we look out at other large, massive galaxies — ones that are comparable to the Milky Way in size — we actually find that our supermassive black hole is on the rather small, low-mass side. While it’s possible that we’re simply a bit below average in the black hole department, there’s another potential explanation: perhaps the Milky Way once had a larger, truly supermassive black hole at its core, but it was ejected entirely a long time ago. What remains might be nothing more than an in-progress rebuilding project at the center of the Milky Way. Here’s the science of why we should seriously consider that our central, supermassive black hole might not be our galaxy’s original one.

(Credit: ESO/MPE)

When we take a look around at the galaxies in our vicinity, we find that they come in a wide variety of sizes, masses and shapes. As far as spiral galaxies go, the Milky Way is fairly typical of large, modern spirals, with an estimated 400 billion stars, a diameter that’s a little bit over 100,000 light-years, and populations of stars that date back more than 13 billion years: just shortly after the time of the Big Bang itself.

While the largest black holes of all, often exceeding billions or even tens of billions of solar masses, are found overwhelmingly in the most massive galaxies we know of (giant elliptical galaxies), other comparable spirals generally have larger, more massive black holes than our own. For example:

  • The Sombrero galaxy, about 30% of the Milky Way’s diameter, has a ~1 billion solar mass black hole.
  • Andromeda, the closest large galaxy to the Milky Way and only somewhat larger, has a ~230 million solar mass black hole.
  • NGC 5548, with an active nucleus but bright spiral arms, has a central black hole of around 70 million solar masses, comparable to those of the nearby spirals Messier 81 and Messier 58.
  • And even Messier 82, much smaller and lower in mass than our own Milky Way (and an interacting neighbor of Messier 81), has a black hole of ~30 million solar masses.
M81 group (Credit: R. Gendler, R. Croman, R. Colombari; Acknowledgement: R. Jay GaBany; VLA Data: E. de Block (ASTRON))

In fact, of all of the spiral or elliptical galaxies known to host supermassive black holes, the Milky Way’s is the least massive one known. Additionally, only a few substantial galaxies have supermassive black holes that are even in the same ballpark as Sagittarius A* at the center of the Milky Way. A few spirals that are smaller than the Milky Way, such as Messier 61, NGC 7469, Messier 108, and NGC 3783, have black holes between 5 and 30 million solar masses. These are some of the smallest supermassive black holes known, and while larger than ours, they’re at least comparable to the Milky Way’s 4.3 million solar mass central black hole.

Why would this be the case? There are really only two options.

  1. The first option is that there are many, many galaxies out there, with a huge range of black hole masses they can attain. We preferentially see the ones that are easiest to see, which are the most massive ones; there may be plenty of lower-mass black holes out there, and ours just happens to be one of them.
  2. The second option, however, is that we’re actually well below the cosmic average in terms of the mass of our supermassive black hole, and there’s a physical reason, related to the evolution of our galaxy, that explains it.
OJ 287 (Credit: NASA/JPL-Caltech/R. Hurt (IPAC))

We’re still learning, of course, how supermassive black holes form, grow, and evolve in the Universe. We’re still attempting to figure out all of the steps for how, when galaxies merge, their supermassive black holes can successfully inspiral and merge on short enough timescales to match what we observe. We’ve only recently discovered the first object caught in the process of transitioning from a galaxy into a quasar, an important step in the evolution of supermassive black holes. And from observing the earliest galaxies and quasars of all, we find that these supermassive black holes can grow up remarkably fast: reaching masses of ~1 billion solar masses in just the first 700 million years of cosmic evolution.
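
To put “remarkably fast” in perspective, it helps to count doublings. The 100-solar-mass seed used below is my own illustrative assumption; the final mass and timescale are the figures quoted above.

```python
# How fast is "remarkably fast"? Count the doublings needed to go from an
# assumed 100-solar-mass seed to a billion-solar-mass quasar in 700 Myr.

import math

seed_mass_msun = 100.0   # assumed seed from an early, massive star
final_mass_msun = 1.0e9  # as quoted for the earliest known quasars
elapsed_myr = 700.0      # cosmic time available

doublings = math.log2(final_mass_msun / seed_mass_msun)  # ~23 doublings
doubling_time_myr = elapsed_myr / doublings              # ~30 Myr per doubling, at most
print(f"{doublings:.1f} doublings -> at most ~{doubling_time_myr:.0f} Myr per doubling")
```

In other words, such a black hole would have to double its mass roughly every 30 million years, on average, for the entire stretch: sustained, near-continuous feeding with essentially no quiet periods.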

In theory, the story of how they form is straightforward.

  • The earliest stars are very massive compared to the majority of stars that form today, and many of them will form black holes of tens, hundreds, or possibly even 1000 or more solar masses.
  • These black holes won’t just feed on the gas, dust, and other matter that’s present, but will sink to the galaxy’s center and merge together on cosmically short timescales.
  • As additional stars form, more and more matter gets “funneled” into the galactic center, growing these black holes further.
  • And when intergalactic material accretes onto the galaxy — as well as when galaxies merge together — it typically results in a feeding frenzy for the black hole, growing its mass even more substantially.
(Credit: F. Wang, AAS237)

Of course, we don’t know for certain how valid this story is. We have precious few high-quality observations of host galaxies and their black holes at those early epochs, and even those only give us a few specific snapshots. If the Hubble Space Telescope and the observatories of its era have shown us what the Universe looks like, it’s fair to say that the major science goal of the James Webb Space Telescope will be to teach us how the Universe grew up. In concert with large optical and infrared ground-based observatories, as well as giant radio arrays such as ALMA, we’ll have plenty of opportunities to either verify, refine, or overthrow our current picture of supermassive black hole formation and growth.

For our Milky Way, we have some pretty solid evidence that at least five significant galactic mergers occurred over the past ~11 billion years of our cosmic history: once the original seed galaxy that our modern Milky Way would grow into was already firmly established. By that point in cosmic history, based upon how galaxies grow, we would expect to have a supermassive black hole that was at least in the tens-of-millions of solar masses range. With the passage of more time, we’d expect that the black hole would only have grown larger.

Kraken (Credit: J. M. Diederik Kruijssen et al., MNRAS, 2020)

And yet today, some ~11 billion years later, our supermassive black hole is merely 4.3 million solar masses: less than 2% the mass of Andromeda’s supermassive black hole. It’s enough to make you wonder, “What is it, exactly, that happened (or didn’t happen) to us that resulted in our central black hole being so relatively small?”

It’s worth emphasizing that it is eminently possible that the Milky Way and our central black hole are simply mundane: perhaps nothing remarkable happened, and we’re simply able to make good enough observations, from our close proximity to Sagittarius A*, to determine its mass accurately. Perhaps many of the central black holes that we think are so massive will turn out to be smaller than our present technology lets us realize.

But there’s a cosmic lesson that’s always worth remembering: at any moment, whenever we look out at an object in the Universe, we can only see the features whose evidence has survived until the present. This is true of our Solar System, which may have had more planets in the distant past, and it’s true of our galaxy, which may have had a much more massive central black hole a long time ago as well.

https://youtube.com/watch?v=0RYdxP-nLpw

The Solar System, despite the tremendous difference in scale in comparison to the galaxy, is actually an excellent analogy. Now that we’ve discovered more than 5000 exoplanets, we know that our Solar System’s configuration — with all of the inner planets being small and rocky and all of the outer planets being large and gaseous — is not representative of what’s most common in the Universe. It’s likely that there was a fifth gas giant at one point, that it was ejected, and that the migration of the gas giants cleared out whatever early planets were present in the young Solar System.

Perhaps the reason we have Mercury, Venus, Earth, and Mars is because most of the material for forming planets was already used up in the inner part of the Solar System by the time their seeds came along, and this was as large as nature would let them get in the aftermath of that early “clearing out” event.

Well, it’s also plausible that the Milky Way formed a supermassive black hole the way that we believe most galaxies did, and that at some point we had a rather large one compared to what we see today. What could have happened? An event involving a large amount of gravitation — such as the merger of another galaxy or a strong enough “kick” from a nearby gravitational wave event — could have ejected it.

(Credit: X-ray: NASA/CXC/SAO/F.Civano et al; Optical: NASA/STScI; Optical (wide field): CFHT, NASA/STScI)

“Hold on,” you might object, “is there any evidence that supermassive black holes do get kicked out of galaxies?”

I’m glad you asked, because up until a decade ago, there wasn’t any. But back in 2012, astronomers were studying a system known as CID-42 in a galaxy some 4 billion light-years away. Previously, Hubble observations had revealed two distinct, compact sources that were observable in visible light: one at the center of the galaxy and one offset from the center.

Following up with NASA’s Chandra X-ray Observatory, astronomers found a bright X-ray source consistent with heating from at least one supermassive black hole. Using the highest-resolution camera aboard Chandra, they found that the X-rays were coming from only one of the two sources, not both. Meanwhile, follow-up optical data showed that the two sources are moving away from one another at some 5 million kilometers per hour (~3 million miles per hour): well in excess of the escape velocity for a galaxy of that mass. As Dr. Francesca Civano, leader of the study, said back in 2012:

“It’s hard to believe that a supermassive black hole weighing millions of times the mass of the sun could be moved at all, let alone kicked out of a galaxy at enormous speed. But these new data support the idea that gravitational waves – ripples in the fabric of space first predicted by Albert Einstein but never detected directly – can exert an extremely powerful force.”

(Credit: V. Varma/Max Planck Institute for Gravitational Physics)

Recently, even though the science of gravitational wave astronomy is only about five years old as of this writing, we got observational confirmation that such black hole “kicks” from gravitational waves aren’t particularly rare at all. Published on May 12, 2022, a study led by Dr. Vijay Varma showed that a black hole merger detected in 2020, GW200129, resulted in the post-merger black hole receiving a tremendously fast “kick” of about 1500 km/s, owing to the relative properties of the progenitor black holes. For comparison, you only need to move at about one-third that speed to escape from the Milky Way’s gravitational pull.
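
It helps to put the quoted speeds on a common footing. The Milky Way escape speed used below (~550 km/s near the Sun’s orbit) is my own assumed comparison value; the other two numbers come from the text above.

```python
# Comparing the quoted recoil and kick speeds with what it takes to
# escape the Milky Way. The ~550 km/s escape speed is an assumed,
# commonly quoted value near the Sun's orbit.

cid42_recoil_kms = 5.0e6 / 3600.0   # "5 million km/h" from CID-42 -> ~1,400 km/s
gw200129_kick_kms = 1500.0          # kick inferred for the GW200129 remnant
mw_escape_kms = 550.0               # assumed local Milky Way escape speed

print(f"CID-42 recoil:          ~{cid42_recoil_kms:.0f} km/s")
print(f"GW200129 kick:          ~{gw200129_kick_kms:.0f} km/s")
print(f"Milky Way escape speed: ~{mw_escape_kms:.0f} km/s")
```

Both measured kicks comfortably exceed that escape speed, which is the whole point: a recoiling supermassive black hole really can leave its host galaxy for good.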


We’ve now seen fast-moving black holes of both the stellar-mass and supermassive varieties. We’ve also seen how mergers can impart these kicks to black holes, particularly when the gravitational waves are emitted predominantly in one direction, which arises when the merging black holes have unequal masses or spins and undergo large precessions.

Putting these pieces together, it’s entirely reasonable that one of the Milky Way’s mergers over the past ~11 billion years resulted in the ejection of its initial central, supermassive black hole. What remains, today, may be merely the result of what it’s been able to regrow in the time that’s passed since.

(Credit: Event Horizon Telescope collaboration)

It cannot be emphasized enough what a remarkable achievement it is that the Event Horizon Telescope collaboration has, at long last, imaged the supermassive black hole at the center of the Milky Way: Sagittarius A*. It confirmed, to better than 95% precision, at least one thing that we already knew from measuring the motions of the stars in the galactic center’s vicinity: that there’s an object there weighing in at an impressive 4.3 million solar masses. Nevertheless, as large as that value is, it sits firmly on the low end for a supermassive black hole.

In all known galaxies of comparable size to the Milky Way, there is no other that has a supermassive black hole of such a low mass as our own. Although there’s still so much remaining to learn about black holes, including how they form, grow, and co-evolve with their host galaxies, one tantalizingly plausible explanation is that a major black hole ejection happened relatively late in the game here in our home galaxy. Even though all we have left are the survivors, and the long-ago ejected behemoth may now be tens of millions of light-years away, it’s possible that this is one aspect of our cosmic history that may someday fall within our reach.
