Science

Science is firstly a methodology. It is not firstly a body of proven knowledge, as rather pompous, self-important Covid-19 ‘ex-spurts’ and Little Greta would have us believe. Scientists should proceed from observation to hypothesis, prediction and experiment, and thus to cautious conclusion.

An honest scientist sets out to disprove the hypothesis, not to prove it. But in these days of chasing grants and accolades, this golden rule is too frequently ignored, often with serious harm done.

I suspect that is the case with the highly politicised Covid-19 pronouncements of ‘The Science.’ In the process, unscrupulous nonentities are advanced up the science career ladder, where they feed off the work of brighter PhD students and, in many cases, claim the credit.

In the case of Covid, the whole outpouring of how to deal with, and exaggerate, this problem brings such so-called experts to the fore, with guesswork passed off as science and no account taken of the wider harm done to health and mass survival. R.J. Cook

Previous Science material is now on the Science Archive Page. There will be more to come on here.

A giant black hole keeps evading detection and scientists can’t explain it January 13th 2021

By Mike Wall

Scientists are stumped by this black hole mystery.

This composite image of the galaxy cluster Abell 2261 contains optical data from NASA’s Hubble Space Telescope and Japan’s Subaru Telescope showing galaxies in the cluster and in the background, and data from NASA’s Chandra X-ray Observatory showing hot gas (colored pink) pervading the cluster. The middle of the image shows the large elliptical galaxy in the center of the cluster. (Image: © X-ray: NASA/CXC/Univ of Michigan/K. Gültekin; Optical: NASA/STScI/NAOJ/Subaru; Infrared: NSF/NOAO/KPNO)

An enormous black hole keeps slipping through astronomers’ nets.

Supermassive black holes are thought to lurk at the hearts of most, if not all, galaxies. Our own Milky Way has one as massive as 4 million suns, for example, and M87’s — the only black hole ever imaged directly — tips the scales at a whopping 6.5 billion solar masses.

The big galaxy at the core of the cluster Abell 2261, which lies about 2.7 billion light-years from Earth, should have an even larger central black hole — a light-gobbling monster that weighs as much as 3 billion to 100 billion suns, astronomers estimate from the galaxy’s mass. But the exotic object has evaded detection so far.
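
As a rough illustration of where an estimate like that comes from (the numbers below are assumptions for illustration, not figures from the study), central black holes in big elliptical galaxies are observed to carry very roughly 0.1 to 0.5 per cent of their host bulge's stellar mass:

    # Order-of-magnitude sketch only. The bulge mass and the black-hole-to-bulge
    # ratio are assumed illustrative values, not numbers from the Abell 2261 study.
    bulge_stellar_mass_suns = 1e13                    # assumed mass of a giant elliptical, in suns
    bh_fraction_low, bh_fraction_high = 1e-3, 5e-3    # rough empirical range (~0.1% to 0.5%)

    low = bulge_stellar_mass_suns * bh_fraction_low
    high = bulge_stellar_mass_suns * bh_fraction_high
    print(f"expected central black hole: ~{low:.0e} to {high:.0e} suns")
    # prints roughly 1e+10 to 5e+10 -- inside the 3-to-100-billion-sun range quoted above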

For instance, researchers previously looked for X-rays streaming from the galaxy’s center, using data gathered by NASA’s Chandra X-ray Observatory in 1999 and 2004. X-rays are a potential black-hole signature: As material falls into a black hole’s maw, it accelerates and heats up tremendously, emitting lots of high-energy X-ray light. But that hunt turned up nothing.

Now, a new study has conducted an even deeper search for X-rays in the same galaxy, using Chandra observations from 2018. And this new effort didn’t just look in the galaxy’s center; it also considered the possibility that the black hole was knocked toward the hinterlands after a monster galactic merger.

When black holes and other massive objects collide, they throw off ripples in space-time known as gravitational waves. If the emitted waves aren’t symmetrical in all directions, they could end up pushing the merged supermassive black hole away from the center of the newly enlarged galaxy, scientists say.

Such “recoiling” black holes are purely hypothetical creatures; nobody has definitively spotted one to date. Indeed, “it is not known whether supermassive black holes even get close enough to each other to produce gravitational waves and merge; so far, astronomers have only verified the mergers of much smaller black holes,” NASA officials wrote in a statement about the new study.

“The detection of recoiling supermassive black holes would embolden scientists using and developing observatories to look for gravitational waves from merging supermassive black holes,” they added. 

Abell 2261’s central galaxy is a good place to hunt for such a unicorn, researchers said, for it bears several possible signs of a dramatic merger. For example, observations by the Hubble Space Telescope and ground-based Subaru Telescope show that its core, the region of highest star density, is much larger than expected for a galaxy of its size. And the densest stellar patch is about 2,000 light-years away from the galaxy’s center — “strikingly distant,” NASA officials wrote.

In the new study, a team led by Kayhan Gultekin from the University of Michigan found that the densest concentrations of hot gas were not in the galaxy’s central regions. But the Chandra data didn’t reveal any significant X-ray sources, either in the galactic core or in big clumps of stars farther afield. So the mystery of the missing supermassive black hole persists.

That mystery could be solved by Hubble’s successor — NASA’s big, powerful James Webb Space Telescope, which is scheduled to launch in October 2021. 

If James Webb doesn’t spot a black hole in the galaxy’s heart or in one of its bigger stellar clumps, “then the best explanation is that the black hole has recoiled well out of the center of the galaxy,” NASA officials wrote.

The new study has been accepted for publication in a journal of the American Astronomical Society. You can read it for free at the online preprint site arXiv.org.

Mike Wall is the author of “Out There” (Grand Central Publishing, 2018; illustrated by Karl Tate), a book about the search for alien life.

Quantum Leaps, Long Assumed to Be Instantaneous, Take Time

An experiment caught a quantum system in the middle of a jump — something the originators of quantum mechanics assumed was impossible. Posted January 12th 2021

Quanta Magazine

  • Philip Ball

A quantum leap is a rapidly gradual process. Credit: Quanta Magazine; source: qoncha.

When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.

A 2019 experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published in Nature in June 2019. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”

But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.

All Too Random

The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are “quantized”: Only certain energies are available to them, while all those in between are forbidden. He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow.
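
The arithmetic behind that picture is compact: the photon's energy hc/λ has to equal the gap between the two electron states. A minimal check for the familiar yellow sodium line:

    # Energy gap corresponding to the yellow sodium D line (~589 nm).
    h = 6.626e-34        # Planck constant, J*s
    c = 2.998e8          # speed of light, m/s
    eV = 1.602e-19       # joules per electronvolt
    wavelength = 589e-9  # metres

    gap = h * c / wavelength
    print(f"energy gap: {gap / eV:.2f} eV")   # about 2.1 eV between the two sodium states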

Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg’s quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. “The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation,” historian of science Mara Beller has written.

Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg’s math of discrete quantum states and instant jumps between them. Schrödinger’s theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don’t switch suddenly, in zero time, Schrödinger thought — discontinuous “quantum jumps” were just a figment of the mind. In a 1952 paper called “Are there quantum jumps?,” Schrödinger answered with a firm “no,” his irritation all too evident in the way he called them “quantum jerks.”

The argument wasn’t just about Schrödinger’s discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore “not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general.” In other words, there’s a lot riding on the reality (or not) of quantum jumps.

Seeing Without Looking

To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported them happening in individual atoms suspended in space by electromagnetic fields. The atoms flipped, at random moments, between a “bright” state, where they could emit a photon of light, and a “dark” state that did not emit, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported jumps that correspond to what they called “the birth, life and death of individual photons.”

In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team’s setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it. The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail.

The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called “artificial atoms” because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms.

Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn’t monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior  — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they’d “collapse” the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger’s cat, which is allegedly placed in a coherent quantum “superposition” of live and dead states but becomes only one or the other when observed.

To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second “bright” state, so named because it’s the one that can be seen. The state to and from which the researchers are actually looking for quantum jumps is, meanwhile, the “dark” state — because it remains hidden from direct view.

The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter’s “click.”

The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it’s necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it.

The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. “Absence of an event can bring as much information as its presence,” said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the “curious incident” in which a dog did not do anything in the night. Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it “Baskerville’s Hound meets Schrödinger’s Cat.”

To Catch a Jump

The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted approximately every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that’s the only thing that can prevent flipping back and forth between the ground and bright states.

So here in these switches from “click” to “no-click” states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new.

Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. “As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur,” said Devoret.
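
A toy Monte Carlo makes the logic of that warning concrete. All the rates below are assumed for illustration (they are not the paper's numbers): clicks normally arrive every few microseconds, a jump silences them for around a hundred microseconds, and a silence several times longer than the average gap is flagged as a warning.

    import random

    # Toy illustration of the "no-click warning" -- assumed rates, not the experiment's.
    random.seed(1)
    MEAN_GAP = 3.0             # microseconds between clicks while the system is cycling
    DARK_GAP = 100.0           # typical silent stretch after a jump to the dark state
    P_JUMP = 0.01              # chance that any given gap is actually a jump
    THRESHOLD = 5 * MEAN_GAP   # silence long enough to raise a warning

    jumps = caught = false_alarms = 0
    for _ in range(20000):
        jumped = random.random() < P_JUMP
        gap = random.expovariate(1 / (DARK_GAP if jumped else MEAN_GAP))
        jumps += jumped
        if gap > THRESHOLD:    # the warning fires during this silence
            if jumped:
                caught += 1
            else:
                false_alarms += 1

    print(f"jumps: {jumps}, caught by the warning: {caught}, false alarms: {false_alarms}")
    # Raising THRESHOLD gives fewer false alarms but a later warning.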

That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach.

With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how?

The team found that jumps are in fact gradual. That’s because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It’s a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it’s a binary choice — but as the evening wears on and you get tired, the question “Are you staying or leaving?” becomes increasingly likely to get the answer “I’m leaving.”

The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That’s pretty fast, but it’s certainly not instantaneous.
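
A toy two-level calculation (purely illustrative; this is not the team's tomographic reconstruction) shows the kind of smooth change in weights they observed: under a coherent drive the dark-state weight climbs continuously from 0 to 1 over the transition time rather than flipping in a single step.

    import math

    # Toy picture: dark-state weight rising smoothly during an assumed
    # few-microsecond transition. Illustrative only.
    TRANSITION_US = 4.0
    for step in range(9):
        t = TRANSITION_US * step / 8
        p_dark = math.sin(math.pi * t / (2 * TRANSITION_US)) ** 2
        print(f"t = {t:3.1f} us   dark-state weight = {p_dark:.2f}")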

What’s more, this electronic system is so fast that the researchers could “catch” the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all.

Flash of Insight

The experiment shows that quantum jumps “are indeed not instantaneous if we look closely enough,” said Oliver, “but are coherent processes”: real physical events that unfold over time.

The gradualness of the “jump” is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. “It is reassuring that the theory matches perfectly with what is seen,” said David DiVincenzo, an expert in quantum information at Aachen University in Germany, “but it’s a subtle theory, and we are far from having gotten our heads completely around it.”

The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. “To the best of our knowledge, this precursory signal [to a quantum jump] has not been proposed or measured before,” he said.

Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, “in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference,” he said. But if you can detect right at the start if the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as “quite far down the line.” To achieve the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

The real value of the result is not, though, in any practical benefits; it’s a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.

Philip Ball is a science writer and author based in London who contributes frequently to Nature, New Scientist, Prospect, Nautilus and The Atlantic, among other publications.

One Hundred Years Ago, Einstein’s Theory of General Relativity Baffled the Press and the Public

Few people claimed to fully understand it, but the esoteric theory still managed to spark the public’s imagination. Posted January 10th 2021

Smithsonian Magazine

  • Dan Falk

After two eclipse expeditions confirmed Einstein’s theory of general relativity, the scientist became an international celebrity. Pictured above in his home, circa 1925. Photo from General Photographic Agency / Getty Images.

When the year 1919 began, Albert Einstein was virtually unknown beyond the world of professional physicists. By year’s end, however, he was a household name around the globe. November 1919 was the month that made Einstein into “Einstein,” the beginning of the former patent clerk’s transformation into an international celebrity.

On November 6, scientists at a joint meeting of the Royal Society of London and the Royal Astronomical Society announced that measurements taken during a total solar eclipse earlier that year supported Einstein’s bold new theory of gravity, known as general relativity. Newspapers enthusiastically picked up the story. “Revolution in Science,” blared the Times of London; “Newtonian Ideas Overthrown.” A few days later, the New York Times weighed in with a six-tiered headline—rare indeed for a science story. “Lights All Askew in the Heavens,” trumpeted the main headline. A bit further down: “Einstein’s Theory Triumphs” and “Stars Not Where They Seemed, or Were Calculated to Be, But Nobody Need Worry.”

The spotlight would remain on Einstein and his seemingly impenetrable theory for the rest of his life. As he remarked to a friend in 1920: “At present every coachman and every waiter argues about whether or not the relativity theory is correct.” In Berlin, members of the public crowded into the classroom where Einstein was teaching, to the dismay of tuition-paying students. And then he conquered the United States. In 1921, when the steamship Rotterdam arrived in Hoboken, New Jersey, with Einstein on board, it was met by some 5,000 cheering New Yorkers. Reporters in small boats pulled alongside the ship even before it had docked. An even more over-the-top episode played out a decade later, when Einstein arrived in San Diego, en route to the California Institute of Technology where he had been offered a temporary position. Einstein was met at the pier not only by the usual throng of reporters, but by rows of cheering students chanting the scientist’s name.

The intense public reaction to Einstein has long intrigued historians. Movie stars have always attracted adulation, of course, and 40 years later the world would find itself immersed in Beatlemania—but a physicist? Nothing like it had ever been seen before, and—with the exception of Stephen Hawking, who experienced a milder form of celebrity—it hasn’t been seen since, either.

Over the years, a standard, if incomplete, explanation emerged for why the world went mad over a physicist and his work: In the wake of a horrific global war—a conflict that drove the downfall of empires and left millions dead—people were desperate for something uplifting, something that rose above nationalism and politics. Einstein, born in Germany, was a Swiss citizen living in Berlin, Jewish as well as a pacifist, and a theorist whose work had been confirmed by British astronomers. And it wasn’t just any theory, but one which moved, or seemed to move, the stars. After years of trench warfare and the chaos of revolution, Einstein’s theory arrived like a bolt of lightning, jolting the world back to life.

Mythological as this story sounds, it contains a grain of truth, says Diana Kormos-Buchwald, a historian of science at Caltech and director and general editor of the Einstein Papers Project. In the immediate aftermath of the war, the idea of a German scientist—a German anything—receiving acclaim from the British was astonishing.

“German scientists were in limbo,” Kormos-Buchwald says. “They weren’t invited to international conferences; they weren’t allowed to publish in international journals. And it’s remarkable how Einstein steps in to fix this problem. He uses his fame to repair contact between scientists from former enemy countries.”

At that time, Kormos-Buchwald adds, the idea of a famous scientist was unusual. Marie Curie was one of the few widely known names. (She already had two Nobel Prizes by 1911; Einstein wouldn’t receive his until 1922, when he was retroactively awarded the 1921 prize.) However, Britain also had something of a celebrity-scientist in the form of Sir Arthur Eddington, the astronomer who organized the eclipse expeditions to test general relativity. Eddington was a Quaker and, like Einstein, had been opposed to the war. Even more crucially, he was one of the few people in England who understood Einstein’s theory, and he recognized the importance of putting it to the test.

“Eddington was the great popularizer of science in Great Britain. He was the Carl Sagan of his time,” says Marcia Bartusiak, science author and professor in MIT’s graduate Science Writing program. “He played a key role in getting the media’s attention focused on Einstein.”

It also helped Einstein’s fame that his new theory was presented as a kind of cage match between himself and Isaac Newton, whose portrait hung in the very room at the Royal Society where the triumph of Einstein’s theory was announced.

“Everyone knows the trope of the apple supposedly falling on Newton’s head,” Bartusiak says. “And here was a German scientist who was said to be overturning Newton, and making a prediction that was actually tested—that was an astounding moment.”

Much was made of the supposed incomprehensibility of the new theory. In the New York Times story of November 10, 1919—the “Lights All Askew” edition—the reporter paraphrases J.J. Thomson, president of the Royal Society, as stating that the details of Einstein’s theory “are purely mathematical and can only be expressed in strictly scientific terms” and that it was “useless to endeavor to detail them for the man in the street.” The same article quotes an astronomer, W.J.S. Lockyer, as saying that the new theory’s equations, “while very important,” do not “affect anything on this earth. They do not personally concern ordinary human beings; only astronomers are affected.” (If Lockyer could have time travelled to the present day, he would discover a world in which millions of ordinary people routinely navigate with the help of GPS satellites, which depend directly on both special and general relativity.)
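
The GPS aside can be made concrete with two textbook corrections: orbital speed slows a satellite clock by about v²/2c² per unit time, while the weaker gravity at altitude speeds it up by Δφ/c². A minimal sketch with nominal GPS orbit values (not mission data):

    # Relativistic drift of a GPS satellite clock, back-of-the-envelope.
    GM_EARTH = 3.986e14   # m^3/s^2
    R_EARTH = 6.371e6     # m
    R_ORBIT = 2.656e7     # m, nominal GPS orbital radius (~20,200 km altitude)
    C = 2.998e8           # m/s
    DAY = 86400.0         # s

    v = (GM_EARTH / R_ORBIT) ** 0.5                              # ~3.9 km/s orbital speed
    special = -v**2 / (2 * C**2) * DAY                           # moving clock runs slow
    general = GM_EARTH * (1/R_EARTH - 1/R_ORBIT) / C**2 * DAY    # higher clock runs fast

    print(f"special relativity: {special*1e6:+.1f} microseconds/day")    # about -7
    print(f"general relativity: {general*1e6:+.1f} microseconds/day")    # about +46
    print(f"net drift:          {(special+general)*1e6:+.1f} microseconds/day")  # about +38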

The idea that a handful of clever scientists might understand Einstein’s theory, but that such comprehension was off limits to mere mortals, did not sit well with everyone—including the New York Times’ own staff. The day after the “Lights All Askew” article ran, an editorial asked what “common folk” ought to make of Einstein’s theory, a set of ideas that “cannot be put in language comprehensible to them.” They conclude with a mix of frustration and sarcasm: “If we gave it up, no harm would be done, for we are used to that, but to have the giving up done for us is—well, just a little irritating.”

Things were not going any smoother in London, where the editors of the Times confessed their own ignorance but also placed some of the blame on the scientists themselves. “We cannot profess to follow the details and implications of the new theory with complete certainty,” they wrote on November 28, “but we are consoled by the reflection that the protagonists of the debate, including even Dr. Einstein himself, find no little difficulty in making their meaning clear.”

Readers of that day’s Times were treated to Einstein’s own explanation, translated from German. It ran under the headline, “Einstein on his Theory.” The most comprehensible paragraph was the final one, in which Einstein jokes about his own “relative” identity: “Today in Germany I am called a German man of science, and in England I am represented as a Swiss Jew. If I come to be regarded as a bête noire, the descriptions will be reversed, and I shall become a Swiss Jew for the Germans, and a German man of science for the English.”

Not to be outdone, the New York Times sent a correspondent to pay a visit to Einstein himself, in Berlin, finding him “on the top floor of a fashionable apartment house.” Again they try—both the reporter and Einstein—to illuminate the theory. Asked why it’s called “relativity,” Einstein explains how Galileo and Newton envisioned the workings of the universe and how a new vision is required, one in which time and space are seen as relative. But the best part was once again the ending, in which the reporter lays down a now-clichéd anecdote which would have been fresh in 1919: “Just then an old grandfather’s clock in the library chimed the mid-day hour, reminding Dr. Einstein of some appointment in another part of Berlin, and old-fashioned time and space enforced their wonted absolute tyranny over him who had spoken so contemptuously of their existence, thus terminating the interview.”

Efforts to “explain Einstein” continued. Eddington wrote about relativity in the Illustrated London News and, eventually, in popular books. So too did luminaries like Max Planck, Wolfgang Pauli and Bertrand Russell. Einstein wrote a book too, and it remains in print to this day. But in the popular imagination, relativity remained deeply mysterious. A decade after the first flurry of media interest, an editorial in the New York Times lamented: “Countless textbooks on relativity have made a brave try at explaining and have succeeded at most in conveying a vague sense of analogy or metaphor, dimly perceptible while one follows the argument painfully word by word and lost when one lifts his mind from the text.”

Eventually, the alleged incomprehensibility of Einstein’s theory became a selling point, a feature rather than a bug. Crowds continued to follow Einstein, not, presumably, to gain an understanding of curved space-time, but rather to be in the presence of someone who apparently did understand such lofty matters. This reverence explains, perhaps, why so many people showed up to hear Einstein deliver a series of lectures in Princeton in 1921. The classroom was filled to overflowing—at least at the beginning, Kormos-Buchwald says. “The first day there were 400 people there, including ladies with fur collars in the front row. And on the second day there were 200, and on the third day there were 50, and on the fourth day the room was almost empty.”

Original caption: From the report of Sir Arthur Eddington on the expedition to verify Albert Einstein’s prediction of the bending of light around the sun. Photo from Wikimedia Commons / Public Domain.

If the average citizen couldn’t understand what Einstein was saying, why were so many people keen on hearing him say it? Bartusiak suggests that Einstein can be seen as the modern equivalent of the ancient shaman who would have mesmerized our Paleolithic ancestors. The shaman “supposedly had an inside track on the purpose and nature of the universe,” she says. “Through the ages, there has been this fascination with people that you think have this secret knowledge of how the world works. And Einstein was the ultimate symbol of that.”

The physicist and science historian Abraham Pais has described Einstein similarly. To many people, Einstein appeared as “a new Moses come down from the mountain to bring the law and a new Joshua controlling the motion of the heavenly bodies.” He was the “divine man” of the 20th century.

Einstein’s appearance and personality helped. Here was a jovial, mild-mannered man with deep-set eyes, who spoke just a little English. (He did not yet have the wild hair of his later years, though that would come soon enough.) With his violin case and sandals—he famously shunned socks—Einstein was just eccentric enough to delight American journalists. (He would later joke that his profession was “photographer’s model.”) According to Walter Isaacson’s 2007 biography, Einstein: His Life and Universe, the reporters who caught up with the scientist “were thrilled that the newly discovered genius was not a drab or reserved academic” but rather “a charming 40-year-old, just passing from handsome to distinctive, with a wild burst of hair, rumpled informality, twinkling eyes, and a willingness to dispense wisdom in bite-sized quips and quotes.”

The timing of Einstein’s new theory helped heighten his fame as well. Newspapers were flourishing in the early 20th century, and the advent of black-and-white newsreels had just begun to make it possible to be an international celebrity. As Thomas Levenson notes in his 2004 book Einstein in Berlin, Einstein knew how to play to the cameras. “Even better, and usefully in the silent film era, he was not expected to be intelligible. … He was the first scientist (and in many ways the last as well) to achieve truly iconic status, at least in part because for the first time the means existed to create such idols.”

Einstein, like many celebrities, had a love-hate relationship with fame, which he once described as “dazzling misery.” The constant intrusions into his private life were an annoyance, but he was happy to use his fame to draw attention to a variety of causes that he supported, including Zionism, pacifism, nuclear disarmament and racial equality.

Not everyone loved Einstein, of course. Various groups had their own distinctive reasons for objecting to Einstein and his work, John Stachel, the founding editor of the Einstein Papers Project and a professor at Boston University, told me in a 2004 interview. Some American philosophers rejected relativity for being too abstract and metaphysical, while some Russian thinkers felt it was too idealistic. Some simply hated Einstein because he was a Jew.

“Many of those who opposed Einstein on philosophical grounds were also anti-Semites, and later on, adherents of what the Nazis called Deutsche Physik—‘German physics’—which was ‘good’ Aryan physics, as opposed to this jüdische Spitzfindigkeit—‘Jewish subtlety,’” Stachel says. “So one gets complicated mixtures, but the myth that everybody loved Einstein is certainly not true. He was hated as a Jew, as a pacifist, as a socialist [and] as a relativist, at least.” As the 1920s wore on, with anti-Semitism on the rise, death threats against Einstein became routine. Fortunately he was on a working holiday in the United States when Hitler came to power. He would never return to the country where he had done his greatest work.

For the rest of his life, Einstein remained mystified by the relentless attention paid to him. As he wrote in 1942, “I never understood why the theory of relativity with its concepts and problems so far removed from practical life should for so long have met with a lively, or indeed passionate, resonance among broad circles of the public. … What could have produced this great and persistent psychological effect? I never yet heard a truly convincing answer to this question.”

Today, a full century after his ascent to superstardom, the Einstein phenomenon continues to resist a complete explanation. The theoretical physicist burst onto the world stage in 1919, expounding a theory that was, as the newspapers put it, “dimly perceptible.” Yet in spite of the theory’s opacity—or, very likely, because of it—Einstein was hoisted onto the lofty pedestal where he remains to this day. The public may not have understood the equations, but those equations were said to reveal a new truth about the universe, and that, it seems, was enough.

Dan Falk is a science journalist based in Toronto. His books include “The Science of Shakespeare” and “In Search of Time.”

A Physicist’s Physicist Ponders the Nature of Reality

Edward Witten reflects on the meaning of dualities in physics and math, emergent space-time, and the pursuit of a complete description of nature. Posted January 8th 2021

Quanta Magazine

  • Natalie Wolchover

Edward Witten in his office at the Institute for Advanced Study in Princeton, New Jersey. All photos by Jean Sweep for Quanta Magazine.

Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the leading candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

During a visit in fall 2017, I spotted Witten on the Institute’s central lawn and requested an interview; in his quick, alto voice, he said he couldn’t promise to be able to answer my questions but would try. Later, when I passed him on the stone paths, he often didn’t seem to see me.

Physics luminaries since Albert Einstein, who lived out his days in the same intellectual haven, have sought to unify gravity with the other forces of nature by finding a more fundamental quantum theory to replace Einstein’s approximate picture of gravity as curves in the geometry of space-time. M-theory, which Witten proposed in 1995, could conceivably offer this deeper description, but only some aspects of the theory are known. M-theory incorporates within a single mathematical structure all five versions of string theory, which renders the elements of nature as minuscule vibrating strings. These five string theories connect to each other through “dualities,” or mathematical equivalences. Over the past 30 years, Witten and others have learned that the string theories are also mathematically dual to quantum field theories — descriptions of particles moving through electromagnetic and other fields that serve as the language of the reigning “Standard Model” of particle physics. While he’s best known as a string theorist, Witten has discovered many new quantum field theories and explored how all these different descriptions are connected. His physical insights have led time and again to deep mathematical discoveries.

Researchers pore over his work and hope he’ll take an interest in theirs. But for all his scholarly influence, Witten, who is 66, does not often broadcast his views on the implications of modern theoretical discoveries. Even his close colleagues eagerly suggested questions they wanted me to ask him.

When I arrived at his office at the appointed hour on a summery Thursday last month, Witten wasn’t there. His door was ajar. Papers covered his coffee table and desk — not stacks, but floods: text oriented every which way, some pages close to spilling onto the floor. (Research papers get lost in the maelstrom as he finishes with them, he later explained, and every so often he throws the heaps away.) Two girls smiled out from a framed photo on a shelf; children’s artwork decorated the walls, one celebrating Grandparents’ Day. When Witten arrived minutes later, we spoke for an hour and a half about the meaning of dualities in physics and math, the current prospects of M-theory, what he’s reading, what he’s looking for, and the nature of reality. The interview has been condensed and edited for clarity.

Physicists are talking more than ever lately about dualities, but you’ve been studying them for decades. Why does the subject interest you?

People keep finding new facets of dualities. Dualities are interesting because they frequently answer questions that are otherwise out of reach. For example, you might have spent years pondering a quantum theory and you understand what happens when the quantum effects are small, but textbooks don’t tell you what you do if the quantum effects are big; you’re generally in trouble if you want to know that. Frequently dualities answer such questions. They give you another description, and the questions you can answer in one description are different than the questions you can answer in a different description.

What are some of these newfound facets of dualities?

It’s open-ended because there are so many different kinds of dualities. There are dualities between a gauge theory [a theory, such as a quantum field theory, that respects certain symmetries] and another gauge theory, or between a string theory for weak coupling [describing strings that move almost independently from one another] and a string theory for strong coupling. Then there’s AdS/CFT duality, between a gauge theory and a gravitational description. That duality was discovered 20 years ago, and it’s amazing to what extent it’s still fruitful. And that’s largely because around 10 years ago, new ideas were introduced that rejuvenated it. People had new insights about entropy in quantum field theory — the whole story about “it from qubit.”

The AdS/CFT duality connects a theory of gravity in a space-time region called anti-de Sitter space (which curves differently than our universe) to an equivalent quantum field theory describing that region’s gravity-free boundary. Everything there is to know about AdS space — often called the “bulk” since it’s the higher-dimensional region — is encoded, like in a hologram, in quantum interactions between particles on the lower-dimensional boundary. Thus, AdS/CFT gives physicists a “holographic” understanding of the quantum nature of gravity.

That’s the idea that space-time and everything in it emerges like a hologram out of information stored in the entangled quantum states of particles.

Yes. Then there are dualities in math, which can sometimes be interpreted physically as consequences of dualities between two quantum field theories. There are so many ways these things are interconnected that any simple statement I try to make on the fly, as soon as I’ve said it I realize it didn’t capture the whole reality. You have to imagine a web of different relationships, where the same physics has different descriptions, revealing different properties. In the simplest case, there are only two important descriptions, and that might be enough. If you ask me about a more complicated example, there might be many, many different ones.

I’m not certain what we should hope for. Traditionally, quantum field theory was constructed by starting with the classical picture [of a smooth field] and then quantizing it. Now we’ve learned that there are a lot of things that happen that that description doesn’t do justice to. And the same quantum theory can come from different classical theories. Now, Nati Seiberg [a theoretical physicist who works down the hall] would possibly tell you that he has faith that there’s a better formulation of quantum field theory that we don’t know about that would make everything clearer. I’m not sure how much you should expect that to exist. That would be a dream, but it might be too much to hope for; I really don’t know.

There’s another curious fact that you might want to consider, which is that quantum field theory is very central to physics, and it’s actually also clearly very important for math. But it’s extremely difficult for mathematicians to study; the way physicists define it is very hard for mathematicians to follow with a rigorous theory. That’s extremely strange, that the world is based so much on a mathematical structure that’s so difficult.

What do you see as the relationship between math and physics?

I prefer not to give you a cosmic answer but to comment on where we are now. Physics in quantum field theory and string theory somehow has a lot of mathematical secrets in it, which we don’t know how to extract in a systematic way. Physicists are able to come up with things that surprise the mathematicians. Because it’s hard to describe mathematically in the known formulation, the things you learn about quantum field theory you have to learn from physics.

I find it hard to believe there’s a new formulation that’s universal. I think it’s too much to hope for. I could point to theories where the standard approach really seems inadequate, so at least for those classes of quantum field theories, you could hope for a new formulation. But I really can’t imagine what it would be.

You can’t imagine it at all? 

No, I can’t. Traditionally it was thought that interacting quantum field theory couldn’t exist above four dimensions, and there was the interesting fact that that’s the dimension we live in. But one of the offshoots of the string dualities of the 1990s was that it was discovered that quantum field theories actually exist in five and six dimensions. And it’s amazing how much is known about their properties.

I’ve heard about the mysterious (2,0) theory, a quantum field theory describing particles in six dimensions, which is dual to M-theory describing strings and gravity in seven-dimensional AdS space. Does this (2,0) theory play an important role in the web of dualities?

Yes, that’s the pinnacle. In terms of conventional quantum field theory without gravity, there is nothing quite like it above six dimensions. From the (2,0) theory’s existence and main properties, you can deduce an incredible amount about what happens in lower dimensions. An awful lot of important dualities in four and fewer dimensions follow from this six-dimensional theory and its properties. However, whereas what we know about quantum field theory is normally from quantizing a classical field theory, there’s no reasonable classical starting point of the (2,0) theory. The (2,0) theory has properties [such as combinations of symmetries] that sound impossible when you first hear about them. So you can ask why dualities exist, but you can also ask why is there a 6-D theory with such and such properties? This seems to me a more fundamental restatement.

Dualities sometimes make it hard to maintain a sense of what’s real in the world, given that there are radically different ways you can describe a single system. How would you describe what’s real or fundamental?

What aspect of what’s real are you interested in? What does it mean that we exist? Or how do we fit into our mathematical descriptions?

The latter.

Well, one thing I’ll tell you is that in general, when you have dualities, things that are easy to see in one description can be hard to see in the other description. So you and I, for example, are fairly simple to describe in the usual approach to physics as developed by Newton and his successors. But if there’s a radically different dual description of the real world, maybe some things physicists worry about would be clearer, but the dual description might be one in which everyday life would be hard to describe.

What would you say about the prospect of an even more optimistic idea that there could be one single quantum gravity description that really does help you in every case in the real world?

Well, unfortunately, even if it’s correct I can’t guarantee it would help. Part of what makes it difficult to help is that the description we have now, even though it’s not complete, does explain an awful lot. And so it’s a little hard to say, even if you had a truly better description or a more complete description, whether it would help in practice.

Are you speaking of M-theory?

M-theory is the candidate for the better description.

You proposed M-theory 22 years ago. What are its prospects today?

Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

What’s an example of something else we might need?

Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

Are you willing to speculate about how it would be different?

I really doubt I can say anything useful. I guess I suspect that there’s an extra layer of abstractness compared to what we’re used to. I tend to think that there isn’t a precise quantum description of space-time — except in the types of situations where we know that there is, such as in AdS space. I tend to think, otherwise, things are a little bit murkier than an exact quantum description. But I can’t say anything useful.

The other night I was reading an old essay by the 20th-century Princeton physicist John Wheeler. He was a visionary, certainly. If you take what he says literally, it’s hopelessly vague. And therefore, if I had read this essay when it came out 30 years ago, which I may have done, I would have rejected it as being so vague that you couldn’t work on it, even if he was on the right track.

I’m trying to learn about what people are trying to say with the phrase “it from qubit.” Wheeler talked about “it from bit,” but you have to remember that this essay was written probably before the term “qubit” was coined and certainly before it was in wide currency. Reading it, I really think he was talking about qubits, not bits, so “it from qubit” is actually just a modern translation.

Don’t expect me to be able to tell you anything useful about it — about whether he was right. When I was a beginning grad student, they had a series of lectures by faculty members to the new students about theoretical research, and one of the people who gave such a lecture was Wheeler. He drew a picture on the blackboard of the universe visualized as an eye looking at itself. I had no idea what he was talking about. It’s obvious to me in hindsight that he was explaining what it meant to talk about quantum mechanics when the observer is part of the quantum system. I imagine there is something we don’t understand about that.

Observing a quantum system irreversibly changes it, creating a distinction between past and future. So the observer issue seems possibly related to the question of time, which we also don’t understand. With the AdS/CFT duality, we’ve learned that new spatial dimensions can pop up like a hologram from quantum information on the boundary. Do you think time is also emergent — that it arises from a timeless complete description?

I tend to assume that space-time and everything in it are in some sense emergent. By the way, you’ll certainly find that that’s what Wheeler expected in his essay. As you’ll read, he thought the continuum was wrong in both physics and math. He did not think one’s microscopic description of space-time should use a continuum of any kind — neither a continuum of space nor a continuum of time, nor even a continuum of real numbers. On the space and time, I’m sympathetic to that. On the real numbers, I’ve got to plead ignorance or agnosticism. It is something I wonder about, but I’ve tried to imagine what it could mean to not use the continuum of real numbers, and the one logician I tried discussing it with didn’t help me.

Do you consider Wheeler a hero?

I wouldn’t call him a hero, necessarily, no. Really I just became curious what he meant by “it from bit,” and what he was saying. He definitely had visionary ideas, but they were too far ahead of their time. I think I was more patient in reading a vague but inspirational essay than I might have been 20 years ago. He’s also got roughly 100 interesting-sounding references in that essay. If you decided to read them all, you’d have to spend weeks doing it. I might decide to look at a few of them.

Why do you have more patience for such things now?

I think when I was younger I always thought the next thing I did might be the best thing in my life. But at this point in life I’m less persuaded of that. If I waste a little time reading somebody’s essay, it doesn’t seem that bad.

Do you ever take your mind off physics and math?

My favorite pastime is tennis. I am a very average but enthusiastic tennis player.

In my career I’ve only been able to take small jumps. Relatively small jumps. What Wheeler was talking about was an enormous jump. And he does say at the beginning of the essay that he has no idea if this will take 10, 100 or 1,000 years.

And he was talking about explaining how physics arises from information.

Yes. The way he phrases it is broader: He wants to explain the meaning of existence. That was actually why I thought you were asking if I wanted to explain the meaning of existence.

I see. Does he have any hypotheses?

No. He only talks about things you shouldn’t do and things you should do in trying to arrive at a more fundamental description of physics.

Do you have any ideas about the meaning of existence?

No. [Laughs.]

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.

Vodka made out of thin air: toasting the planet’s good health Posted January 3rd 2021

Drink to that: carbon negative vodka

The Air Company, based in New York, makes vodka from two ingredients: carbon dioxide and water. Each bottle that’s produced takes carbon dioxide out of the air. It has been chosen as one of the finalists in the $20m NRG COSIA Carbon XPRIZE, which aims to incentivise innovation in the field of carbon capture, utilisation and storage.

The company’s chief technology officer, Stafford Sheehan, hit upon the idea while trying to create artificial photosynthesis as a chemistry PhD student at Yale. Photosynthesis, you may remember from chemistry at school, is the process by which plants use sunlight to convert CO2 into energy. For 2bn years, plants were equal to the task of balancing the carbon in the atmosphere – but now we are emitting it at a rate beyond what nature can restore with photosynthesis. Hence the interest in carbon capture. “Our aim is to take CO2 that would otherwise be emitted into the atmosphere and transform it into things that are better for the planet,” says Sheehan.
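
For reference, the overall reaction the article alludes to can be summarised in the standard textbook form (general chemistry, not a formula from the article itself):

```latex
% Net photosynthesis: carbon dioxide and water, driven by light energy,
% are converted into glucose and oxygen.
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\text{light}}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```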

A nice cold martini is undoubtedly better for the planet than global warming. Unfortunately, however, it would require 11 quadrillion Air Vodka Martinis to make any kind of significant impact. But Sheehan hopes to make alcohol for a variety of different applications. “Ethanol, methanol and propanol are three of the most-produced chemicals in the world, all alcohols,” he says. “Plastics, resins, fragrances, cleaners, sanitisers, bio-jet fuel… almost all start from alcohol. If we can make the base alcohol for all of those from carbon that would otherwise be emitted, that would make a major impact.”
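
As a rough sanity check on the scale of that figure, here is a back-of-envelope sketch. Every input is an illustrative assumption (pour size, global emissions total), not a figure from the Air Company; the only chemistry used is that each gram of ethanol accounts for roughly 1.9g of CO2 on a carbon basis.

```python
# Back-of-envelope: how much CO2 would 11 quadrillion vodka martinis lock up?
# All inputs are illustrative assumptions, not Air Company data.

CO2_PER_G_ETHANOL = (2 * 44.01) / 46.07    # g CO2 per g ethanol (2 CO2 -> 1 C2H5OH)
ETHANOL_DENSITY = 0.789                    # g/ml
VODKA_PER_MARTINI_ML = 60                  # assumed pour of 40% ABV vodka per martini
ETHANOL_PER_MARTINI_G = VODKA_PER_MARTINI_ML * 0.40 * ETHANOL_DENSITY

MARTINIS = 11e15                           # "11 quadrillion"
GLOBAL_ANNUAL_CO2_TONNES = 35e9            # rough order of magnitude for annual emissions

co2_per_martini_g = ETHANOL_PER_MARTINI_G * CO2_PER_G_ETHANOL
total_co2_tonnes = MARTINIS * co2_per_martini_g / 1e6   # grams -> tonnes

print(f"CO2 captured per martini: ~{co2_per_martini_g:.0f} g")
print(f"Total for 11 quadrillion martinis: ~{total_co2_tonnes:.2e} tonnes CO2")
print(f"Roughly {total_co2_tonnes / GLOBAL_ANNUAL_CO2_TONNES:.0f} years of global emissions")
```

On those assumptions the 11 quadrillion martinis work out at around a decade’s worth of present-day global CO2 emissions, which is what makes the comparison so stark.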

Air Company currently captures its CO2 from old-fashioned alcohol production: concentrated CO2 rising from a standard fuel-alcohol fermentation stack is transformed into vodka. That’s a fairly boutique product. However, power stations are much more plentiful sources. “You can burn natural gas, then capture the CO2 you’re emitting, and that feeds you the carbon dioxide,” says Sheehan. “That’s what we’d like to do and that’s where you can do it at scale.” Richard Godwin

Nasa’s Mars rover and the ‘seven minutes of terror’ Posted December 31st 2020

By Jonathan Amos
BBC Science Correspondent. Published 23 December

The US space agency (Nasa) has released an animation showing how its one-tonne Perseverance rover will land on Mars on 18 February.

The robot is being sent to a crater called Jezero where it will search for evidence of past life. But to undertake this science, it must first touch down softly.

The sequence of manoeuvres needed to land on Mars is often referred to as the “seven minutes of terror” – and with good reason.

So much has to go right in a frighteningly short space of time or the arriving mission will dig a very big and very expensive new hole in the Red Planet.

What’s more, it’s all autonomous.

With a distance on the day of 209 million km (130 million miles) between Earth and Mars, every moment and every movement you see in the animation has to be commanded by onboard computers.

It starts more than 100km above Mars where the Perseverance rover will encounter the first wisps of atmosphere.

Artwork: The “skycrane” lowers the rover on a series of nylon cords

At this point, the vehicle, in its protective capsule, is travelling at 20,000km/h (12,000mph).

In little more than 400 seconds, the descent system has to reduce this velocity to less than 1m/s at the surface.
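
As a quick check of those two figures (using only the numbers quoted in the article), slowing from 20,000km/h to about 1m/s in roughly 400 seconds implies an average deceleration of around 14m/s², or about 1.4 Earth g; the actual load is far from uniform, peaking during the heat-shield phase.

```python
# Average deceleration implied by the article's figures:
# entry at 20,000 km/h, touchdown at under 1 m/s, roughly 400 s later.

entry_speed_ms = 20_000 / 3.6      # km/h -> m/s (about 5,556 m/s)
touchdown_speed_ms = 1.0
descent_time_s = 400

avg_decel = (entry_speed_ms - touchdown_speed_ms) / descent_time_s
print(f"Average deceleration: {avg_decel:.1f} m/s^2 (~{avg_decel / 9.81:.1f} Earth g)")
```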

Most of the work is done by a heat shield.

As the capsule plunges deeper into the Martian air, it gets super-hot at more than 1,000C – but at the same time, the drag slows the fall dramatically.

By the time the supersonic parachute deploys from the backshell of the capsule, the velocity has already been reduced to 1,200km/h.

Perseverance will ride the 21.5m-wide parachute for just over a minute, further scrubbing that entry speed.

The most complex phases are still to come, however.

One tonne of high technology: seven instruments, 23 cameras, two microphones and a drill

At an altitude of 2km, and while moving at 100m/s, the Perseverance rover and its “skycrane” separate from the backshell and fall away.

Eight rockets then ignite on the cradle to bring the rover into a hovering position just above the surface. Nylon cords are used to lower the multi-billion-dollar wheeled vehicle to the ground.

But that’s still not quite it.

When Perseverance senses contact, it must immediately sever the cables or it will be dragged behind the crane as the cradle flies away to dispose of itself at a safe distance.

The sequence looks much the same as was used to put Nasa’s last rover, Curiosity, on the surface of Mars eight years ago. However, the navigation tools have been improved to put Perseverance down in an even more precisely defined landing zone.

Touchdown is expected in the late afternoon, local time, on Mars – just before 21:00 GMT on Earth.

It’s worth remembering that on the day of landing, the time it takes for a radio signal to reach Earth from Mars will be roughly 700 seconds.

This means that when Nasa receives the message from Perseverance that it has engaged the top of the atmosphere, the mission will already have been dead or alive on the planet’s surface for several minutes.
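
That 700-second figure is simply the one-way light travel time over the 209 million km quoted earlier. A one-line check, assuming the signal travels at the speed of light in vacuum:

```python
# One-way radio delay for the quoted Earth-Mars distance on landing day.
distance_km = 209_000_000          # 209 million km, from the article
speed_of_light_km_s = 299_792.458  # km/s

delay_s = distance_km / speed_of_light_km_s
print(f"One-way signal delay: ~{delay_s:.0f} s (~{delay_s / 60:.1f} minutes)")
```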

The robot will be recording its descent on camera and with microphones. The media files will be sent back to Earth after landing – assuming Perseverance survives.


Read our guides to Perseverance (also known as the Mars 2020 mission) – where it’s going and what it will be doing.

The Perseverance rover will deploy a helicopter
Perseverance will target a crater that once held a lake

Why does COVID-19 kill some people and spare others? Study finds certain patients – particularly men – have ‘autoantibodies’ that disable immune system proteins and attack the body’s own cells and tissues Posted December 31st 2020

  • A new study found 10% of around 1,000 severely ill coronavirus patients have so-called autoantibodies
  • They disable immune system proteins that prevent the virus from replicating, and they also attack the body’s cells and tissues
  • Researchers say the patients didn’t develop autoantibodies in response to the virus but had them before the pandemic began
  • Of the 101 patients with autoantibodies, 94% were male, which may suggest why men are more likely to die from COVID-19 than women

By Mary Kekatos Senior Health Reporter For Dailymail.com

Published: 18:27, 13 November 2020 | Updated: 20:53, 13 November 2020


Researchers believe they are one step closer to understanding why the novel coronavirus kills some people and spares others.

In a new study, they found that some critically ill patients have antibodies that deactivate certain immune system proteins.

These antibodies, which are known as autoantibodies, allow the virus to replicate and spread through the body, and also attack our cells and tissues.

What’s more, nearly all of the patients with these autoantibodies were men.

The international team, led by the Institut Imagine in Paris and Rockefeller University in New York City, says the findings suggest that doctors should screen patients for these autoantibodies so they can provide treatment to patients before they fall too ill.

A new study found some severely ill coronavirus patients have so-called autoantibodies, which allow the virus to replicate and spread throughout the body, and also attack the body’s cells and tissues. Pictured: medics transfer a patient on a stretcher from an ambulance outside the emergency department at Coral Gables Hospital in Florida, July 2020.
Nearly 10% of severely ill patients had autoantibodies (left) compared to those with mild or asymptomatic cases (center) and healthy controls (right)

Of the 101 patients with autoantibodies, 94% were male, which may suggest why men are more likely to die than women (above)

For the study, published in the journal Science, the team looked at nearly 1,000 patients with life-threatening cases of COVID-19.

These patients were compared with more than 600 patients who had mild or asymptomatic cases and around 1,200 healthy controls. 

Results showed nearly 10 percent of those with critical cases had autoantibodies that disable immune system proteins called interferons.

Interferons are signaling proteins released by infected cells and are named for their capacity to ‘interfere’ with the virus’s ability to replicate. 

They are known as ‘call-to-arms’ proteins because they control viral replication until more immune cells have time to arrive and attack the pathogen. 

Autoantibodies block interferons and attack the body’s cells, tissues and organs.

None of the 663 people with asymptomatic or mild cases of COVID-19 had autoantibodies and just four of the healthy controls did.
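
To get a feel for how stark that contrast is, here is a small illustrative comparison of the proportions reported in the article. The counts are approximate (101 of roughly 1,000 critical cases, 0 of 663 mild or asymptomatic cases, 4 of roughly 1,200 controls), and the Fisher’s exact test used here is a standard way of comparing such proportions, not necessarily the analysis performed in the Science paper.

```python
# Rough comparison of autoantibody prevalence across the groups described
# in the article. Counts are approximate figures taken from the text; the
# test is illustrative, not the one used in the study itself.
from scipy.stats import fisher_exact

groups = {
    "critical": (101, 1000),          # autoantibody-positive, total (approx.)
    "mild/asymptomatic": (0, 663),
    "healthy controls": (4, 1200),
}

for name, (pos, total) in groups.items():
    print(f"{name}: {pos}/{total} = {100 * pos / total:.1f}%")

# 2x2 table: critical patients vs healthy controls
pos_c, n_c = groups["critical"]
pos_h, n_h = groups["healthy controls"]
table = [[pos_c, n_c - pos_c], [pos_h, n_h - pos_h]]
odds_ratio, p_value = fisher_exact(table)
print(f"Odds ratio (critical vs controls): {odds_ratio:.1f}, p = {p_value:.1e}")
```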

However, the team said these critically ill patients didn’t make autoantibodies in response to being infected. 

Rather, they always had them and the autoantibodies didn’t cause any problems until patients contracted the virus.  

‘Before COVID, their condition was silent,’ lead author Paul Bastard, a PhD candidate at Institut Imagine and a researcher at Rockefeller University, told Kaiser Health News.

‘Most of them hadn’t gotten sick before.’

The team also found that 94 percent of the 101 coronavirus patients who had autoantibodies were men.  

Researchers say it may explain why, around the world, men have accounted for about 60 percent of deaths from COVID-19.

‘You see significantly more men dying in their 30s, not just in their 80s,’ Dr Sabra Klein, a professor of molecular microbiology and immunology at the Johns Hopkins Bloomberg School of Public Health, told Kaiser Health News.

Bastard, the lead author, says screening coronavirus patients might help predict which ones will become severely ill and allow doctors to treat them sooner. 

‘This is one of the most important things we’ve learned about the immune system since the start of the pandemic,’ Dr Eric Topol, executive vice president for research at Scripps Research in San Diego, who was not involved in the study, told Kaiser Health News. 

‘This is a breakthrough finding.’

A Test for the Leading Big Bang Theory

Cosmologists have predicted the existence of an oscillating signal that could distinguish between cosmic inflation and alternative theories of the universe’s birth.

Quanta Magazine

  • Natalie Wolchover

The leading hypothesis about the universe’s birth — that a quantum speck of space became energized and inflated in a split second, creating a baby cosmos — solves many puzzles and fits all observations to date. Yet this “cosmic inflation” hypothesis lacks definitive proof. Telltale ripples that should have formed in the inflating spatial fabric, known as primordial gravitational waves, haven’t been detected in the geometry of the universe by the world’s most sensitive telescopes. Their absence has fueled underdog theories of cosmogenesis in recent years. And yet cosmic inflation is wriggly. In many variants of the idea, the sought-after ripples would simply be too weak to observe.

“The question is whether one can test the entire [inflation] scenario, not just specific models,” said Avi Loeb, an astrophysicist and cosmologist at Harvard University. “If there is no guillotine that can kill off some theories, then what’s the point?”

In a paper that appeared on the physics preprint site, arxiv.org, in February 2019, Loeb and two Harvard colleagues, Xingang Chen and Zhong-Zhi Xianyu, suggested such a guillotine. The researchers predicted an oscillatory pattern in the distribution of matter throughout the cosmos that, if detected, could distinguish between inflation and alternative scenarios — particularly the hypothesis that the Big Bang was actually a bounce preceded by a long period of contraction.

The paper has yet to be peer-reviewed, but Will Kinney, an inflationary cosmologist at the University at Buffalo and a visiting professor at Stockholm University, said “the analysis seems correct to me.” He called the proposal “a very elegant idea.”

“If the signal is real and observable, it would be very interesting,” Sean Carroll of the California Institute of Technology said in an email.

If it does exist, the signal would appear in density variations across the universe. Imagine taking a giant ice cream scoop to the sky and counting how many galaxies wind up inside. Do this many times all over the cosmos, and you’ll find that the number of scooped-up galaxies will vary above or below some average. Now increase the size of your scoop. When scooping larger volumes of universe, you might find that the number of captured galaxies now varies more extremely than before. As you use progressively larger scoops, according to Chen, Loeb and Xianyu’s calculations, the amplitude of matter density variations should oscillate between more and less extreme as you move up the scales. “What we showed,” Loeb explained, is that from the form of these oscillations, “you can tell if the universe was expanding or contracting when the density perturbations were produced” — reflecting an inflationary or bounce cosmology, respectively.
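
The “ice cream scoop” procedure described above is essentially a counts-in-cells measurement. The toy sketch below is purely illustrative: it uses a uniform random mock catalogue rather than survey data, so it will not show the predicted oscillation, but it does show how the scatter in galaxy counts is measured as a function of scoop size.

```python
# Toy counts-in-cells measurement: how the scatter in galaxy counts changes
# with the radius of the sampling sphere ("scoop"). Uses a uniform random
# mock catalogue, so it only illustrates the measurement, not the predicted signal.
import numpy as np

rng = np.random.default_rng(0)
box = 1000.0                                     # side of a cubic mock volume (arbitrary units)
galaxies = rng.uniform(0, box, size=(200_000, 3))

def counts_in_cells(radius, n_spheres=500):
    """Count galaxies inside randomly placed spheres of the given radius."""
    centres = rng.uniform(radius, box - radius, size=(n_spheres, 3))
    counts = []
    for c in centres:
        d2 = np.sum((galaxies - c) ** 2, axis=1)
        counts.append(np.count_nonzero(d2 < radius ** 2))
    return np.array(counts)

for radius in (20, 40, 80):
    c = counts_in_cells(radius)
    # Fractional scatter: how extreme the count variations are at this scale.
    print(f"R = {radius:>3}: mean count = {c.mean():8.1f}, "
          f"fractional scatter = {c.std() / c.mean():.3f}")
```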

Regardless of which theory of cosmogenesis is correct, cosmologists believe that the density variations observed throughout the cosmos today were almost certainly seeded by random ripples in quantum fields that existed long ago.

Because of quantum uncertainty, any quantum field that filled the primordial universe would have fluctuated with ripples of all different wavelengths. Periodically, waves of a certain wavelength would have constructively interfered, forming peaks — or equivalently, concentrations of particles. These concentrations later grew into the matter density variations seen on different scales in the cosmos today.

But what caused the peaks at a particular wavelength to get frozen into the universe when they did? According to the new paper, the timing depended on whether the peaks formed while the universe was exponentially expanding, as in inflation models, or while it was slowly contracting, as in bounce models.

If the universe contracted in the lead-up to a bounce, ripples in the quantum fields would have been squeezed. At some point the observable universe would have contracted to a size smaller than ripples of a certain wavelength, like a violin whose resonant cavity is too small to produce the sounds of a cello. When the too-large ripples disappeared, whatever peaks, or concentrations of particles, existed at that scale at that moment would have been “frozen” into the universe. As the observable universe shrank further, ripples at progressively smaller and smaller scales would have vanished, freezing in as density variations. Ripples of some sizes might have been constructively interfering at the critical moment, producing peak density variations on that scale, whereas slightly shorter ripples that disappeared a moment later might have frozen out of phase. These are the oscillations between high and low density variations that Chen, Loeb and Xianyu argue should theoretically show up as you change the size of your galaxy ice cream scoop.

The authors argue that a qualitative difference between the forms of oscillations in the two scenarios will reveal which one occurred. In both cases, it was as if the quantum field put tick marks on a piece of tape as it rushed past — representing the expanding or contracting universe. If space were expanding exponentially, as in inflation, the tick marks imprinted on the universe by the field would have grown farther and farther apart. If the universe contracted, the tick marks should have become closer and closer together as a function of scale. Thus Chen, Loeb and Xianyu argue that the changing separation between the peaks in density variations as a function of scale should reveal the universe’s evolutionary history. “We can finally see whether the primordial universe was actually expanding or contracting, and whether it did it inflationarily fast or extremely slowly,” Chen said.

Exactly what the oscillatory signal might look like, and how strong it might be, depend on the unknown nature of the quantum fields that might have created it. Discovering such a signal would tell us about those primordial cosmic ingredients. As for whether the putative signal will show up at all in future galaxy surveys, “the good news,” according to Kinney, is that the signal is probably “much, much easier to detect” than other searched-for signals called “non-gaussianities”: triangles and other geometric arrangements of matter in the sky that would also verify and reveal details of inflation. The bad news, though, “is that the strength and the form of the signal depend on a lot of things you don’t know,” Kinney said, such as constants whose values might be zero, and it’s entirely possible that “there will be no detectable signal.”

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.


Physicists Debate Hawking’s Idea That the Universe Had No Beginning

A recent challenge to Stephen Hawking’s biggest idea—about how the universe might have come from nothing—has cosmologists choosing sides. Posted December 26th 2020

Quanta Magazine

  • Natalie Wolchover


Credit: Mike Zeng for Quanta Magazine.

In 1981, many of the world’s leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing.

Before Hawking’s talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, “What happened before that?” The Big Bang theory, for instance — pioneered 50 years before Hawking’s lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican’s academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from?

The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking’s talk, the cosmologist Alan Guth realized that the Big Bang’s problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it?

Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there’s no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, “There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?”

The “no-boundary proposal,” which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called “wave function of the universe” that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before.

“Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to,” Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. “It would be like asking what lies south of the South Pole.”

Hartle and Hawking’s proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe’s size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock’s rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, “We didn’t have birds in the very early universe; we have birds later on. … We didn’t have time in the early universe, but we have time later on.”

The no-boundary proposal has fascinated and inspired physicists for nearly four decades. “It’s a stunningly beautiful and provocative idea,” said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking’s. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories’ various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, “was in some ways the simplest possible proposal for that.”

But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. “The trouble with Stephen and Jim’s approach is it was ambiguous,” Turok said — “deeply ambiguous.”

In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking’s no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. “We discovered that it just failed miserably,” Turok said. “It was just not possible quantum mechanically for a universe to start in the way they imagined.” The trio checked their math and queried their underlying assumptions before going public, but “unfortunately,” Turok said, “it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster.”

The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues’ reasoning. “We disagree with his technical arguments,” said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter’s life. “But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that’s the more interesting discussion.”

After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking’s fancy. Even critics of his and Hartle’s specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure.

Garden of Cosmic Delights

Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo’s theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin.

In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein’s equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking “singularity theorems” meant there was no way for space-time to begin smoothly, undramatically at a point.

Credit: 5W Infographics for Quanta Magazine.

Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking’s hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this “path integral” gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision.

Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible “expansion histories,” smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails.
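
Written schematically (my shorthand; the article itself gives no formula), the no-boundary wave function is this kind of Euclidean path integral, taken over compact four-geometries and matter fields whose only boundary is the three-geometry being described:

```latex
% Schematic Hartle-Hawking no-boundary wave function: a sum over all compact
% ("no boundary") geometries g and matter fields \phi that end on the
% three-geometry h_{ij} with matter configuration \chi.
\Psi[h_{ij}, \chi] \;=\; \int_{\text{compact}} \mathcal{D}g \, \mathcal{D}\phi \;
  e^{-S_E[g,\phi]/\hbar}
```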

The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. “Murray Gell-Mann used to ask me,” Hartle said, referring to the late Nobel Prize-winning physicist, “if you know the wave function of the universe, why aren’t you rich?” Of course, to actually solve for the wave function using Feynman’s method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in “minisuperspace,” defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking’s shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.)

Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate.

The rival solutions are the two “classical” expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein’s theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation.

One of the two classical solutions resembles our universe. On large scales, it’s smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe.

The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong.

The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our “contour of integration” pick up? Researchers have forked down different paths.

In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called “lapse.” Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution.

“People place huge faith in Stephen’s intuition,” Turok said by phone. “For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn’t always right.”

Imaginary Universes

Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking’s student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog’s, and apparently Hawking’s, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It’s similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. “You can do that parameterization in many different ways, but none of them are any more physical than another one,” Halliwell said.

He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be “normalizable,” but the wildly fluctuating universe that Turok’s team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary’s defenders, to walk the other way.

It’s true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that’s a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. “That’s a principle which is not written in the stars, and which we profoundly disagree with,” Hertog said.

According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace.
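
In symbols (again a standard schematic form rather than anything written out in the article), the Wheeler-DeWitt statement that the wave function cannot depend on time is a constraint equation rather than an evolution equation:

```latex
% Wheeler-DeWitt constraint: the Hamiltonian operator annihilates the wave
% function of the universe, so \Psi has no external time dependence and the
% total energy (matter plus gravity) is fixed at zero.
\hat{H}\,\Psi[h_{ij}, \phi] = 0
```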

In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present.

That effort is continuing in Hawking’s absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they’re studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees.

For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe’s mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative.

There has also been a revival of interest in the “tunneling proposal,” an alternative way that the universe might have arisen from nothing, conceived in the ’80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical “tunneling” event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment.

Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems.

No matter how things go, perhaps we’ll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. “If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?” asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe “is the right kind of question to ask,” said Maldacena, who, incidentally, is a member of the Pontifical Academy. “Whether we are finding the right wave function, or how we should think about the wave function — it’s less clear.”

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.

Blinding with science December 21st 2020

As a teacher and college lecturer of 18 years, I originally intended to spend more time developing the Junius Education page, because good education is essential to a healthy society and to that society’s claim to be democratic.

In Britain, education for the masses has always left much to be desired, with a class-biased basis and class-biased outcomes. Strangely, maybe due to Britain’s appalling general education, I have been called upon to teach most subjects, including maths and science.

Biology was my least favourite science, but my ex-wife was brilliant at the subject. Dissections and statistics were among her specialities. She could also have been a good teacher, because she made an unpleasant subject interesting to me. That doesn’t make me an expert, but there is an old saying: ‘If you want to know something, ask an expert and they will never stop talking.’ Until I met her, I had never heard of microbiology, and statistics on epidemics were the least of my concerns.

In my 1950s childhood, there were killer diseases and I had most of them because my father did not trust vaccinations. As I have said elsewhere, he died when I was 11, but that was enough for him to make an impression. As a postgraduate student of psychology, I learned the Jesuit concept of ‘Give me the child until it is seven and I will give you the adult.’ I have substituted modern words for boy and man.

So my father’s scepticism, coupled with a distrust of the society that sent him to war – a society that left him struggling in civvy street, cycling 12 miles to work and back to drive a lorry, a job that killed him – means I don’t trust the official line on Covid 19, as I have written elsewhere.

Herd immunity comes from catching things, not from lockdown or mask wearing. Scientists are blinkered; tests and studies are not the real world. Not being allowed to discuss age, BAME status, lifestyle, mass immigration and old age, along with the Brexit diversion, should raise question marks over the alleged new big killer strain which authorities fear will overwhelm a failing NHS (National Health Service). The idea that this mutant, known about back in September, spreads faster than the other one is a mix of guesswork and timely propaganda.

As for a vaccine which they want us to take without removing lockdown, because they say the virus will keep mutating – why bother? Many fear that because it is RNA based, and designed therefore to plug in and modify DNA – think hard drives and RAM – it is going to alter our minds even more than lockdown. Maybe it will. Our DNA is modifying all the time. That is a major reason for ageing.

The problem is that when we start talking about these acids and viruses, the scientists have the edge. To some they are gods, to others devils, and some just don’t believe them because they speak for politicians. Politicians have earned mistrust and even contempt. So it is what it is.

More people are going mad and becoming destitute, with relationships crumbling and children suffering as unemployment soars and businesses collapse, because of lockdown for a disease which kills less than 1% of those affected. The average age of Covid-related death – it never kills on its own – is 82, and for that we sacrifice the young!

The death stats are deliberately misleading because any death within 28 days of a positive test is counted as Covid-related. The virus’s key impact is on high-density BAME communities, certain related lifestyles and old age, especially in privatised care homes.

Sturgeon, speaking for Scotland, echoed by London Mayor Sadiq Khan, is leading the charge for a longer, tougher lockdown because they see it as a way of stalling and ultimately ending Brexit.

There are many good reasons for scrapping Brexit and joining the fight to save Europe from the real fascists running France, Britain and Germany, but scaremongering and causing harm with this ridiculous new variant, just to help this type of useless posturing politician, is not one of them.

Lockdown is about adjusting the dangerous global economy to stifle the waves of working-class protest across Europe and the U.S. Blinding people with science that few British MPs understand – in spite of their expensive educations in the humanities and money-grabbing law – is the British Parliament and Government’s speciality.

Epidemiologists are not virologists. NHS and care home workers – the latter often casuals going from one home to another – are drawn heavily from BAME communities because of low pay.

Epidemiologists are politically correct, omitting key variables from practice and debate. They take a model of the virus, relate it to highly selective sample populations, and use a computer to predict its spread at a rate calculated in a laboratory. In short, there is too much guesswork.
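
For readers who have never seen what that sort of computer modelling looks like, the sketch below is a minimal SIR (susceptible-infected-recovered) model of the generic kind used in epidemic forecasting. The parameter values are arbitrary illustrations, not taken from any official Covid model.

```python
# Minimal SIR epidemic model of the generic kind used in epidemiological
# forecasting. Parameters are arbitrary, for illustration only.
import numpy as np

def run_sir(beta, gamma, population=1_000_000, initial_infected=10, days=200):
    """Integrate the SIR equations with a simple one-day time step."""
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return np.array(history)

# beta = infections caused per infectious person per day,
# gamma = recovery rate (1 / infectious period); R0 = beta / gamma.
history = run_sir(beta=0.25, gamma=0.1)
peak_day = int(history[:, 1].argmax())
print(f"R0 = {0.25 / 0.1:.1f}, epidemic peaks around day {peak_day} "
      f"with {history[peak_day, 1]:,.0f} people simultaneously infected")
```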

Below is a fatalistic, fait accompli extract from more feel-good, pseudo-Dunkirk-spirit journalism (Dunkirk was a great British disaster, in case you didn’t know; my dad was wounded there) from the politically correct, MI5-friendly Guardian. R.J Cook

Has a year of living with Covid-19 rewired our brains? Paula Cocozza Posted December 21st 2020

Illustration of mirror image of woman. One distorted as if through a plastic sheet; the other wearing a mask.

The loss of the connecting power of touch can trigger ‘factors that contribute to depression – sadness, lower energy levels, lethargy’. Illustration: Nathalie Lees/The Guardian

The pandemic is expected to precipitate a mental health crisis, but perhaps also to bring a chance to approach life with new clarity

When the bubonic plague spread through England in the 17th century, Sir Isaac Newton fled Cambridge, where he was studying, for the safety of his family home in Lincolnshire. The Newtons did not live in a cramped apartment; they enjoyed a large garden with many fruit trees. In these uncertain times, out of step with ordinary life, his mind roamed free of routines and social distractions. And it was in this context that a single apple falling from a tree struck him as more intriguing than any of the apples he had previously seen fall. Gravity was a gift of the plague. So, how is this pandemic going for you?

In different ways, this is likely a question we are all asking ourselves. Whether you have experienced illness, relocated, lost a loved one or a job, got a kitten or got divorced, eaten more or exercised more, spent longer showering each morning or reached every day for the same clothes, it is an inescapable truth that the pandemic alters us all. But how? And when will we have answers to these questions – because surely there will be a time when we can scan our personal balance sheets and see in the credit column something more than grey hairs, a thicker waist and a kitten? (Actually, the kitten is pretty rewarding.) What might be the psychological impact of living through a pandemic? Will it change us for ever?

“People talk about the return to normality, and I don’t think that is going to happen,” says Frank Snowden, a historian of pandemics at Yale, and the author of Epidemics and Society: From the Black Death to the Present. Snowden has spent 40 years studying pandemics. Then last spring, just as his phone was going crazy with people wanting to know if history could shed light on Covid-19, his life’s work landed in his lap. He caught the coronavirus.

Snowden believes that Covid-19 was not a random event. All pandemics “afflict societies through the specific vulnerabilities people have created by their relationships with the environment, other species, and each other,” he says. Each pandemic has its own properties, and this one – a bit like the bubonic plague – affects mental health. Snowden sees a second pandemic coming “in the train of the Covid-19 first pandemic … [a] psychological pandemic”.


A man embraces his aunt through a plastic curtain at a home for the elderly in Spain in June, for the first time in three months. Photograph: Biel Aliño/EPA

Aoife O’Donovan, an associate professor of psychiatry at the UCSF Weill Institute for Neurosciences in California, who specialises in trauma, agrees. “We are dealing with so many layers of uncertainty,” she says. “Truly horrible things have happened and they will happen to others and we don’t know when or to whom or how and it is really demanding cognitively and physiologically.”

The impact is experienced throughout the body, she says, because when people perceive a threat, abstract or actual, they activate a biological stress response. Cortisol mobilises glucose. The immune system is triggered, increasing levels of inflammation. This affects the function of the brain, making people more sensitive to threats and less sensitive to rewards.

In practice, this means that your immune system may be activated simply by hearing someone next to you cough, or by the sight of all those face masks and the proliferation of a colour that surely Pantone should rename “surgical blue”, or by a stranger walking towards you, or even, as O’Donovan found, seeing a friend’s cleaner in the background of a Zoom call, maskless. And because, O’Donovan points out, government regulations are by necessity broad and changeable, “as individuals we have to make lots of choices. This is uncertainty on a really intense scale.”


The unique characteristics of Covid-19 play into this sense of uncertainty. The illness “is much more complex than anyone imagined in the beginning”, Snowden says, a sort of shapeshifting adversary. In some it is a respiratory disease, in others gastrointestinal, in others it can cause delirium and cognitive impairment, in some it has a very long tail, while many experience it as asymptomatic. Most of us will never know if we have had it, and not knowing spurs a constant self-scrutiny. Symptom checkers raise questions more than they allay fears: when does tiredness become fatigue? When does a cough become “continuous”?

O’Donovan sighs. She sounds tired; this is a busy time to be a threat researcher and her whole life is work now. She finds the body’s response to uncertainty “beautiful” – its ability to mobilise to see off danger – but she’s concerned that it is ill-suited to frequent and prolonged threats. “This chronic activation can be harmful in the long term. It accelerates biological ageing and increases risk for diseases of ageing,” she says.

In daily life, uncertainty has played out in countless tiny ways as we try to reorient ourselves in a crisis, in the absence of the usual landmarks – schools, families, friendships, routines and rituals. Previously habitual rhythms, of time alone and time with others, the commute and even postal deliveries, are askew.


Philippa Perry: ‘We are becoming a sort of non-person.’ Photograph: Pål Hansen/The Observer

There is no new normal – just an evolving estrangement. Even a simple “how are you?” is heavy with hidden questions (are you infectious?), and rarely brings a straightforward answer; more likely a hypervigilant account of a mysterious high temperature experienced back in February.

Thomas Dixon, a historian of emotions at Queen Mary University of London, says that when the pandemic hit, he stopped opening his emails with the phrase “I hope this finds you well.”

The old “social dances” – as the psychotherapist Philippa Perry calls them – of finding a seat in a cafe or on the bus have not only vanished, taking with them opportunities to experience a sense of belonging, but have been replaced with dances of rejection. Perry thinks that’s why she misses the Pret a Manger queue. “We were all waiting to pay for our sandwiches that we were all taking back to our desks. It was a sort of group activity even if I didn’t know the other people in the group.”

In contrast, pandemic queues are not organic; they are a series of regularly spaced people being processed by a wayfinding system. Further rejection occurs if a pedestrian steps into the gutter to avoid you, or when the delivery person you used to enjoy greeting sees you at the door and lunges backwards. It provides no consolation, Perry says, to understand cognitively why we repel others. The sense of rejection remains.

The word “contagion” comes from the Latin for “with” and “touch”, so it is no wonder that social touch is demonised in a pandemic. But at what cost? The neuroscientists Francis McGlone and Merle Fairhurst study nerve fibres called C-tactile afferents, which are concentrated in hard-to-reach places such as the back and shoulders. They wire social touch into a complex reward system, so that when we are stroked, touched, hugged or patted, oxytocin is released, lowering the heart rate and inhibiting the production of cortisone. “Very subtle requirements,” says McGlone, “to keep you on an even plane.”

But McGlone is worried. “Everywhere I look at changes of behaviour during the pandemic, this little flag is flying, this nerve fibre – touch, touch, touch!” While some people – especially those locked down with young children – might be experiencing more touch, others are going entirely without. Fairhurst is examining the data collected from a large survey she and McGlone launched in May, and she is finding those most at risk from the negative emotional impact of loss of touch are young people. “Age is a significant indicator of loneliness and depression,” she says. The loss of the connecting power of touch triggers “factors that contribute to depression – sadness, lower energy levels, lethargy”.

“We are becoming a sort of non-person,” says Perry. Masks render us mostly faceless. Hand sanitiser is a physical screen. Fairhurst sees it as “a barrier, like not speaking somebody’s language”. And Perry is not the only one to favour the “non-person clothes” of pyjamas and tracksuits. Somehow, the repeat-wearing of clothes makes all clothing feel like fatigues. They suit our weariness, and add an extra layer to it.

Cultural losses feed this sense of dehumanisation. Eric Clarke, a professor at Wadham College, Oxford, with a research interest in the psychology of music, led street singing in his cul-de-sac during the first lockdown, which “felt almost like a lifeline”, but he has missed going to live music events. “The impact on me has been one of a feeling of degradation or erosion of my aesthetic self,” he says. “I feel less excited by the world around me than I do when I’m going to music.” And the street music, like the street clapping, stopped months ago. Now “we are all living like boil-in-a-bag rice, closed off from the world in a plastic envelope of one sort or another.”

No element of Covid-19 has dehumanised people more than the way it has led us to experience death. Individuals become single units in a very long and horribly growing number, of course. But before they become statistics, the dying are condemned to isolation. “They are literally depersonalised,” Snowden says. He lost his sister during the pandemic. “I didn’t see her, and nor was she with her family … It breaks bonds and estranges people.”

The Rise and Fall of Nikola Tesla and his Tower

The inventor’s vision of a global wireless-transmission tower proved to be his undoing. Posted December 20th 2020

Smithsonian Magazine

  • Gilbert King


Wardenclyffe Tower in 1904. Photo from Wikimedia Commons / Public Domain.

By the end of his brilliant and tortured life, the Serbian physicist, engineer and inventor Nikola Tesla was penniless and living in a small New York City hotel room. He spent days in a park surrounded by the creatures that mattered most to him—pigeons—and his sleepless nights working over mathematical equations and scientific problems in his head. That habit would confound scientists and scholars for decades after he died, in 1943. His inventions were designed and perfected in his imagination.

Tesla believed his mind to be without equal, and he wasn’t above chiding his contemporaries, such as Thomas Edison, who once hired him. “If Edison had a needle to find in a haystack,” Tesla once wrote, “he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. I was a sorry witness of such doing that a little theory and calculation would have saved him ninety percent of his labor.”

But what his contemporaries may have been lacking in scientific talent (by Tesla’s estimation), men like Edison and George Westinghouse clearly possessed the one trait that Tesla did not—a mind for business. And in the last days of America’s Gilded Age, Nikola Tesla made a dramatic attempt to change the future of communications and power transmission around the world.  He managed to convince J.P. Morgan that he was on the verge of a breakthrough, and the financier gave Tesla more than $150,000 to fund what would become a gigantic, futuristic and startling tower in the middle of Long Island, New York. In 1898, as Tesla’s plans to create a worldwide wireless transmission system became known, Wardenclyffe Tower would be Tesla’s last chance to claim the recognition and wealth that had always escaped him.

Nikola Tesla was born in modern-day Croatia in 1856; his father, Milutin, was a priest of the Serbian Orthodox Church. From an early age, he demonstrated the obsessiveness that would puzzle and amuse those around him. He could memorize entire books and store logarithmic tables in his brain. He picked up languages easily, and he could work through days and nights on only a few hours sleep.

At the age of 19, he was studying electrical engineering at the Polytechnic Institute at Graz in Austria, where he quickly established himself as a star student. He found himself in an ongoing debate with a professor over perceived design flaws in the direct-current (DC) motors that were being demonstrated in class. “In attacking the problem again I almost regretted that the struggle was soon to end,” Tesla later wrote. “I had so much energy to spare. When I undertook the task it was not with a resolve such as men often make. With me it was a sacred vow, a question of life and death. I knew that I would perish if I failed. Now I felt that the battle was won. Back in the deep recesses of the brain was the solution, but I could not yet give it outward expression.”

He would spend the next six years of his life “thinking” about electromagnetic fields and a hypothetical motor powered by alternating current that would and should work. The thoughts obsessed him, and he was unable to focus on his schoolwork. Professors at the university warned Tesla’s father that the young scholar’s working and sleeping habits were killing him. But rather than finish his studies, Tesla became a gambling addict, lost all his tuition money, dropped out of school and suffered a nervous breakdown. It would not be his last.

In 1881, Tesla moved to Budapest, after recovering from his breakdown, and he was walking through a park with a friend, reciting poetry, when a vision came to him. There in the park, with a stick, Tesla drew a crude diagram in the dirt—a motor using the principle of rotating magnetic fields created by two or more alternating currents. While AC electrification had been employed before, there would never be a practical, working motor run on alternating current until he invented his induction motor several years later.

In June 1884, Tesla sailed for New York City and arrived with four cents in his pocket and a letter of recommendation from Charles Batchelor—a former employer—to Thomas Edison, which was purported to say, “My Dear Edison: I know two great men and you are one of them. The other is this young man!”

A meeting was arranged, and once Tesla described the engineering work he was doing, Edison, though skeptical, hired him. According to Tesla, Edison offered him $50,000 if he could improve upon the DC generation plants Edison favored. Within a few months, Tesla informed the American inventor that he had indeed improved upon Edison’s motors. Edison, Tesla noted, refused to pay up. “When you become a full-fledged American, you will appreciate an American joke,” Edison told him.

Tesla promptly quit and took a job digging ditches. But it wasn’t long before word got out that Tesla’s AC motor was worth investing in, and the Western Union Company put Tesla to work in a lab not far from Edison’s office, where he designed AC power systems that are still used around the world. “The motors I built there,” Tesla said, “were exactly as I imagined them. I made no attempt to improve the design, but merely reproduced the pictures as they appeared to my vision, and the operation was always as I expected.”

Tesla patented his AC motors and power systems, which were said to be the most valuable inventions since the telephone. Soon, George Westinghouse, recognizing that Tesla’s designs might be just what he needed in his efforts to unseat Edison’s DC current, licensed his patents for $60,000 in stocks and cash and royalties based on how much electricity Westinghouse could sell. Ultimately, he won the “War of the Currents,” but at a steep cost in litigation and competition for both Westinghouse and Edison’s General Electric Company.

Nikola Tesla. Photo from Napoleon Sarony / Wikimedia Commons / Public Domain.

Fearing ruin, Westinghouse begged Tesla for relief from the royalties Westinghouse agreed to. “Your decision determines the fate of the Westinghouse Company,” he said. Tesla, grateful to the man who had never tried to swindle him, tore up the royalty contract, walking away from millions in royalties that he was already owed and billions that would have accrued in the future. He would have been one of the wealthiest men in the world—a titan of the Gilded Age.

His work with electricity reflected just one facet of his fertile mind. Before the turn of the 20th century, Tesla had invented a powerful coil that was capable of generating high voltages and frequencies, leading to new forms of light, such as neon and fluorescent, as well as X-rays. Tesla also discovered that these coils, soon to be called “Tesla Coils,” made it possible to send and receive radio signals. He quickly filed for American patents in 1897, beating the Italian inventor Guglielmo Marconi to the punch.

Tesla continued to work on his ideas for wireless transmissions when he proposed to J.P. Morgan his idea of a wireless globe. After Morgan put up the $150,000 to build the giant transmission tower, Tesla promptly hired the noted architect Stanford White of McKim, Mead, and White in New York. White, too, was smitten with Tesla’s idea. After all, Tesla was the highly acclaimed man behind Westinghouse’s success with alternating current, and when Tesla talked, he was persuasive.

“As soon as completed, it will be possible for a business man in New York to dictate instructions, and have them instantly appear in type at his office in London or elsewhere,” Tesla said at the time. “He will be able to call up, from his desk, and talk to any telephone subscriber on the globe, without any change whatever in the existing equipment. An inexpensive instrument, not bigger than a watch, will enable its bearer to hear anywhere, on sea or land, music or song, the speech of a political leader, the address of an eminent man of science, or the sermon of an eloquent clergyman, delivered in some other place, however distant. In the same manner any picture, character, drawing or print can be transferred from one to another place. Millions of such instruments can be operated from but one plant of this kind.”

White quickly got to work designing Wardenclyffe Tower in 1901, but soon after construction began it became apparent that Tesla was going to run out of money before it was finished. An appeal to Morgan for more money proved fruitless, and in the meantime investors were rushing to throw their money behind Marconi. In December 1901, Marconi successfully sent a signal from England to Newfoundland. Tesla grumbled that the Italian was using 17 of his patents, but litigation eventually favored Marconi and the commercial damage was done. (The U.S. Supreme Court ultimately upheld Tesla’s claims, clarifying Tesla’s role in the invention of the radio—but not until 1943, after he died.) Thus the Italian inventor was credited as the inventor of radio and became rich. Wardenclyffe Tower became a 186-foot-tall relic (it would be razed in 1917), and the defeat—Tesla’s worst—led to another of his breakdowns. “It is not a dream,” Tesla said, “it is a simple feat of scientific electrical engineering, only expensive—blind, faint-hearted, doubting world!”

Guglielmo Marconi in 1901. Photo from LIFE / Wikimedia Commons / Public Domain.

By 1912, Tesla began to withdraw from that doubting world. He was clearly showing signs of obsessive-compulsive disorder, and was potentially a high-functioning autistic. He became obsessed with cleanliness and fixated on the number three; he began shaking hands with people and washing his hands—all done in sets of three. He had to have 18 napkins on his table during meals, and would count his steps whenever he walked anywhere. He claimed to have an abnormal sensitivity to sounds, as well as an acute sense of sight, and he later wrote that he had “a violent aversion against the earrings of women,” and “the sight of a pearl would almost give me a fit.”

Near the end of his life, Tesla became fixated on pigeons, especially a specific white female, which he claimed to love almost as one would love a human being. One night, Tesla claimed the white pigeon visited him through an open window at his hotel, and he believed the bird had come to tell him she was dying. He saw “two powerful beams of light” in the bird’s eyes, he later said. “Yes, it was a real light, a powerful, dazzling, blinding light, a light more intense than I had ever produced by the most powerful lamps in my laboratory.” The pigeon died in his arms, and the inventor claimed that in that moment, he knew that he had finished his life’s work.

Nikola Tesla would go on to make news from time to time while living on the 33rd floor of the New Yorker Hotel. In 1931 he made the cover of Time magazine, which featured his inventions on his 75th birthday. And in 1934, the New York Times reported that Tesla was working on a “Death Beam” capable of knocking 10,000 enemy airplanes out of the sky. He hoped to fund a prototypical defensive weapon in the interest of world peace, but his appeals to J.P. Morgan Jr. and British Prime Minister Neville Chamberlain went nowhere. Tesla did, however, receive a $25,000 check from the Soviet Union, but the project languished.  He died in 1943, in debt, although Westinghouse had been paying his room and board at the hotel for years.

Gilbert King is a contributing writer in history for Smithsonian.com. His book Devil in the Grove: Thurgood Marshall, the Groveland Boys, and the Dawn of a New America won the Pulitzer Prize in 2013.

Sources

Books: Nikola Tesla, My Inventions: The Autobiography of Nikola Tesla, Hart Brothers, Pub., 1982. Margaret Cheney, Tesla: Man Out of Time, Touchstone, 1981.

Why Black Hole Interiors Grow (Almost) Forever

The renowned physicist Leonard Susskind has identified a possible quantum origin for the ever-growing volume of black holes. December 17th 2020

Quanta Magazine

  • Natalie Wolchover


Credit: koto_feja / Getty Images.

Leonard Susskind, a pioneer of string theory, the holographic principle and other big physics ideas spanning the past half-century, has proposed a solution to an important puzzle about black holes. The problem is that even though these mysterious, invisible spheres appear to stay a constant size as viewed from the outside, their interiors keep growing in volume essentially forever. How is this possible?

In a series of recent papers and talks, the 78-year-old Stanford University professor and his collaborators conjecture that black holes grow in volume because they are steadily increasing in complexity — an idea that, while unproven, is fueling new thinking about the quantum nature of gravity inside black holes.

Black holes are spherical regions of such extreme gravity that not even light can escape. First discovered a century ago as shocking solutions to the equations of Albert Einstein’s general theory of relativity, they’ve since been detected throughout the universe. (They typically form from the inward gravitational collapse of dead stars.) Einstein’s theory equates the force of gravity with curves in space-time, the four-dimensional fabric of the universe, but gravity becomes so strong in black holes that the space-time fabric bends toward its breaking point — the infinitely dense “singularity” at the black hole’s center.

According to general relativity, the inward gravitational collapse never stops. Even though, from the outside, the black hole appears to stay a constant size, expanding slightly only when new things fall into it, its interior volume grows bigger and bigger all the time as space stretches toward the center point. For a simplified picture of this eternal growth, imagine a black hole as a funnel extending downward from a two-dimensional sheet representing the fabric of space-time. The funnel gets deeper and deeper, so that infalling things never quite reach the mysterious singularity at the bottom. In reality, a black hole is a funnel that stretches inward from all three spatial directions. A spherical boundary surrounds it called the “event horizon,” marking the point of no return.

Since at least the 1970s, physicists have recognized that black holes must really be quantum systems of some kind — just like everything else in the universe. What Einstein’s theory describes as warped space-time in the interior is presumably really a collective state of vast numbers of gravity particles called “gravitons,” described by the true quantum theory of gravity. In that case, all the known properties of a black hole should trace to properties of this quantum system.

Indeed, in 1972, the Israeli physicist Jacob Bekenstein figured out that the area of the spherical event horizon of a black hole corresponds to its “entropy.” This is the number of different possible microscopic arrangements of all the particles inside the black hole, or, as modern theorists would describe it, the black hole’s storage capacity for information.

Bekenstein’s insight led Stephen Hawking to realize two years later that black holes have temperatures, and that they therefore radiate heat. This radiation causes black holes to slowly evaporate away, giving rise to the much-discussed “black hole information paradox,” which asks what happens to information that falls into black holes. Quantum mechanics says the universe preserves all information about the past. But how does information about infalling stuff, which seems to slide forever toward the central singularity, also evaporate out?
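For readers who want the equations behind those two results, the standard textbook expressions (not quoted in the article itself) are the Bekenstein–Hawking entropy and the Hawking temperature:

S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}, \qquad T_H = \frac{\hbar c^3}{8 \pi G M k_B}

Here A is the horizon area and M the black hole’s mass. Entropy scales with the area of the horizon, not with the interior volume, which is part of why the ever-growing volume needs the separate quantum explanation the article goes on to describe.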

The relationship between a black hole’s surface area and its information content has kept quantum gravity researchers busy for decades. But one might also ask: What does the growing volume of its interior correspond to, in quantum terms? “For whatever reason, nobody, including myself for a number of years, really thought very much about what that means,” said Susskind. “What is the thing which is growing? That should have been one of the leading puzzles of black hole physics.”

In recent years, with the rise of quantum computing, physicists have been gaining new insights about physical systems like black holes by studying their information-processing abilities — as if they were quantum computers. This angle led Susskind and his collaborators to identify a candidate for the evolving quantum property of black holes that underlies their growing volume. What’s changing, the theorists say, is the “complexity” of the black hole — roughly a measure of the number of computations that would be needed to recover the black hole’s initial quantum state, at the moment it formed. After its formation, as particles inside the black hole interact with one another, the information about their initial state becomes ever more scrambled. Consequently, their complexity continuously grows.

Using toy models that represent black holes as holograms, Susskind and his collaborators have shown that the complexity and volume of black holes both grow at the same rate, supporting the idea that the one might underlie the other. And, whereas Bekenstein calculated that black holes store the maximum possible amount of information given their surface area, Susskind’s findings suggest that they also grow in complexity at the fastest possible rate allowed by physical laws.

John Preskill, a theoretical physicist at the California Institute of Technology who also studies black holes using quantum information theory, finds Susskind’s idea very interesting. “That’s really cool that this notion of computational complexity, which is very much something that a computer scientist might think of and is not part of the usual physicist’s bag of tricks,” Preskill said, “could correspond to something which is very natural for someone who knows general relativity to think about,” namely the growth of black hole interiors.

Researchers are still puzzling over the implications of Susskind’s thesis. Aron Wall, a theorist at Stanford (soon moving to the University of Cambridge), said, “The proposal, while exciting, is still rather speculative and may not be correct.” One challenge is defining complexity in the context of black holes, Wall said, in order to clarify how the complexity of quantum interactions might give rise to spatial volume.

A potential lesson, according to Douglas Stanford, a black hole specialist at the Institute for Advanced Study in Princeton, New Jersey, “is that black holes have a type of internal clock that keeps time for a very long time. For an ordinary quantum system,” he said, “this is the complexity of the state. For a black hole, it is the size of the region behind the horizon.”

If complexity does underlie spatial volume in black holes, Susskind envisions consequences for our understanding of cosmology in general. “It’s not only black hole interiors that grow with time. The space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.”

Natalie Wolchover is a senior writer and editor at Quanta Magazine covering the physical sciences.


End of Ageing and Cancer? Scientists Unveil Structure of the ‘Immortality’ Enzyme Telomerase

Detailed images of the anti-ageing enzyme telomerase are a drug designer’s dream. Posted December 8th 2020


Telomeres on a chromosome. Credit: AJC1 / Flickr, CC BY-NC-ND.

Making a drug is like trying to pick a lock at the molecular level. There are two ways in which you can proceed. You can try thousands of different keys at random, hopefully finding one that fits. The pharmaceutical industry does this all the time – sometimes screening hundreds of thousands of compounds to see if they interact with a certain enzyme or protein. But unfortunately it’s not always efficient – there are more drug molecule shapes than seconds have passed since the beginning of the universe.

Alternatively, like a safe cracker, you can x-ray the lock you want to open and work out the probable shape of the key from the pictures you get. This is much more effective for discovering drugs, as you can use computer models to identify promising compounds before researchers go into the lab to find the best one. A 2018 study, published in Nature, presents detailed images of a crucial anti-ageing enzyme known as telomerase – raising hopes that we can soon slow ageing and cure cancer.

Every organism packages its DNA into chromosomes. In simple bacteria like E. coli this is a single small circle. More complex organisms have far more DNA and multiple linear chromosomes (humans have 22 pairs plus the sex chromosomes). These probably appeared because they provided an evolutionary advantage, but they also come with a downside.

We may all soon live to be centenarians. Credit: Dan Negureanu / Shutterstock.

At the end of each chromosome is a protective cap called a telomere. However, most human cells can’t fully copy them – meaning that every time they divide, their telomeres become shorter. When telomeres become too short, the cell enters a toxic state called “senescence”. If these senescent cells are not cleared by the immune system, they begin to compromise the function of the tissues in which they reside. For millennia, humans have perceived this gradual compromise in tissue function over time without understanding what caused it. We simply called it ageing.

Enter telomerase, a specialised telomere repair enzyme in two parts – able to add DNA to the chromosome tips. The first part is a protein called TERT that does the copying. The second component is called TR, a small piece of RNA which acts as a template. Together, these form telomerase, which trundles up and down on the ends of chromosomes, copying the template. At the bottom, a human telomere is roughly 3,000 copies of the DNA sequence “TTAGGG” – laid down and maintained by telomerase. But sadly, production of TERT is repressed in human tissues with the exception of sperm, eggs and some immune cells.
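To put those numbers in perspective, here is a rough back-of-envelope sketch. Only the repeat count comes from the article; the per-division loss and the fraction of telomere a cell can spare before senescence are illustrative assumptions.

```python
# Back-of-envelope telomere arithmetic. Only the repeat count (~3,000 copies of
# TTAGGG) comes from the article; the other figures are illustrative assumptions.
REPEAT = "TTAGGG"
copies = 3_000
telomere_bp = copies * len(REPEAT)     # ~18,000 base pairs of telomeric DNA

loss_per_division_bp = 70              # assumed typical loss per cell division
spare_fraction = 1 / 3                 # assumed fraction that can erode before
                                       # senescence is triggered
divisions = int(telomere_bp * spare_fraction / loss_per_division_bp)

print(f"Telomere length is roughly {telomere_bp:,} bp")
print(f"Roughly {divisions} divisions before the assumed budget is spent")
```

The point of the sketch is only that the budget is finite and surprisingly small, which is why switching telomerase back on matters.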

Ageing Versus Cancer

Organisms regulate their telomere maintenance in this way because they are walking a biological tightrope. On the one hand, they need to replace the cells they lose in the course of their ordinary daily lives by cell division. However, any cell with an unlimited capacity to divide is the seed of a tumour. And it turns out that the majority of human cancers have active telomerase and shorter telomeres than the cells surrounding them.

This indicates that the cell from which they came divided as normal but then picked up a mutation which turned TERT back on. Cancer and ageing are flip sides of the same coin and telomerase, by and large, is doing the flipping. Inhibit telomerase, and you have a treatment for cancer, activate it and you prevent senescence. That, at least, is the theory.

The researchers behind the new study were not just able to obtain the structure of a proportion of the enzyme, but of the entire molecule as it was working. This was a tour de force involving the use of cryo-electron microscopy – a technique using a beam of electrons (rather than light) to take thousands of detailed images of individual molecules from different angles and combine them computationally.

Prior to the development of this method, for which its pioneers won the 2017 Nobel Prize in Chemistry, it was necessary to crystallise proteins to image them. This typically requires thousands of attempts and many years of trying, if it works at all.

Elixir of Youth?

TERT itself is a large molecule and, although it has been shown to lengthen lifespan when introduced into normal mice using gene therapy, this approach is technically challenging and fraught with difficulties. Drugs that can switch production of TERT back on would be far better: easier to deliver and cheaper to make.

We already know of a few compounds that inhibit or activate telomerase – discovered through the cumbersome process of random drug screening. Sadly, they are not very efficient.

Some of the most provocative studies involve the compound TA-65 (Cycloastragenol) – a natural product which lengthens telomeres experimentally and has been claimed to show benefit in early stage macular degeneration (vision loss). As a result, TA65 has been sold over the internet and has prompted at least one (subsequently dismissed) lawsuit over claims that it caused cancer in a user. This sad story illustrates an important public health message best summarised simply as “don’t try this at home, folks”.

The telomerase inhibitors we know of so far do have genuine clinical benefit in various cancers, particularly in combination with other drugs, but the doses required are relatively high.

The new study is extremely promising because, by knowing the structure of telomerase, we can use computer models to identify the most promising activators and inhibitors and then test them to find which ones are most effective. This is a much quicker process than randomly trying different molecules to see if they work.

So how far could we go? In terms of cancer, it is hard to tell. The body can easily become resistant to cancer drugs, including telomerase inhibitors. Prospects for slowing ageing where there is no cancer are somewhat easier to estimate. In mice, deleting senescent cells or dosing with telomerase (gene therapy) both give increases in lifespan of the order of 20 percent – despite being inefficient techniques. It may be that at some point other ageing mechanisms, such as the accumulation of damaged proteins, start to come into play.

But if we did manage to stop the kind of ageing caused by senescent cells using telomerase activation, we could start devoting all our efforts into tackling these additional ageing processes. There’s every reason to be optimistic that we may soon live much longer, healthier lives than we do today.

Comment We can see the problems of allegedly trying to save the aged whilst ruining the lives of the young and younger by lockdown tyranny.

Overpopulation is already a major issue which the politically correct and elites don’t want to discuss.

From a science point of view this is fascinating, and treating cancer is good (my best friend has just died of it, but he was 74 years old, worked as a tile maker bathed in clay dust for years, and was a chain smoker and drinker for years). Cancer patients like my friend Mike, who worked with me on the building sites, were much neglected because of the ludicrous lockdown, but there is a cycle of life and a need for renewal. Also, I suspect it will only be the pampered super rich, like the Royals, who will be able to afford such medicine.

Getting old folk to step back, rather than staying in charge and corrupting the young into corrupt careers, is a big enough problem already. Bigotry and mental health issues won’t be resolved by elixirs of youth – though they obviously have a sex selling point, which is a priority for the rich and other escapists. Sex is, after all, the field of youth and the ultimate delusional drug.

But what about the old Third World? They breed so fast that they can’t cope with the numbers, so there are all manner of health, crime and migratory issues.

Scientists are nerds. They do things because they can. Then the corrupt dark matter of the industrial military complex, fake democrats and dictators takes hold. There will be the rich preserved in luxury, and a swarming mass of healthy bodies with brains more addled than the rich old politicians’. How will it all pan out, as if it isn’t bad enough? I remind readers that I am not ageist, but, at 70, I am well past my own sell-by date.

R.J Cook and his old and best friend Mick Birrell, at home near Liverpool. Mick died two weeks ago in a hospice. Father Daly gave the last rites, then after the funeral and cremation his ashes were taken back to Ireland. Michael used to flatter me as the best Irish folk singer this side of the Irish Sea.

I was up at his home looking after his sister on the weekend of October 4th/5th 2008, having arrived in my motorhome with my son Kieran. The poor girl had dementia. For years she would phone me so that I could sing her old songs like ‘The Rare Old Times’ down the line. That day she stood sadly in front of the wood stove as I started to whistle an old favourite, ‘Danny Boy.’

She couldn’t speak, but still had that rare beauty and soul of the Irish – my mother’s father was Irish. As I whistled, her big sad blue eyes started to stream with tears. Then I sang ‘The Rare Old Times.’

R.J Cook


Dublin In The Rare Old Times – Luke Kelly – YouTube (www.youtube.com › watch)

Luke Kelly was more than a folk singer. I thought he was unique. Very special. He drank a lot and died young. I met him once in 1971 at a bar in Norwich: a modest man and a hero to me, but no fan of the then-current I.R.A. He told me he was a Marxist.

Would an elixir of youth have made him a better man? I don’t think so. Beauty, loss and longing are comforted by the right songs. Those songs come from instinct.

They employ the science of acoustics, but to be better than pop there needs to be a deeper, painful place for the sound, tone and texture to be born. This was Anne’s favourite because she loved Dublin as it used to be. Anne died very soon after. As for me and Irish folk songs, I was beaten up on a pavement after singing my song about how police thugs killed innocent Ian Tomlinson at the G20. That’s another story. The pub was full of off-duty cops. I quit the scene, deciding folkies are fakies.

R.J Cook

False-positive COVID-19 results: hidden problems and costs Posted December 8th 2020

Published: September 29, 2020. DOI: https://doi.org/10.1016/S2213-2600(20)30453-7


RT-PCR tests to detect severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) RNA are the operational gold standard for detecting COVID-19 disease in clinical practice. RT-PCR assays in the UK have analytical sensitivity and specificity of greater than 95% [1, 2], but no single gold standard assay exists. New assays are verified across panels of material, confirmed as COVID-19 by multiple testing with other assays, together with a consistent clinical and radiological picture. These new assays are often tested under idealised conditions with hospital samples containing higher viral loads than those from asymptomatic individuals living in the community. As such, diagnostic or operational performance of swab tests in the real world might differ substantially from the analytical sensitivity and specificity [2].

Although testing capacity and therefore the rate of testing in the UK and worldwide has continued to increase, more and more asymptomatic individuals have undergone testing. This growing inclusion of asymptomatic people affects the other key parameter of testing, the pretest probability, which underpins the veracity of the testing strategy. In March and early April, 2020, most people tested in the UK were severely ill patients admitted to hospitals with a high probability of infection. Since then, the number of COVID-19-related hospital admissions has decreased markedly from more than 3000 per day at the peak of the first wave, to just more than 100 in August, while the number of daily tests jumped from 11 896 on April 1, 2020, to 190 220 on Aug 1, 2020. In other words, the pretest probability will have steadily decreased as the proportion of asymptomatic cases screened increased against a background of physical distancing, lockdown, cleaning, and masks, which have reduced viral transmission to the general population. At present, only about a third of swab tests are done in those with clinical needs or in health-care workers (defined as the pillar 1 community in the UK), while the majority are done in wider community settings (pillar 2). At the end of July, 2020, the positivity rate of swab tests within both pillar 1 (1·7%) and pillar 2 (0·5%) remained significantly lower than those in early April, when positivity rates reached 50% [3].

Globally, most effort so far has been invested in turnaround times and low test sensitivity (ie, false negatives); one systematic review reported false-negative rates of between 2% and 33% in repeat sample testing [4]. Although false-negative tests have until now had priority due to the devastating consequences of undetected cases in health-care and social care settings, and the propagation of the epidemic especially by asymptomatic or mildly symptomatic patients [1], the consequences of a false-positive result are not benign from various perspectives (panel), in particular among health-care workers.

Panel: Potential consequences of false-positive COVID-19 swab test results

Individual perspective

Health-related

• For swab tests taken for screening purposes before elective procedures or surgeries: unnecessary treatment cancellation or postponement
• For swab tests taken for screening purposes during urgent hospital admissions: potential exposure to infection following a wrong pathway in hospital settings as an in-patient

Financial

• Financial losses related to self-isolation, income losses, and cancelled travel, among other factors

Psychological

• Psychological damage due to misdiagnosis or fear of infecting others, isolation, or stigmatisation

Global perspective

Financial

• Misspent funding (often originating from taxpayers) and human resources for test and trace
• Unnecessary testing
• Funding replacements in the workplace
• Various business losses

Epidemiological and diagnostic performance

• Overestimating COVID-19 incidence and the extent of asymptomatic infection
• Misleading diagnostic performance, potentially leading to mistaken purchasing or investment decisions if a new test shows high performance by identification of negative reference samples as positive (ie, is it a false positive or does the test show higher sensitivity than the other comparator tests used to establish the negativity of the test sample?)

Societal

• Misdirection of policies regarding lockdowns and school closures
• Increased depression and domestic violence (eg, due to lockdown, isolation, and loss of earnings after a positive test).

Technical problems including contamination during sampling (eg, a swab accidentally touches a contaminated glove or surface), contamination by PCR amplicons, contamination of reagents, sample cross-contamination, and cross-reactions with other viruses or genetic material could also be responsible for false-positive results [2]. These problems are not only theoretical; the US Centers for Disease Control and Prevention had to withdraw testing kits in March, 2020, when they were shown to have a high rate of false positives due to reagent contamination [5].

The current rate of operational false-positive swab tests in the UK is unknown; preliminary estimates show it could be somewhere between 0·8% and 4·0% [2, 6]. This rate could translate into a significant proportion of false-positive results daily due to the current low prevalence of the virus in the UK population, adversely affecting the positive predictive value of the test [2]. Considering that the UK National Health Service employs 1·1 million health-care workers, many of whom have been exposed to COVID-19 at the peak of the first wave, the potential disruption to health and social services due to false positives could be considerable.

Any diagnostic test result should be interpreted in the context of the pretest probability of disease. For COVID-19, the pretest probability assessment includes symptoms, previous medical history of COVID-19 or presence of antibodies, any potential exposure to COVID-19, and likelihood of an alternative diagnosis [1]. When low pretest probability exists, positive results should be interpreted with caution and a second specimen tested for confirmation. Notably, current policies in the UK and globally do not include special provisions for those who test positive despite being asymptomatic and having had laboratory-confirmed COVID-19 in the past (by RT-PCR swab test or antibodies). Prolonged viral RNA shedding, which is known to last for weeks after recovery, can be a potential reason for positive swab tests in those previously exposed to SARS-CoV-2. However, importantly, no data suggest that detection of low levels of viral RNA by RT-PCR equates with infectivity unless infectious virus particles have been confirmed with laboratory culture-based methods [7]. If viral load is low, it might need to be taken into account when assessing the validity of the result [8].

To summarise, false-positive COVID-19 swab test results might be increasingly likely in the current epidemiological climate in the UK, with substantial consequences at the personal, health system, and societal levels (panel). Several measures might help to minimise false-positive results and mitigate possible consequences. Firstly, stricter standards should be imposed in laboratory testing. This includes the development and implementation of external quality assessment schemes and internal quality systems, such as automatic blinded replication of a small number of tests for performance monitoring to ensure false-positive and false-negative rates remain low, and to permit withdrawal of a malfunctioning test at the earliest possibility. Secondly, pretest probability assessments should be considered, and clear evidence-based guidelines on interpretation of test results developed. Thirdly, policies regarding the testing and prevention of virus transmission in health-care workers might need adjustments, with an immediate second test implemented for any health-care worker testing positive. Finally, research is urgently required into the clinical and epidemiological significance of prolonged virus shedding and the role of people recovering from COVID-19 in disease transmission.
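To make the positive-predictive-value point concrete, here is a minimal sketch. The sensitivity, false-positive rate and prevalence values are illustrative, chosen from the ranges discussed above rather than taken from any official source.

```python
# Positive predictive value (PPV) under low prevalence: a minimal sketch.
# Illustrative inputs: 80% sensitivity and a 0.8% operational false-positive
# rate (the low end of the 0.8-4.0% range quoted above).
def ppv(prevalence: float, sensitivity: float, false_positive_rate: float) -> float:
    """Probability that a positive result reflects a true infection."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

for prev in (0.0005, 0.005, 0.05):   # 0.05%, 0.5%, 5% pretest probability
    print(f"prevalence {prev:.2%}: PPV = {ppv(prev, 0.80, 0.008):.1%}")
```

With these assumed numbers, a positive result in a very-low-prevalence screening setting is right only a few percent of the time, whereas the same test applied to symptomatic hospital patients is right most of the time. That is the whole pretest-probability argument in one calculation.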

Dark matter holds our universe together. No one knows what it is. Posted December 5th 2020

Dark matter, unexplained. By Brian Resnick, Nov 25, 2020, 8:30am EST. Additional reporting by Noam Hassenfeld and Byrd Pinkerton.

If you go outside on a dark night, in the darkest places on Earth, you can see as many as 9,000 stars. They appear as tiny points of light, but they are massive infernos. And while these stars seem astonishingly numerous to our eyes, they represent just the tiniest fraction of all the stars in our galaxy, let alone the universe.

The beautiful challenge of stargazing is keeping this all in mind: Every small thing we see in the night sky is immense, but what’s even more immense is the unseen, the unknown.

I’ve been thinking about this feeling — the awesome, terrifying feeling of smallness, of the extreme contrast of the big and small — while reporting on one of the greatest mysteries in science for Unexplainable, a new Vox podcast pilot you can listen to below.

It turns out all the stars in all the galaxies, in all the universe, barely even begin to account for all the stuff of the universe. Most of the matter in the universe is actually unseeable, untouchable, and, to this day, undiscovered.

Scientists call this unexplained stuff “dark matter,” and they believe there’s five times more of it in the universe than normal matter — the stuff that makes up you and me, stars, planets, black holes, and everything we can see in the night sky or touch here on Earth. It’s strange even calling all that “normal” matter, because in the grand scheme of the cosmos, normal matter is the rare stuff. But to this day, no one knows what dark matter actually is.

“I think it gives you intellectual and kind of epistemic humility — that we are simultaneously, super insignificant, a tiny, tiny speck of the universe,” Priya Natarajan, a Yale physicist and dark matter expert, said on a recent phone call. “But on the other hand, we have brains in our skulls that are like these tiny, gelatinous cantaloupes, and we have figured all of this out.”

The story of dark matter is a reminder that whatever we know, whatever truth about the universe we have acquired as individuals or as a society, is insignificant compared to what we have not yet explained.

It’s also a reminder that, often, in order to discover something true, the first thing we need to do is account for what we don’t know.

This accounting of the unknown is not often a thing that’s celebrated in science. It doesn’t win Nobel Prizes. But, at least, we can know the size of our ignorance. And that’s a start.

But how does it end? Though physicists have been trying for decades to figure out what dark matter is, the detectors they built to find it have gone silent year after year. It makes some wonder: Have they been chasing a ghost? Dark matter might not be real. Instead, there could be something more deeply flawed in physicists’ understanding of gravity that would explain it away. Still, the search, fueled by faith in scientific observations, continues, despite the possibility that dark matter may never be found.

To learn about dark matter is to grapple with, and embrace, the unknown.

The woman who told us how much we don’t know

Scientists are, to this day, searching for dark matter because they believe it is there to find. And they believe so largely because of Vera Rubin, an astronomer who died in 2016 at age 88.

Growing up in Washington, DC, in the 1930s, like so many young people getting started in science, Rubin fell in love with the night sky.

Rubin shared a bedroom and bed with her sister Ruth. Ruth was older and got to pick her favorite side of the bed, the one that faced the bedroom windows and the night sky.

“But the windows captivated Vera’s attention,” Ashley Yeager, a journalist writing a forthcoming biography on Rubin, says. “Ruth remembers Vera constantly crawling over her at night, to be able to open the windows and look out at the night sky and start to track the stars.” Ruth just wanted to sleep, and “there Vera was tinkering and trying to take pictures of the stars and trying to track their motions.”

Not everyone gets to turn their childlike wonder and captivation of the unknown into a career, but Rubin did.

Flash-forward to the late 1960s, and she’s at the Kitt Peak National Observatory near Tucson, Arizona, doing exactly what she did in that childhood bedroom: tracking the motion of stars.

This time, though, she has a cutting-edge telescope and is looking at stars in motion at the edge of the Andromeda Galaxy. Just 40 years prior, Edwin Hubble had determined, for the first time, that Andromeda was a galaxy outside of our own, and that galaxies outside our own even existed. With one observation, Hubble doubled the size of the known universe.

By 1960, scientists were still asking basic questions in the wake of this discovery. Like: How do galaxies move?

Rubin and her colleague Kent Ford were at the observatory doing this basic science, charting how stars are moving at the edge of Andromeda. “I guess I wanted to confirm Newton’s laws,” Rubin said in an archival interview with science historian David DeVorkin.

The Andromeda Galaxy.

Per Newton’s equations, the stars in the galaxy ought to move like the planets in our solar system do. Mercury, the closest planet to the sun, orbits very quickly, propelled by the sun’s gravity to a speed of around 106,000 mph. Neptune, far from the sun, and less influenced by its gravity, moves much slower, at around 12,000 mph.

The same thing ought to happen in galaxies too: Stars near the dense, gravity-rich centers of galaxies ought to move faster than the stars along the edges.
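As a quick sanity check on those planetary figures, the Newtonian circular-orbit speed is v = sqrt(GM/r). A minimal sketch follows; the constants are standard and the two orbital radii are rounded, illustrative values.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
MPH_PER_MS = 2.23694     # conversion: m/s to miles per hour

def circular_speed(mass_kg: float, radius_m: float) -> float:
    """Newtonian circular orbital speed v = sqrt(GM/r)."""
    return math.sqrt(G * mass_kg / radius_m)

for name, r in [("Mercury", 5.79e10), ("Neptune", 4.50e12)]:
    v = circular_speed(M_SUN, r)
    print(f"{name}: {v * MPH_PER_MS:,.0f} mph")   # ~107,000 and ~12,000 mph
```

The same formula is what sets the "stars near the centre should move faster" expectation for a galaxy whose mass is concentrated where the light is.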

But that wasn’t what Rubin and Ford observed. Instead, they saw that the stars along the edge of Andromeda were going the same speed as the stars in the interior. “I think it was kind of like a ‘what the fuck’ moment,” Yeager says. “It was just so different than what everyone had expected.”

On the left, what Rubin expected to see: stars orbiting the outskirts of a galaxy moving slower than those near the center. On the right, what was observed: the stars on the outside moving at the same speed as the center.

The data pointed to an enormous problem: The stars couldn’t just be moving that fast on their own.

At those speeds, the galaxy should be ripping itself apart like an accelerating merry-go-round with the brake turned off. To explain why this wasn’t happening, these stars needed some kind of extra gravity out there acting like an engine. There had to be a source of mass for all that extra gravity. (For a refresher: Physicists consider gravity to be a consequence of mass. The more mass in an area, the stronger the gravitational pull.)

The data suggested that there was a staggering amount of mass in the galaxy that astronomers simply couldn’t see. “As they’re looking out there, they just can’t seem to find any kind of evidence that it’s some normal type of matter,” Yeager says. It wasn’t black holes; it wasn’t dead stars. It was something else generating the gravity needed to both hold the galaxy together and propel those outer stars to such fast speeds.
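A compact way to see why invisible mass is the natural reading: for a circular orbit, the mass enclosed within radius r must satisfy M(r) = v^2 r / G. If the measured speed v stays flat out to large radii, the implied enclosed mass keeps growing linearly with r, long after the visible starlight has thinned out. A minimal sketch, assuming a flat rotation speed of about 230 km/s purely for illustration:

```python
import math

G = 6.674e-11                     # m^3 kg^-1 s^-2
M_SUN = 1.989e30                  # kg
KPC = 3.086e19                    # metres per kiloparsec

def enclosed_mass(v_ms: float, r_m: float) -> float:
    """Newtonian mass required inside radius r to hold a circular orbit at speed v."""
    return v_ms**2 * r_m / G

v_flat = 230e3                    # assumed flat rotation speed, m/s (illustrative)
for r_kpc in (5, 10, 20, 40):
    m = enclosed_mass(v_flat, r_kpc * KPC)
    print(f"r = {r_kpc:>2} kpc: enclosed mass ~ {m / M_SUN:.1e} solar masses")
```

Doubling the radius doubles the required mass, which is exactly the behaviour a halo of unseen matter would produce.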


“I mean, when you first see it, I think you’re afraid of being … you’re afraid of making a dumb mistake, you know, that there’s just some simple explanation,” Rubin later recounted. Other scientists might have immediately announced a dramatic conclusion based on this limited data. But not Rubin. She and her collaborators dug in and decided to do a systematic review of the star speeds in galaxies.

Rubin and Ford weren’t the first group to make an observation of stars moving fast at the edge of a galaxy. But what Rubin and her collaborators are famous for is verifying the finding across the universe. “She [studied] 20 galaxies, and then 40 and then 60, and they all show this bizarre behavior of stars out far in the galaxy, moving way, way too fast,” Yeager explains.

This is why people say Rubin ought to have won a Nobel Prize (the prizes are only awarded to living recipients, so she will never win one). She didn’t “discover” dark matter. But the data she collected over her career made it so the astronomy community had to reckon with the idea that most of the mass in the universe is unknown.

By 1985, Rubin was confident enough in her observations to declare something of an anti-eureka: announcing not a discovery, but a huge absence in our collective knowledge. “Nature has played a trick on astronomers,” she’s paraphrased as saying at an International Astronomical Union conference in 1985, “who thought we were studying the universe. We now know that we were studying only a small fraction of it.”

To this day, no one has “discovered” dark matter. But Rubin did something incredibly important: She told the scientific world about what they were missing.

In the decades since this anti-eureka, other scientists have been trying to fill in the void Rubin pointed to. Their work isn’t complete. But what they’ve been learning about dark matter is that it’s incredibly important to the very structure of our universe, and that it’s deeply, deeply weird.

Dark matter isn’t just enormous. It’s also strange.

Since Rubin’s WTF moment in the Arizona desert, more and more evidence has accumulated that dark matter is real, and weird, and accounts for most of the mass in the universe.

“Even though we can’t see it, we can still infer that dark matter is there,” Kathryn Zurek, a Caltech astrophysicist, explains. “Even if we couldn’t see the moon with our eyes, we would still know that it was there because it pulls the oceans in different directions — and it’s really very similar with dark matter.”

Scientists can’t see dark matter directly. But they can see its influence on the space and light around it. The biggest piece of indirect evidence: Dark matter, like all matter that accumulates in large quantities, has the ability to warp the very fabric of space.

“You can visualize dark matter as these lumps of matter that create little potholes in space-time,” Natarajan says. “All the matter in the universe is pockmarked with dark matter.”

When light falls into one of these potholes, it bends like light does in a lens. In this way, we can’t “see” dark matter, but we can “see” the distortions it produces in astronomers’ views of the cosmos. From this, we know dark matter forms a spherical cocoon around galaxies, lending them more mass, which allows their stars to move faster than what Newton’s laws would otherwise suggest.
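For reference, the textbook deflection formula behind this lensing picture (not stated in the article) gives the bending angle for light passing a mass M at impact parameter b:

\alpha = \frac{4 G M}{c^{2} b}

Mapping many such tiny deflections across a cluster is how dark matter maps like the one shown below are reconstructed.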

This is a NASA/ESA Hubble Space Telescope image of the galaxy cluster MACS J0717.5+3745. Shown in blue on the image is a map of the dark matter found within the cluster.

These are indirect observations, but they have given scientists some clues about the intrinsic nature of dark matter. It’s not called dark matter because of its color. It has no color. It’s called “dark” because it neither reflects nor emits light, nor any sort of electromagnetic radiation. So we can’t see it directly even with the most powerful telescopes.

Not only can we not see it, we couldn’t touch it if we tried: If some sentient alien tossed a piece of dark matter at you, it would pass right through you. If it were going fast enough, it would pass right through the entire Earth. Dark matter is like a ghost.

Here’s one reason physicists are confident in that weird fact. Astronomers have made observations of galaxy clusters that have slammed into one another like a head-on collision between two cars on the highway.

Astronomers deduced that in the collision, much of the normal matter in the galaxy clusters slowed down and mixed together (like two cars in a head-on collision would stop one another and crumple together). But the dark matter in the cluster didn’t slow down in the collision. It kept going, as if the collision didn’t even happen.

The event is recreated in this animation. The red represents normal matter in the galaxy clusters, and the blue represents dark matter. During the collision, the blue dark matter acts like a ghost, just passing through the normal colliding matter as if it weren’t there.

(A note: These two weird aspects of dark matter — its invisibility and its untouchability — are connected: Dark matter simply does not interact with the electromagnetic force of nature. The electromagnetic force lights up our universe with light and radiation, but it also makes the world feel solid.)

A final big piece of evidence for dark matter is that it helps physicists make sense of how galaxies formed in the early universe. “We know that dark matter had to be present to be part of that process,” astrophysicist Katie Mack explains. It’s believed dark matter coalesced together in the early universe before normal matter did, creating gravitational wells for normal matter to fall into. Those gravitational wells formed by dark matter became the seeds of galaxies.

So dark matter not only holds galaxies together, as Rubin’s work implied — it’s why galaxies are there in the first place.

So: What is it?

To this day, no one really knows what dark matter is.

Scientists’ best guess is that it’s a particle. Particles are the smallest building blocks of reality — they’re so small, they make up atoms. It’s thought that dark matter is just another one of these building blocks, but one we haven’t seen up close for ourselves. (There are a lot of different proposed particles that may be good dark matter candidates. Scientists still aren’t sure exactly which one it will be.)

You might be wondering: Why can’t we find the most common source of matter in all the universe? Well, our scientific equipment is made out of normal matter. So if dark matter passes right through normal matter, trying to find dark matter is like trying to catch a ghost baseball with a normal glove.

Plus, while dark matter is bountiful in the universe, it’s really diffuse. There are just not massive boulders of it passing nearby Earth. It’s more like we’re swimming in a fine mist of it. “If you add up all the dark matter inside humans, all humans on the planet at any given moment, it’s one nanogram,” Natarajan says — teeny-tiny.
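That nanogram figure is easy to sanity-check. The sketch below assumes the commonly quoted local dark matter density of roughly 0.3 GeV per cubic centimetre and treats each person as about 65 litres of volume; both inputs are rounded assumptions, not numbers from the article.

```python
# Rough check of the "one nanogram in all humans" figure (illustrative inputs).
GEV_TO_KG = 1.783e-27          # mass of 1 GeV/c^2 in kilograms
rho_dm = 0.3 * GEV_TO_KG       # local dark matter density ~0.3 GeV/cm^3, in kg/cm^3
body_volume_cm3 = 65_000       # ~65 litres per person (assumed)
people = 8e9                   # roughly the world population

total_kg = rho_dm * body_volume_cm3 * people
print(f"Dark matter inside all humans: about {total_kg * 1e12:.2f} nanograms")
# Comes out around 0.3 ng, the same order of magnitude as the figure quoted above.
```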

Dark matter may never be “discovered,” and that’s okay

Some physicists favor a different interpretation for what Rubin observed, and for what other scientists have observed since: that it’s not that there’s some invisible mass of dark matter dominating the universe, but that scientists’ fundamental understanding of gravity is flawed and needs to be reworked.

While “that’s a definite possibility,” Natarajan says, currently, there’s a lot more evidence on the side of dark matter being real and not just a mirage based on a misunderstanding of gravity. “We would need a new theory [of gravity] that can explain everything that we see already,” she explains. “There is no such theory that is currently available.”

On the left, a Hubble Space Telescope image of a galaxy cluster. On the right, a blue shading has been added to indicate where the dark matter ought to be.

It’s not hard to believe in something invisible, Mack says, if all the right evidence is there. We do it all the time.

“It’s similar to if you’re walking down the street,” she says. “And as you’re walking, you see that some trees are kind of bending over, and you hear some leaves rustling and maybe you see a plastic bag sort of floating past you and you feel a little cold on one side. You can pretty much figure out there’s wind. Right? And that wind explains all of these different phenomena. … There are many, many different pieces of evidence for dark matter. And for each of them, you might be able to find some other explanation that works just as well. But when taken together, it’s really good evidence.”

Meanwhile, experiments around the world are trying to directly detect dark matter. Physicists at the Large Hadron Collider are hoping their particle collisions may one day produce some detectable dark matter. Astronomers are looking out in space for more clues, hoping one day dark matter will reveal itself through an explosion of gamma rays. Elsewhere, scientists have burrowed deep underground, shielding labs from noise and radiation, hoping that dark matter will one day pass through a detector they’ve carefully designed and make itself known.

But it hasn’t happened yet. It may never happen: Scientists hope that dark matter isn’t a complete ghost to normal matter. They hope that every once in a while, when it collides with normal matter, it does something really, really subtle, like shove one single atom to the side, and set off a delicately constructed alarm.

But that day may never come. It could be dark matter just never prods normal matter, that it remains a ghost.

“I really did get into this business because I thought I would be detecting this within five years,” Prisca Cushman, a University of Minnesota physicist who works on a dark matter detector, says. She’s been trying to find dark matter for 20 years. She still believes it exists, that it’s out there to be discovered. But maybe it’s just not the particular candidate particle her detector was initially set up to find.

That failure isn’t a reason to give up, she says. “By not seeing [dark matter] yet with a particular detector, we’re saying, ‘Oh, so it’s not this particular model that we thought it might be.’ And that is an extremely interesting statement. Because all of a sudden an army of theorists go out and say, ‘Hey, what else could it be?’”

But even if the dark matter particle is never found, that won’t discount all science has learned about it. “It’s like you’re on a beach,” Natarajan explains. “You have a lot of sand dunes. And so we are in a situation where we are able to understand how these sand dunes form, but we don’t actually know what a grain of sand is made of.”

Embracing the unknown

Natarajan and the other physicists I spoke to for this story are comfortable with the unknown nature of dark matter. They’re not satisfied, they want to know more, but they accept it’s real. They accept it because that’s the state of the evidence. And if new evidence comes along to disprove it, they’ll have to accept that too.

“Inherent to the nature of science is the fact that whatever we know is provisional,” Natarajan says. “It is apt to change. So I think what motivates people like me to continue doing science is the fact that it keeps opening up more and more questions. Nothing is ultimately resolved.”

That’s true when it comes to the biggest questions, like “what is the universe made of?”

It’s true in so many other areas of science, too: Despite the endless headlines that proclaim new research findings that get published daily, there are many more unanswered questions than answered. Scientists don’t really understand how bicycles stay upright, or know the root cause of Alzheimer’s disease or how to treat it. Similarly, at the beginning of the Covid-19 pandemic, we craved answers: Why do some people get much sicker than others, what does immunity to the virus look like? The truth was we couldn’t yet know (and still don’t, for sure). But that didn’t mean the scientific process was broken.

The truth is, when it comes to a lot of fields of scientific progress, we’re in the middle of the story, not the end. The lesson is that truth and knowledge are hard-won.

In the case of dark matter, it wasn’t that everything we knew about matter was wrong. It was that everything we knew about normal matter was insignificant compared to our ignorance about dark matter. The story of dark matter fits with a narrative of scientific progress that makes us humans seem smaller and smaller at each turn. First, we learned that Earth wasn’t the center of the universe. Now dark matter teaches us that the very stuff we’re made of — matter — is just a fraction of all reality.

If dark matter is one day discovered, it will only open up more questions. Dark matter could be more than one particle, more than one thing. There could be a richness and diversity in dark matter that’s a little like the richness and diversity we see in normal matter. It’s possible, and this is speculation, that there’s a kind of shadow universe that we don’t have access to — scientists label it the “dark sector” — that is made up of different components and exists, as a ghost, enveloping our galaxies.

It’s a little scary to learn how little we know, to learn we don’t even know what most of the universe is made out of. But there’s a sense of optimism in a question, right? It makes you feel like we can know the answer.

There’s so much about our world that’s arrogant: from politicians who only believe in what’s convenient for them to Silicon Valley companies that claim they’re helping the world while fracturing it, and so many more examples. If only everyone could see a bit of what Vera Rubin saw — a fundamental truth not just about the universe, but about humanity.

“In a spiral galaxy, the ratio of dark-to-light matter is about a factor of 10,” Rubin said in a 2000 interview. “That’s probably a good number for the ratio of our ignorance to knowledge. We’re out of kindergarten, but only in about third grade.”



Science & Truth November 27th 2020

As previously written here, science is a methodology for the purposes of discovery, manipulation and understanding of nature. Good science requires pure and good inspiration. Unfortunately it also requires money, which is in the hands of a tiny political, military and business complex.

Hitler and the Nazis demonstrated how science can be used for evil purposes. World War Two’s victors were quick to employ the discoveries and inventions of the Nazi devils, who among other things gave us the ICBM. The British made a major contribution to nuclear, chemical and germ warfare.

Education in the dominant Anglo-United States bloc favours rich kids. As someone who taught maths and science in a British secondary modern school, it is my view that the whole curriculum is the opposite of inspiring. I have previously mentioned that, while teaching at Aylesbury’s Grange School, I was inspired to write the following punk song after watching the headmaster berating a young boy for wearing trainers.

“They limit our horizons and fit us for a task

Dressed in blue overalls and carrying a flask

Left Right, left right the headmaster roars

And mind you don’t stand talking, blocking up the doors

He’s tellin’ them they’re wearing the wrong kind of shoes

Their little faces quiver, do they know they’re goin’ to lose

All runnin’ ’round in circles and marchin’ up and down

Preparing for a life driftin’ round the town.”

That was back in 1980. The country and the world have got worse. People’s lives are boxed in – even if only a cardboard box on a cold dirty pavement.

Yesterday’s headlines warned British people that unemployment due to Covid restrictions will soon hit 2.6 million. The people dying because of this lockdown, not Covid, do not count for the well-paid mainstream media, politicians and other authorities. We are warned of higher taxes to come, but there is no mention of higher taxes for the rich whose global economy created this situation and continues to make them ever more obscenely rich and powerful. BLM, which these people back and fund, is just another divide-and-fool smokescreen to keep them safe, justifying more arrests and sentences for dissidents.

So we read below about a wonderful vaccine coming out of so much scientific uncertainty – but we must still have social distancing. The good old Nazi abuse of science and pseudo-science is alive and well. So-called scientists care only for the pay, the grant money and status. Science created World War One’s mustard gas, which blinded people. Science has long been the elite’s weapon of choice for blinding people. Education is very much part of that process. An old tutor of mine, Basil Bernstein, from my days as a postgraduate at London University, produced a set of studies called ‘Knowledge & Control’. He hit the nail on the head. R.J Cook

The End of the Pandemic Is Now in Sight

A year of scientific uncertainty is over. Two vaccines look like they will work, and more should follow. Sarah Zhang, November 18, 2020

A man holding syringes. Credit: Herb Snitzer / Getty


For all that scientists have done to tame the biological world, there are still things that lie outside the realm of human knowledge. The coronavirus was one such alarming reminder, when it emerged with murky origins in late 2019 and found naive, unwitting hosts in the human body. Even as science began to unravel many of the virus’s mysteries—how it spreads, how it tricks its way into cells, how it kills—a fundamental unknown about vaccines hung over the pandemic and our collective human fate: Vaccines can stop many, but not all, viruses. Could they stop this one?

The answer, we now know, is yes. A resounding yes. Pfizer and Moderna have separately released preliminary data that suggest their vaccines are both more than 90 percent effective, far more than many scientists expected. Neither company has publicly shared the full scope of their data, but independent clinical-trial monitoring boards have reviewed the results, and the FDA will soon scrutinize the vaccines for emergency use authorization. Unless the data take an unexpected turn, initial doses should be available in December.

The tasks that lie ahead—manufacturing vaccines at scale, distributing them via a cold or even ultracold chain, and persuading wary Americans to take them—are not trivial, but they are all within the realm of human knowledge. The most tenuous moment is over: The scientific uncertainty at the heart of COVID-19 vaccines is resolved. Vaccines work. And for that, we can breathe a collective sigh of relief. “It makes it now clear that vaccines will be our way out of this pandemic,” says Kanta Subbarao, a virologist at the Doherty Institute, who has studied emerging viruses.

The invention of vaccines against a virus identified only 10 months ago is an extraordinary scientific achievement. They are the fastest vaccines ever developed, by a margin of years. From virtually the day Chinese scientists shared the genetic sequence of a new coronavirus in January, researchers began designing vaccines that might train the immune system to recognize the still-unnamed virus. They needed to identify a suitable piece of the virus to turn into a vaccine, and one promising target was the spike-shaped proteins that decorate the new virus’s outer shell. Pfizer and Moderna’s vaccines both rely on the spike protein, as do many vaccine candidates still in development. These initial successes suggest this strategy works; several more COVID-19 vaccines may soon cross the finish line. To vaccinate billions of people across the globe and bring the pandemic to a timely end, we will need all the vaccines we can get.

But it is no accident or surprise that Moderna and Pfizer are first out of the gate. They both bet on a new and hitherto unproven idea of using mRNA, which has the long-promised advantage of speed. This idea has now survived a trial by pandemic and emerged likely triumphant. If mRNA vaccines help end the pandemic and restore normal life, they may also usher in a new era for vaccine development.


The human immune system is awesome in its power, but an untrained one does not know how to aim its fire. That’s where vaccines come in. They present a harmless snapshot of a pathogen, a “wanted” poster, if you will, that primes the immune system to recognize the real virus when it comes along. Traditionally, this snapshot could be in the form of a weakened virus or an inactivated virus or a particularly distinctive viral molecule. But those approaches require vaccine makers to manufacture viruses and their molecules, which takes time and expertise. Both are lacking during a pandemic caused by a novel virus.

mRNA vaccines offer a clever shortcut. We humans don’t need to intellectually work out how to make viruses; our bodies are already very, very good at incubating them. When the coronavirus infects us, it hijacks our cellular machinery, turning our cells into miniature factories that churn out infectious viruses. The mRNA vaccine makes this vulnerability into a strength. What if we can trick our own cells into making just one individually harmless, though very recognizable, viral protein? The coronavirus’s spike protein fits this description, and the instructions for making it can be encoded into genetic material called mRNA.

Both vaccines, from Moderna and from Pfizer’s collaboration with the smaller German company BioNTech, package slightly modified spike-protein mRNA inside a tiny protective bubble of fat. Human cells take up this bubble and simply follow the directions to make spike protein. The cells then display these spike proteins, presenting them as strange baubles to the immune system. Recognizing these viral proteins as foreign, the immune system begins building an arsenal to prepare for the moment a virus bearing this spike protein appears.

This overall process mimics the steps of infection better than some traditional vaccines, which suggests that mRNA vaccines may provoke a better immune response for certain diseases. When you inject vaccines made of inactivated viruses or viral pieces, they can’t get inside the cell, and the cell can’t present those viral pieces to the immune system. Those vaccines can still elicit proteins called antibodies, which neutralize the virus, but they have a harder time stimulating T cells, which make up another important part of the immune response. (Weakened viruses used in vaccines can get inside cells, but risk causing an actual infection if something goes awry. mRNA vaccines cannot cause infection because they do not contain the whole virus.) Moreover, inactivated viruses or viral pieces tend to disappear from the body within a day, but mRNA vaccines can continue to produce spike protein for two weeks, says Drew Weissman, an immunologist at the University of Pennsylvania, whose mRNA vaccine research has been licensed by both BioNTech and Moderna. The longer the spike protein is around, the better for an immune response.

All of this is how mRNA vaccines should work in theory. But no one on Earth, until last week, knew whether mRNA vaccines actually do work in humans for COVID-19. Although scientists had prototyped other mRNA vaccines before the pandemic, the technology was still new. None had been put through the paces of a large clinical trial. And the human immune system is notoriously complicated and unpredictable. Immunology is, as my colleague Ed Yong has written, where intuition goes to die. Vaccines can even make diseases more severe, rather than less. The data from these large clinical trials from Pfizer/BioNTech and Moderna are the first, real-world proof that mRNA vaccines protect against disease as expected. The hope, in the many years when mRNA vaccine research flew under the radar, was that the technology would deliver results quickly in a pandemic. And now it has.

“What a relief,” says Barney Graham, a virologist at the National Institutes of Health, who helped design the spike protein for the Moderna vaccine. “You can make thousands of decisions, and thousands of things have to go right for this to actually come out and work. You’re just worried that you have made some wrong turns along the way.” For Graham, this vaccine is a culmination of years of such decisions, long predating the discovery of the coronavirus that causes COVID-19. He and his collaborators had homed in on the importance of spike protein in another virus, called respiratory syncytial virus, and figured out how to make the protein more stable and thus suitable for vaccines. This modification appears in both Pfizer/BioNTech’s and Moderna’s vaccines, as well as other leading vaccine candidates.

The spectacular efficacy of these vaccines, should the preliminary data hold, likely also has to do with the choice of spike protein as vaccine target. On one hand, scientists were prepared for the spike protein, thanks to research like Graham’s. On the other hand, the coronavirus’s spike protein offered an opening. Three separate components of the immune system—antibodies, helper cells, and killer T cells—all respond to the spike protein, which isn’t the case with most viruses.

In this, we were lucky. “It’s the three punches,” says Alessandro Sette. Working with Shane Crotty, his fellow immunologist at the La Jolla Institute, Sette found that COVID-19 patients whose immune systems can marshal all three responses against the spike protein tend to fare the best. The fact that most people can recover from COVID-19 was always encouraging news; it meant a vaccine simply needed to jump-start the immune system, which could then take on the virus itself. But no definitive piece of evidence existed that proved COVID-19 vaccines would be a slam dunk. “There’s nothing like a Phase 3 clinical trial,” Crotty says. “You don’t know what’s gonna happen with a vaccine until it happens, because the virus is complicated and the immune system is complicated.”

Experts anticipate that the ongoing trials will clarify still-unanswered questions about the COVID-19 vaccines. For example, Ruth Karron, the director of the Center for Immunization Research at Johns Hopkins University, asks, does the vaccine prevent only a patient’s symptoms? Or does it keep them from spreading the virus? How long will immunity last? How well does it protect the elderly, many of whom have a weaker response to the flu vaccine? So far, Pfizer has noted that its vaccine seems to protect the elderly just as well, which is good news because they are especially vulnerable to COVID-19.

Several more vaccines using the spike protein are in clinical trials too. They rely on a suite of different vaccine technologies, including weakened viruses, inactivated viruses, viral proteins, and another fairly new concept called DNA vaccines. Never before have companies tested so many different types of vaccines against the same virus, which might end up revealing something new about vaccines in general. You now have the same spike protein delivered in many different ways, Sette points out. How will the vaccines behave differently? Will they each stimulate different parts of the immune system? And which parts are best for protecting against the coronavirus? The pandemic is an opportunity to compare different types of vaccines head-on.

If the two mRNA vaccines continue to be as good as they initially seem, their success will likely crack open a whole new world of mRNA vaccines. Scientists are already testing them against currently un-vaccinable viruses such as Zika and cytomegalovirus and trying to make improved versions of existing vaccines, such as for the flu. Another possibility lies in personalized mRNA vaccines that can stimulate the immune system to fight cancer.

But the next few months will be a test of one potential downside of mRNA vaccines: their extreme fragility. mRNA is an inherently unstable molecule, which is why it needs that protective bubble of fat, called a lipid nanoparticle. But the lipid nanoparticle itself is exquisitely sensitive to temperature. For longer-term storage, Pfizer/BioNTech’s vaccine has to be stored at –70 degrees Celsius and Moderna’s at –20 Celsius, though they can be kept at higher temperatures for a shorter amount of time. Pfizer/BioNTech and Moderna have said they can collectively supply enough doses for 22.5 million people in the United States by the end of the year.

Distributing the limited vaccines fairly and smoothly will be a massive political and logistical challenge, especially as it begins during a bitter transition of power in Washington. The vaccine is a scientific triumph, but the past eight months have made clear how much pandemic preparedness is not only about scientific research. Ensuring adequate supplies of tests and personal protective equipment, providing economic relief, and communicating the known risks of COVID-19 transmission are all well within the realm of human knowledge, yet the U.S. government has failed at all of that.

The vaccine by itself cannot slow the dangerous trajectory of COVID-19 hospitalizations this fall or save the many people who may die by Christmas. But it can give us hope that the pandemic will end. Every infection we prevent now—through masking and social distancing—is an infection that can, eventually, be prevented forever through vaccines.

Sarah Zhang is a staff writer at The Atlantic.

The Science Behind Your Cheap Wine Posted November 17th 2020

How advances in bottling, fermenting and taste-testing are democratizing a once-opaque liquid.

Smithsonian Magazine

By Ben Panko


To develop the next big mass-market wine, winemakers first hone flavor using focus groups, then add approved flavoring and coloring additives to make the drink match up with what consumers want. Credit: Zero Creatives / Getty Images.

We live in a golden age of wine, thanks in part to thirsty millennials and Americans seemingly intent on out-drinking the French. Yet for all its popularity, the sommelier’s world is largely a mysterious one. Bottles on grocery store shelves come adorned with whimsical images and proudly proclaim their region of origin, but rarely list ingredients other than grapes. Meanwhile, ordering wine at a restaurant can often mean pretending to understand terms like “mouthfeel,” “legs” or “bouquet.”

“I liked wine the same way I liked Tibetan hand puppetry or theoretical particle physics,” writes journalist Bianca Bosker in the introduction to her 2017 book Cork Dork, “which is to say I had no idea what was going on but was content to smile and nod.”

Curious about what exactly happened in this shrouded world, Bosker took off a year and a half from writing to train to become a sommelier, and talk her way into wine production facilities across the country. In the end, Bosker learned that most wine is nowhere near as “natural” as many people think—and that scientific advances have helped make cheap wine nearly as good as the expensive stuff.

“There’s an incredible amount we don’t understand about what makes wine—this thing that shakes some people to the core,” Bosker says. In particular, most people don’t realize how much chemistry goes into making a product that is supposedly just grapes and yeast, she says. Part of the reason is that, unlike food and medicines, alcoholic beverages in the U.S. aren’t covered by the Food and Drug Administration. That means winemakers aren’t required to disclose exactly what is in each bottle; all they have to reveal is the alcohol content and whether the wine has sulfites or certain food coloring additives.

In Cork Dork, published by Penguin Books, Bosker immerses herself in the world of wine and interviews winemakers and scientists to distill for the average drinking person what goes into your bottle of pinot. “One of the things that I did was to go into this wine conglomerate [Treasury Wine Estates] that produces millions of bottles of wine per year,” Bosker says. “People are there developing wine the way flavor scientists develop the new Oreo or Doritos flavor.” 

For Treasury Wine Estates, the process of developing a mass-market wine starts in a kind of “sensory insights lab,” Bosker found. There, focus groups of professional tasters blind-sample a variety of Treasury’s wine products. The best ones are then sampled by average consumers to help winemakers get a sense of which “sensory profiles” would do best in stores and restaurants, whether it be “purplish wines with blackberry aromas, or low-alcohol wines in a pink shade,” she writes.

From these baseline preferences, the winemakers take on the role of the scientist, adding a dash of acidity or a hint of red to bring their wines in line with what consumers want. Winemakers can draw on a list of more than 60 government-approved additives that can be used to tweak everything from color to acidity to even thickness. 

Then the wines can be mass-produced in huge steel vats, which hold hundreds of gallons and are often infused with oak chips to impart the flavor of real oaken barrels. Every step of this fermentation process is closely monitored, and can be altered by changing temperature or adding more nutrients for the yeast. Eventually, the wine is packaged on huge assembly lines, churning out thousands of bottles an hour that will make their way to your grocery store aisle and can sometimes sell for essentially the same price as bottled water.

“This idea of massaging grapes with the help of science is not new,” Bosker points out. The Romans, for example, added lead to their wine to make it thicker. In the Middle Ages, winemakers began adding sulfur to make wines stay fresh for longer.

However, starting in the 1970s, enologists (wine scientists) at the University of California at Davis took the science of winemaking to new heights, Bosker says. These entrepreneurial wine wizards pioneered new forms of fermentation to help prevent wine from spoiling and produce it more efficiently. Along with the wide range of additives, winemakers today can custom order yeast that will produce wine with certain flavors or characteristics. Someday soon, scientists might even build yeast from scratch.

Consumers most commonly associate these kinds of additives with cheap, mass-produced wines like Charles Shaw (aka “Two Buck Chuck”) or Barefoot. But even the most expensive red wines often have their color boosted with the use of “mega-red” or “mega-purple” juice from other grape varieties, says Davis enologist Andrew Waterhouse. Other common manipulations include adding acidity with tartaric acid to compensate for the less acidic grapes grown in warmer climates, or adding sugar to compensate for the more acidic grapes grown in cooler climates.

Tannins, a substance found in grape skins, can be added to make a wine taste “drier” (less sweet) and polysaccharides can even be used to give the wine a “thicker mouthfeel,” meaning the taste will linger more on the tongue.
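As a back-of-the-envelope illustration of the acid adjustment mentioned above (my own sketch, with made-up numbers rather than figures from the article), winemakers often work from the rough rule of thumb that adding about 1 g of tartaric acid per litre raises titratable acidity (TA) by roughly 1 g/L:

```python
def tartaric_addition_grams(current_ta, target_ta, volume_litres):
    """Grams of tartaric acid to add, using the rough rule of thumb that
    ~1 g/L of tartaric acid raises titratable acidity (TA) by ~1 g/L.
    All figures here are hypothetical and for illustration only."""
    if target_ta <= current_ta:
        return 0.0
    return (target_ta - current_ta) * volume_litres

# A warm-climate batch at 5.0 g/L TA, aimed at 6.5 g/L, in a 1,000-litre tank:
print(tartaric_addition_grams(current_ta=5.0, target_ta=6.5, volume_litres=1000))
# -> 1500.0 grams, i.e. about 1.5 kg of tartaric acid for the tank
```

Real cellar practice involves bench trials and tasting rather than arithmetic alone, but the sketch shows how routine and quantified these adjustments are.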

When asked if there was any truth to the oft-repeated legend that cheap wine is bound to give more headaches and worse hangovers, Waterhouse was skeptical. “There’s no particular reason that I can think of that expensive wine is better than cheap wine,” Waterhouse says. He adds, however, that there isn’t good data on the topic. “As you might suspect, the [National Institutes of Health] can’t make wine headaches a high priority,” he says.

Instead, Waterhouse suggests, there may be a simpler explanation: “It’s just possible that people tend to drink more wine when it’s cheap.”

While this widespread use of additives may make some natural-foods consumers cringe, Bosker found no safety or health issues to worry about in her research. Instead, she credits advancements in wine science with improving the experience of wine for most people by “democratizing quality.” “The technological revolution that has taken place in the winery has actually elevated the quality of really low-end wines,” Bosker says.

The main issue she has with the modern wine industry is that winemakers aren’t usually transparent with all of their ingredients—because they don’t have to be. “I find it outrageous that most people don’t realize that their fancy Cabernet Sauvignon has actually been treated with all kinds of chemicals,” Bosker says. 

Yet behind those fancy labels and bottles and newfangled chemical manipulation, the biggest factor influencing the price of wine is an old one: terroir, or the qualities a wine draws from the region where it was grown. Famous winemaking areas such as Bordeaux, France, or Napa Valley, California, can still command prices 10 times higher than equally productive grape-growing land in other areas, says Waterhouse. Many of these winemakers grow varieties of grapes that produce a smaller quantity but are considered by winemakers to be far higher quality.

“Combine the low yield and the high cost of the land, and there’s a real structural difference in the pricing of those wines,” Waterhouse says. Yet as winemakers continue to advance the science of making, cultivating and bottling this endlessly desirable product, that may soon change. After all, as Bosker says, “wine and science have always gone hand in hand.”

Ben Panko is a staff writer for Smithsonian.com

By Iain Baird on 15 May 2011

Colour television in Britain

Beginning in the late 1960s, British households began the rather expensive process of investing in their first colour televisions, causing the act of viewing to change dramatically.

Larger screens, sharper images and, of course, colour meant that the television audience experienced a feeling of greater realism while viewing—an enhanced sense of actually “being there”. Programmers sought to attract their new audience with brightly-coloured fare such as The Avengers, Z Cars, Dad’s Army, and The Prisoner. This important change was difficult to recognise because it was so gradual; many households did not buy colour sets right away. Nor did it help that for several years many programmes were still only available in black-and-white.

Invention

Colour television was first demonstrated publicly by John Logie Baird on 3 July 1928 in his laboratory at 133 Long Acre in London. The technology used was electro-mechanical, and the early test subject was a basket of strawberries “which proved popular with the staff”. The following month, the same demonstration was given to a mostly academic audience attending a British Association for the Advancement of Science meeting in Glasgow.

In the mid-to-late 1930s, Baird returned to his colour television research and developed some of the world’s first colour television systems, most of which used cathode-ray tubes. World War II, which saw the BBC television service suspended, caused his company to go out of business and ended his salary. Nonetheless, he continued his colour television research by financing it from his own personal savings, including cashing in his life insurance policy. He gave the world’s first demonstration of a fully integrated electronic colour picture tube on 16 August 1944. Baird’s untimely death only two years later marked the end of his pioneering colour research in Britain.

The lead in colour television research transferred to the USA with demonstrations given by CBS Laboratories. Soon after, the Radio Corporation of America (RCA) channelled some of its massive resources towards colour television development.

Broadcast

The world’s first proper colour television service began in the USA. Colour television was available in select cities beginning in 1954 using the NTSC (National Television Standards Committee)-compatible colour system championed by RCA. A small fledgling colour service introduced briefly by CBS in 1951 was stopped after RCA complained to the FCC (Federal Communications Commission) that it was not compatible with the existing NTSC black-and-white television sets.

Meanwhile in Britain, several successful colour television tests were carried out, but it would take many more years for a public service to become viable here due to post-war austerity and uncertainty about what kind of colour television system would be the best one for Britain to adopt—and when.

On 1 July 1967, BBC2 launched Europe’s first colour service with the Wimbledon tennis championships, presented by David Vine. This was broadcast using the Phase Alternating Line (PAL) system, which was based on the work of the German television engineer Walter Bruch. PAL seemed the obvious solution, the signal to the British television industry that the time for a public colour television service had finally arrived. PAL was a marked improvement over the American NTSC system on which it was based; the American system was soon dubbed “never twice the same colour” in comparison to PAL.
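As a rough numerical sketch of Bruch’s idea (mine, not the article’s): PAL reverses the sign of one colour-difference component on alternate lines, so that when a receiver averages two adjacent lines, a transmission phase error largely cancels out instead of shifting the hue, which was the flaw that plagued NTSC.

```python
import numpy as np

# Toy model: chroma as a complex number C = U + jV; a transmission phase
# error of phi radians rotates it by exp(j*phi).
U, V = 0.3, 0.2                      # illustrative colour-difference values
C = complex(U, V)
phi = np.deg2rad(20)                 # an exaggerated 20-degree phase error
rot = np.exp(1j * phi)

# NTSC-style: the rotated chroma is decoded directly, so the hue shifts.
ntsc = C * rot

# PAL-style: V is negated on alternate lines; the receiver flips the second
# line back (complex conjugate) and averages, so the phase error cancels.
line1 = C * rot                      # line n:   U + jV, rotated by phi
line2 = complex(U, -V) * rot         # line n+1: U - jV, rotated by phi
pal = (line1 + np.conj(line2)) / 2   # = C * cos(phi): hue preserved

print("original hue (deg):", round(np.degrees(np.angle(C)), 1))
print("NTSC hue (deg):    ", round(np.degrees(np.angle(ntsc)), 1))  # shifted ~20 deg
print("PAL hue (deg):     ", round(np.degrees(np.angle(pal)), 1))   # unchanged
```

The averaged PAL result keeps the original hue and loses only a little saturation, which the eye tolerates far better than a hue shift.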

On 15 November 1969, colour broadcasting went live on the remaining two channels, BBC1 and ITV, which were in fact more popular than BBC2. Only about half of the national population was brought within the range of colour signals by 15th November, 1969. Colour could be received in the London Weekend Television/Thames region, ATV (Midlands), Granada (North-West) and Yorkshire TV regions. ITV’s first colour programmes in Scotland appeared on 13 December 1969 in Central Scotland; in Wales on 6 April 1970 in South Wales; and in Northern Ireland on 14 September 1970 in the eastern parts.

Colour TV licences were introduced on 1 January 1968, costing £10—twice the price of the standard £5 black and white TV licence.

Programmes

The BBC and ITV sought programmes that could exploit this new medium of colour television. Major sporting events were linked to colour television from the very start of the technology being made available. Snooker, with its rainbow of different-coloured balls, was ideal. On 23 July 1969, BBC2’s Pot Black was born, a series of non-ranking snooker tournaments. It would run until 1986, with one-off programmes continuing up to the present day.

The first official colour programme on BBC1 was a concert by Petula Clark from the Royal Albert Hall, London, broadcast at midnight on 14/15 November 1969. This might seem an odd hour to launch a colour service, but is explained by the fact that the Postmaster General’s colour broadcasting licence began at exactly this time.

The first official colour programme on ITV was a Royal Auto Club Road Report at 09.30 am, followed at 09.35 by The Growing Summer, London Weekend Television’s first colour production for children, starring Wendy Hiller. This was followed at 11.00 by Thunderbirds. The episode was ‘City of Fire’, which also became the first programme to feature a colour advertisement, for Birds Eye peas.

The 9th World Cup finals in Mexico, 1970, were not only the very first to be televised in colour, but also the first that viewers in Europe were able to watch live via trans-Atlantic satellite.

Colour TV sets

Colour TV sets did not outnumber black-and-white sets until 1976, mainly due to the high price of the early colour sets. Colour receivers were almost as expensive in real terms as the early black and white sets had been; the monthly rental for a large-screen receiver was £8. In March, 1969, there were only 100,000 colour TV sets in use; by the end of 1969 this had doubled to 200,000; and by 1972 there were 1.6 million.

Iain Baird

Iain was Associate Curator of Television and Radio at the National Science and Media Museum until 2016.

Color motion picture film

From Wikipedia, the free encyclopedia

Still from test film made by Edward Turner in 1902

Color motion picture film refers both to unexposed color photographic film in a format suitable for use in a motion picture camera, and to finished motion picture film, ready for use in a projector, which bears images in color.

The first color cinematography was by additive color systems such as the one patented by Edward Raymond Turner in 1899 and tested in 1902.[1] A simplified additive system was successfully commercialized in 1909 as Kinemacolor. These early systems used black-and-white film to photograph and project two or more component images through different color filters.

During 1920, the first practical subtractive color processes were introduced. These also used black-and-white film to photograph multiple color-filtered source images, but the final product was a multicolored print that did not require special projection equipment. Before 1932, when three-strip Technicolor was introduced, commercialized subtractive processes used only two color components and could reproduce only a limited range of color.

In 1935, Kodachrome was introduced, followed by Agfacolor in 1936. They were intended primarily for amateur home movies and “slides“. These were the first films of the “integral tripack” type, coated with three layers of differently color-sensitive emulsion, which is usually what is meant by the words “color film” as commonly used. The few color photographic films still being made in the 2010s are of this type. The first color negative films and corresponding print films were modified versions of these films. They were introduced around 1940 but only came into wide use for commercial motion picture production in the early 1950s. In the US, Eastman Kodak‘s Eastmancolor was the usual choice, but it was often re-branded with another trade name, such as “WarnerColor”, by the studio or the film processor.

Later color films were standardized into two distinct processes: Eastman Color Negative 2 chemistry (camera negative stocks, duplicating interpositive and internegative stocks) and Eastman Color Positive 2 chemistry (positive prints for direct projection), usually abbreviated as ECN-2 and ECP-2. Fuji’s products are compatible with ECN-2 and ECP-2.

Film was the dominant form of cinematography until the 2010s, when it was largely replaced by digital cinematography.[2]

Overview

The first motion pictures were photographed using a simple homogeneous photographic emulsion that yielded a black-and-white image—that is, an image in shades of gray, ranging from black to white, corresponding to the luminous intensity of each point on the photographed subject. Light, shade, form and movement were captured, but not color.

With color motion picture film, information about the color of the light at each image point is also captured. This is done by analyzing the visible spectrum of color into several regions (normally three, commonly referred to by their dominant colors: red, green and blue) and recording each region separately.

Current color films do this with three layers of differently color-sensitive photographic emulsion coated on one strip of film base. Early processes used color filters to photograph the color components as completely separate images (e.g., three-strip Technicolor) or adjacent microscopic image fragments (e.g., Dufaycolor) in a one-layer black-and-white emulsion.

Each photographed color component, initially just a colorless record of the luminous intensities in the part of the spectrum that it captured, is processed to produce a transparent dye image in the color complementary to the color of the light that it recorded. The superimposed dye images combine to synthesize the original colors by the subtractive color method. In some early color processes (e.g., Kinemacolor), the component images remained in black-and-white form and were projected through color filters to synthesize the original colors by the additive color method.

Tinting and hand coloring

See also: List of early color feature films

The earliest motion picture stocks were orthochromatic, and recorded blue and green light, but not red. Recording all three spectral regions required making film stock panchromatic to some degree. Since orthochromatic film stock hindered color photography in its beginnings, the first films with color in them used aniline dyes to create artificial color. Hand-colored films appeared in 1895 with Thomas Edison‘s hand-painted Annabelle’s Dance for his Kinetoscope viewers.

Many early filmmakers from the first ten years of film also used this method to some degree. George Méliès offered hand-painted prints of his own films at an additional cost over the black-and-white versions, including the visual-effects-pioneering A Trip to the Moon (1902). Various parts of the film were painted frame-by-frame by twenty-one women in Montreuil[3] in a production-line method.[4]

The first commercially successful stencil color process was introduced in 1905 by Segundo de Chomón working for Pathé Frères. Pathé Color, renamed Pathéchrome in 1929, became one of the most accurate and reliable stencil coloring systems. A stencil was cut from an original print of the film by pantograph in the appropriate areas for up to six colors,[3] and the dye was applied by a coloring machine with dye-soaked velvet rollers.[5] After a stencil had been made for the whole film, it was placed into contact with the print to be colored and run at high speed (60 feet per minute) through the coloring (staining) machine. The process was repeated for each set of stencils corresponding to a different color. By 1910, Pathé had over 400 women employed as stencilers in their Vincennes factory. Pathéchrome continued production through the 1930s.[3]

A more common technique emerged in the early 1910s known as film tinting, a process in which either the emulsion or the film base is dyed, giving the image a uniform monochromatic color. This process was popular during the silent era, with specific colors employed for certain narrative effects (red for scenes with fire or firelight, blue for night, etc.).[4]

A complementary process, called toning, replaces the silver particles in the film with metallic salts or mordanted dyes. This creates a color effect in which the dark parts of the image are replaced with a color (e.g., blue and white rather than black and white). Tinting and toning were sometimes applied together.[4]

In the United States, St. Louis engraver Max Handschiegl and cinematographer Alvin Wyckoff created the Handschiegl Color Process, a dye-transfer equivalent of the stencil process, first used in Joan the Woman (1917) directed by Cecil B. DeMille, and used in special effects sequences for films such as The Phantom of the Opera (1925).[3]

Eastman Kodak introduced its own system of pre-tinted black-and-white film stocks called Sonochrome in 1929. The Sonochrome line featured films tinted in seventeen different colors including Peachblow, Inferno, Candle Flame, Sunshine, Purple Haze, Firelight, Azure, Nocturne, Verdante, Aquagreen,[6] Caprice, Fleur de Lis, Rose Doree, and the neutral-density Argent, which kept the screen from becoming excessively bright when switching to a black-and-white scene.[3]

Tinting and toning continued to be used well into the sound era. In the 1930s and 1940s, some western films were processed in a sepia-toning solution to evoke the feeling of old photographs of the day. Tinting was used as late as 1951 for Sam Newfield‘s sci-fi film Lost Continent for the green lost-world sequences. Alfred Hitchcock used a form of hand-coloring for the orange-red gun-blast at the audience in Spellbound (1945).[3] Kodak’s Sonochrome and similar pre-tinted stocks were still in production until the 1970s and were used commonly for custom theatrical trailers and snipes.

In the last half of the 20th century, Norman McLaren, one of the pioneers of animated film, made several animated films in which he directly hand-painted the images, and in some cases also the soundtrack, on each frame of film. This approach had been employed in the early years of movies, in the late 19th and early 20th century. Among the precursors of frame-by-frame colour hand-painting were the Aragonese Segundo de Chomón and his French wife Julienne Mathieu, who were Méliès’ close competitors.

Tinting was gradually replaced by natural color techniques.

Physics of light and color

The principles on which color photography is based were first proposed by Scottish physicist James Clerk Maxwell in 1855 and presented at the Royal Society in London in 1861.[3] By that time, it was known that light comprises a spectrum of different wavelengths that are perceived as different colors as they are absorbed and reflected by natural objects. Maxwell discovered that all natural colors in this spectrum as perceived by the human eye may be reproduced with additive combinations of three primary colors (red, green, and blue) which, when mixed equally, produce white light.[3]

Between 1900 and 1935, dozens of natural color systems were introduced, although only a few were successful.[6]

Additive color

The first color systems that appeared in motion pictures were additive color systems. Additive color was practical because no special color stock was necessary. Black-and-white film could be processed and used in both filming and projection. The various additive systems entailed the use of color filters on both the movie camera and projector. Additive color adds lights of the primary colors in various proportions to the projected image. Because of the limited amount of space to record images on film, and later because of the lack of a camera that could record more than two strips of film at once, most early motion-picture color systems consisted of two colors, often red and green or red and blue.[4]

A pioneering three-color additive system was patented in England by Edward Raymond Turner in 1899.[7] It used a rotating set of red, green and blue filters to photograph the three color components one after the other on three successive frames of panchromatic black-and-white film. The finished film was projected through similar filters to reconstitute the color. In 1902, Turner shot test footage to demonstrate his system, but projecting it proved problematic because accurate registration (alignment) of the three separate color elements was required for acceptable results. Turner died a year later without having satisfactorily projected the footage. In 2012, curators at the National Media Museum in Bradford, UK, had the original custom-format nitrate film copied to black-and-white 35 mm film, which was then scanned into a digital video format by telecine. Finally, digital image processing was used to align and combine each group of three frames into one color image.[8] As a result, these films from 1902 became viewable in full color.[9]

Film clips: A Visit to the Seaside (1908), the first motion picture in Kinemacolor; With Our King and Queen Through India (extract).
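The 2012 reconstruction described above boils down to treating each run of three successive black-and-white frames as the red, green and blue records of a single colour image and stacking them. A minimal sketch of that idea follows (illustrative only; the museum’s actual pipeline involved telecine scanning and careful frame registration, which is omitted here):

```python
import numpy as np

def combine_successive_frames(frames):
    """Combine each run of three successive greyscale frames, assumed to be
    the red, green and blue records of one scene (as in Turner's system),
    into colour images. Frame registration/alignment is omitted here."""
    colour = []
    for i in range(0, len(frames) - 2, 3):
        r, g, b = frames[i], frames[i + 1], frames[i + 2]
        colour.append(np.stack([r, g, b], axis=-1))
    return colour

# Illustrative use, with random arrays standing in for scanned film frames:
scans = [np.random.rand(480, 640) for _ in range(9)]
frames_rgb = combine_successive_frames(scans)
print(len(frames_rgb), frames_rgb[0].shape)   # -> 3 (480, 640, 3)
```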

Practical color in the motion picture business began with Kinemacolor, first demonstrated in 1906.[5] This was a two-color system created in England by George Albert Smith, and promoted by film pioneer Charles Urban‘s The Charles Urban Trading Company in 1908. It was used for a series of films including the documentary With Our King and Queen Through India, depicting the Delhi Durbar (also known as The Durbar at Delhi, 1912), which was filmed in December 1911. The Kinemacolor process consisted of alternating frames of specially sensitized black-and-white film exposed at 32 frames per second through a rotating filter with alternating red and green areas. The printed film was projected through similar alternating red and green filters at the same speed. A perceived range of colors resulted from the blending of the separate red and green alternating images by the viewer’s persistence of vision.[4][10]

William Friese-Greene invented another additive color system called Biocolour, which was developed by his son Claude Friese-Greene after William’s death in 1921. William sued George Albert Smith, alleging that the Kinemacolor process infringed on the patents for his Bioschemes, Ltd.; as a result, Smith’s patent was revoked in 1914.[3] Both Kinemacolor and Biocolour had problems with “fringing” or “haloing” of the image, due to the separate red and green images not fully matching up.[3]

By their nature, these additive systems were very wasteful of light. Absorption by the color filters involved meant that only a minor fraction of the projection light actually reached the screen, resulting in an image that was dimmer than a typical black-and-white image. The larger the screen, the dimmer the picture. For this and other case-by-case reasons, the use of additive processes for theatrical motion pictures had been almost completely abandoned by the early 1940s, though additive color methods are employed by all the color video and computer display systems in common use today.[4]

Subtractive color

The first practical subtractive color process was introduced by Kodak as “Kodachrome”, a name recycled twenty years later for a very different and far better-known product. Filter-photographed red and blue-green records were printed onto the front and back of one strip of black-and-white duplitized film. After development, the resulting silver images were bleached away and replaced with color dyes, red on one side and cyan on the other. The pairs of superimposed dye images reproduced a useful but limited range of color. Kodak’s first narrative film with the process was a short subject entitled Concerning $1000 (1916). Though their duplitized film provided the basis for several commercialized two-color printing processes, the image origination and color-toning methods constituting Kodak’s own process were little-used.

The first truly successful subtractive color process was William van Doren Kelley’s Prizma,[11] an early color process that was first introduced at the American Museum of Natural History in New York City on 8 February 1917.[12][13] Prizma began in 1916 as an additive system similar to Kinemacolor.

However, after 1917, Kelley reinvented the process as a subtractive one, spending several years producing short films and travelogues such as Everywhere With Prizma (1919) and A Prizma Color Visit to Catalina (1919) before releasing features such as the documentary Bali the Unknown (1921), The Glorious Adventure (1922), and Venus of the South Seas (1924). A Prizma promotional short filmed for Del Monte Foods titled Sunshine Gatherers (1921) is available on DVD in Treasures 5 The West 1898–1938 by the National Film Preservation Foundation.

The invention of Prizma led to a series of similarly printed color processes. This bipack color system used two strips of film running through the camera, one recording red, and one recording blue-green light. With the black-and-white negatives being printed onto duplitized film, the color images were then toned red and blue, effectively creating a subtractive color print.

Leon Forrest Douglass (1869–1940), a founder of Victor Records, developed a system he called Naturalcolor, and first showed a short test film made in the process on 15 May 1917 at his home in San Rafael, California. The only feature film known to have been made in this process, Cupid Angling (1918) — starring Ruth Roland and with cameo appearances by Mary Pickford and Douglas Fairbanks — was filmed in the Lake Lagunitas area of Marin County, California.[14]

After experimenting with additive systems (including a camera with two apertures, one with a red filter, one with a green filter) from 1915 to 1921, Dr. Herbert Kalmus, Dr. Daniel Comstock, and mechanic W. Burton Wescott developed a subtractive color system for Technicolor. The system used a beam splitter in a specially modified camera to send red and green light to adjacent frames of one strip of black-and-white film. From this negative, skip-printing was used to print each color’s frames contiguously onto film stock with half the normal base thickness. The two prints were chemically toned to roughly complementary hues of red and green,[5] then cemented together, back to back, into a single strip of film. The first film to use this process was The Toll of the Sea (1922) starring Anna May Wong. Perhaps the most ambitious film to use it was The Black Pirate (1926), starring and produced by Douglas Fairbanks.

The process was later refined through the incorporation of dye imbibition, which allowed for the transferring of dyes from both color matrices into a single print, avoiding several problems that had become evident with the cemented prints and allowing multiple prints to be created from a single pair of matrices.[4]

Technicolor’s early system was in use for several years, but it was a very expensive process: shooting cost three times that of black-and-white photography, and printing costs were no cheaper. By 1932, color photography in general had nearly been abandoned by major studios, until Technicolor developed a new advancement to record all three primary colors. Utilizing a special dichroic beam splitter equipped with two 45-degree prisms in the form of a cube, light from the lens was deflected by the prisms and split into two paths to expose each one of three black-and-white negatives (one each to record the densities for red, green, and blue).[15]

The three negatives were then printed to gelatin matrices, which also completely bleached the image, washing out the silver and leaving only the gelatin record of the image. A receiver print, consisting of a 50% density print of the black-and-white negative for the green record strip, and including the soundtrack, was struck and treated with dye mordants to aid in the imbibition process (this “black” layer was discontinued in the early 1940s). The matrices for each strip were coated with their complementary dye (yellow, cyan, or magenta), and then each successively brought into high-pressure contact with the receiver, which imbibed and held the dyes, which collectively rendered a wider spectrum of color than previous technologies.[16] The first animation film with the three-color (also called three-strip) system was Walt Disney‘s Flowers and Trees (1932), the first short live-action film was La Cucaracha (1934), and the first feature was Becky Sharp (1935).[5]

Gasparcolor, a single-strip 3-color system, was developed in 1933 by the Hungarian chemist Dr. Bela Gaspar.[17]

The real push for color films, and the nearly immediate changeover from black-and-white production to nearly all-color production, came with the prevalence of television in the early 1950s. In 1947, only 12 percent of American films were made in color. By 1954, that number rose to over 50 percent.[3] The rise in color films was also aided by the breakup of Technicolor’s near monopoly on the medium.

In 1947, the United States Justice Department filed an antitrust suit against Technicolor for monopolization of color cinematography (even though rival processes such as Cinecolor and Trucolor were in general use). In 1950, a federal court ordered Technicolor to allot a number of its three-strip cameras for use by independent studios and filmmakers. Although this certainly affected Technicolor, its real undoing was the invention of Eastmancolor that same year.[3]

Monopack color film

A strip of undeveloped 35 mm color negative.

In the field of motion pictures, the many-layered type of color film normally called an integral tripack in broader contexts has long been known by the less tongue-twisting term monopack. For many years, Monopack (capitalized) was a proprietary product of Technicolor Corp, whereas monopack (not capitalized) generically referred to any of several single-strip color film products, including various Eastman Kodak products. It appeared that Technicolor made no attempt to register Monopack as a trademark with the US Patent and Trademark Office, although it asserted that term as if it were a registered trademark, and it had the force of a legal agreement between it and Eastman Kodak to back up that assertion. It was a solely-sourced product, too, as Eastman Kodak was legally prevented from marketing any color motion picture film products wider than 16mm, 35mm specifically, until the expiration of the so-called “Monopack Agreement” in 1950. This was so notwithstanding the fact that Technicolor never had the capability to manufacture sensitized motion picture film of any kind, nor single-strip color films based upon its so-called “Troland Patent” (which Technicolor maintained covered all monopack-type films in general, and which Eastman Kodak elected not to contest as Technicolor was then one of its largest customers, if not its largest customer). After 1950, Eastman Kodak was free to make and market color films of any kind, particularly including monopack color motion picture films in 65/70mm, 35mm, 16mm and 8mm. The “Monopack Agreement” had no effect on color still films.

Monopack color films are based on the subtractive color system, which filters colors from white light by using superimposed cyan, magenta and yellow dye images. Those images are created from records of the amounts of red, green and blue light present at each point of the image formed by the camera lens. A subtractive primary color (cyan, magenta, yellow) is what remains when one of the additive primary colors (red, green, blue) has been removed from the spectrum. Eastman Kodak’s monopack color films incorporated three separate layers of differently color sensitive emulsion into one strip of film. Each layer recorded one of the additive primaries and was processed to produce a dye image in the complementary subtractive primary.
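As a toy numerical illustration of the subtractive principle just described (my own sketch, not part of the encyclopedia text), each layer’s dye density can be treated as the complement of the corresponding additive record, and projection through the stacked dyes recovers the original colour from white light (assuming ideal dyes with no cross-absorption):

```python
# Toy model of the subtractive principle, with values in the range [0, 1]
# and ideal (perfectly selective) dyes.

def records_to_dyes(r, g, b):
    # Each layer forms dye in the complementary subtractive primary:
    # red record -> cyan, green record -> magenta, blue record -> yellow.
    return 1 - r, 1 - g, 1 - b          # (cyan, magenta, yellow) densities

def project_through_dyes(c, m, y):
    # White light filtered through the stacked dyes gives back the colour.
    return 1 - c, 1 - m, 1 - y          # (red, green, blue) reaching the screen

cyan, magenta, yellow = records_to_dyes(0.8, 0.4, 0.1)   # an orange-ish image point
print(project_through_dyes(cyan, magenta, yellow))        # ~(0.8, 0.4, 0.1) again
```

Real dyes absorb outside their intended bands, which is one reason colour negative stocks carry the familiar orange mask and why printing requires careful colour timing.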

Kodachrome was the first commercially successful application of monopack multilayer film, introduced in 1935.[18] For professional motion picture photography, Kodachrome Commercial, on a 35mm BH-perforated base, was available exclusively from Technicolor, as its so-called “Technicolor Monopack” product. Similarly, for sub-professional motion picture photography, Kodachrome Commercial, on a 16mm base, was available exclusively from Eastman Kodak. In both cases, Eastman Kodak was the sole manufacturer and the sole processor. In the 35mm case, Technicolor dye-transfer printing was a “tie-in” product.[19] In the 16mm case, there were Eastman Kodak duplicating and printing stocks and associated chemistry, not the same as a “tie-in” product. In exceptional cases, Technicolor offered 16mm dye-transfer printing, but this necessitated the exceptionally wasteful process of printing on a 35mm base, only thereafter to be re-perforated and re-slit to 16mm, thereby discarding slightly more than one-half of the end product.

A late modification to the “Monopack Agreement”, the “Imbibition Agreement”, finally allowed Technicolor to economically manufacture 16mm dye-transfer prints as so-called “double-rank” 35/32mm prints (two 16mm prints on a 35mm base that was originally perforated at the 16mm specification for both halves, and was later re-slit into two 16mm wide prints without the need for re-perforation). This modification also facilitated the early experiments by Eastman Kodak with its negative-positive monopack film, which eventually became Eastmancolor. Essentially, the “Imbibition Agreement” lifted a portion of the “Monopack Agreement’s” restrictions on Technicolor (which prevented it from making motion picture products less than 35mm wide) and somewhat related restrictions on Eastman Kodak (which prevented it from experimenting and developing monopack products greater than 16mm wide).

Eastmancolor, introduced in 1950,[20] was Kodak’s first economical, single-strip 35 mm negative-positive process incorporated into one strip of film. This eventually rendered Three-Strip color photography obsolete, even though, for the first few years of Eastmancolor, Technicolor continued to offer Three-Strip origination combined with dye-transfer printing (150 titles produced in 1953, 100 titles produced in 1954 and 50 titles produced in 1955, the last year for Three-Strip as camera negative stock). The first commercial feature film to use Eastmancolor was the documentary Royal Journey, released in December 1951.[20] Hollywood studios waited until an improved version of Eastmancolor negative came out in 1952 before using it; This is Cinerama was an early film which employed three separate and interlocked strips of Eastmancolor negative. This is Cinerama was initially printed on Eastmancolor positive, but its significant success eventually resulted in it being reprinted by Technicolor, using dye-transfer.

By 1953, and especially with the introduction of anamorphic wide screen CinemaScope, Eastmancolor became a marketing imperative as CinemaScope was incompatible with Technicolor’s Three-Strip camera and lenses. Indeed, Technicolor Corp became one of the best, if not the best, processors of Eastmancolor negative, especially for so-called “wide gauge” negatives (5-perf 65mm, 6-perf 35mm), yet it far preferred its own 35mm dye-transfer printing process for Eastmancolor-originated films with a print run that exceeded 500 prints,[21] notwithstanding the significant “loss of register” that occurred in such prints that were expanded by CinemaScope’s 2X horizontal factor, and, to a lesser extent, with so-called “flat wide screen” (variously 1.66:1 or 1.85:1, but spherical and not anamorphic). This nearly fatal flaw was not corrected until 1955 and caused numerous features initially printed by Technicolor to be scrapped and reprinted by DeLuxe Labs. (These features are often billed as “Color by Technicolor-DeLuxe”.) Indeed, some Eastmancolor-originated films billed as “Color by Technicolor” were never actually printed using the dye-transfer process, due in part to the throughput limitations of Technicolor’s dye-transfer printing process, and competitor DeLuxe’s superior throughput. Incredibly, DeLuxe once had a license to install a Technicolor-type dye-transfer printing line, but as the “loss of register” problems became apparent in Fox’s CinemaScope features that were printed by Technicolor, after Fox had become an all-CinemaScope producer, Fox-owned DeLuxe Labs abandoned its plans for dye-transfer printing and became, and remained, an all-Eastmancolor shop, as Technicolor itself later became.

Technicolor continued to offer its proprietary imbibition dye-transfer printing process for projection prints until 1975, and even briefly revived it in 1998. As an archival format, Technicolor prints are one of the most stable color print processes yet created, and prints properly cared for are estimated to retain their color for centuries.[22] With the introduction of Eastmancolor low-fade positive print (LPP) films, properly stored (at 45 °F or 7 °C and 25 percent relative humidity) monopack color film is expected to last, with no fading, a comparative amount of time. Improperly stored monopack color film from before 1983 can incur a 30 percent image loss in as little as 25 years.[23]

Functionality

A representation of the layers within a piece of developed color 35 mm negative film. When developed, the dye couplers in the blue-, green-, and red-sensitive layers turn the exposed silver halide crystals to their complementary colors (yellow, magenta, and cyan). The film is made up of (A) Clear protective topcoat, (B) UV filter, (C) “Fast” blue layer, (D) “Slow” blue layer, (E) Yellow filter to cut all blue light from passing through to (F) “Fast” green layer, (G) “Slow” green layer, (H) Inter (subbing) layer, (I) “Fast” red layer, (J) “Slow” red layer, (K) Clear triacetate base, and (L) Antihalation (rem-jet) backing.

A color film is made up of many different layers that work together to create the color image. Color negative films provide three main color layers: the blue record, green record, and red record; each made up of two separate layers containing silver halide crystals and dye-couplers. A cross-sectional representation of a piece of developed color negative film is shown in the figure at the right. Each layer of the film is so thin that the composite of all layers, in addition to the triacetate base and antihalation backing, is less than 0.0003″ (8 µm) thick.[24]

The three color records are stacked as shown at right, with a UV filter on top to keep non-visible ultraviolet radiation from exposing the silver-halide crystals, which are naturally sensitive to UV light. Next are the fast and slow blue-sensitive layers, which record a latent image when exposed; development then renders that image visible. When an exposed silver-halide crystal is developed, it is coupled with a dye grain of its complementary color. This forms a dye “cloud” (like a drop of water on a paper towel) whose growth is limited by development-inhibitor-releasing (DIR) couplers, which also serve to refine the sharpness of the processed image by limiting the size of the dye clouds. The dye clouds formed in the blue layers are actually yellow (the opposite, or complementary, color to blue).[25] There are two layers for each color, a “fast” and a “slow”: the fast layer features larger grains that are more sensitive to light than the slow layer, which has finer grain and is less sensitive. Because silver-halide crystals are naturally sensitive to blue light, the blue layers sit at the top of the film and are followed immediately by a yellow filter, which stops any further blue light from passing through to the green and red layers and biasing those crystals with extra blue exposure. Below the filter are the green-sensitive records (which form magenta dye when developed) and, at the bottom, the red-sensitive records, which form cyan dye when developed. Each color record is separated from the next by a gelatin interlayer that prevents silver development in one record from causing unwanted dye formation in another. On the back of the film base is an antihalation layer that absorbs light which would otherwise be weakly reflected back through the film by that surface, creating halos around bright features in the image. In color film, this backing is “rem-jet”, a black-pigmented, non-gelatin layer which is removed in the developing process.[24]
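To make the layer order described above easier to follow, here is a minimal Python sketch that models the stack from the figure caption as a simple data structure. The names and the structure are purely illustrative, not any manufacturer's specification.

# A simple top-to-bottom model of the developed color-negative stack
# described above. Sensitized layers carry the complementary dye they
# form on development; the rest are non-imaging support layers.
from collections import namedtuple

Layer = namedtuple("Layer", ["name", "sensitive_to", "dye_formed"])

FILM_STACK = [
    Layer("protective topcoat", None, None),
    Layer("UV filter", None, None),
    Layer("fast blue record", "blue", "yellow"),
    Layer("slow blue record", "blue", "yellow"),
    Layer("yellow filter (blocks blue light)", None, None),
    Layer("fast green record", "green", "magenta"),
    Layer("slow green record", "green", "magenta"),
    Layer("interlayer", None, None),
    Layer("fast red record", "red", "cyan"),
    Layer("slow red record", "red", "cyan"),
    Layer("clear triacetate base", None, None),
    Layer("rem-jet antihalation backing", None, None),
]

def dyes_for(light_color: str):
    """Which dye clouds does a given color of light ultimately produce?"""
    return [layer.dye_formed for layer in FILM_STACK
            if layer.sensitive_to == light_color]

print(dyes_for("blue"))  # ['yellow', 'yellow'] -- blue exposes only the top records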

Eastman Kodak manufactures film in 54-inch (1,372 mm) wide rolls. These rolls are then slit into various sizes (70 mm, 65 mm, 35 mm, 16 mm) as needed.
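As a back-of-the-envelope illustration of the slitting described above, here is a short Python calculation of how many strips of each gauge could, at most, come out of a 1,372 mm master roll. Real slitting loses width to edge trim and spacing, which the source does not quantify, so these figures are upper bounds only.

# Rough slit counts from a 1,372 mm (54 in) master roll into standard gauges.
MASTER_ROLL_MM = 1372

for gauge_mm in (70, 65, 35, 16):
    strips = MASTER_ROLL_MM // gauge_mm
    leftover = MASTER_ROLL_MM - strips * gauge_mm
    print(f"{gauge_mm} mm: up to {strips} strips, {leftover} mm left over")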

Manufacturers of color film for motion picture use

See also: List of motion picture film stocks

Motion picture film, primarily because of the rem-jet backing, requires different processing than standard C-41 process color film. The process necessary is ECN-2, which has an initial step using an alkaline bath to remove the backing layer. There are also minor differences in the remainder of the process. If motion picture negative is run through a standard C-41 color film developer bath, the rem-jet backing partially dissolves and destroys the integrity of the developer and, potentially, ruins the film.
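The incompatibility can be summarised in a small sketch. The step lists below are deliberately simplified outlines of ECN-2 and C-41 as characterised in the paragraph above, not lab-accurate recipes, and the helper function is hypothetical.

# Simplified illustration of why rem-jet-backed camera negative must not go
# through a C-41 line: ECN-2 removes the backing in an alkaline prebath
# before color development; C-41 has no such step.

ECN2_OUTLINE = [
    "alkaline prebath (softens the rem-jet backing)",
    "rem-jet removal and rinse",
    "color developer",
    "stop and wash",
    "bleach",
    "fix",
    "final wash and dry",
]

C41_OUTLINE = [
    "color developer",
    "bleach",
    "fix",
    "final wash and stabilizer",
]

def safe_to_process(film_has_remjet: bool, process_steps: list) -> bool:
    """A film with rem-jet backing is only safe in a process whose first
    step deals with that backing before the developer."""
    if not film_has_remjet:
        return True
    first_step = process_steps[0]
    return "rem-jet" in first_step or "prebath" in first_step

print(safe_to_process(True, ECN2_OUTLINE))  # True
print(safe_to_process(True, C41_OUTLINE))   # False: backing would contaminate the developer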

Kodak color motion picture films

In the late 1980s, Kodak introduced the T-Grain emulsion, a technological advancement in the shape and make-up of the silver halide grains in its films. T-Grain is a tabular silver halide grain that presents a greater overall surface area, giving greater light sensitivity from a relatively small grain, and its more uniform shape produces less overall graininess in the film. This made for sharper and more sensitive stocks. The T-Grain technology was first employed in Kodak’s EXR line of motion picture color negative stocks.[26] It was further refined in 1996 with the Vision line of emulsions, followed by Vision2 in the early 2000s and Vision3 in 2007.
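As a purely geometric illustration of why a tabular grain presents more light-gathering surface than a conventional grain of the same volume, the following sketch compares a cube with a thin square tablet of equal volume. The dimensions and aspect ratio are arbitrary illustration values, not Kodak emulsion data.

# Surface area of a cubic grain vs. a flat tabular grain of equal volume.

def cube_surface(volume: float) -> float:
    side = volume ** (1 / 3)
    return 6 * side * side

def tablet_surface(volume: float, aspect_ratio: float = 8.0) -> float:
    """Square tablet whose width is `aspect_ratio` times its thickness."""
    # volume = width * width * thickness, with width = aspect_ratio * thickness
    thickness = (volume / aspect_ratio ** 2) ** (1 / 3)
    width = aspect_ratio * thickness
    return 2 * width * width + 4 * width * thickness

v = 1.0  # same grain volume for both shapes
print(f"cube:   {cube_surface(v):.2f}")    # 6.00
print(f"tablet: {tablet_surface(v):.2f}")  # 10.00 -- more area from the same volume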

Fuji color motion picture films

Fuji also uses tabular grains, in its SUFG (Super Unified Fine Grain) films. In Fuji’s case, the SUFG grain is not only tabular but hexagonal and consistent in shape throughout the emulsion layers. Like the T-Grain, it offers a larger surface area in a smaller grain (about one-third the size of a traditional grain) for the same light sensitivity. In 2005, Fuji unveiled its Eterna 500T stock, the first in a new line of advanced emulsions featuring Super Nano-structure Σ Grain Technology.