NHS to set up artificial intelligence lab September 13th 2019
Ministers are setting aside £250m to create a national artificial intelligence laboratory for the NHS in England, with the aim of enhancing care of patients and research. Health Secretary Matt Hancock says AI has “enormous power” to improve care, save lives and ensure doctors have more time to spend with patients. Our health and science correspondent James Gallagher says AI has already shown its potential – such as in spotting cancers or eye conditions from scans – but that it’s not routinely used across the health service. Its use also poses challenges for managers, from training staff to enhancing cyber-security and ensuring patient confidentiality, he adds.
What is AI? August 19th 2019
From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
Why research AI safety?
In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many areas, from economics and law to technical topics such as verification, validity, security and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do if it controls your car, your airplane, your pacemaker, your automated trading system or your power grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the goals of the AI with ours before it becomes superintelligent.
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of these possibilities, but also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm. We believe research today will help us better prepare for and prevent such potentially negative consequences in the future, thus enjoying the benefits of AI while avoiding pitfalls.
How can AI be dangerous?
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:
- The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.
- The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
As these examples illustrate, the concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
Why the recent interest in AI safety?
Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology have recently expressed concern in the media and via open letters about the risks posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the headlines?
The idea that the quest for strong AI would ultimately succeed was long thought of as science fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones, which experts viewed as decades away merely five years ago, have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would happen before 2060. Since it may take decades to complete the required safety research, it is prudent to start it now.
Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. We can’t use past technological developments as much of a basis because we’ve never created anything that has the ability to, wittingly or unwittingly, outsmart us. The best example of what we could face may be our own evolution. People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?
FLI’s position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI technology, FLI’s position is that the best way to win that race is not to impede the former, but to accelerate the latter, by supporting AI safety research.
The Top Myths About Advanced AI
A captivating conversation is taking place about the future of artificial intelligence and what it will/should mean for humanity. There are fascinating controversies where the world’s leading experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be developed; whether this will lead to an intelligence explosion; and whether this is something we should welcome or fear. But there are also many examples of boring pseudo-controversies caused by people misunderstanding and talking past each other. To help ourselves focus on the interesting controversies and open questions — and not on the misunderstandings — let’s clear up some of the most common myths.
The first myth regards the timeline: how long will it take until machines greatly supersede human-level intelligence? A common misconception is that we know the answer with great certainty.
One popular myth is that we know we’ll get superhuman AI this century. In fact, history is full of technological over-hyping. Where are those fusion power plants and flying cars we were promised we’d have by now? AI has also been repeatedly over-hyped in the past, even by some of the founders of the field. For example, John McCarthy (who coined the term “artificial intelligence”), Marvin Minsky, Nathaniel Rochester and Claude Shannon wrote this overly optimistic forecast about what could be accomplished during two months with stone-age computers: “We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College […] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
On the other hand, a popular counter-myth is that we know we won’t get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but we certainly can’t say with great confidence that the probability is zero this century, given the dismal track record of such techno-skeptic predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 — less than 24 hours before Szilard’s invention of the nuclear chain reaction — that nuclear energy was “moonshine.” And Astronomer Royal Richard Woolley called interplanetary travel “utter bilge” in 1956. The most extreme form of this myth is that superhuman AI will never arrive because it’s physically impossible. However, physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
There have been a number of surveys asking AI researchers how many years from now they think we’ll have human-level AI with at least 50% probability. All these surveys have the same conclusion: the world’s leading experts disagree, so we simply don’t know. For example, in such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.
There’s also a related myth that people who worry about AI think it’s only a few years away. In fact, most people on record worrying about superhuman AI guess it’s still at least decades away. But they argue that as long as we’re not 100% sure that it won’t happen this century, it’s smart to start safety research now to prepare for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve. So it’s prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.
Another common misconception is that the only people harboring concerns about AI and advocating AI safety research are luddites who don’t know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI safety research is hugely controversial. In fact, to support a modest investment in AI safety research, people don’t need to be convinced that risks are high, merely non-negligible — just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
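The insurance analogy above can be made concrete with a toy expected-value calculation. The probabilities and costs below are purely illustrative assumptions, not figures from the article:

```python
# Toy expected-cost sketch behind the home-insurance analogy.
# All numbers are illustrative assumptions, not real estimates.
p_fire = 0.003      # assumed annual probability the house burns down
loss = 300_000      # assumed cost if it does
premium = 1_200     # assumed annual insurance premium

# Expected annual uninsured loss: a small probability times a large
# loss still yields a non-trivial expected cost (~900 here).
expected_loss = p_fire * loss
print(f"expected uninsured loss: {expected_loss:.0f}")
print(f"annual premium: {premium}")

# The point of the analogy: a non-negligible probability of a very
# large loss is enough to justify a modest up-front investment --
# the same structure as a modest investment in AI safety research.
```

The argument does not require the probability to be high, only non-negligible relative to the size of the potential loss.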
It may be that media have made the AI safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who only read about Bill Gates’s position in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety, whereas in fact, he does. The crux is simply that because Ng’s timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.
Myths About the Risks of Superhuman AI
Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.
If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.
The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.
The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”
I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.
The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.
The Interesting Controversies
Not wasting time on the above-mentioned misconceptions lets us focus on true and interesting controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today’s kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please join the conversation!
- Max Tegmark: How to get empowered, not overpowered, by AI
- Stuart Russell: 3 principles for creating safer AI
- Sam Harris: Can we build AI without losing control over it?
- Talks from the Beneficial AI 2017 conference in Asilomar, CA
- Stuart Russell – The Long-Term Future of (Artificial) Intelligence
- Humans Need Not Apply
- Nick Bostrom: What happens when computers get smarter than we are?
- Value Alignment – Stuart Russell: Berkeley IdeasLab Debate Presentation at the World Economic Forum
- Social Technology and AI: World Economic Forum Annual Meeting 2015
- Stuart Russell, Eric Horvitz, Max Tegmark – The Future of Artificial Intelligence
- Jaan Tallinn on Steering Artificial Intelligence
- Concerns of an Artificial Intelligence Pioneer
- Transcending Complacency on Superintelligent Machines
- Why We Should Think About the Threat of Artificial Intelligence
- Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity
- Artificial Intelligence could kill us all. Meet the man who takes that risk seriously
- Artificial Intelligence Poses ‘Extinction Risk’ To Humanity Says Oxford University’s Stuart Armstrong
- What Happens When Artificial Intelligence Turns On Us?
- Can we build an artificial superintelligence that won’t kill us?
- Artificial intelligence: Our final invention?
- Artificial intelligence: Can we keep it in the box?
- Science Friday: Christof Koch and Stuart Russell on Machine Intelligence (transcript)
- Transcendence: An AI Researcher Enjoys Watching His Own Execution
- Science Goes to the Movies: ‘Transcendence’
- Our Fear of Artificial Intelligence
Essays by AI Researchers
- Stuart Russell: What do you Think About Machines that Think?
- Stuart Russell: Of Myths and Moonshine
- Jacob Steinhardt: Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems
- Eliezer Yudkowsky: Why value-aligned AI is a hard engineering problem
- Eliezer Yudkowsky: There’s No Fire Alarm for Artificial General Intelligence
- Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence
- Intelligence Explosion: Evidence and Import (MIRI)
- Intelligence Explosion and Machine Ethics (Luke Muehlhauser, MIRI)
- Artificial Intelligence as a Positive and Negative Factor in Global Risk (MIRI)
- Basic AI drives
- Racing to the Precipice: a Model of Artificial Intelligence Development
- The Ethics of Artificial Intelligence
- The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
- Wireheading in mortal universal agents
- AGI Safety Literature Review
- Bruce Schneier – Resources on Existential Risk, p. 110
- Aligning Superintelligence with Human Interests: A Technical Research Agenda (MIRI)
- MIRI publications
- Stanford One Hundred Year Study on Artificial Intelligence (AI100)
- Preparing for the Future of Intelligence: White House report that discusses the current state of AI and future applications, as well as recommendations for the government’s role in supporting AI development.
- Artificial Intelligence, Automation, and the Economy: White House report that discusses AI’s potential impact on jobs and the economy, and strategies for increasing the benefits of this transition.
- IEEE Special Report: Artificial Intelligence: Report that explains deep learning, in which neural networks teach themselves and make decisions on their own.
- The Asilomar Conference: A Case Study in Risk Mitigation (Katja Grace, MIRI)
- Pre-Competitive Collaboration in Pharma Industry (Eric Gastfriend and Bryan Lee, FLI): A case study of pre-competitive collaboration on safety in industry.
Blog posts and talks
- AI control
- AI Impacts
- No time like the present for AI safety work
- AI Risk and Opportunity: A Strategic Analysis
- Where We’re At – Progress of AI and Related Technologies: An introduction to the progress of research institutions developing new AI technologies.
- AI safety
- Wait But Why on Artificial Intelligence
- Response to Wait But Why by Luke Muehlhauser
- Slate Star Codex on why AI-risk research is not that controversial
- Less Wrong: A toy model of the AI control problem
- What Should the Average EA Do About AI Alignment?
- Waking Up Podcast #116 – AI: Racing Toward the Brink with Eliezer Yudkowsky
- Superintelligence: Paths, Dangers, Strategies
- Life 3.0: Being Human in the Age of Artificial Intelligence
- Our Final Invention: Artificial Intelligence and the End of the Human Era
- Facing the Intelligence Explosion
- E-book about the AI risk (including a “Terminator” scenario that’s more plausible than the movie version)
- Machine Intelligence Research Institute: A non-profit organization whose mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.
- Centre for the Study of Existential Risk (CSER): A multidisciplinary research center dedicated to the study and mitigation of risks that could lead to human extinction.
- Future of Humanity Institute: A multidisciplinary research institute bringing the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.
- Partnership on AI: Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
- Global Catastrophic Risk Institute: A think tank leading research, education, and professional networking on global catastrophic risk.
- Organizations Focusing on Existential Risks: A brief introduction to some of the organizations working on existential risks.
- 80,000 Hours: A career guide for AI safety researchers.
Many of the organizations listed on this page and their descriptions are from a list compiled by the Global Catastrophic Risk Institute; we are most grateful for the efforts that they have put into compiling it. The organizations above all work on computer technology issues, though many cover other topics as well. This list is undoubtedly incomplete; please contact us to suggest additions or corrections.
Avoiding the Issue August 5th 2019
It may be amazing what the armed forces and engineers can do in an emergency like Whaley Bridge, but I doubt that lessons will be learned as far as human environmental damage is concerned.
Interestingly, and understandably, many local people didn’t want to leave their homes, thinking it was a case of the authorities going mad with health and safety over a storm in a teacup rather than a storm in a reservoir.
There is indeed a great deal of not exactly nonsense, but some rather distracting talk about the environment, most if not all of it avoiding the main issues, human overpopulation and elite greed.
The truth is hidden by an elite-owned and -operated media only interested in finding scapegoats among those of us in the lower order masses.
Between 25 and 33% of carbon emissions are due to land use, including farming for an unsustainable and growing human population, most of it coming from what we used to call the ‘Third World’. The Holy Grail of economic growth is out of control so as to feed the masses, and the elite media cries for equality down at the bottom, while the elite monopolise the profits of raping the earth, spinning us into more and more disasters and wars.
The following articles should be read with this in mind. The U.S.A., an empire in decline, is fighting to keep its annual consumption of the earth’s natural resources up at 52% of the world’s total, for a nation that comprises only 6% of the world’s population. Britain dominates European policy and is the U.S.’s lackey; hence the confiscation of the Iranian tanker.
Robert Cook August 5th 2019
Whaley Bridge latest: race to save dam and halt flood continues as emergency crews say they still need two more days
Emergency teams need at least two more days to pump water from a reservoir at risk of bursting its banks and flooding the town below, fire crews said today.
Hundreds of families evacuated from their homes in the Peak District town of Whaley Bridge on Thursday were forced to spend a fifth night away from home as the race against time continued.
Emergency crews, who have been working round-the-clock to shore up the Victorian structure which partially collapsed in torrential rain, today said it is now “relatively stable” as the threat of storms appeared to recede.
An RAF Chinook helicopter has been dropping sandbags on the crumbling wall while six rescue boats have been deployed in case the dam bursts.
The reservoir is now just under half of its 300 million gallon capacity, but water levels must drop to at least 25 per cent before the evacuees can return home.
Derbyshire deputy chief fire officer Gavin Tomlinson said the operation would continue “for a few days yet”.
He said: “As soon as we get the water level down to a safe level, which is around 25 per cent of the contents of the dam, then the emergency phase is over and then the contractors can look at the repairing of the dam wall.”
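The figures quoted above imply roughly how much water must still be removed. A back-of-envelope sketch, approximating “just under half” as 50% for simplicity:

```python
# Back-of-envelope arithmetic from the figures quoted in the article.
# "Just under half" is approximated as exactly 50% here.
capacity_gallons = 300_000_000   # reservoir capacity quoted in the article
current_fraction = 0.50          # approximate current water level
target_fraction = 0.25           # level at which the emergency phase ends

# Volume still to be pumped out before residents can return.
to_pump = (current_fraction - target_fraction) * capacity_gallons
print(f"water still to be pumped: {to_pump:,.0f} gallons")
```

On these assumptions, roughly 75 million gallons remained to be pumped, which is consistent with the crews’ estimate of at least two more days of work.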
RAF Wing Commander John Coles, who is in charge of the operation to shore up the wall, added: “I think the assessment is now that actually the dam is relatively stable.
“The military will stand by ready to come back up if required but I think the sense of the moment is very much we’ve got through the worst of it. We were fortunate with the weather.”
Derbyshire chief fire officer Terry McDermott said a seven-day estimate for how long people would be out of their homes was a “worst case scenario”.
Labour leader Jeremy Corbyn was expected to visit the flood-struck Derbyshire town this morning after Friday’s visit by Prime Minister Boris Johnson.
Met Office spokesman Grahame Madge said the next 36 hours were looking “largely dry” and unlikely to affect the rescue operation.
However, he said a heavy band of rain was expected later Thursday and into Friday: “That may bring a heavier pulse of rain. We will keep an eye on the forecast and issue any warnings that may be relevant.”
Police allowed one resident from each of the 400 properties to return to the evacuation zone for 15 minutes to gather essentials on Saturday.
However, 31 people, including a “small number” who were initially evacuated but have since returned to their homes, remained in 22 properties last night.
Deputy chief constable Rachel Swann told a residents meeting they were not only putting their own lives at risk, but also those of emergency services staff.
Is There Evidence of Reincarnation?
by Stephen Wagner Updated May 22, 2019
Have you lived before? The idea that our souls live through many lifetimes over the centuries is known as reincarnation. It has been part of virtually every culture since ancient times. The Egyptians, Greeks, Romans, and Aztecs all believed in the “transmigration of souls” from one body to another after death. Reincarnation is also a fundamental concept of Hinduism.
Although it is not a part of official Christian doctrine, many Christians believe in reincarnation or at least accept its possibility. Jesus, it is believed, was resurrected three days after his crucifixion. The idea that we can live again after death as another person, as a member of the opposite sex or in a completely different station in life, is intriguing and, for many people, highly appealing.
But is reincarnation just an idea, or is there real evidence to support it? Many researchers have tackled this question—and their results are surprising.
Past Life Regression Hypnosis
The practice of reaching past lives through hypnosis is controversial, primarily because hypnosis is not a reliable tool. It can certainly help researchers access the unconscious mind, but the information found there should not be taken as truth. For example, it has been shown that hypnosis can create false memories. That doesn’t mean, however, that regression hypnosis should be dismissed out of hand. If information from a “past life” can be verified through research, then the case for reincarnation becomes more compelling.
The most famous case of past life regression through hypnosis is that of Ruth Simmons. In 1952, her therapist, Morey Bernstein, encouraged her to travel in her mind back to a time before her birth. Suddenly, Ruth began to speak with an Irish accent and claimed that her name was Bridey Murphy, who lived in 19th-century Belfast, Ireland. Ruth recalled many details of her life as Bridey, but attempts to find out if Ms. Murphy actually existed were unfortunately unsuccessful. There was, however, some indirect evidence for the truth of her story. Under hypnosis, Bridey mentioned the names of two grocers in Belfast from whom she had bought food, Mr. Farr and John Carrigan. A Belfast librarian found a city directory for 1865-1866 that listed both men as grocers. Simmons’ story was told both in a book by Bernstein and in a 1956 movie, “The Search for Bridey Murphy.”
Unusual Illnesses and Physical Ailments
Do you have a lifelong illness or physical pain that you cannot account for? It may be the result of some past life trauma, some researchers suggest.
In “Have We Really Lived Before?,” Dr. Michael C. Pollack describes his lower back pain, which grew steadily worse over the years and limited his activities. He believes he found a possible explanation for the pain during a series of past life therapy sessions: “I discovered that I had lived at least three prior lifetimes in which I had been killed by being knifed or speared in the low back. After processing and healing the past life experiences, my back began to heal.”
Research conducted by Nicola Dexter, a past life therapist, has discovered correlations between some of her patients’ illnesses and their past lives. She found, for example, a case of bulimia caused by swallowing salt water in a previous life; a persistent pain in the shoulder and arm caused by participating, in a past life, in a dangerous game of tug-of-war; and a fear of razors and shaving that was the result of the sufferer having had his hand cut off in a previous life.
Phobias and Nightmares
Where does seemingly irrational fear come from? Fear of heights, fear of water, fear of flying? Many of us have normal reservations about such things, but some people have fears so great that they become debilitating. And some fears are completely baffling—a fear of carpets, for example. Where do such fears come from? The answer, of course, can be psychologically complex, but researchers think that in some cases there might be a connection to experiences from previous lifetimes.
In “Healing Past Lives Through Dreams,” author J.D. writes about his claustrophobia, which includes a tendency to panic whenever his arms or legs are confined or restricted. He believes that a dream of a past life uncovered a trauma that explains his fear. “One night in the dream state I found myself hovering over a disturbing scene,” he writes. “It was a town in 15th-century Spain, and a frightened man was being hog-tied by a small jeering crowd. He had expressed beliefs contrary to the church. Some local ruffians, with the blessing of the church officials, were eager to administer justice. The men bound the heretic hand and foot, then wrapped him very tightly in a blanket. The crowd carried him to an abandoned stone building, shoved him into a dark corner under the floor, and left him to die. I realized with horror the man was me.”
In his book “Someone Else’s Yesterday,” Jeffrey J. Keene theorizes that a person in this life may strongly resemble the person he or she was in a previous life. Keene, an Assistant Fire Chief who lives in Westport, Connecticut, believes he is the reincarnation of John B. Gordon, a Confederate General of the Army of Northern Virginia, who died on January 9, 1904. As evidence, he offers photos of himself and the general. There is a striking resemblance. Beyond sharing physical similarities, Keene says that individuals and their past incarnations often “think alike, look alike and even share facial scars. Their lives are so intertwined that they appear to be one.”
Another case of such resemblance is that of artist Peter Teekamp, who believes he may be the reincarnation of artist Paul Gauguin. Here, too, there is a physical resemblance, along with similarities between the two painters’ work.
Children’s Spontaneous Recall and Special Knowledge
Many small children who claim to recall past lives also express knowledge that could not have come from their own experiences. Such cases are documented in Carol Bowman’s “Children’s Past Lives”:
“Eighteen-month-old Elsbeth had never spoken a complete sentence. But one evening, as her mother was bathing her, Elsbeth spoke up and gave her mother a shock. ‘I’m going to take my vows,’ she told her mother. Taken aback, she questioned the baby girl about her queer statement. ‘I’m not Elsbeth now,’ the child replied. ‘I’m Rose, but I’m going to be Sister Teresa Gregory.'”
Can proof of past lives be demonstrated by comparing the handwriting of a living person to that of the deceased person he or she claims to have been? Indian researcher Vikram Raj Singh Chauhan believes so. Chauhan’s findings have been received favorably at the National Conference of Forensic Scientists at Bundelkhand University, Jhansi.
A six-year-old boy named Taranjit Singh from the village of Alluna Miana, India, had claimed since he was two that he had previously been a person named Satnam Singh. This other boy had lived in the village of Chakkchela, Taranjit insisted, and Taranjit even knew Satnam’s father’s name. Satnam had been killed while riding his bike home from school. An investigation verified the many details Taranjit knew about Satnam’s life. But the clincher was that their handwriting, a trait experts know is as distinct as a fingerprint, was virtually identical.
Matching Birthmarks and Birth Defects
Dr. Ian Stevenson, head of the Department of Psychiatric Medicine at the University of Virginia School of Medicine, was one of the foremost researchers on the subject of reincarnation and past lives. In 1993, he wrote a paper entitled “Birthmarks and Birth Defects Corresponding to Wounds on Deceased Persons,” which described possible physical evidence for past lives. “Among 895 cases of children who claimed to remember a previous life (or were thought by adults to have had a previous life),” Stevenson writes, “birthmarks and/or birth defects attributed to the previous life were reported in 309 (35 percent) of the subjects. The birthmark or birth defect of the child was said to correspond to a wound (usually fatal) or other mark on the deceased person whose life the child said it remembered.”
But could any of these cases be verified?
In one fascinating case, an Indian boy claimed to remember the life of a man named Maha Ram, who was killed by a shotgun fired at close range. The boy had an array of birthmarks in the center of his chest that looked like they might correspond to a shotgun blast. So the story was investigated. Indeed, there was a man named Maha Ram who was killed by a shotgun blast to the chest. An autopsy report recorded the man’s chest wounds, which corresponded directly with the boy’s birthmarks.
In another case, a man from Thailand claimed that when he was a child he had distinct memories of a past life as his own paternal uncle. This man had a large scar-like birthmark on the back of his head. His uncle, it turned out, died from a severe knife wound to the same area.
Dr. Stevenson documented a number of cases like these, many of which he could verify through medical records.
Why did the US want to go to the Moon?
A space race developed between the US and the then Soviet Union, after the 1957 launch of the first Soviet Sputnik satellite.
When John F Kennedy became US President in 1961, many Americans believed they were losing the race for technological superiority to their Cold War enemy. Missions by Soviet cosmonauts, including Yuri Gagarin and Valentina Tereshkova, the first woman in space, worried the US.
It was in that year that the Soviet Union made the first ever manned spaceflight.
The US was determined to get a manned mission there first and in 1962 Kennedy made a now-famous speech announcing: “We choose to go to the Moon!”
The space race continued and in 1966 the Soviets successfully guided an unmanned craft, Luna 9, to touch down on the Moon.
How did the US plan for its mission?
The US space agency, Nasa, committed huge amounts of resources to what became known as the Apollo programme.
About 400,000 people worked on the 17 Apollo missions, at a cost of $25bn.
Three astronauts were chosen for the Apollo 11 mission: Buzz Aldrin, Neil Armstrong and Michael Collins.
A powerful rocket – the Saturn V – carried the Apollo command and service module and the attached lunar module that was to touch down on the Moon.
- Apollo in 50 numbers: The technology
- Apollo in 50 numbers: The workers
- Apollo in 50 numbers: Medicine and health
The plan was to use the Earth’s orbit to reach that of the Moon, after which Armstrong and Aldrin would get into the lunar module. They would descend to the Moon’s surface while Collins stayed behind in the command and service module.
Did anything go wrong?
The first crewed flight that was meant to test going into orbit was Apollo 1 in 1967.
But disaster struck during pre-flight routine checks, when fire swept through the command module and killed three astronauts.
Manned space flights were suspended for months.
During the Apollo 11 mission itself, there were communications issues with ground control. And an alarm message sounded on the computer which the crew had never heard before.
The lunar module also ended up touching down away from the original target area.
- Thirteen things you should know about Apollo 11
- The thirteen minutes that defined a century
- Margaret Hamilton: The Apollo software pioneer
Walking on the Moon
Despite these problems, on 20 July, nearly 110 hours after leaving Earth, Neil Armstrong became the first person to step on to the surface of the Moon. He was followed 20 minutes later by Buzz Aldrin.
Armstrong’s words, beamed to the world by TV, entered history: “That’s one small step for man, one giant leap for mankind.”
The two men spent more than two hours outside the lunar module, collecting samples from the surface, taking pictures, and setting up a number of scientific experiments.
After completing their Moon exploration, the two men successfully docked with the command and service module.
The return journey to Earth began and the crew splashed down in the Pacific Ocean on 24 July.
An estimated 650 million people worldwide had watched the first Moon landing. For the US, the achievement helped it demonstrate its power to a world audience.
It was also an important boost to national self-esteem at the end of a tumultuous decade, which had seen Kennedy assassinated, race riots in major cities and unease about the country’s military involvement in Vietnam.
How do we know it really happened?
A total of six US missions had landed men on the lunar surface by the end of 1972, yet to this day there are conspiracy theories saying that the landings were staged.
But Nasa has had a reconnaissance craft orbiting the Moon since 2009. It sends back high-resolution images showing signs of exploration on the surface by the Apollo missions, such as footprints and wheel tracks.
There is also geological evidence from rocks brought back from the surface.
What’s the point of going to the Moon?
The US remains the only country to have put people on the Moon’s surface.
However, Russia, Japan, China, the European Space Agency and India have either sent probes to orbit the Moon, or landed vehicles on its surface.
For a country to be able to do so is a sign of its technological prowess, giving it membership of an elite club.
- What does China want to do on the far side of the Moon?
- Why so many people believe conspiracy theories
There are also more practical reasons such as the desire to exploit its resources.
Ice found at both poles may make it easier for craft to reach deeper into space, as it contains hydrogen and oxygen which can be used to fuel rockets.
There’s also interest in mining the Moon for gold, platinum and rare earth metals, although it’s not yet clear how easy it would be to extract such resources.
Climate Emergency June 5th 2019
The University of East Anglia (UEA) is joining forces with other organisations around the world to declare a climate and biodiversity emergency.
UEA is one of the world’s pre-eminent climate change research institutions and the work of the Tyndall Centre for Climate Change Research, the Climatic Research Unit (CRU) and researchers in other UEA schools, has pioneered understanding of Earth’s changing climate.
The CRU at UEA is responsible for the global temperature record, which first brought global warming to the world’s attention. Researchers at UEA’s Tyndall Centre also help publish the annual Global Carbon Budget, the update for policy-makers on fossil fuel emissions.
UEA’s declaration comes on World Environment Day (Wednesday 5 June), the UN’s flagship awareness day on environmental issues from global warming to marine pollution and wildlife crime.
UEA has made the most substantive and sustained contribution to the Intergovernmental Panel on Climate Change (IPCC) of any university in the world. UEA and the Tyndall Centre are core partners of the new Climate and Social Transitions Centre.
Vice-Chancellor Professor David Richardson said: “Over the decades researchers from UEA have arguably done more to further our knowledge of humankind’s impact on Earth’s climate and eco-systems than those from any other institution.
“As a University we have reduced our carbon emissions by five per cent since 1990 – despite the campus doubling in size. We also fully recognise that we need to move faster to deal with what is a climate and biodiversity emergency and that we all have a part to play in addressing this crisis.”
UEA has also signed up to the SDG Accord designed to inspire, celebrate and advance the critical role that education has in delivering the UN’s Sustainable Development Goals (SDGs) and the value it brings to our global society.
The Accord is a commitment learning institutions are making to do more to deliver the goals, to annually report on progress and to do so in ways which share the learning with each other nationally and internationally.
UEA absolutely recognises the indivisible and interconnected nature of the universal set of Goals – People, Prosperity, Planet, Partnership and Peace – and that as educators we have a responsibility to play a central and transformational role in attaining the Sustainable Development Goals.
Director of the Climatic Research Unit, Professor Tim Osborn, said: “UEA has been monitoring climate change and researching its consequences for almost 50 years. We understand what is causing our climate to change and can assess the significant risks that it brings – for human society as well as for the natural world.
“Together with other causes, especially continuing habitat loss in many parts of the world and overexploitation of marine species, climate change represents a huge challenge to biodiversity and places many species at risk.”
Professor of Evolutionary Biology and Science Engagement, Ben Garrod, said: “Global climate change is the single greatest threat facing our planet today. If we are to have any chance of success in tackling the problem and reducing its effects then we need to act swiftly, decisively and collaboratively.
“It is estimated that a million species are facing the imminent threat of extinction and predictions as to the effects on the global human population are severe. By joining the growing number of institutions declaring a climate emergency, UEA can help by not only raising awareness but by contributing to solutions through our pioneering studies and leading researchers.
“Now we need every university, every council, government, school, business and every individual citizen to declare an environmental emergency and to work together to ensure we have a future where our food, health and homes are not all at stake.”
Dr Lynn Dicks, Natural Environment Research Council (NERC) Research Fellow at UEA, said: “Nature is under serious pressure all over the world. Roughly a quarter of all species are at risk of extinction in the groups of animals and plants that have been assessed. Nature is essential for human existence and good quality of life, and yet we are trashing it in exchange for economic wealth.
“Ongoing conversion of wilderness to agriculture, and direct exploitation of wild species through hunting, fishing and logging, are the biggest problems. Climate change is already driving species to extinction as well, and this will get much worse in the coming century.
“Researchers in the Centre for Ecology, Evolution and Conservation at UEA are working with partners in Government, industry and the NGO sector to understand our impacts on nature and to develop strategies to protect and restore it.”
UEA operates a Sustainability Board, currently chaired by the Pro-Vice-Chancellor for the Faculty of Medicine and Health, Professor Dylan Edwards, and with representation from staff, estates and the UEA Students’ Union. It meets quarterly and reviews the performance of the implementation teams that are charged with achieving the targets for the campus.
These teams address the university’s sustainability goals across eight areas: Sustainable Food; Transport; Purchasing; Engagement and Communications; Energy and Carbon Reduction; Sustainable Labs; Biodiversity; Water and Waste.
UEA has a 15-year, £300m estate strategy to improve and modernise our buildings, which will include improvements to their energy usage. The 10-year programme to refurbish the Lasdun Wall is just one factor precluding reaching net zero by 2025, and the University supports the UK Committee on Climate Change target of carbon neutrality by 2050.
The decision to declare a climate and biodiversity emergency follows a motion tabled at UEA Assembly on 22 May by Dr Hannah Hoechner, on behalf of Extinction Rebellion UEA. The University’s response to the specifics of the Extinction Rebellion UEA request:
a. “Formally declare a climate emergency”. From the introductory paragraph it is clear that UEA recognises that there is a global climate crisis. We also feel there is a connected biodiversity emergency. Our preference would be to declare a “Climate and Biodiversity emergency”, which may help to convey their inter-relationship. We therefore support UEA declaring a climate and biodiversity emergency. Professor Sir Robert Watson, of UEA and the Tyndall Centre, is co-Chair of the Intergovernmental Platform on Biodiversity and Ecosystem Services.
b. “Commit to the target of carbon neutrality by 2025, in accordance with the precautionary principle”. The University’s view is that this target is unattainable. It takes the position that UEA should commit to net carbon neutrality by 2050, in alignment with and support of the recent recommendation from the independent UK Committee on Climate Change. It is important to note the impressive improvements that UEA Estates and the Sustainability Board have made; in particular, this year we are on target for a 50% reduction in CO2 emissions compared to the 1990 baseline data, despite the doubling in size of the university across the period.
c. “to appoint a senior staff member of the Executive Team with their sole responsibility being to achieve this carbon reduction target and to promote sustainability”. Two members of the Executive Team serve on the Sustainability Board (Professors Dylan Edwards and Mark Searcey) and they will be joined in future by Jenny Baxter, Chief Operating Officer. The reporting process from the Sustainability Board to ET will be enhanced, with quarterly reports of the Board’s activities. It was not felt that appointing one person was a sustainable way to promote sustainability.
d. “create a consultative forum to harness the passion and expertise available among UEA staff and students to mount the necessary emergency response”. In essence, this is the function of the Sustainability Board. At the May meeting there was extensive discussion about the need to enhance the visibility of the Board, ensure it is informed by the depth of research at the Tyndall Centre for Climate Change Research and build better engagement with its work to make a sustainable campus. We need to accelerate the pace of change and also communicate what we are doing. We need also to cement our goals for sustainability into the next iteration of the UEA plan, with clear, explicit targets and ways to monitor our progress. We recognise that there is much work to be done.
For more information about UEA’s research into climate and the environment please visit www.uea.ac.uk/research/research-themes/understanding-human-and-natural-environments
Carl Gustav Jung
Synchronicity (German: Synchronizität) is a concept, first introduced by analytical psychologist Carl Jung, which holds that events are “meaningful coincidences” if they occur with no causal relationship yet seem to be meaningfully related. During his career, Jung furnished several different definitions of it. Jung defined synchronicity as an “acausal connecting (togetherness) principle,” “meaningful coincidence”, and “acausal parallelism.” He introduced the concept as early as the 1920s but gave a full statement of it only in 1951 in an Eranos lecture.
In 1952 Jung published a paper “Synchronizität als ein Prinzip akausaler Zusammenhänge” (Synchronicity – An Acausal Connecting Principle) in a volume which also contained a related study by the physicist and Nobel laureate Wolfgang Pauli, who was sometimes critical of Jung’s ideas. Jung’s belief was that, just as events may be connected by causality, they may also be connected by meaning. Events connected by meaning need not have an explanation in terms of causality, which does not generally contradict the Axiom of Causality but in specific cases can lead to prematurely giving up causal explanation.
Diagram illustrating Carl Jung’s concept of synchronicity
How are we to recognize acausal combinations of events, since it is obviously impossible to examine all chance happenings for their causality? The answer to this is that acausal events may be expected most readily where, on closer reflection, a causal connection appears to be inconceivable.
In the introduction to his book, Jung on Synchronicity and the Paranormal, Roderick Main wrote:
The culmination of Jung’s lifelong engagement with the paranormal is his theory of synchronicity, the view that the structure of reality includes a principle of acausal connection which manifests itself most conspicuously in the form of meaningful coincidences. Difficult, flawed, prone to misrepresentation, this theory none the less remains one of the most suggestive attempts yet made to bring the paranormal within the bounds of intelligibility. It has been found relevant by psychotherapists, parapsychologists, researchers of spiritual experience and a growing number of non-specialists. Indeed, Jung’s writings in this area form an excellent general introduction to the whole field of the paranormal.
In his book Synchronicity: An Acausal Connecting Principle, Jung wrote:
…it is impossible, with our present resources, to explain ESP, or the fact of meaningful coincidence, as a phenomenon of energy. This makes an end of the causal explanation as well, for “effect” cannot be understood as anything except a phenomenon of energy. Therefore it cannot be a question of cause and effect, but of a falling together in time, a kind of simultaneity. Because of this quality of simultaneity, I have picked on the term “synchronicity” to designate a hypothetical factor equal in rank to causality as a principle of explanation.
Synchronicity was a principle which, Jung felt, gave conclusive evidence for his concepts of archetypes and the collective unconscious. It described a governing dynamic which underlies the whole of human experience and history — social, emotional, psychological, and spiritual. The emergence of the synchronistic paradigm was a significant move away from Cartesian dualism towards an underlying philosophy of double-aspect theory. Some argue this shift was essential to bringing theoretical coherence to Jung’s earlier work.
Even at Jung’s presentation of his work on synchronicity in 1951 at an Eranos lecture, his ideas on synchronicity were evolving. On Feb. 25, 1953, in a letter to Carl Seelig, the Swiss author and journalist who wrote a biography of Albert Einstein, Jung wrote, “Professor Einstein was my guest on several occasions at dinner… These were very early days when Einstein was developing his first theory of relativity [and] It was he who first started me on thinking about a possible relativity of time as well as space, and their psychic conditionality. More than 30 years later the stimulus led to my relation with the physicist professor W. Pauli and to my thesis of psychic synchronicity.”
Following discussions with both Albert Einstein and Wolfgang Pauli, Jung believed there were parallels between synchronicity and aspects of relativity theory and quantum mechanics.[better source needed]
Jung believed life was not a series of random events but rather an expression of a deeper order, which he and Pauli referred to as Unus mundus. This deeper order led to the insights that a person was both embedded in a universal wholeness and that the realisation of this was more than just an intellectual exercise, but also had elements of a spiritual awakening. From the religious perspective, synchronicity shares similar characteristics of an “intervention of grace”. Jung also believed that in a person’s life, synchronicity served a role similar to that of dreams, with the purpose of shifting a person’s egocentric conscious thinking to greater wholeness.
In his book Synchronicity Jung tells the following story as an example of a synchronistic event:
My example concerns a young woman patient who, in spite of efforts made on both sides, proved to be psychologically inaccessible. The difficulty lay in the fact that she always knew better about everything. Her excellent education had provided her with a weapon ideally suited to this purpose, namely a highly polished Cartesian rationalism with an impeccably “geometrical” idea of reality. After several fruitless attempts to sweeten her rationalism with a somewhat more human understanding, I had to confine myself to the hope that something unexpected and irrational would turn up, something that would burst the intellectual retort into which she had sealed herself. Well, I was sitting opposite her one day, with my back to the window, listening to her flow of rhetoric. She had an impressive dream the night before, in which someone had given her a golden scarab — a costly piece of jewellery. While she was still telling me this dream, I heard something behind me gently tapping on the window. I turned round and saw that it was a fairly large flying insect that was knocking against the window-pane from outside in the obvious effort to get into the dark room. This seemed to me very strange. I opened the window immediately and caught the insect in the air as it flew in. It was a scarabaeid beetle, or common rose-chafer (Cetonia aurata), whose gold-green colour most nearly resembles that of a golden scarab. I handed the beetle to my patient with the words, “Here is your scarab.” This experience punctured the desired hole in her rationalism and broke the ice of her intellectual resistance. The treatment could now be continued with satisfactory results.
The French writer Émile Deschamps claims in his memoirs that, in 1805, he was treated to some plum pudding by a stranger named Monsieur de Fontgibu. Ten years later, the writer encountered plum pudding on the menu of a Paris restaurant and wanted to order some, but the waiter told him that the last dish had already been served to another customer, who turned out to be de Fontgibu. Many years later, in 1832, Deschamps was at a dinner and once again ordered plum pudding. He recalled the earlier incident and told his friends that only de Fontgibu was missing to make the setting complete – and in the same instant, the now-senile de Fontgibu entered the room, having got the wrong address.
Jung wrote, after describing some examples, “When coincidences pile up in this way, one cannot help being impressed by them – for the greater the number of terms in such a series, or the more unusual its character, the more improbable it becomes.”
In his book Thirty Years That Shook Physics – The Story of Quantum Theory (1966), George Gamow writes about Wolfgang Pauli, who was apparently considered a person particularly associated with synchronicity events. Gamow whimsically refers to the “Pauli effect“, a mysterious phenomenon which is not understood on a purely materialistic basis, and probably never will be. The following anecdote is told:
It is well known that theoretical physicists cannot handle experimental equipment; it breaks whenever they touch it. Pauli was such a good theoretical physicist that something usually broke in the lab whenever he merely stepped across the threshold. A mysterious event that did not seem at first to be connected with Pauli’s presence once occurred in Professor J. Franck’s laboratory in Göttingen. Early one afternoon, without apparent cause, a complicated apparatus for the study of atomic phenomena collapsed. Franck wrote humorously about this to Pauli at his Zürich address and, after some delay, received an answer in an envelope with a Danish stamp. Pauli wrote that he had gone to visit Bohr and at the time of the mishap in Franck’s laboratory his train was stopped for a few minutes at the Göttingen railroad station. You may believe this anecdote or not, but there are many other observations concerning the reality of the Pauli Effect! 
Relationship with causality
Causality, when defined expansively (as for instance in the “mystic psychology” book The Kybalion, or in the platonic Kant-style Axiom of Causality), states that “nothing can happen without being caused.” Such an understanding of causality may be incompatible with synchronicity. Other definitions of causality (for example, the neo-Humean definition) are concerned only with the relation of cause to effect. As such, they are compatible with synchronicity. There are also opinions which hold that, where there is no external observable cause, the cause can be internal.
It is also pointed out that, since Jung took into consideration only the narrow definition of causality – only the efficient cause – his notion of “acausality” is also narrow and so is not applicable to final and formal causes as understood in Aristotelian or Thomist systems. The final causality is inherent in synchronicity (because it leads to individuation) or synchronicity can be a kind of replacement for final causality; however, such finalism or teleology is considered to be outside the domain of modern science.
Mainstream mathematics argues that statistics and probability theory (exemplified in, e.g., Littlewood’s law or the law of truly large numbers) suffice to explain any purported synchronistic events as mere coincidences. The law of truly large numbers, for instance, states that in large enough populations, any strange event is arbitrarily likely to happen by mere chance. However, some proponents of synchronicity question whether it is even sensible in principle to try to evaluate synchronicity statistically. Jung himself and von Franz argued that statistics work precisely by ignoring what is unique about the individual case, whereas synchronicity tries to investigate that uniqueness.
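The scale of the law of truly large numbers can be made concrete with a back-of-the-envelope calculation. The figures below (a one-in-a-million daily chance, a population of 300 million) are illustrative assumptions, not data from any study:

```python
# Law of truly large numbers, sketched numerically.
# Assume a "remarkable coincidence" has a one-in-a-million chance
# of happening to any given person on any given day.
p = 1e-6                 # per-person, per-day probability (assumed)
people = 300_000_000     # population size (assumed)
days = 365               # one year

# Probability that a given person sees no such event all year
p_none = (1 - p) ** days

# Expected number of people who experience at least one such
# coincidence in a year: roughly a hundred thousand, despite the
# event being "one in a million" for any individual on any day.
expected = people * (1 - p_none)
print(f"{expected:,.0f}")
```

Under these assumptions, on the order of a hundred thousand people per year would each experience an event that feels wildly improbable to them personally, which is the skeptic’s point: rarity per individual does not imply rarity per population.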
Among some psychologists, Jung’s works, such as The Interpretation of Nature and the Psyche, were received as problematic. Fritz Levi, in his 1952 review in Neue Schweizer Rundschau (New Swiss Observations), critiqued Jung’s theory of synchronicity as vague in determinability of synchronistic events, saying that Jung never specifically explained his rejection of “magic causality” to which such an acausal principle as synchronicity would be related. He also questioned the theory’s usefulness.
In psychology and cognitive science, confirmation bias is a tendency to search for or interpret new information in a way that confirms one’s preconceptions, and avoids information and interpretations that contradict prior beliefs. It is a type of cognitive bias and represents an error of inductive inference, or is a form of selection bias toward confirmation of the hypothesis under study, or disconfirmation of an alternative hypothesis. Confirmation bias is of interest in the teaching of critical thinking, as the skill is misused if rigorous critical scrutiny is applied only to evidence that challenges a preconceived idea, but not to evidence that supports it.
Likewise, in psychology and sociology, the term apophenia is used for the mistaken detection of a pattern or meaning in random or meaningless data. Skeptics, such as Robert Todd Carroll of the Skeptic’s Dictionary, argue that the perception of synchronicity is better explained as apophenia. Primates use pattern detection in their form of intelligence, and this can lead to erroneous identification of non-existent patterns. A famous example of this is the fact that human face recognition is so robust, and based on such a basic archetype (essentially two dots and a line contained in a circle), that human beings are very prone to identify faces in random data all through their environment, like the “man in the moon”, or faces in wood grain, an example of the visual form of apophenia known as pareidolia.
Charles Tart sees danger in synchronistic thinking: ‘This danger is the temptation to mental laziness. […] it would be very tempting to say, “Well, it’s synchronistic, it’s forever beyond my understanding,” and so (prematurely) give up trying to find a causal explanation.’
Jung and his followers (e.g., Marie-Louise von Franz) share the belief that numbers are the archetypes of order and major participants in the creation of synchronicity. This hypothesis has implications for some of the “chaotic” phenomena of nonlinear dynamics: dynamical systems theory provides a new context for speculation about synchronicity because it makes predictions about transitions between emergent states of order and nonlocality. This view, however, is not part of mainstream mathematical thought.
On one view, synchronicity serves as a way of making sense of, or describing, some aspects of quantum mechanics. Its proponents argue that quantum experiments demonstrate that, at least in the microworld of subatomic particles, there is an instantaneous connection between particles no matter how far apart they are. This phenomenon, known as quantum non-locality or entanglement, is taken by these proponents to point to a unified field that precedes physical reality. As with archetypal reasoning, they argue that two events can correspond to each other (e.g. particle with particle, or person with person) in a meaningful way.
F. David Peat saw parallels between synchronicity and David Bohm’s theory of the implicate order. According to Bohm’s theory, there are three major realms of existence: the explicate (unfolded) order, the implicate (enfolded) order, and a source or ground beyond both. The flowing movement of the whole can thus be understood as a process of continuous enfolding and unfolding of order or structure; from moment to moment there is a rhythmic pulse between implicate and explicate realities. On this account, synchronicity would literally take place as the bridge between the implicate and explicate orders, whose complementary nature defines the undivided totality.
Many people believe that the Universe or God causes synchronicities. Among the general public, divine intervention is the most widely accepted explanation for meaningful coincidences.
- Jung, Carl (1972). Synchronicity – An Acausal Connecting Principle. Routledge and Kegan Paul. ISBN 978-0-7100-7397-6. Also included in his Collected Works volume 8.
- Jung, Carl (1977). Jung on Synchronicity and the Paranormal: Key Readings. Routledge. ISBN 978-0-415-15508-3.
- Jung, Carl (1981). The Archetypes and the Collective Unconscious. Princeton University Press. ISBN 978-0-691-01833-1.
- Wilson, Robert Anton (1988). Coincidance: A Head Test.