Links for Keyword: Consciousness

Links 1 - 20 of 274

by Josh Wilbur Jake Haendel was a hard-partying chef from a sleepy region of Massachusetts. When he was 28, his heroin addiction resulted in catastrophic brain damage and very nearly killed him. In a matter of months, Jake’s existence was reduced to a voice in his head. Jake’s parents had divorced when he was young. He grew up between their two homes in a couple of small towns just beyond reach of Boston, little more than strip malls, ailing churches and half-empty sports bars. His mother died of breast cancer when he was 19. By then, he had already been selling marijuana and abusing OxyContin, an opioid, for years. “Like a lot of kids at my school, I fell in love with oxy. If I was out to dinner with my family at a restaurant, I would go to the bathroom just to get a fix,” he said. He started culinary school, where he continued to experiment with opioids and cocaine. He hid his drug use from family and friends behind a sociable, fun-loving front. Inside, he felt anxious and empty. “I numbed myself with partying,” he said. After culinary school, he took a job as a chef at a local country club. At 25, Jake tried heroin for the first time, with a co-worker (narcotics are notoriously prevalent in American kitchens). By the summer of 2013, Jake was struggling to find prescription opioids. For months, he had been fending off the symptoms of opioid withdrawal, which he likened to “a severe case of the flu with an added feeling of impending doom”. Heroin offered a euphoric high, staving off the intense nausea and shaking chills of withdrawal. Despite his worsening addiction, Jake married his girlfriend, Ellen, in late 2016. Early in their relationship, Ellen had asked him if he was using heroin. He had lied without hesitation, but she soon found out the truth, and within months, the marriage was falling apart. “I was out of control, selling lots of heroin, using even more, spending a ridiculous amount of money on drugs and alcohol,” he said. In May 2017, Ellen noticed that he was talking oddly, his words slurred and off-pitch. “What’s up with your voice?” she asked him repeatedly.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 27595 - Posted: 11.27.2020

Joel Frohlich Three years ago, I asked, “What the heck is a claustrum?” In that post, I described the mystery of this oddly shaped brain region, located just below the cerebral cortex. Because the claustrum is vanishingly thin in its cross section (think of a pancake shaped like North America), very few patients or lab animals have experienced lesions that specifically destroy the claustrum. For this reason, it’s difficult to pin down what happens when just this brain region (and not others) goes offline. But given its wealth of connections to other brain areas, neuroscientist Christof Koch speculated in 2017 that “the claustrum could be coordinating inputs and outputs across the brain to create consciousness.” This idea is supported by a report of a woman with epilepsy who lost consciousness after her claustrum was electrically stimulated, and perhaps also by the consciousness-transforming effects of Salvinorin A, a drug that binds to receptors that are abundant in the claustrum and alters body image. Could the claustrum, an enigma of the brain, also be the key to the conscious mind? Well, now we have the answer. Using a genetic engineering technique called optogenetics that enables neurons to fire impulses in response to blue light, a team at the RIKEN Brain Science Institute in Japan has discovered what the heck the claustrum actually does. During deep sleep when you’re not dreaming, your cerebral cortex shows slow waves of electrical activity. These waves are very synchronous, meaning they reflect the coordinated activity of many neurons, more so than the smaller, faster waves that are generally present when you are either awake or dreaming. How does the brain coordinate the activity of so many neurons? It turns out that the claustrum plays a key role. © 2020 Sussex Publishers, LLC

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27579 - Posted: 11.14.2020

Sara Reardon In Alysson Muotri’s laboratory, hundreds of miniature human brains, the size of sesame seeds, float in Petri dishes, sparking with electrical activity. These tiny structures, known as brain organoids, are grown from human stem cells and have become a familiar fixture in many labs that study the properties of the brain. Muotri, a neuroscientist at the University of California, San Diego (UCSD), has found some unusual ways to deploy his. He has connected organoids to walking robots, modified their genomes with Neanderthal genes, launched them into orbit aboard the International Space Station, and used them as models to develop more human-like artificial-intelligence systems. Like many scientists, Muotri has temporarily pivoted to studying COVID-19, using brain organoids to test how drugs perform against the SARS-CoV-2 coronavirus. But one experiment has drawn more scrutiny than the others. In August 2019, Muotri’s group published a paper in Cell Stem Cell reporting the creation of human brain organoids that produced coordinated waves of activity, resembling those seen in premature babies [1]. The waves continued for months before the team shut the experiment down. This type of brain-wide, coordinated electrical activity is one of the properties of a conscious brain. The team’s finding led ethicists and scientists to raise a host of moral and philosophical questions about whether organoids should be allowed to reach this level of advanced development, whether ‘conscious’ organoids might be entitled to special treatment and rights not afforded to other clumps of cells, and the possibility that consciousness could be created from scratch. The idea of bodiless, self-aware brains was already on the minds of many neuroscientists and bioethicists. Just a few months earlier, a team at Yale University in New Haven, Connecticut, announced that it had at least partially restored life to the brains of pigs that had been killed hours earlier. By removing the brains from the pigs’ skulls and infusing them with a chemical cocktail, the researchers revived the neurons’ cellular functions and their ability to transmit electrical signals [2].

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 27552 - Posted: 10.28.2020

Adrian Owen DR. ADRIAN OWEN: Imagine this scenario. You've unfortunately had a terrible accident. You're lying in a hospital bed and you're aware—you're aware but you're unable to respond, but the doctors and your relatives don't know that. You have to lie there, listening to them deciding whether to let you live or die. I can think of nothing more terrifying. Communication is at the very heart of what makes us human. It's the basis of everything. What we're doing is we're returning the ability to communicate to some patients who seem to have lost that forever. The vegetative state is often referred to as a state of wakefulness without awareness. Patients open their eyes, they'll just gaze around the room. They'll have sleeping and waking cycles, but they never show any evidence of having any awareness. So, typically, the way that we assess consciousness is through command following. We ask somebody to do something, say, squeeze our hand, and if they do it, you know that they're conscious. The problem in the vegetative state is that these patients by definition can produce no movements. And the question I asked is, well, could somebody command follow with their brain? It was that idea that pushed us into a new realm of understanding this patient population. When a part of your brain is involved in generating a thought or performing an action, it burns energy in the form of glucose, and it's replenished through blood flow. As blood flows to that part of the brain, we're able to see that with the fMRI scanner. I think one of the key insights was the realization that we could simply get somebody to lie in the scanner and imagine something and, based on the pattern of brain activity, we would be able to work out what it was they were thinking. We had to find something that produces a really quite distinct pattern of activity that was more or less the same for everybody. So, we came up with two tasks. One task, imagine playing tennis, produces activity in the premotor cortex in almost every healthy person we tried this in. A different task, thinking about moving from room to room in your house, produces an entirely different pattern of brain activity; particularly, it involves a part of the brain known as the parahippocampal gyrus. And again, it's very consistent across different people.
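The decoding logic Owen describes (telling "imagine playing tennis" apart from "imagine moving through your house" from the pattern of brain activity) can be illustrated with a short sketch. The code below is not the pipeline Owen's group actually used; it runs on simulated voxel patterns and a generic linear classifier from scikit-learn, purely to show how two imagery tasks that evoke distinct patterns can be told apart on held-out trials.

```python
# Hypothetical illustration only: classify simulated "imagine tennis" vs.
# "imagine navigating your house" activity patterns. Real fMRI decoding
# involves preprocessing, run-wise cross-validation, etc.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200

# Simulate voxel patterns: each task boosts a different subset of voxels
# (stand-ins for premotor cortex vs. parahippocampal gyrus).
tennis = rng.normal(0.0, 1.0, (n_trials, n_voxels))
tennis[:, :50] += 1.0          # "premotor" voxels respond to tennis imagery
house = rng.normal(0.0, 1.0, (n_trials, n_voxels))
house[:, 50:100] += 1.0        # "parahippocampal" voxels respond to navigation

X = np.vstack([tennis, house])
y = np.array([0] * n_trials + [1] * n_trials)   # 0 = tennis, 1 = house

# If the two imagery tasks evoke reliably distinct patterns, a linear
# classifier can recover which task was performed on held-out trials.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

In subsequent work by Owen and colleagues, task pairs like these were turned into a rudimentary communication channel, with one imagery task standing for "yes" and the other for "no."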

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27513 - Posted: 10.07.2020

By John Horgan I interviewed psychologist Susan Blackmore 20 years ago while doing research for my book Rational Mysticism. Here, lightly edited, is my description of her: “Her hair was dyed orange, red, and yellow, dark-rooted, cut short as a boy’s, with sideburns plunging like daggers past each multi-ringed ear. Words spewed from her pell-mell, accompanied by equally vigorous hand signals and facial expressions. She was keen on onomatopoeic sound effects: Ahhhhh (to express her pleasure at finding other smart people when she entered Oxford); DUN da la DUN da la DUN (the galloping noise she heard as she sped down a tree-lined tunnel in her first out-of-body experience); Zzzzzzt (the sound of reality dissolving after her second toke of the psychedelic dimethyltryptamine). We were talking in the dining room of the inn where she was staying, and twice we had to move to a quieter spot when employees or patrons of the inn started talking near us. One side effect of her spiritual practice, she explained, is that she has a hard time ignoring stimuli. ‘I think it is one of the bad effects of practicing mindfulness. I'm so aware of everything all the time.’” Blackmore began her career as a parapsychologist, intent on finding evidence for astral projection and extrasensory perception. Her investigations transformed her into a materialist and Darwinian (one of her best-known books describes humans as “meme machines”) who doesn’t believe in ESP, God or free will. And yet she is a mystic, too, who explores consciousness via meditation and psychedelics. In other words, Blackmore pulls off the trick of being both a hard-nosed skeptic and an open-minded adventurer. What more can one ask of a mind scientist? Curious about how her thinking has evolved in our mind-boggling era, I e-mailed her a few questions. An edited transcript of the interview follows. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 27475 - Posted: 09.16.2020

By John Horgan It is a central dilemma of human life—more urgent, arguably, than the inevitability of suffering and death. I have been brooding and ranting to my students about it for years. It surely troubles us more than ever during this plague-ridden era. Philosophers call it the problem of other minds. I prefer to call it the solipsism problem. Solipsism, technically, is an extreme form of skepticism, at once utterly nuts and irrefutable. It holds that you are the only conscious being in existence. The cosmos sprang into existence when you became sentient, and it will vanish when you die. As crazy as this proposition seems, it rests on a brute fact: each of us is sealed in an impermeable prison cell of subjective awareness. Even our most intimate exchanges might as well be carried out via Zoom. You experience your own mind every waking second, but you can only infer the existence of other minds through indirect means. Other people seem to possess conscious perceptions, emotions, memories, intentions, just as you do, but you can’t be sure they do. You can guess how the world looks to me, based on my behavior and utterances, including these words you are reading, but you have no first-hand access to my inner life. For all you know, I might be a mindless bot. Natural selection instilled in us the capacity for a so-called theory of mind—a talent for intuiting others’ emotions and intentions. But we have a countertendency to deceive each other, and to fear we are being deceived. The ultimate deception would be pretending you’re conscious when you’re not. The solipsism problem thwarts efforts to explain consciousness. Scientists and philosophers have proposed countless contradictory hypotheses about what consciousness is and how it arises. Panpsychists contend that all creatures and even inanimate matter—even a single proton!—possess consciousness. Hard-core materialists insist, conversely (and perversely), that not even humans are all that conscious. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27468 - Posted: 09.12.2020

By Laura Sanders When your brain stops working — completely and irreversibly — you’re dead. But drawing the line between life and brain death isn’t always easy. A new report attempts to clarify that distinction, perhaps helping to ease the anguish of family members with a loved one whose brain has died but whose heart still beats. Brain death has been a recognized concept in medicine for decades. But there’s a lot of variation in how people define it, says Gene Sung, a neurocritical care physician at the University of Southern California in Los Angeles. “Showing that there is some worldwide consensus, understanding and agreement at this time will hopefully help minimize misunderstanding of what brain death is,” Sung says. As part of the World Brain Death Project, Sung and his colleagues convened doctors from professional societies around the world to forge a consensus on how to identify brain death. This group, including experts in critical care, neurology and neurosurgery, reviewed the existing research on brain death (which was slim) and used their clinical expertise to write the recommendations, published August 3 in JAMA. In addition to the main guidelines, the final product included 17 supplements that address legal and religious aspects, provide checklists and flowcharts, and even trace the history of relevant medical advances. “Basically, we wrote a book,” Sung says. © Society for Science & the Public 2000–2020.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 27413 - Posted: 08.11.2020

Burcin Ikiz About five years ago, researchers from the Allen Institute for Brain Science in Seattle received a special donation: a piece of live, rare brain tissue. It came from a very deep part of the brain that neuroscientists usually can’t access. The donated tissue contained a rare and mysterious type of brain cells called von Economo neurons (VENs) that are thought to be linked to social intelligence and several neurological diseases. The tissue was a byproduct of a surgery to remove a brain tumor from a patient in her 60s. The location of the tissue turned out to be in one of the deepest layers of the frontoinsular cortex, which is one of the few places where these rare neurons are found in the human brain. “This was one of the extremely rare chances that we received this tissue from a donor that had a tumor being removed from quite a deep [brain] structure,” said Rebecca Hodge, who is the co-first author of the study, published in Nature Communications on March 3rd. Hodge and her colleagues became the first scientists to record electrical spikes from these neurons. Further studies they did on these cells gave them clues about the VENs’ identity and function in the human brain. VENs are large, spindle-shaped neurons. They were first identified by the Ukrainian scientist Vladimir Betz more than a century ago. They were later named after the anatomist Constantin von Economo, who described their shape and distribution through the human cortex. Only humans and especially social animals with large brains, such as great apes, whales, dolphins, and elephants, have VENs. It is hypothesized that the cells evolved independently in these animals. Since common lab animals with smaller brains, like mice and rats, don’t have VENs, it is difficult to study them in a lab environment. © 2017 – 2019 Massive Science Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27291 - Posted: 06.08.2020

Brenda Patoine Can the key to consciousness be found in the folds of the cerebrum? Can the simple unfettered state of “being conscious” be localized in the brain, its properties deconstructed to precisely timed patterns of neural firing? Finding the answers is the goal of a $20 million international research program to search for the neural footprints of consciousness. The broad, multi-year initiative—termed Accelerating Research in Consciousness (ARC)—is being funded by the Templeton World Charity Foundation. In the first phase, representing $5 million, two leading brain theories of consciousness with diametrically opposed assumptions will face off to test their hypotheses. ARC pits the Integrated Information Theory (IIT) and the Global Neuronal Workspace (GNW) theory directly against one another, in what Templeton calls “adversarial collaboration,” to settle some fundamental questions about how, when, and where the brain processes subjective awareness of ourselves and the world around us. The two theoretical models are in stark contrast to one another: their definitions and assumptions of what constitutes consciousness differ and their whole approach to the subject is fundamentally different. What they have in common is that they both study the neural correlates of consciousness. IIT is the brainchild of Giulio Tononi, a professor and director of the Wisconsin Institute for Sleep and Consciousness at the University of Wisconsin. GNW has been elaborated by Stanislas Dehaene of INSERM/Unicog, in concert with Lionel Naccache of Sorbonne/INSERM, Jean-Pierre Changeux of Institut Pasteur, and others. These two theories were selected by Christof Koch, a leading consciousness researcher who is serving as an advisor to the Templeton project, because each has an established following among scientists and a “preponderance of evidence” backing them, says Koch, who now heads the Allen Institute for Brain Science. © 2020 The Dana Foundation.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27201 - Posted: 04.16.2020

By Scott Barry Kaufman Who are you and how did you become interested in free will? I am an Assistant Professor of Philosophy at Iona College where I also serve as a faculty member for the Iona Neuroscience program. I have previously worked in the Scientific and Philosophical Studies of Mind program at Franklin and Marshall College, as well as holding previous appointments as a Lecturer at King’s College London and the University of Alabama. My recent and forthcoming publications focus on issues of autonomy in terms of philosophical accounts of free will as well as how it intersects with neuroscience and psychiatry. One of the main questions I investigate is what neuroscience can tell us about meaningful agency (see here for my recent review of the topic as part of an extended review of research on agency, freedom, and responsibility for the John Templeton Foundation). I became interested in free will via an interdisciplinary route. As an undergraduate at Grinnell College, I majored in psychology with a strong emphasis on experimental psychology and clinical psychology. During my senior year at Grinnell I realized that I was fascinated by the theoretical issues operating in the background of the psychological studies that we read and conducted, especially issues of how the mind is related to the brain, prospects for the scientific study of consciousness, and how humans as agents fit into a natural picture of the world. So I followed these interests to the study of philosophy of psychology and eventually found my way to the perfect fusion of these topics: the neuroscience of free will. What is free will? Free will seems to be a familiar feature of our everyday lives — most of us believe that (at least at times) what we do is up to us to some extent. For instance, that I freely decided to take my job or that I am acting freely when I decide to go for a run this afternoon. Free will is not just that I move about in the world to achieve a goal, but that I exercise meaningful control over what I decide to do. My decisions and actions are up to me in the sense that they are mine — a product of my values, desires, beliefs, and intentions. I decided to take this job because I valued the institution’s mission or I believed that this job would be enriching or a good fit for me. Correspondingly, it seems to me that at least at times I could have decided to and done something else than what I did. I decided to go for a run this afternoon, but no one made me and I wasn’t subject to any compulsion; I could have gone for a coffee instead, at least it seems to me. Philosophers take these starting points and work to construct plausible accounts of free will. Broadly speaking, there is a lot of disagreement as to the right view of free will, but most philosophers believe that a person has free will if they have the ability to act freely, and that this kind of control is linked to whether it would be appropriate to hold that person responsible (e.g., blame or praise them) for what they do. For instance, we don’t typically hold people responsible for what they do if they were acting under severe threat or inner compulsion. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27128 - Posted: 03.17.2020

By Laura Sanders SEATTLE — Live bits of brain look like any other piece of meat — pinkish, solid chunks of neural tissue. But unlike other kinds of tissue or organs donated for research, they hold the memories, thoughts and feelings of a person. “It is identified with who we are,” Karen Rommelfanger, a neuroethicist at Emory University in Atlanta, said February 13 in a news conference at the annual meeting of the American Association for the Advancement of Science. That uniqueness raises a whole new set of ethical quandaries when it comes to experimenting with living brain tissue, she explained. Such donations are crucial to emerging research aimed at teasing out answers to what makes us human. For instance, researchers at the Seattle-based Allen Institute for Brain Science conduct experiments on live brain tissue to get clues about how the cells in the human brain operate (SN: 8/7/19). These precious samples, normally discarded as medical waste, are donated by patients undergoing brain surgery and raced to the lab while the nerve cells are still viable. Other experiments rely on systems that are less sophisticated than a human brain, such as brain tissue from other animals and organoids. These clumps of neural tissue, grown from human stem cells, are still a long way from mimicking the complexities of the human brain (SN: 10/24/19). But with major advances, these systems might one day be capable of much more advanced behavior, which might ultimately lead to awareness, a conundrum that raises ethical issues. © Society for Science & the Public 2000–2020

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27047 - Posted: 02.18.2020

By Tam Hunt Strangely, modern science was long dominated by the idea that to be scientific means to remove consciousness from our explanations, in order to be “objective.” This was the rationale behind behaviorism, a now-dead theory of psychology that took this trend to a perverse extreme. Behaviorists like John Watson and B.F. Skinner scrupulously avoided any discussion of what their human or animal subjects thought, intended or wanted, and focused instead entirely on behavior. They thought that because thoughts in other people’s heads, or in animals, are impossible to know with certainty, we should simply ignore them in our theories. We can only be truly scientific, they asserted, if we focus solely on what can be directly observed and measured: behavior. Erwin Schrödinger, one of the key architects of quantum mechanics in the early part of the 20th century, labeled this approach, in his philosophical 1958 book Mind and Matter, the “principle of objectivation” and expressed it clearly: “By [the principle of objectivation] I mean … a certain simplification which we adopt in order to master the infinitely intricate problem of nature. Without being aware of it and without being rigorously systematic about it, we exclude the Subject of Cognizance from the domain of nature that we endeavor to understand. We step with our own person back into the part of an onlooker who does not belong to the world, which by this very procedure becomes an objective world.” Schrödinger did, however, identify both the problem and the solution. He recognized that “objectivation” is just a simplification that is a temporary step in the progress of science in understanding nature. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27044 - Posted: 02.18.2020

By Bernardo Kastrup At least since the Enlightenment, in the 18th century, one of the most central questions of human existence has been whether we have free will. In the late 20th century, some thought neuroscience had settled the question. However, as it has recently become clear, such was not the case. The elusive answer is nonetheless foundational to our moral codes, criminal justice system, religions and even to the very meaning of life itself—for if every event of life is merely the predictable outcome of mechanical laws, one may question the point of it all. But before we ask ourselves whether we have free will, we must understand what exactly we mean by it. A common and straightforward view is that, if our choices are predetermined, then we don’t have free will; otherwise we do. Yet, upon more careful reflection, this view proves surprisingly inappropriate. To see why, notice first that the prefix “pre” in “predetermined choice” is entirely redundant. Not only are all predetermined choices determined by definition, all determined choices can be regarded as predetermined as well: they always result from dispositions or necessities that precede them. Therefore, what we are really asking is simply whether our choices are determined. In this context, a free-willed choice would be an undetermined one. But what is an undetermined choice? It can only be a random one, for anything that isn’t fundamentally random reflects some underlying disposition or necessity that determines it. There is no semantic space between determinism and randomness that could accommodate choices that are neither. This is a simple but important point, for we often think—incoherently—of free-willed choices as neither determined nor random. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 27024 - Posted: 02.07.2020

Jennifer Rankin in Brussels A pioneering Belgian neurologist has been awarded €1m to fund further work in helping diagnose the most severe brain injuries, as he seeks to battle “the silent epidemic” and help people written off as “vegetative” who, it is believed, will never recover. Steven Laureys, head of the coma science group at Liège University hospital, plans to use the £850,000 award – larger than the Nobel prize – to improve the diagnosis of coma survivors labelled as being in a “persistent vegetative state”. That is “a horrible term” he says, although still one widely used by the general public and many clinicians. Laureys, who has spent more than two decades exploring the boundaries of human consciousness, prefers the term “unresponsive wakefulness” to describe people who are unconscious but show signs of being awake, such as opening their eyes or moving. These patients are often wrongly described as being in a coma, a condition that only lasts a few weeks, in which people are completely unresponsive. “The old view was to consider consciousness, which was one of the biggest mysteries for science to solve, as all or nothing,” he told the Guardian, shortly after he was awarded the Generet prize by Belgium’s King Baudouin Foundation this week. He said that a third of patients he treats at the Liège coma centre had been wrongly diagnosed as being in a vegetative state, despite signs of consciousness. As a young doctor in the 1990s he was frustrated by the questions that torture the families of coma survivors: can their loved ones see or hear them? Can they feel anything, including pain? Laureys and his 30-strong team of engineers and clinicians have shown that some of those with a “vegetative state” diagnosis are minimally conscious, showing signs of awareness such as responding to commands with their eyes. © 2020 Guardian News & Media Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26970 - Posted: 01.20.2020

By Gareth Cook One of science’s most challenging problems is a question that can be stated easily: Where does consciousness come from? In his new book Galileo’s Error: Foundations for a New Science of Consciousness, philosopher Philip Goff considers a radical perspective: What if consciousness is not something special that the brain does but is instead a quality inherent to all matter? It is a theory known as “panpsychism,” and Goff guides readers through the history of the idea, answers common objections (such as “That’s just crazy!”) and explains why he believes panpsychism represents the best path forward. He answered questions from Mind Matters editor Gareth Cook. Can you explain, in simple terms, what you mean by panpsychism? In our standard view of things, consciousness exists only in the brains of highly evolved organisms, and hence consciousness exists only in a tiny part of the universe and only in very recent history. According to panpsychism, in contrast, consciousness pervades the universe and is a fundamental feature of it. This doesn’t mean that literally everything is conscious. The basic commitment is that the fundamental constituents of reality—perhaps electrons and quarks—have incredibly simple forms of experience. And the very complex experience of the human or animal brain is somehow derived from the experience of the brain’s most basic parts. It might be important to clarify what I mean by “consciousness,” as that word is actually quite ambiguous. Some people use it to mean something quite sophisticated, such as self-awareness or the capacity to reflect on one’s own existence. This is something we might be reluctant to ascribe to many nonhuman animals, never mind fundamental particles. But when I use the word consciousness, I simply mean experience: pleasure, pain, visual or auditory experience, et cetera. © 2020 Scientific American,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26959 - Posted: 01.15.2020

By Joseph Stern, M.D. The bullet hole in the teenager’s forehead was so small, it belied the damage already done to his brain. The injury was fatal. We knew this the moment he arrived in the emergency room. Days later, his body was being kept alive in the intensive care unit despite an exam showing that he was brain-dead and no blood was flowing to his brain. Eventually, all his organs failed and his heart stopped beating. But the nurses continued to care for the boy and his family, knowing he was already dead but trying to help the family members with the agonizing process of accepting his death. This scenario occurs all too frequently in the neurosurgical I.C.U. Doctors often delay the withdrawal of life-sustaining supports such as ventilators and IV drips, and nurses continue these treatments — adhering to protocols, yet feeling internal conflict. A lack of consensus or communication among doctors, nurses and families often makes these situations more difficult for all involved. Brain death is stark and final. When the patient’s brain function has ceased, bodily death inevitably follows, no matter what we do. Continued interventions, painful as they may be, are necessarily of limited duration. We can keep a brain-dead patient’s body alive for a few days at the most before his heart stops for good. Trickier and much more common is the middle ground of a neurologically devastating injury without brain death. Here, decisions can be more difficult, and electing to continue or to withdraw treatment much more problematic. Inconsistent communication and support between medical staff members and families plays a role. A new field, neuropalliative care, seeks to focus “on outcomes important to patients and families” and “to guide and support patients and families through complex choices involving immense uncertainty and intensely important outcomes of mind and body.” © 2020 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26958 - Posted: 01.14.2020

By John Horgan Last month I participated in a symposium hosted by the Center for Theory & Research at Esalen, a retreat center in Big Sur, California. Fifteen men and women representing physics, psychology and other fields attempted to make sense of mystical and paranormal experiences, which are generally ignored by conventional, materialist science. The organizers invited me because of my criticism of hard-core materialism and interest in mysticism, but in a recent column I pushed back against ideas advanced at the meeting. Below other attendees push back against me. My fellow speaker Bjorn Ekeberg, whose response is below, took the photos of Esalen, including the one of me beside a stream (I'm the guy on the right). -- John Horgan Jeffrey Kripal, philosopher of religion at Rice University and author, most recently, of The Flip: Epiphanies of Mind and the Future of Knowledge (see our online chats here and here): Thank you, John, for reporting on your week with us all. As one of the moderators of “Physics, Experience and Metaphysics,” let me try to reply, briefly (and too simplistically), to your various points. First, let me begin with something that was left out of your generous summary: the key role of the imagination in so many exceptional or anomalous experiences. As you yourself pointed out with respect to your own psychedelic opening, this is no ordinary or banal “imagination.” This is a kind of “super-imagination” that projects fantastic visionary displays that none of us could possibly come up with in ordinary states: this is a flying caped Superman to our bespectacled Clark Kent. None of this, of course, implies that anything seen in these super-imagined states is literally true (like astral travel or ghosts) or non-human, but it does tell us something important about why the near-death or psychedelic experiencers commonly report that these visionary events are “more real” than ordinary reality (which is also, please note, partially imagined, if our contemporary neuroscience of perception is correct). Put in terms of a common metaphor that goes back to Plato, the fictional movies on the screen can ALL be different and, yes, of course, humanly and historically constructed, but the Light projecting them can be quite Real and the Same. Fiction and reality are in no way exclusive of one another in these paradoxical states. © 2020 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26953 - Posted: 01.13.2020

By John Horgan I just spent a week at a symposium on the mind-body problem, the deepest of all mysteries. The mind-body problem--which encompasses consciousness, free will and the meaning of life--concerns who we really are. Are we matter, which just happens to give rise to mind? Or could mind be the basis of reality, as many sages have insisted? The week-long powwow, called “Physics, Experience and Metaphysics,” took place at Esalen Institute, the legendary retreat center in Big Sur, California. Fifteen men and women representing physics, psychology, philosophy, religious studies and other fields sat in a room overlooking the Pacific and swapped mind-body ideas. What made the conference unusual, at least for me, was the emphasis on what were called “exceptional experiences,” involving telepathy, telekinesis, astral projection, past-life recall and mysticism. I’ve been obsessed with mysticism since I was a kid. As defined by William James in The Varieties of Religious Experience, mystical experiences are breaches in your ordinary life, during which you encounter absolute reality--or, if you prefer, God. You believe, you know, you are seeing things the way they really are. These experiences are usually brief, lasting only minutes or hours. They can be triggered by trauma, prayer, meditation or drugs, or they may strike you out of the blue. I’ve had mild mystical intuitions while sober, for example, during a Buddhist retreat last year. But my most intense experience, by far, happened in 1981 while I was under the influence of a potent hallucinogen. I tried to come to terms with my experiences in my book Rational Mysticism, but my obsession endures. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26924 - Posted: 12.30.2019

By John Horgan Philosophy has taken a beating lately, even, or especially, from philosophers, who are compulsive critics, even, especially, of their own calling. But bright young women and men still aspire to be full-time truth-seekers in our corrupt, capitalist world. Over the past five years, I have met a bunch of impressive young philosophers while doing research on the mind-body problem. Hedda Hassel Mørch, for example. I first heard Mørch (pronounced murk) speak in 2015 at a New York University workshop on integrated information theory, and I ran into her at subsequent events at NYU and elsewhere. She makes a couple of appearances—one anonymous—in my book Mind-Body Problems. We recently crossed tracks in online chitchat about panpsychism, which proposes that consciousness is a property of all matter, not just brains. I’m a panpsychism critic, she’s a proponent. Below Mørch answers some questions.—John Horgan Horgan: Why philosophy? And especially philosophy of mind? Mørch: I remember thinking at some point that if I didn’t study philosophy I would always be curious about what philosophers know. And even if it turned out that they know nothing then at least I would know I wasn’t missing anything. One reason I was attracted to philosophy of mind in particular was that it seemed like an area where philosophy clearly has some real and useful work to do. In other areas of philosophy, it might seem that many central questions can either be deflated or taken over by science. For example, in ethics, one might think there are no moral facts and so all we can do is figure out what we mean by the words “right” and “wrong”. And in metaphysics, questions such as “is the universe infinite” can now, at least arguably, be understood as scientific questions. But consciousness is a phenomenon which is obviously real, and the question of how it arises from the brain is clearly a substantive, not merely verbal question, which does not seem tractable by science as we know it. As David Chalmers says, science as we know it can only tackle the so-called easy problems of consciousness, not the hard problem. © 2019 Scientific American

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26904 - Posted: 12.19.2019

Christof Koch A future where the thinking capabilities of computers approach our own is quickly coming into view. We feel ever more powerful machine-learning (ML) algorithms breathing down our necks. Rapid progress in coming decades will bring about machines with human-level intelligence capable of speech and reasoning, with a myriad of contributions to economics, politics and, inevitably, warcraft. The birth of true artificial intelligence will profoundly affect humankind’s future, including whether it has one. The following quotes provide a case in point: “From the time the last great artificial intelligence breakthrough was reached in the late 1940s, scientists around the world have looked for ways of harnessing this ‘artificial intelligence’ to improve technology beyond what even the most sophisticated of today’s artificial intelligence programs can achieve.” Advertisement “Even now, research is ongoing to better understand what the new AI programs will be able to do, while remaining within the bounds of today’s intelligence. Most AI programs currently programmed have been limited primarily to making simple decisions or performing simple operations on relatively small amounts of data.” These two paragraphs were written by GPT-2, a language bot I tried last summer. Developed by OpenAI, a San Francisco–based institute that promotes beneficial AI, GPT-2 is an ML algorithm with a seemingly idiotic task: presented with some arbitrary starter text, it must predict the next word. The network isn’t taught to “understand” prose in any human sense. Instead, during its training phase, it adjusts the internal connections in its simulated neural networks to best anticipate the next word, the word after that, and so on. Trained on eight million Web pages, its innards contain more than a billion connections that emulate synapses, the connecting points between neurons. When I entered the first few sentences of the article you are reading, the algorithm spewed out two paragraphs that sounded like a freshman’s effort to recall the gist of an introductory lecture on machine learning during which she was daydreaming. The output contains all the right words and phrases—not bad, really! Primed with the same text a second time, the algorithm comes up with something different. © 2019 Scientific American,
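The behavior Koch describes, priming the network with starter text and letting it repeatedly predict the next word, can be reproduced with the publicly released GPT-2 weights. The sketch below uses the open-source Hugging Face transformers library rather than whatever interface Koch used; the prompt, the model size ("gpt2", the smallest release), and the sampling settings are illustrative assumptions, not details from the article.

```python
# Minimal sketch (assumes the `transformers` and `torch` packages are installed):
# prime the public GPT-2 model with starter text and sample a continuation,
# i.e., repeatedly predict the next word given everything generated so far.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "A future where the thinking capabilities of computers approach our own"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (do_sample=True) is why the same prompt yields different text each run.
output_ids = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the continuation is sampled from the model's predicted word probabilities rather than always taking the single most likely word, running the same prompt twice produces different text, which is why Koch got a new answer when he primed the model a second time.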

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 26894 - Posted: 12.12.2019