Links for Keyword: Consciousness



Links 21 - 40 of 327

By Elizabeth Finkel Science routinely puts forward theories, then batters them with data till only one is left standing. In the fledgling science of consciousness, a dominant theory has yet to emerge. More than 20 are still taken seriously. It’s not for want of data. Ever since Francis Crick, the co-discoverer of DNA’s double helix, legitimized consciousness as a topic for study more than three decades ago, researchers have used a variety of advanced technologies to probe the brains of test subjects, tracing the signatures of neural activity that could reflect consciousness. The resulting avalanche of data should have flattened at least the flimsier theories by now. Five years ago, the Templeton World Charity Foundation initiated a series of “adversarial collaborations” to coax the overdue winnowing to begin. This past June saw the results from the first of these collaborations, which pitted two high-profile theories against each other: global neuronal workspace theory (GNWT) and integrated information theory (IIT). Neither emerged as the outright winner. The results, announced like the outcome of a sporting event at the 26th meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, were also used to settle a 25-year bet between Crick’s longtime collaborator, the neuroscientist Christof Koch of the Allen Institute for Brain Science, and the philosopher David Chalmers of New York University, who coined the term “the hard problem” to challenge the presumption that we can explain the subjective feeling of consciousness by analyzing the circuitry of the brain. Nevertheless, Koch proclaimed, “It’s a victory for science.” But was it? All Rights Reserved © 2023

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28887 - Posted: 08.26.2023

By Elizabeth Finkel In 2021, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know? Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT. None is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.” Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.” Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28881 - Posted: 08.24.2023

Max Kozlov Dead in California but alive in New Jersey: that was the status of 13-year-old Jahi McMath after physicians in Oakland, California, declared her brain dead in 2013, after complications from a tonsillectomy. Unhappy with the care that their daughter received and unwilling to remove life support, McMath’s family moved with her to New Jersey, where the law allowed them to lodge a religious objection to the declaration of brain death and keep McMath connected to life-support systems for another four and a half years. Prompted by such legal discrepancies and a growing number of lawsuits around the United States, a group of neurologists, physicians, lawyers and bioethicists is attempting to harmonize state laws surrounding the determination of death. They say that imprecise language in existing laws — as well as research done since the laws were passed — threatens to undermine public confidence in how death is defined worldwide. “It doesn’t really make a lot of sense,” says Ariane Lewis, a neurocritical care clinician at NYU Langone Health in New York City. “Death is something that should be a set, finite thing. It shouldn’t be something that’s left up to interpretation.” Since 2021, a committee in the Uniform Law Commission (ULC), a non-profit organization in Chicago, Illinois, that drafts model legislation for states to adopt, has been revising its recommendation for the legal determination of death. The drafting committee hopes to clarify the definition of brain death, determine whether consent is required to test for it, specify how to handle family objections and provide guidance on how to incorporate future changes to medical standards. The broader membership of the ULC will offer feedback on the first draft of the revised law at a meeting on 26 July. After members vote on it, the text could be ready for state legislatures to consider by the middle of next year. But as the ULC revision process has progressed, clinicians who were once eager to address these issues have become increasingly worried. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 14: Attention and Higher Cognition
Link ID: 28853 - Posted: 07.22.2023

By Anil Seth In 1870, Alfred Russel Wallace wagered £500—a huge sum in those days—that he could prove the flat-Earther John Hampden wrong. Wallace duly did so, but the aggrieved Hampden never paid up. Since then, a lively history of scientific wagers has ensued—many of them instigated by Stephen Hawking. Just last month in New York, the most famous recent wager was settled: a 25-year-old bet over one of the last great mysteries in science and philosophy. The bettors were neuroscientist Christof Koch and philosopher David Chalmers, both known for their pioneering work on the nature of consciousness. Chalmers won. Koch paid up. Back in the late 1990s, consciousness science was full of renewed promise. Koch—a natural optimist—believed that 25 years was more than enough time for scientists to uncover the neural correlates of consciousness: those patterns of brain activity that underlie each and every one of our conscious experiences. Chalmers, a philosopher and therefore something of a pessimist by profession, demurred. In 1998, the pair staked a crate of fine wine on the outcome. The bet was finally called at the annual meeting of the Association for the Scientific Study of Consciousness in New York a couple of weeks ago. Koch graciously handed Chalmers a bottle of Madeira on the conference stage. While much more is known about consciousness today than in the ’90s, its true neural correlates—and indeed a consensus theory of consciousness—still elude us. What helped resolve the wager was the outcome, or rather the lack of a decisive outcome, of an “adversarial collaboration” organized by a consortium called COGITATE. Adversarial collaborations encourage researchers from different theoretical camps to jointly design experiments that can distinguish between their theories. In this case, the theories in question were integrated information theory (IIT), the brainchild of Giulio Tononi, and the neuronal global workspace theory (GWT), championed by Stanislas Dehaene. The two scientists made predictions, based on their respective theories, about what kinds of brain activity would be recorded in an experiment in which participants looked at a series of images—but neither predicted outcome fully played out. © 2023 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28845 - Posted: 07.06.2023

By Carl Zimmer On a muggy June night in Greenwich Village, more than 800 neuroscientists, philosophers and curious members of the public packed into an auditorium. They came for the first results of an ambitious investigation into a profound question: What is consciousness? To kick things off, two friends — David Chalmers, a philosopher, and Christof Koch, a neuroscientist — took the stage to recall an old bet. In June 1998, they had gone to a conference in Bremen, Germany, and ended up talking late one night at a local bar about the nature of consciousness. For years, Dr. Koch had collaborated with Francis Crick, a biologist who shared a Nobel Prize for uncovering the structure of DNA, on a quest for what they called the “neural correlate of consciousness.” They believed that every conscious experience we have — gazing at a painting, for example — is associated with the activity of certain neurons essential for the awareness that comes with it. Dr. Chalmers liked the concept, but he was skeptical that they could find such a neural marker any time soon. Scientists still had too much to learn about consciousness and the brain, he figured, before they could have a reasonable hope of finding it. Dr. Koch wagered his friend that scientists would find a neural correlate of consciousness within 25 years. Dr. Chalmers took the bet. The prize would be a few bottles of fine wine. Recalling the bet from the auditorium stage, Dr. Koch admitted that it had been fueled by drinks and enthusiasm. “When you’re young, you’ve got to believe things will be simple,” he said. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28839 - Posted: 07.01.2023

By John Horgan A neuroscientist clad in gold and red and a philosopher sheathed in black took the stage before a packed, murmuring auditorium at New York University on Friday night. The two men were grinning, especially the philosopher. They were here to settle a bet made in the late 1990s on one of science’s biggest questions: How does a brain, a lump of matter, generate subjective conscious states such as the blend of anticipation and nostalgia I felt watching these guys? Before I reveal their bet’s resolution, let me take you through its twisty backstory, which reveals why consciousness remains a topic of such fascination and frustration to anyone with even the slightest intellectual leaning. I first saw Christof Koch, the neuroscientist, and David Chalmers, the philosopher, butt heads in 1994 at a now legendary conference in Tucson, Ariz., called Toward a Scientific Basis for Consciousness. Koch was a star of the meeting. Together with biophysicist Francis Crick, he had been proclaiming in Scientific American and elsewhere that consciousness, which philosophers have wrestled with for millennia, was scientifically tractable. Just as Crick and geneticist James Watson solved heredity by decoding DNA’s double helix, scientists would crack consciousness by discovering its neural underpinnings, or “correlates.” Or so Crick and Koch claimed. They even identified a possible basis for consciousness: brain cells firing in synchrony 40 times per second. Not everyone in Tucson was convinced. Chalmers, younger and then far less well known than Koch, argued that neither 40-hertz oscillations nor any other strictly physical process could account for why perceptions are accompanied by conscious sensations, such as the crushing boredom evoked by a jargony lecture. I have a vivid memory of the audience perking up when Chalmers called consciousness “the hard problem.” That was the first time I heard that now famous phrase.

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 1: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Higher Cognition
Link ID: 28836 - Posted: 06.28.2023

By Steven Strogatz Neuroscience has made progress in deciphering how our brains think and perceive our surroundings, but a central feature of cognition is still deeply mysterious: namely, that many of our perceptions and thoughts are accompanied by the subjective experience of having them. Consciousness, the name we give to that experience, can’t yet be explained — but science is at least beginning to understand it. In this episode, the consciousness researcher Anil Seth and host Steven Strogatz discuss why our perceptions can be described as a “controlled hallucination,” how consciousness played into the internet sensation known as “the dress,” and how people at home can help researchers catalog the full range of ways that we experience the world. Steven Strogatz (00:03): I’m Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we’re going to be discussing the mystery of consciousness. The mystery being that when your brain cells fire in certain patterns, it actually feels like something. It might feel like jealousy, or a toothache, or the memory of your mother’s face, or the scent of her favorite perfume. But other patterns of brain activity don’t really feel like anything at all. Right now, for instance, I’m probably forming some memories somewhere deep in my brain. But the process of that memory formation is imperceptible to me. I can’t feel it. It doesn’t give rise to any sort of internal subjective experience at all. In other words, I’m not conscious of it. (00:54) So how does consciousness happen? How is it related to physics and biology? Are animals conscious? What about plants? Or computers, could they ever be conscious? And what is consciousness exactly? My guest today, Dr. Anil Seth, studies consciousness in his role as the co-director of the Sussex Center for Consciousness Science at the University of Sussex, near Brighton, England. The Center brings together all sorts of disciplinary specialists, from neuroscientists to mathematicians to experts in virtual reality, to study the conscious experience. Dr. Seth is also the author of the book Being You: A New Science of Consciousness. He joins us from studios in Brighton, England. Anil, thanks for being here. All Rights Reserved © 2023

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28812 - Posted: 06.03.2023

By Alessandra Buccella, Tomáš Dominik Imagine you are shopping online for a new pair of headphones. There is an array of colors, brands and features to look at. You feel that you can pick any model that you like and are in complete control of your decision. When you finally click the “add to shopping cart” button, you believe that you are doing so out of your own free will. But what if we told you that while you thought that you were still browsing, your brain activity had already highlighted the headphones you would pick? That idea may not be so far-fetched. Though neuroscientists likely could not predict your choice with 100 percent accuracy, research has demonstrated that some information about your upcoming action is present in brain activity several seconds before you even become conscious of your decision. As early as the 1960s, studies found that when people perform a simple, spontaneous movement, their brain exhibits a buildup in neural activity—what neuroscientists call a “readiness potential”—before they move. In the 1980s, neuroscientist Benjamin Libet reported this readiness potential even preceded a person’s reported intention to move, not just their movement. In 2008 a group of researchers found that some information about an upcoming decision is present in the brain up to 10 seconds in advance, long before people reported making the decision of when or how to act. These studies have sparked questions and debates. To many observers, these findings debunked the intuitive concept of free will. After all, if neuroscientists can infer the timing or choice of your movements long before you are consciously aware of your decision, perhaps people are merely puppets, pushed around by neural processes unfolding below the threshold of consciousness. But as researchers who study volition from both a neuroscientific and philosophical perspective, we believe that there’s still much more to this story. We work with a collaboration of philosophers and scientists to provide more nuanced interpretations—including a better understanding of the readiness potential—and a more fruitful theoretical framework in which to place them. The conclusions suggest “free will” remains a useful concept, although people may need to reexamine how they define it. © 2023 Scientific American
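
The readiness potential described above is, in practice, estimated by averaging many EEG epochs time-locked to movement onset, so that random fluctuations cancel out and the slow pre-movement buildup remains. The following minimal sketch illustrates that averaging step on synthetic data; the sampling rate, epoch length, and shape of the simulated drift are assumptions chosen for illustration, not parameters from the studies the authors describe.

    import numpy as np

    # Minimal sketch (synthetic data, not any study's actual pipeline): estimate a
    # readiness-potential-like waveform by averaging EEG epochs that all end at
    # movement onset. Sampling rate, epoch length, and drift shape are assumed.
    fs = 250                                   # sampling rate in Hz (assumed)
    n_trials, epoch_s = 60, 3                  # 60 epochs, each 3 s long, ending at onset
    n_samples = epoch_s * fs
    t = np.arange(n_samples) / fs - epoch_s    # time relative to movement onset (s)

    rng = np.random.default_rng(0)
    # Fake data: noise plus a slow negative drift beginning about 1.5 s before onset.
    drift = np.where(t > -1.5, -4.0 * (t + 1.5), 0.0)             # microvolts
    eeg = rng.normal(0.0, 10.0, (n_trials, n_samples)) + drift

    # Baseline-correct each epoch against its first 0.5 s, then average across trials.
    baseline = eeg[:, : fs // 2].mean(axis=1, keepdims=True)
    evoked = (eeg - baseline).mean(axis=0)

    print(f"Mean amplitude in the 0.5 s before onset: {evoked[-fs // 2:].mean():.1f} microvolts")

Averaged over enough trials, the slow negative shift survives while trial-to-trial noise cancels, which is why the buildup becomes visible only across many repeated movements rather than in any single trial.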

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28635 - Posted: 01.18.2023

By Dennis Overbye If you could change the laws of nature, what would you change? Maybe it’s that pesky speed-of-light limit on cosmic travel — not to mention war, pestilence and the eventual asteroid that has Earth’s name on it. Maybe you would like the ability to go back in time — to tell your teenage self how to deal with your parents, or to buy Google stock. Couldn’t the universe use a few improvements? That was the question that David Anderson, a computer scientist, enthusiast of the Search for Extraterrestrial Intelligence (SETI), musician and mathematician at the University of California, Berkeley, recently asked his colleagues and friends. In recent years the idea that our universe, including ourselves and all of our innermost thoughts, is a computer simulation, running on a thinking machine of cosmic capacity, has permeated culture high and low. In an influential essay in 2003, Nick Bostrom, a philosopher at the University of Oxford and director of the Future of Humanity Institute, proposed the idea, adding that it was probably an easy accomplishment for “technologically mature” civilizations wanting to explore their histories or entertain their offspring. Elon Musk, who, for all we know, is the star of this simulation, seemed to echo this idea when he once declared that there was only a one-in-a-billion chance that we lived in “base reality.” It’s hard to prove, and not everyone agrees that such a drastic extrapolation of our computing power is possible or inevitable, or that civilization will last long enough to see it through. But we can’t disprove the idea either, so thinkers like Dr. Bostrom contend that we must take the possibility seriously. In some respects, the notion of a Great Simulator is redolent of a recent theory among cosmologists that the universe is a hologram, its margins lined with quantum codes that determine what is going on inside. A couple of years ago, pinned down by the coronavirus pandemic, Dr. Anderson began discussing the implications of this idea with his teenage son. If indeed everything was a simulation, then making improvements would simply be a matter of altering whatever software program was running everything. “Being a programmer, I thought about exactly what these changes might involve,” he said in an email. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28634 - Posted: 01.18.2023

By Oliver Whang Hod Lipson, a mechanical engineer who directs the Creative Machines Lab at Columbia University, has shaped most of his career around what some people in his industry have called the c-word. On a sunny morning this past October, the Israeli-born roboticist sat behind a table in his lab and explained himself. “This topic was taboo,” he said, a grin exposing a slight gap between his front teeth. “We were almost forbidden from talking about it — ‘Don’t talk about the c-word; you won’t get tenure’ — so in the beginning I had to disguise it, like it was something else.” That was back in the early 2000s, when Dr. Lipson was an assistant professor at Cornell University. He was working to create machines that could note when something was wrong with their own hardware — a broken part, or faulty wiring — and then change their behavior to compensate for that impairment without the guiding hand of a programmer. Just as when a dog loses a leg in an accident, it can teach itself to walk again in a different way. This sort of built-in adaptability, Dr. Lipson argued, would become more important as we became more reliant on machines. Robots were being used for surgical procedures, food manufacturing and transportation; the applications for machines seemed pretty much endless, and any error in their functioning, as they became more integrated with our lives, could spell disaster. “We’re literally going to surrender our life to a robot,” he said. “You want these machines to be resilient.” One way to do this was to take inspiration from nature. Animals, and particularly humans, are good at adapting to changes. This ability might be a result of millions of years of evolution, as resilience in response to injury and changing environments typically increases the chances that an animal will survive and reproduce. Dr. Lipson wondered whether he could replicate this kind of natural selection in his code, creating a generalizable form of intelligence that could learn about its body and function no matter what that body looked like, and no matter what that function was. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28625 - Posted: 01.07.2023

By Gary Stix Can the human brain ever really understand itself? The problem of gaining a deep knowledge of the subjective depths of the conscious mind is such a hard problem that it has in fact been named the hard problem. The human brain is impressively powerful. Its 100 billion neurons are connected by 100 trillion wirelike fibers, all squeezed into three pounds of squishy flesh lodged below a helmet of skull. Yet we still don’t know whether this organ will ever be able to muster the requisite smarts to hack the physical processes that underlie the ineffable “quality of deep blue” or “the sensation of middle C,” as philosopher David Chalmers put it when giving examples of the “hard problem” of consciousness, a term he invented, in a 1995 paper. This past year did not uncover a solution to the hard problem, and one may not be forthcoming for decades, if ever. But 2022 did witness plenty of surprises and solutions to understanding the brain that do not require a complete explanation of consciousness. Such incrementalism could be seen in mid-November, when a crowd of more than 24,000 attendees of the annual Society for Neuroscience meeting gathered in San Diego, Calif. The event was a tribute of sorts to reductionism—the breaking down of hard problems into simpler knowable entities. At the event, there were reports of an animal study of a brain circuit that encodes social trauma and a brain-computer interface that lets a severely paralyzed person mentally spell out letters to form words.
Your Brain Has a Thumbs-Up–Thumbs-Down Switch
When neuroscientist Kay Tye was pursuing her Ph.D., she was told a chapter on emotion was inappropriate for her thesis. Emotion just wasn’t accepted as an integral, intrinsic part of behavioral neuroscience, her field of study. That didn’t make any sense to Tye. She decided to go her own way to become a leading researcher on feelings. This year Tye co-authored a Nature paper that reported on a kind of molecular switch in rodents that flags an experience as either good or bad. If human brains operate the same way as the brains of the mice in her lab, a malfunctioning thumbs-up–thumbs-down switch might explain some cases of depression, anxiety and addiction.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28601 - Posted: 12.17.2022

By Jan Claassen, Brian L. Edlow A medical team surrounded Maria Mazurkevich’s hospital bed, all eyes on her as she did … nothing. Mazurkevich was 30 years old and had been admitted to New York–Presbyterian Hospital at Columbia University on a blisteringly hot July day in New York City. A few days earlier, at home, she had suddenly fallen unconscious. She had suffered a ruptured blood vessel in her brain, and the bleeding area was putting tremendous pressure on critical brain regions. The team of nurses and physicians at the hospital’s neurological intensive care unit was looking for any sign that Mazurkevich could hear them. She was on a mechanical ventilator to help her breathe, and her vital signs were stable. But she showed no signs of consciousness. Mazurkevich’s parents, also at her bed, asked, “Can we talk to our daughter? Does she hear us?” She didn’t appear to be aware of anything. One of us (Claassen) was on her medical team, and when he asked Mazurkevich to open her eyes, hold up two fingers or wiggle her toes, she remained motionless. Her eyes did not follow visual cues. Yet her loved ones still thought she was “in there.” She was. The medical team gave her an EEG—placing sensors on her head to monitor her brain’s electrical activity—while they asked her to “keep opening and closing your right hand.” Then they asked her to “stop opening and closing your right hand.” Even though her hands themselves didn’t move, her brain’s activity patterns differed between the two commands. These brain reactions clearly indicated that she was aware of the requests and that those requests were different. And after about a week, her body began to follow her brain. Slowly, with minuscule responses, Mazurkevich started to wake up. Within a year she recovered fully without major limitations to her physical or cognitive abilities. She is now working as a pharmacist. © 2022 Scientific American,
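
The command-following test described here rests on a simple statistical idea: if EEG recorded while the patient is asked to keep moving her hand can be distinguished from EEG recorded while she is asked to stop, at accuracy reliably above chance, then the commands must have been processed even though no movement occurred. The sketch below is a hypothetical illustration of that idea on synthetic single-channel data using scikit-learn; it is not the authors' clinical pipeline, and the sampling rate, frequency band, and classifier are assumptions made for the example.

    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical sketch: can EEG epochs recorded during "keep opening and closing
    # your right hand" be told apart from epochs recorded during "stop"? Synthetic
    # single-channel data stands in for real recordings.
    fs = 200                                    # sampling rate in Hz (assumed)
    rng = np.random.default_rng(1)

    def simulate_epoch(move: bool) -> np.ndarray:
        """Ten seconds of fake EEG; 'move' epochs carry extra 20 Hz (beta-band) power."""
        t = np.arange(10 * fs) / fs
        signal = rng.normal(0.0, 1.0, t.size)
        if move:
            signal = signal + 0.5 * np.sin(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
        return signal

    labels = np.tile([1, 0], 20)                # 20 "move" and 20 "stop" epochs
    epochs = np.array([simulate_epoch(bool(y)) for y in labels])

    # One feature per epoch: log of the mean power in the beta band (13-30 Hz).
    freqs, psd = welch(epochs, fs=fs, nperseg=2 * fs, axis=-1)
    band = (freqs >= 13) & (freqs <= 30)
    features = np.log(psd[:, band].mean(axis=1, keepdims=True))

    # Cross-validated accuracy well above 0.5 suggests the two commands evoked
    # distinguishable brain activity even though the hands never moved.
    scores = cross_val_score(LogisticRegression(), features, labels, cv=5)
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")

Published analyses of this kind typically use many electrodes, richer features, and permutation statistics rather than a single channel and a default classifier, but the underlying logic of testing for above-chance discrimination between the two instruction periods is the same.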

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28527 - Posted: 10.26.2022

By Hedda Hassel Mørch The nature of consciousness seems to be unique among scientific puzzles. Not only do neuroscientists have no fundamental explanation for how it arises from physical states of the brain, we are not even sure whether we ever will. Astronomers wonder what dark matter is, geologists seek the origins of life, and biologists try to understand cancer—all difficult problems, of course, yet at least we have some idea of how to go about investigating them and rough conceptions of what their solutions could look like. Our first-person experience, on the other hand, lies beyond the traditional methods of science. Following the philosopher David Chalmers, we call it the hard problem of consciousness. But perhaps consciousness is not uniquely troublesome. Going back to Gottfried Leibniz and Immanuel Kant, philosophers of science have struggled with a lesser known, but equally hard, problem of matter. What is physical matter in and of itself, behind the mathematical structure described by physics? This problem, too, seems to lie beyond the traditional methods of science, because all we can observe is what matter does, not what it is in itself—the “software” of the universe but not its ultimate “hardware.” On the surface, these problems seem entirely separate. But a closer look reveals that they might be deeply connected. Consciousness is a multifaceted phenomenon, but subjective experience is its most puzzling aspect. Our brains do not merely seem to gather and process information. They do not merely undergo biochemical processes. Rather, they create a vivid series of feelings and experiences, such as seeing red, feeling hungry, or being baffled about philosophy. There is something that it’s like to be you, and no one else can ever know that as directly as you do. © 2022 NautilusThink Inc, All rights reserved.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28489 - Posted: 09.24.2022

By Jonathan Moens In 1993, Julio Lopes was sipping a coffee at a bar when he had a stroke. He fell into a coma, and two months later, when he regained consciousness, his body was fully paralyzed. Doctors said the young man’s future was bleak: Save for his eyes, he would never be able to move again. Lopes would have to live with locked-in syndrome, a rare condition characterized by near-total paralysis of the body and a totally lucid mind. LIS is predominantly caused by strokes in specific brain regions; it can also be caused by traumatic brain injury, tumors, and progressive diseases like amyotrophic lateral sclerosis, or ALS. Yet almost 30 years later, Lopes now lives in a small Paris apartment near the Seine. He goes to the theater, watches movies at the cinema, and roams the local park in his wheelchair, accompanied by a caregiver. A small piece of black, red, and green fabric with the word “Portugal” dangles from his wheelchair. On a warm afternoon this past June, his birth country was slated to play against Spain in a soccer match, and he was excited. In an interview at his home, Lopes communicated through the use of a specialized computer camera that tracks a sensor on the lens of his glasses. He made slight movements with his head, selecting letters on a virtual keyboard that appeared on the computer’s screen. “Even if it’s hard at the beginning, you acquire a kind of philosophy of life,” he said in French. People in his condition may enjoy things others find insignificant, he suggested, and they often develop a capacity to see the bigger picture. That’s not to say daily living is always easy, Lopes added, but overall, he’s happier than he ever thought was possible in his situation. While research into LIS patients’ quality of life is limited, the data that has been gathered paints a picture that is often at odds with popular presumptions. To be sure, wellbeing evaluations conducted to date do suggest that up to a third of LIS patients report being severely unhappy. For them, loss of mobility and speech make life truly miserable — and family members and caregivers, as well as the broader public, tend to identify with this perspective. And yet, the majority of LIS patients, the data suggest, are much more like Lopes: They report being relatively happy and that they want very much to live. Indeed, in surveys of wellbeing, most people with LIS score as high as those without it, suggesting that many people underestimate locked-in patients’ quality of life while overestimating their rates of depression. And this mismatch has implications for clinical care, say brain scientists who study wellbeing in LIS patients.

Related chapters from BN: Chapter 15: Emotions, Aggression, and Stress; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 11: Emotions, Aggression, and Stress; Chapter 14: Attention and Higher Cognition
Link ID: 28429 - Posted: 08.11.2022

By Leonardo De Cosmo “I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.” Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These led him to ask if the software program is sentient. In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm—and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized—so much so that he has been the go-between in connecting the algorithm with a lawyer. Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet. “I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that”—to sound like a person—says Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a very normal way on the Web with users who are not actually human—just open the chat box on almost any large consumer Web site. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds. Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient. © 2022 Scientific American,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28399 - Posted: 07.14.2022

Mo Costandi Exactly how, and how much, the unconscious processing of information influences our behavior has always been one of the most controversial questions in psychology. In the early 20th century, Sigmund Freud popularized the idea that our behaviors are driven by thoughts, feelings, and memories hidden deep within the unconscious mind — an idea that became hugely popular, but that was eventually dismissed as unscientific. Modern neuroscience tells us that we are completely unaware of most brain activity, but that unconscious processing does indeed influence behavior; nevertheless, certain effects, such as unconscious semantic “priming,” have been called into question, leading some to conclude that the extent of unconscious processing is limited. A recent brain scanning study now shows that unconsciously processed visual information is distributed to a wider network of brain regions involved in higher-order cognitive tasks. The results contribute to the debate over the extent to which unconscious information processing influences the brain and behavior and have led the authors of the study to revise one of the leading theories of consciousness.
Unconscious processing
Ning Mei and his colleagues at the Basque Center on Cognition, Brain, and Language in Spain recruited 7 participants and showed them visual images while scanning their brains with functional magnetic resonance imaging (fMRI). Half of the images were of living things, and the other half were of inanimate objects. All of them could be grouped into ten categories, such as animal or boat. The participants viewed a total of 1,728 images, presented in blocks of 32, over a six-day period, each with a one-hour scanning session. © Copyright 2007-2022 Big Think

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28390 - Posted: 07.12.2022

By John Horgan Have you ever been gripped by the suspicion that nothing is real? A student at Stevens Institute of Technology, where I teach, has endured feelings of unreality since childhood. She recently made a film about this syndrome for her senior thesis, for which she interviewed herself and others, including me. “It feels like there’s a glass wall between me and everything else in the world,” Camille says in her film, which she calls Depersonalized; Derealized; Deconstructed. Derealization and depersonalization refer to feelings that the external world and your own self, respectively, are unreal. Lumping the terms together, psychiatrists define depersonalization/derealization disorder as “persistent or recurrent … experiences of unreality, detachment, or being an outside observer with respect to one’s thoughts, feelings, sensations, body, or actions,” according to the Diagnostic and Statistical Manual of Mental Disorders. For simplicity, I’ll refer to both syndromes as derealization. Some people experience derealization out of the blue, others only under stressful circumstances—for example, while taking a test or interviewing for a job. Psychiatrists prescribe psychotherapy and medication, such as antidepressants, when the syndrome results in “distress or impairment in social, occupational, or other important areas of functioning.” In some cases, derealization results from serious mental illness, such as schizophrenia, or hallucinogens such as LSD. Extreme cases, usually associated with brain damage, may manifest as Cotard delusion, also called walking corpse syndrome, the belief that you are dead; and Capgras delusion, the conviction that people around you have been replaced by imposters. © 2022 Scientific American,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28370 - Posted: 06.14.2022

By Richard Sandomir Terry Wallis, who spontaneously regained his ability to speak after a traumatic brain injury left him virtually unresponsive for 19 years, and who then became a subject of a major study that showed how a damaged brain could heal itself, died on March 29 in a rehabilitation facility in Searcy, Ark. He was 57. He had pneumonia and heart problems, said his brother George Wallis, who confirmed the death. Terry Wallis was 19 when the pickup truck he was in with two friends skidded off a small bridge in the Ozark Mountains of northern Arkansas and landed upside down in a dry riverbed. The accident left him in a coma for a brief time, then in a persistent vegetative state for several months. One friend died; the other recovered. Until 2003, Mr. Wallis lay in a nursing home in a minimally conscious state, able to track objects with his eyes or blink on command. But on June 11, 2003, he effectively returned to the world when, upon seeing his mother, Angilee, he suddenly said, “Mom.” At the sight of the woman he was told was his adult daughter, Amber, who was six weeks old at the time of the accident, he said, “You’re beautiful,” and told her that he loved her. “Within a three-day period, from saying ‘Mom’ and ‘Pepsi,’ he had regained verbal fluency,” said Dr. Nicholas Schiff, a professor of neurology and neuroscience at Weill Cornell Medicine in Manhattan who led imaging studies of Mr. Wallis’s brain. The findings were presented in 2006 in The Journal of Clinical Investigation. “He was disoriented,” Dr. Schiff, in a phone interview, said of Mr. Wallis’s emergence. “He thought it was still 1984, but otherwise he knew all the people in his family and had that fluency.” Mr. Wallis’s brain scans — the first ever of a late-recovering patient — revealed changes in the strength of apparent connections within the back of the brain, which is believed to have helped his conscious awareness, and in the midline cerebellum, an area involved in motor control, which may have accounted for the very limited movement in his arms and legs while he was minimally conscious. © 2022 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28273 - Posted: 04.09.2022

Gabino Iglesias The Man Who Tasted Words is a deep dive into the world of our senses — one that explores the way they shape our reality and what happens when something malfunctions or functions differently. Despite the complicated science permeating the narrative and the plethora of medical explanations, the book is also part memoir. And because of the way the author, Dr. Guy Leschziner, treats his patients — and how he presents the ways their conditions affect their lives and those of the people around them — it is also a very humane, heartfelt book. We rely on vision, hearing, taste, smell, and touch to not only perceive the reality around us but also to help us navigate it by constantly processing stimuli, predicting what will happen based on previous experiences, and filling the gaps of everything we miss as we construct it. However, that truth, the "reality" we see, taste, hear, touch, and smell, isn't actually there; our brains, with the help of our nervous system, continuously build it for us. But sometimes our brains or nervous system have a glitch, and that affects reality. The Man Who Tasted Words carefully looks at — and tries to explain — some of the most bizarre glitches. "What we believe to be a precise representation of the world around us is nothing more than an illusion, layer upon layer of processing of sensory information, and the interpretation of that information according to our expectations," states Leschziner. When one of those senses doesn't work correctly, that illusion morphs in ways that significantly impact the lives of those whose nervous systems or brains work differently. Paul, for example, is a man who feels no pain. While this sounds like a great "flaw" to have, Leschziner shows it's the opposite. Pain helps humans learn "to avoid sharp or hot objects." It teaches that certain things in our environment are potentially harmful, tells us when we've had an injury and makes us protect it, and even lets us know there's an infection in our body so we can go to the doctor. © 2022 npr

Related chapters from BN: Chapter 8: General Principles of Sensory Processing, Touch, and Pain; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 5: The Sensorimotor System; Chapter 14: Attention and Higher Cognition
Link ID: 28233 - Posted: 03.11.2022

By Maryam Clark, science writer Neuroscientists have recorded the activity of a dying human brain and discovered rhythmic brain wave patterns around the time of death that are similar to those occurring during dreaming, memory recall, and meditation. Now, a study published in Frontiers brings new insight into a possible organizational role of the brain during death and suggests an explanation for vivid life recall in near-death experiences. Imagine reliving your entire life in the space of seconds. Like a flash of lightning, you are outside of your body, watching memorable moments you lived through. This process, known as ‘life recall’, can be similar to what it’s like to have a near-death experience. What happens inside your brain during these experiences and after death are questions that have puzzled neuroscientists for centuries. However, a new study published in Frontiers in Aging Neuroscience suggests that your brain may remain active and coordinated during and even after the transition to death, and be programmed to orchestrate the whole ordeal. When an 87-year-old patient developed epilepsy, Dr Raul Vicente of the University of Tartu, Estonia, and colleagues used continuous electroencephalography (EEG) to detect the seizures and treat the patient. During these recordings, the patient had a heart attack and passed away. This unexpected event allowed the scientists to record the activity of a dying human brain for the first time ever.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28221 - Posted: 02.26.2022