Links for Keyword: Consciousness

Follow us on Facebook or subscribe to our mailing list to receive news updates.


Links 1 - 20 of 337

By Rachel Nuwer One person felt a sensation of “slowly floating into the air” as images flashed around. Another recalled “the most profound sense of love and peace,” unlike anything experienced before. Consciousness became a “foreign entity” to another whose “whole sense of reality disappeared.” These were some of the firsthand accounts shared in a small survey of people who belonged to an unusual cohort: They had all undergone a near-death experience and tried psychedelic drugs. The survey participants described their near-death and psychedelic experiences as being distinct, yet they also reported significant overlap. In a paper published on Thursday, researchers used these accounts to provide a comparison of the two phenomena. “For the first time, we have a quantitative study with personal testimony from people who have had both of these experiences,” said Charlotte Martial, a neuroscientist at the University of Liège in Belgium and an author of the findings, which were published in the journal Neuroscience of Consciousness. “Now we can say for sure that psychedelics can be a kind of window through which people can enter a rich, subjective state resembling a near-death experience.” Near-death experiences are surprisingly common — an estimated 5 to 10 percent of the general population has reported having one. For decades, scientists largely dismissed the fantastical stories of people who returned from the brink of death. But some researchers have started to take these accounts seriously. “In recent times, the science of consciousness has become interested in nonordinary states,” said Christopher Timmermann, a research fellow at the Center for Psychedelic Research at Imperial College London and an author of the article. “To get a comprehensive account of what it means to be a human being requires incorporating these experiences.” © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 29450 - Posted: 08.22.2024

By Carl Zimmer When people suffer severe brain damage — as a result of car crashes, for example, or falls or aneurysms — they may slip into a coma for weeks, their eyes closed, their bodies unresponsive. Some recover, but others enter a mysterious state: eyes open, yet without clear signs of consciousness. Hundreds of thousands of such patients in the United States alone are diagnosed in a vegetative state or as minimally conscious. They may survive for decades without regaining a connection to the outside world. These patients pose an agonizing mystery both for their families and for the medical professionals who care for them. Even if they can’t communicate, might they still be aware? A large study published on Wednesday suggests that a quarter of them are. Teams of neurologists at six research centers asked 241 unresponsive patients to spend several minutes at a time doing complex cognitive tasks, such as imagining themselves playing tennis. Twenty-five percent of them responded with the same patterns of brain activity seen in healthy people, suggesting that they were able to think and were at least somewhat aware. Dr. Nicholas Schiff, a neurologist at Weill Cornell Medicine and an author of the study, said the study shows that up to 100,000 patients in the United States alone might have some level of consciousness despite their devastating injuries. The results should lead to more sophisticated exams of people with so-called disorders of consciousness, and to more research into how these patients might communicate with the outside world, he said: “It’s not OK to know this and to do nothing.” When people lose consciousness after a brain injury, neurologists traditionally diagnose them with a bedside exam. They may ask patients to say something, to look to their left or right, or to give a thumbs-up. © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29436 - Posted: 08.15.2024

By Hartmut Neven & Christof Koch The brain is a mere piece of furniture in the vastness of the cosmos, subject to the same physical laws as asteroids, electrons or photons. On the surface, its three pounds of neural tissue seem to have little to do with quantum mechanics, the textbook theory that underlies all physical systems, since quantum effects are most pronounced on microscopic scales. Newly proposed experiments, however, promise to bridge this gap between microscopic and macroscopic systems, like the brain, and offer answers to the mystery of consciousness. Quantum mechanics explains a range of phenomena that cannot be understood using the intuitions formed by everyday experience. Recall the Schrödinger’s cat thought experiment, in which a cat exists in a superposition of states, both dead and alive. In our daily lives there seems to be no such uncertainty—a cat is either dead or alive. But the equations of quantum mechanics tell us that at any moment the world is composed of many such coexisting states, a tension that has long troubled physicists. Taking the bull by the horns, the cosmologist Roger Penrose in 1989 made the radical suggestion that a conscious moment occurs whenever a superposed quantum state collapses. The idea that two fundamental scientific mysteries—the origin of consciousness and the collapse of what is called the wave function in quantum mechanics—are related triggered enormous excitement. Penrose’s theory can be grounded in the intricacies of quantum computation. Consider a quantum bit, a qubit, the unit of information in quantum information theory that exists in a superposition of a logical 0 with a logical 1. According to Penrose, when this system collapses into either 0 or 1, a flicker of conscious experience is created, described by a single classical bit. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29427 - Posted: 08.11.2024

By Tijl Grootswagers, Genevieve L. Quek & Manuel Varlet You are standing in the cereal aisle, weighing up whether to buy a healthy bran or a sugary chocolate-flavoured alternative. Your hand hovers momentarily before you make the final grab. But did you know that during those last few seconds, while you’re reaching out, your brain is still evaluating the pros and cons – influenced by everything from your last meal, the health star rating, the catchy jingle in the ad, and the colours of the letters on the box? Our recently published research shows our brains do not just think first and then act. Even while you are reaching for a product on a supermarket shelf, your brain is still evaluating whether you are making the right choice. Further, we found measuring hand movements offers an accurate window into the brain’s ongoing evaluation of the decision – you don’t have to hook people up to expensive brain scanners. What does this say about our decision-making? And what does it mean for consumers and the people marketing to them? There has been debate within neuroscience on whether a person’s movements to enact a decision can be modified once the brain’s “motor plan” has been made. Our research revealed not only that movements can be changed after a decision – “in flight” – but also that the changes matched incoming information from a person’s senses. To study how our decisions unfold over time, we tracked people’s hand movements as they reached for different options shown in pictures – for example, in response to the question “is this picture a face or an object?” Put simply, reaching movements are shaped by ongoing thinking and decision-making. © 2010–2024, The Conversation US, Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 5: The Sensorimotor System
Link ID: 29387 - Posted: 07.11.2024

By Simon Makin Most of us have an “inner voice,” and we tend to assume everybody does, but recent evidence suggests that people vary widely in the extent to which they experience inner speech, from an almost constant patter to a virtual absence of self-talk. “Until you start asking the right questions you don’t know there’s even variation,” says Gary Lupyan, a cognitive scientist at the University of Wisconsin–Madison. “People are really surprised because they’d assumed everyone is like them.” A new study, from Lupyan and his colleague Johanne Nedergaard, a cognitive scientist at the University of Copenhagen, shows that not only are these differences real but they also have consequences for our cognition. Participants with weak inner voices did worse at psychological tasks that measure, say, verbal memory than did those with strong inner voices. The researchers have even proposed calling a lack of inner speech “anendophasia” and hope that naming it will help facilitate further research. The study adds to growing evidence that our inner mental worlds can be profoundly different. “It speaks to the surprising diversity of our subjective experiences,” Lupyan says. Psychologists think we use inner speech to assist in various mental functions. “Past research suggests inner speech is key in self-regulation and executive functioning, like task-switching, memory and decision-making,” says Famira Racy, an independent scholar who co-founded the Inner Speech Research Lab at Mount Royal University in Calgary. “Some researchers have even suggested that not having an inner voice may impact these and other areas important for a sense of self, although this is not a certainty.” Inner speech researchers know that it varies from person to person, but studies have typically used subjective measures, like questionnaires, and it is difficult to know for sure if what people say goes on in their heads is what really happens. 
“It’s very difficult to reflect on one’s own inner experiences, and most people aren’t very good at it when they start out,” says Charles Fernyhough, a psychologist at Durham University in England, who was not involved in the study. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29382 - Posted: 07.06.2024

By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure of creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture. Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields. These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 29344 - Posted: 06.06.2024

By Dan Falk Some years ago, when he was still living in southern California, neuroscientist Christof Koch drank a bottle of Barolo wine while watching The Highlander, and then, at midnight, ran up to the summit of Mount Wilson, the 5,710-foot peak that looms over Los Angeles. After an hour of “stumbling around with my headlamp and becoming nauseated,” as he later described the incident, he realized the nighttime adventure was probably not a smart idea, and climbed back down, though not before shouting into the darkness the last line of William Ernest Henley’s 1875 poem “Invictus”: “I am the master of my fate / I am the captain of my soul.” Koch, who first rose to prominence for his collaborative work with the late Nobel Laureate Francis Crick, is hardly the only scientist to ponder the nature of the self—but he is perhaps the most adventurous, both in body and mind. He sees consciousness as the central mystery of our universe, and is willing to explore any reasonable idea in the search for an explanation. Over the years, Koch has toyed with a wide array of ideas, some of them distinctly speculative—like the idea that the Internet might become conscious, for example, or that with sufficient technology, multiple brains could be fused together, linking their accompanying minds along the way. (And yet, he does have his limits: He’s deeply skeptical both of the idea that we can “upload” our minds and of the “simulation hypothesis.”) In his new book, Then I Am Myself The World, Koch, currently the chief scientist at the Allen Institute for Brain Science in Seattle, ventures through the challenging landscape of integrated information theory (IIT), a framework that attempts to compute the amount of consciousness in a system based on the degree to which information is networked. 
Along the way, he struggles with what may be the most difficult question of all: How do our thoughts—seemingly ethereal and without mass or any other physical properties—have real-world consequences? © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29294 - Posted: 05.07.2024

By Steve Paulson These days, we’re inundated with speculation about the future of artificial intelligence—and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O’Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She’s steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness. O’Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.) When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey. I hadn’t expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O’Gieblyn if she would read from one of her notebooks, and she picked this passage: “In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone …” And so it went—strange, lyrical, and nonsensical—tapping into some part of herself that she didn’t know was there. That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind. © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29289 - Posted: 05.03.2024

By Dan Falk Daniel Dennett, who died in April at the age of 82, was a towering figure in the philosophy of mind. Known for his staunch physicalist stance, he argued that minds, like bodies, are the product of evolution. He believed that we are, in a sense, machines—but astoundingly complex ones, the result of millions of years of natural selection. Dennett wrote more than a dozen books, some of them aimed at a scholarly audience but many of them directed squarely at the inquisitive non-specialist—including bestsellers like Consciousness Explained, Breaking the Spell, and Darwin’s Dangerous Idea. Reading his works, one gets the impression of a mind jammed to the rafters with ideas. As Richard Dawkins put it in a blurb for Dennett’s last book, a memoir titled I’ve Been Thinking: “How unfair for one man to be blessed with such a torrent of stimulating thoughts.” Dennett spent decades puzzling over the existence of minds. How does non-thinking matter arrange itself into matter that can think, and even ponder its own existence? A long-time academic nemesis of Dennett’s, the philosopher David Chalmers, dubbed this the “Hard Problem” of consciousness. But Dennett felt this label needlessly turned a series of potentially solvable problems into one giant unsolvable one: He was sure the so-called hard problem would evaporate once the various lesser (but still difficult) problems of understanding the brain’s mechanics were figured out. Because he viewed brains as miracle-free mechanisms, he saw no barrier to machine consciousness, at least in principle. Yet he had no fear of Terminator-style AI doomsday scenarios, either. (“The whole singularity stuff, that’s preposterous,” he once told an interviewer for The Guardian. “It distracts us from much more pressing problems.”) © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29285 - Posted: 05.02.2024

By John Horgan Philosopher Daniel Dennett died a few days ago, on April 19. When he argued that we overrate consciousness, he demonstrated, paradoxically, how conscious he was, and he made his audience more conscious. Dennett’s death feels like the end of an era, the era of ultramaterialist, ultra-Darwinian, swaggering, know-it-all scientism. Who’s left, Richard Dawkins? Dennett wasn’t as smart as he thought he was, I liked to say, because no one is. He lacked the self-doubt gene, but he forced me to doubt myself. He made me rethink what I think, and what more can you ask of a philosopher? I first encountered Dennett’s in-your-face brilliance in 1981 when I read The Mind’s I, a collection of essays he co-edited. And his name popped up at a consciousness shindig I attended earlier this month. To honor Dennett, I’m posting a revision of my 2017 critique of his claim that consciousness is an “illusion.” I’m also coining a phrase, “the Dennett paradox,” which is explained below. Of all the odd notions to emerge from debates over consciousness, the oddest is that it doesn’t exist, at least not in the way we think it does. It is an illusion, like “Santa Claus” or “American democracy.” René Descartes said consciousness is the one undeniable fact of our existence, and I find it hard to disagree. I’m conscious right now, as I type this sentence, and you are presumably conscious as you read it (although I can’t be absolutely sure). The idea that consciousness isn’t real has always struck me as absurd, but smart people espouse it. One of the smartest is philosopher Daniel Dennett, who has been questioning consciousness for decades, notably in his 1991 bestseller Consciousness Explained. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29266 - Posted: 04.24.2024

By Dan Falk In 2022, researchers at the Bee Sensory and Behavioral Ecology Lab at Queen Mary University of London observed bumblebees doing something remarkable: The diminutive, fuzzy creatures were engaging in activity that could only be described as play. Given small wooden balls, the bees pushed them around and rotated them. The behavior had no obvious connection to mating or survival, nor was it rewarded by the scientists. It was, apparently, just for fun. The study on playful bees is part of a body of research that a group of prominent scholars of animal minds cited today, buttressing a new declaration that extends scientific support for consciousness to a wider suite of animals than has been formally acknowledged before. For decades, there’s been a broad agreement among scientists that animals similar to us — the great apes, for example — have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems. The new declaration, signed by biologists and philosophers, formally embraces that view. It reads, in part: “The empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including all reptiles, amphibians and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans and insects).” Inspired by recent research findings that describe complex cognitive behaviors in these and other animals, the document represents a new consensus and suggests that researchers may have overestimated the degree of neural complexity required for consciousness. © 2024 the Simons Foundation.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29264 - Posted: 04.20.2024

by Alex Blasdel Patient One was 24 years old and pregnant with her third child when she was taken off life support. It was 2014. A couple of years earlier, she had been diagnosed with a disorder that caused an irregular heartbeat, and during her two previous pregnancies she had suffered seizures and faintings. Four weeks into her third pregnancy, she collapsed on the floor of her home. Her mother, who was with her, called 911. By the time an ambulance arrived, Patient One had been unconscious for more than 10 minutes. Paramedics found that her heart had stopped. After being driven to a hospital where she couldn’t be treated, Patient One was taken to the emergency department at the University of Michigan. There, medical staff had to shock her chest three times with a defibrillator before they could restart her heart. She was placed on an external ventilator and pacemaker, and transferred to the neurointensive care unit, where doctors monitored her brain activity. She was unresponsive to external stimuli, and had a massive swelling in her brain. After she lay in a deep coma for three days, her family decided it was best to take her off life support. It was at that point – after her oxygen was turned off and nurses pulled the breathing tube from her throat – that Patient One became one of the most intriguing scientific subjects in recent history. For several years, Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness. 
Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived. © 2024 Guardian News & Media Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 13: Memory and Learning
Link ID: 29236 - Posted: 04.02.2024

By Jyoti Madhusoodanan When the Philadelphia-based company Bioquark announced a plan in 2016 to regenerate neurons in brain-dead people, their proposal elicited skepticism and backlash. Researchers questioned the scientific merits of the planned study, which sought to inject stem cells and other materials into recently deceased subjects. Ethicists said it bordered on quackery and would exploit grieving families. Bioquark has since folded. But quietly, a physician who was involved in the controversial proposal, Himanshu Bansal, has continued the research. Bansal recently told Undark that he has been conducting work funded by him and his research team at a private hospital in Rudrapur, India, experimenting mostly with young adults who have succumbed to traffic accidents. He said he has data for 20 subjects for the first phase of the study and 11 for the second — some of whom showed glimmers of renewed electrical activity — and he plans to expand the study to include several more. Bansal said he has submitted his results to peer-reviewed journals over the past several years but has yet to find one that would publish them. Bansal may be among the more controversial figures conducting research with people who have been declared brain dead, but not by any stretch is he the only one. In recent years, high-profile experiments implanting non-human organs into human bodies, a procedure known as xenotransplantation, have fueled rising interest in using brain-dead subjects to study procedures that are too risky to perform on living people. With the support of a ventilator and other equipment, a person’s heart, kidneys, immune system, and other body parts can function for days, sometimes weeks or more, after brain death. 
For researchers who seek to understand drug delivery, organ transplantation, and other complexities of human physiology, these bodies can provide a more faithful simulacrum of a living human being than could be achieved with animals or lab-grown cells and tissues.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 1: Introduction: Scope and Outlook
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 29217 - Posted: 03.26.2024

By Kevin Mitchell It is often said that “the mind is what the brain does.” Modern neuroscience has indeed shown us that mental goings-on rely on and are in some sense entailed by neural goings-on. But the truth is that we have a poor handle on the nature of that relationship. One way to bridge that divide is to try to define the relationship between neural and mental representations. The basic premise of neuroscience is that patterns of neural activity carry some information — they are about something. But not all such patterns need be thought of as representations; many of them are just signals. Simple circuits such as the muscle stretch reflex or the eye-blink reflex, for example, are configured to respond to stimuli such as the lengthening of a muscle or a sudden bright light. But they don’t need to internally represent this information — or make that information available to other parts of the nervous system. They just need to respond to it. More complex information processing, by contrast, such as in our image-forming visual system, requires internal neural representation. By integrating signals from multiple photoreceptors, retinal ganglion cells carry information about patterns of light in the visual stimulus — particularly edges where the illumination changes from light to dark. This information is then made available to the thalamus and the cortical hierarchy, where additional processing goes on to extract higher- and higher-order features of the entire visual scene. Scientists have elucidated the logic of these hierarchical systems by studying the types of stimuli to which neurons are most sensitively tuned, known as “receptive fields.” If some neuron in an early cortical area responds selectively to, say, a vertical line in a certain part of the visual field, the inference is that when such a neuron is active, that is the information that it is representing. 
In this case, it is making that information available to the next level of the visual system — itself just a subsystem of the brain. © 2024 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29148 - Posted: 02.13.2024

By Christian Guay & Emery Brown What does it mean to be conscious? People have been thinking and writing about this question for millennia. Yet many things about the conscious mind remain a mystery, including how to measure and assess it. What is a unit of consciousness? Are there different levels of consciousness? What happens to consciousness during sleep, coma and general anesthesia? As anesthesiologists, we think about these questions often. We make a promise to patients every day that they will be disconnected from the outside world and their inner thoughts during surgery, retain no memories of the experience and feel no pain. In this way, general anesthesia has enabled tremendous medical advances, from microscopic vascular repairs to solid organ transplants. In addition to their tremendous impact on clinical care, anesthetics have emerged as powerful scientific tools to probe questions about consciousness. They allow us to induce profound and reversible changes in conscious states—and study brain responses during these transitions. But one of the challenges that anesthesiologists face is measuring the transition from one state to another. That’s because many of the approaches that exist interrupt or disrupt what we are trying to study. Essentially, assessing the system affects the system. In studies of human consciousness, determining whether someone is conscious can arouse the person being studied—confounding that very assessment. To address this challenge, we adapted a simple approach we call the breathe-squeeze method. It offers us a way to study changes in conscious state without interrupting those shifts. To understand this approach, it helps to consider some insights from studies of consciousness that have used anesthetics. For decades researchers have used electroencephalography (EEG) to observe electrical activity in the brains of people receiving various anesthetics. 
They can then analyze those EEG recordings to characterize patterns specific to each drug, so-called anesthetic signatures. © 2024 SCIENTIFIC AMERICAN
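In practice, an "anesthetic signature" is largely a matter of how power in particular EEG frequency bands shifts under a drug. The following is a toy sketch only, run on a synthetic trace, not the authors' actual analysis; the band edges and sampling rate are illustrative assumptions:

```python
import numpy as np

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Total spectral power of an EEG trace within a frequency band."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / eeg.size
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

# A synthetic 10 Hz oscillation concentrates its energy in the
# 8-12 Hz (alpha) band rather than the 20-30 Hz band.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 10 * t)
assert band_power(trace, fs, 8, 12) > band_power(trace, fs, 20, 30)
```

Comparing such band powers across drugs and doses is one simple way signatures of this kind can be quantified.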

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 10: Biological Rhythms and Sleep
Link ID: 29116 - Posted: 01.27.2024

By Mariana Lenharo Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything. The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration. Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between. The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects. The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room. Their struggle to get the collaboration off the ground is mirrored in wider fractures in the field. 
© 2024 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29106 - Posted: 01.18.2024

By Mariana Lenharo Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question. In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use? Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes. “With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up,” Mason says. Consciousness is one of them. The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29065 - Posted: 12.27.2023

By Oshan Jarow Sometimes when I’m looking out across the northern meadow of Brooklyn’s Prospect Park, or even the concrete parking lot outside my office window, I wonder if someone like Shakespeare or Emily Dickinson could have taken in the same view and seen more. I don’t mean making out blurry details or more objects in the scene. But through the lens of their minds, could they encounter the exact same world as me and yet have a richer experience? One way to answer that question, at least as a thought experiment, could be to compare the electrical activity inside our brains while gazing out upon the same scene, and to run some statistical analysis designed to actually tell us whose brain activity indicates more richness. But that’s just a loopy thought experiment, right? Not exactly. One of the newest frontiers in the science of the mind is the attempt to measure consciousness’s “complexity,” or how diverse and integrated electrical activity is across the brain. Philosophers and neuroscientists alike hypothesize that more complex brain activity signifies “richer” experiences. The idea of measuring complexity stems from information theory — a mathematical approach to understanding how information is stored, communicated, and processed — which doesn’t provide wonderfully intuitive examples of what more richness actually means. Unless you’re a computer person. “If you tried to upload the content onto a hard drive, it’s how much memory you’d need to be able to store the experience you’re having,” Adam Barrett, a professor of machine learning and data science at the University of Sussex, told me. Another approach to understanding richness is to look at how it changes in different mental states.
Recent studies have found that measures of complexity are lowest in patients under general anesthesia, higher in ordinary wakefulness, and higher still in psychedelic trips, which can notoriously turn even the most mundane experiences — say, my view of the parking lot outside my office window — into profound and meaningful encounters.
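Barrett's hard-drive analogy maps naturally onto compressibility: a diverse, unpredictable signal needs more memory to store than a repetitive one. A minimal illustration using off-the-shelf zlib compression (an analogy only; the studies described here typically compute Lempel-Ziv complexity on binarized brain recordings, not zlib ratios):

```python
import os
import zlib

def compression_ratio(signal: bytes) -> float:
    """Compressed size relative to raw size; higher means less redundancy."""
    return len(zlib.compress(signal)) / len(signal)

# A highly regular "signal" compresses to a small fraction of its size...
regular = bytes([0, 1] * 500)
# ...while a maximally diverse one barely compresses at all.
diverse = os.urandom(1000)

assert compression_ratio(regular) < compression_ratio(diverse)
```

On this view, "richer" states of consciousness are those whose brain activity sits closer to the incompressible end of the scale.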

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29049 - Posted: 12.16.2023

By Amanda Gefter On a February morning in 1935, a disoriented homing pigeon flew into the open window of an unoccupied room at the Hotel New Yorker. It had a band around its leg, but where it came from, or was meant to be headed, no one could say. While management debated what to do, a maid rushed to the 33rd floor and knocked at the door of the hotel’s most infamous denizen: Nikola Tesla. The 78-year-old inventor quickly volunteered to take in the homeless pigeon. “Dr. Tesla … dropped work on a new electrical project, lest his charge require some little attention,” reported The New York Times. “The man who recently announced the discovery of an electrical death-beam, powerful enough to destroy 10,000 airplanes at a swoop, carefully spread towels on his window ledge and set down a little cup of seed.” Nikola Tesla—the Serbian-American scientist famous for designing the alternating current motor and the Tesla coil—had, for years, regularly been spotted skulking through the nighttime streets of midtown Manhattan, feeding the birds at all hours. In the dark, he’d sound a low whistle, and from the gloom, hordes of pigeons would flock to the old man, perching on his outstretched arms. He was known to keep baskets in his room as nests, along with caches of homemade seed mix, and to leave his windows perpetually open so the birds could come and go. Once, he was arrested for trying to lasso an injured homing pigeon in the plaza of St. Patrick’s Cathedral, and, from his holding cell in the 34th Street precinct, had to convince the officers that he was—or had been—one of the most famous inventors in the world. It had been years since he’d produced a successful invention. He was gaunt and broke—living off of debt and good graces—having been kicked out of a string of hotels, a trail of pigeon droppings and unpaid rent in his wake. He had no family or close friends, except for the birds. © 2023 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29034 - Posted: 12.09.2023

By Erin Garcia de Jesús A new brain-monitoring device aims to be the Goldilocks of anesthesia delivery, dispensing drugs in just the right dose. No physician wants a patient to wake up during surgery — nor do patients. So anesthesiologists often give more drug than necessary to keep patients sedated during medical procedures or while on lifesaving machines like ventilators. But anesthetics can sometimes be harmful when given in excess, says David Mintz, an anesthesiologist at Johns Hopkins University. For instance, elderly people with cognitive conditions like dementia or age-related cognitive decline may be at higher risk of post-surgical confusion. Studies also hint that long periods of use in young children might cause behavioral problems. “The less we give of them, the better,” Mintz says. An automated anesthesia delivery system could help doctors find the right drug dose. The new device monitored rhesus macaques’ brain activity and supplied a common anesthetic called propofol in doses that were automatically adjusted every 20 seconds. Fluctuating doses ensured the animals received just enough drug — not too much or too little — to stay sedated for 125 minutes, researchers reported October 31 in PNAS Nexus. The study is a step toward devising and testing a system that would work for people. © Society for Science & the Public 2000–2023.
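At its core, the device described above is a feedback loop: measure a brain-activity marker, compare it with the target sedation level, adjust the infusion, and repeat every 20 seconds. A bare-bones proportional-control sketch, where the function name, gain, and dose bounds are illustrative assumptions rather than the study's actual model-based controller:

```python
def adjust_dose(current_dose: float, marker: float, target: float,
                gain: float = 0.1, min_dose: float = 0.0,
                max_dose: float = 10.0) -> float:
    """One control step: nudge the infusion rate toward the target level
    of a (hypothetical) sedation marker, clamped to safe bounds."""
    error = target - marker              # positive error: sedation too light
    new_dose = current_dose + gain * error
    return max(min_dose, min(max_dose, new_dose))

# If the marker shows lighter sedation than targeted, the dose rises:
assert adjust_dose(2.0, marker=0.4, target=0.6) > 2.0
# If sedation is deeper than targeted, the dose falls:
assert adjust_dose(2.0, marker=0.8, target=0.6) < 2.0
```

Calling a step like this on a fixed schedule is what lets the dose "fluctuate" around the minimum needed to keep a subject sedated, rather than erring on the side of overdosing.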

Related chapters from BN: Chapter 14: Biological Rhythms, Sleep, and Dreaming; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 10: Biological Rhythms and Sleep; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 29021 - Posted: 11.26.2023