Links for Keyword: Consciousness



Links 1 - 20 of 332

By George Musser Had you stumbled into a certain New York University auditorium in March 2023, you might have thought you were at a pure neuroscience conference. In fact, it was a workshop on artificial intelligence—but your confusion could have been readily forgiven. Speakers talked about “ablation,” a procedure for creating brain lesions, as commonly done in animal model experiments. They mentioned “probing,” like using electrodes to tap into the brain’s signals. They presented linguistic analyses and cited long-standing debates in psychology over nature versus nurture. Plenty of the hundred or so researchers in attendance probably hadn’t worked with natural brains since dissecting frogs in seventh grade. But their language choices reflected a new milestone for their field: The most advanced AI systems, such as ChatGPT, have come to rival natural brains in size and complexity, and AI researchers are studying them almost as if they were studying a brain in a skull. As part of that, they are drawing on disciplines that traditionally take humans as their sole object of study: psychology, linguistics, philosophy of mind. And in return, their own discoveries have started to carry over to those other fields. These various disciplines now have such closely aligned goals and methods that they could unite into one field, Grace Lindsay, assistant professor of psychology and data science at New York University, argued at the workshop. She proposed calling this merged science “neural systems understanding.” “Honestly, it’s neuroscience that would benefit the most, I think,” Lindsay told her colleagues, noting that neuroscience still lacks a general theory of the brain. “The field that I come from, in my opinion, is not delivering. Neuroscience has been around for over 100 years. I really thought that, when people developed artificial neural systems, they could come to us.” © 2024 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 29344 - Posted: 06.06.2024

By Dan Falk Some years ago, when he was still living in southern California, neuroscientist Christof Koch drank a bottle of Barolo wine while watching The Highlander, and then, at midnight, ran up to the summit of Mount Wilson, the 5,710-foot peak that looms over Los Angeles. After an hour of “stumbling around with my headlamp and becoming nauseated,” as he later described the incident, he realized the nighttime adventure was probably not a smart idea, and climbed back down, though not before shouting into the darkness the last line of William Ernest Henley’s 1875 poem “Invictus”: “I am the master of my fate / I am the captain of my soul.” Koch, who first rose to prominence for his collaborative work with the late Nobel Laureate Francis Crick, is hardly the only scientist to ponder the nature of the self—but he is perhaps the most adventurous, both in body and mind. He sees consciousness as the central mystery of our universe, and is willing to explore any reasonable idea in the search for an explanation. Over the years, Koch has toyed with a wide array of ideas, some of them distinctly speculative—like the idea that the Internet might become conscious, for example, or that with sufficient technology, multiple brains could be fused together, linking their accompanying minds along the way. (And yet, he does have his limits: He’s deeply skeptical both of the idea that we can “upload” our minds and of the “simulation hypothesis.”) In his new book, Then I Am Myself The World, Koch, currently the chief scientist at the Allen Institute for Brain Science in Seattle, ventures through the challenging landscape of integrated information theory (IIT), a framework that attempts to compute the amount of consciousness in a system based on the degree to which information is networked. 
Along the way, he struggles with what may be the most difficult question of all: How do our thoughts—seemingly ethereal and without mass or any other physical properties—have real-world consequences? © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29294 - Posted: 05.07.2024

By Steve Paulson These days, we’re inundated with speculation about the future of artificial intelligence—and specifically how AI might take away our jobs, or steal the creative work of writers and artists, or even destroy the human species. The American writer Meghan O’Gieblyn also wonders about these things, and her essays offer pointed inquiries into the philosophical and spiritual underpinnings of this technology. She’s steeped in the latest AI developments but is also well-versed in debates about linguistics and the nature of consciousness. O’Gieblyn also writes about her own struggle to find deeper meaning in her life, which has led her down some unexpected rabbit holes. A former Christian fundamentalist, she later stumbled into transhumanism and, ultimately, plunged into the exploding world of AI. (She currently also writes an advice column for Wired magazine about tech and society.) When I visited her at her home in Madison, Wisconsin, I was curious if I might see any traces of this unlikely personal odyssey. I hadn’t expected her to pull out a stash of old notebooks filled with her automatic writing, composed while working with a hypnotist. I asked O’Gieblyn if she would read from one of her notebooks, and she picked this passage: “In all the times we came to bed, there was never any sleep. Dawn bells and doorbells and daffodils and the side of the road glaring with their faces undone …” And so it went—strange, lyrical, and nonsensical—tapping into some part of herself that she didn’t know was there. That led us into a wide-ranging conversation about the unconscious, creativity, the quest for transcendence, and the differences between machine intelligence and the human mind. © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29289 - Posted: 05.03.2024

By Dan Falk Daniel Dennett, who died in April at the age of 82, was a towering figure in the philosophy of mind. Known for his staunch physicalist stance, he argued that minds, like bodies, are the product of evolution. He believed that we are, in a sense, machines—but astoundingly complex ones, the result of millions of years of natural selection. Dennett wrote more than a dozen books, some of them aimed at a scholarly audience but many of them directed squarely at the inquisitive non-specialist—including bestsellers like Consciousness Explained, Breaking the Spell, and Darwin’s Dangerous Idea. Reading his works, one gets the impression of a mind jammed to the rafters with ideas. As Richard Dawkins put it in a blurb for Dennett’s last book, a memoir titled I’ve Been Thinking: “How unfair for one man to be blessed with such a torrent of stimulating thoughts.” Dennett spent decades puzzling over the existence of minds. How does non-thinking matter arrange itself into matter that can think, and even ponder its own existence? A long-time academic nemesis of Dennett’s, the philosopher David Chalmers, dubbed this the “Hard Problem” of consciousness. But Dennett felt this label needlessly turned a series of potentially solvable problems into one giant unsolvable one: He was sure the so-called hard problem would evaporate once the various lesser (but still difficult) problems of understanding the brain’s mechanics were figured out. Because he viewed brains as miracle-free mechanisms, he saw no barrier to machine consciousness, at least in principle. Yet he had no fear of Terminator-style AI doomsday scenarios, either. (“The whole singularity stuff, that’s preposterous,” he once told an interviewer for The Guardian. “It distracts us from much more pressing problems.”) © 2024 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29285 - Posted: 05.02.2024

By John Horgan Philosopher Daniel Dennett died a few days ago, on April 19. When he argued that we overrate consciousness, he demonstrated, paradoxically, how conscious he was, and he made his audience more conscious. Dennett’s death feels like the end of an era, the era of ultramaterialist, ultra-Darwinian, swaggering, know-it-all scientism. Who’s left, Richard Dawkins? Dennett wasn’t as smart as he thought he was, I liked to say, because no one is. He lacked the self-doubt gene, but he forced me to doubt myself. He made me rethink what I think, and what more can you ask of a philosopher? I first encountered Dennett’s in-your-face brilliance in 1981 when I read The Mind’s I, a collection of essays he co-edited. And his name popped up at a consciousness shindig I attended earlier this month. To honor Dennett, I’m posting a revision of my 2017 critique of his claim that consciousness is an “illusion.” I’m also coining a phrase, “the Dennett paradox,” which is explained below. Of all the odd notions to emerge from debates over consciousness, the oddest is that it doesn’t exist, at least not in the way we think it does. It is an illusion, like “Santa Claus” or “American democracy.” René Descartes said consciousness is the one undeniable fact of our existence, and I find it hard to disagree. I’m conscious right now, as I type this sentence, and you are presumably conscious as you read it (although I can’t be absolutely sure). The idea that consciousness isn’t real has always struck me as absurd, but smart people espouse it. One of the smartest is philosopher Daniel Dennett, who has been questioning consciousness for decades, notably in his 1991 bestseller Consciousness Explained. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29266 - Posted: 04.24.2024

By Dan Falk In 2022, researchers at the Bee Sensory and Behavioral Ecology Lab at Queen Mary University of London observed bumblebees doing something remarkable: The diminutive, fuzzy creatures were engaging in activity that could only be described as play. Given small wooden balls, the bees pushed them around and rotated them. The behavior had no obvious connection to mating or survival, nor was it rewarded by the scientists. It was, apparently, just for fun. The study on playful bees is part of a body of research that a group of prominent scholars of animal minds cited today, buttressing a new declaration that extends scientific support for consciousness to a wider suite of animals than has been formally acknowledged before. For decades, there’s been a broad agreement among scientists that animals similar to us — the great apes, for example — have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems. The new declaration, signed by biologists and philosophers, formally embraces that view. It reads, in part: “The empirical evidence indicates at least a realistic possibility of conscious experience in all vertebrates (including all reptiles, amphibians and fishes) and many invertebrates (including, at minimum, cephalopod mollusks, decapod crustaceans and insects).” Inspired by recent research findings that describe complex cognitive behaviors in these and other animals, the document represents a new consensus and suggests that researchers may have overestimated the degree of neural complexity required for consciousness. © 2024 the Simons Foundation.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29264 - Posted: 04.20.2024

by Alex Blasdel Patient One was 24 years old and pregnant with her third child when she was taken off life support. It was 2014. A couple of years earlier, she had been diagnosed with a disorder that caused an irregular heartbeat, and during her two previous pregnancies she had suffered seizures and fainting spells. Four weeks into her third pregnancy, she collapsed on the floor of her home. Her mother, who was with her, called 911. By the time an ambulance arrived, Patient One had been unconscious for more than 10 minutes. Paramedics found that her heart had stopped. After being driven to a hospital where she couldn’t be treated, Patient One was taken to the emergency department at the University of Michigan. There, medical staff had to shock her chest three times with a defibrillator before they could restart her heart. She was placed on an external ventilator and pacemaker, and transferred to the neurointensive care unit, where doctors monitored her brain activity. She was unresponsive to external stimuli, and had a massive swelling in her brain. After she lay in a deep coma for three days, her family decided it was best to take her off life support. It was at that point – after her oxygen was turned off and nurses pulled the breathing tube from her throat – that Patient One became one of the most intriguing scientific subjects in recent history. For several years, Jimo Borjigin, a professor of neurology at the University of Michigan, had been troubled by the question of what happens to us when we die. She had read about the near-death experiences of certain cardiac-arrest survivors who had undergone extraordinary psychic journeys before being resuscitated. Sometimes, these people reported travelling outside of their bodies towards overwhelming sources of light where they were greeted by dead relatives. Others spoke of coming to a new understanding of their lives, or encountering beings of profound goodness. 
Borjigin didn’t believe the content of those stories was true – she didn’t think the souls of dying people actually travelled to an afterworld – but she suspected something very real was happening in those patients’ brains. In her own laboratory, she had discovered that rats undergo a dramatic storm of many neurotransmitters, including serotonin and dopamine, after their hearts stop and their brains lose oxygen. She wondered if humans’ near-death experiences might spring from a similar phenomenon, and if it was occurring even in people who couldn’t be revived. © 2024 Guardian News & Media Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 13: Memory and Learning
Link ID: 29236 - Posted: 04.02.2024

By Jyoti Madhusoodanan When the Philadelphia-based company Bioquark announced a plan in 2016 to regenerate neurons in brain-dead people, their proposal elicited skepticism and backlash. Researchers questioned the scientific merits of the planned study, which sought to inject stem cells and other materials into recently deceased subjects. Ethicists said it bordered on quackery and would exploit grieving families. Bioquark has since folded. But quietly, a physician who was involved in the controversial proposal, Himanshu Bansal, has continued the research. Bansal recently told Undark that he has been conducting work funded by him and his research team at a private hospital in Rudrapur, India, experimenting mostly with young adults who have succumbed to traffic accidents. He said he has data for 20 subjects for the first phase of the study and 11 for the second — some of whom showed glimmers of renewed electrical activity — and he plans to expand the study to include several more. Bansal said he has submitted his results to peer-reviewed journals over the past several years but has yet to find one that would publish them. Bansal may be among the more controversial figures conducting research with people who have been declared brain dead, but not by any stretch is he the only one. In recent years, high-profile experiments implanting non-human organs into human bodies, a procedure known as xenotransplantation, have fueled rising interest in using brain-dead subjects to study procedures that are too risky to perform on living people. With the support of a ventilator and other equipment, a person’s heart, kidneys, immune system, and other body parts can function for days, sometimes weeks or more, after brain death. 
For researchers who seek to understand drug delivery, organ transplantation, and other complexities of human physiology, these bodies can provide a more faithful simulacrum of a living human being than could be achieved with animals or lab-grown cells and tissues.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 1: Introduction: Scope and Outlook
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 1: Cells and Structures: The Anatomy of the Nervous System
Link ID: 29217 - Posted: 03.26.2024

By Kevin Mitchell It is often said that “the mind is what the brain does.” Modern neuroscience has indeed shown us that mental goings-on rely on and are in some sense entailed by neural goings-on. But the truth is that we have a poor handle on the nature of that relationship. One way to bridge that divide is to try to define the relationship between neural and mental representations. The basic premise of neuroscience is that patterns of neural activity carry some information — they are about something. But not all such patterns need be thought of as representations; many of them are just signals. Simple circuits such as the muscle stretch reflex or the eye-blink reflex, for example, are configured to respond to stimuli such as the lengthening of a muscle or a sudden bright light. But they don’t need to internally represent this information — or make that information available to other parts of the nervous system. They just need to respond to it. More complex information processing, by contrast, such as in our image-forming visual system, requires internal neural representation. By integrating signals from multiple photoreceptors, retinal ganglion cells carry information about patterns of light in the visual stimulus — particularly edges where the illumination changes from light to dark. This information is then made available to the thalamus and the cortical hierarchy, where additional processing goes on to extract higher- and higher-order features of the entire visual scene. Scientists have elucidated the logic of these hierarchical systems by studying the types of stimuli to which neurons are most sensitively tuned, known as “receptive fields.” If some neuron in an early cortical area responds selectively to, say, a vertical line in a certain part of the visual field, the inference is that when such a neuron is active, that is the information that it is representing. 
In this case, it is making that information available to the next level of the visual system — itself just a subsystem of the brain. © 2024 Simons Foundation
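The idea of a receptive field described above can be imitated with a toy computation: a "unit" whose receptive field is a tiny cross-correlation kernel responds most strongly at the image location containing the feature it is tuned to. The sketch below is purely illustrative (the image, kernel, and function name are invented for the example), not a model of real retinal or cortical circuitry.

```python
def receptive_field_response(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel across the
    image and sum elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

# A dark-to-light vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

# A unit "tuned" to exactly that light-dark transition.
kernel = [[-1, 1]]

response = receptive_field_response(image, kernel)
# Each row responds only where the edge sits: [0, 1, 0]
```

The unit is silent over uniform regions and fires only at the edge, which is the sense in which its activity "represents" that feature for downstream stages.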

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29148 - Posted: 02.13.2024

By Christian Guay & Emery Brown What does it mean to be conscious? People have been thinking and writing about this question for millennia. Yet many things about the conscious mind remain a mystery, including how to measure and assess it. What is a unit of consciousness? Are there different levels of consciousness? What happens to consciousness during sleep, coma and general anesthesia? As anesthesiologists, we think about these questions often. We make a promise to patients every day that they will be disconnected from the outside world and their inner thoughts during surgery, retain no memories of the experience and feel no pain. In this way, general anesthesia has enabled tremendous medical advances, from microscopic vascular repairs to solid organ transplants. In addition to their tremendous impact on clinical care, anesthetics have emerged as powerful scientific tools to probe questions about consciousness. They allow us to induce profound and reversible changes in conscious states—and study brain responses during these transitions. But one of the challenges that anesthesiologists face is measuring the transition from one state to another. That’s because many of the approaches that exist interrupt or disrupt what we are trying to study. Essentially, assessing the system affects the system. In studies of human consciousness, determining whether someone is conscious can arouse the person being studied—confounding that very assessment. To address this challenge, we adapted a simple approach we call the breathe-squeeze method. It offers us a way to study changes in conscious state without interrupting those shifts. To understand this approach, it helps to consider some insights from studies of consciousness that have used anesthetics. For decades researchers have used electroencephalography (EEG) to observe electrical activity in the brains of people receiving various anesthetics. 
They can then analyze those EEG recordings to characterize patterns specific to particular anesthetics, so-called anesthetic signatures. © 2024 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 10: Biological Rhythms and Sleep
Link ID: 29116 - Posted: 01.27.2024

By Mariana Lenharo Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything. The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration. Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between. The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects. The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room. Their struggle to get the collaboration off the ground is mirrored in wider fractures in the field. 
© 2024 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29106 - Posted: 01.18.2024

By Mariana Lenharo Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question. In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use? Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK and one of the authors of the comments. Nor did US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes. “With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up,” Mason says. Consciousness is one of them. The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29065 - Posted: 12.27.2023

By Oshan Jarow Sometimes when I’m looking out across the northern meadow of Brooklyn’s Prospect Park, or even the concrete parking lot outside my office window, I wonder if someone like Shakespeare or Emily Dickinson could have taken in the same view and seen more. I don’t mean making out blurry details or more objects in the scene. But through the lens of their minds, could they encounter the exact same world as me and yet have a richer experience? One way to answer that question, at least as a thought experiment, could be to compare the electrical activity inside our brains while gazing out upon the same scene, and running some statistical analysis designed to actually tell us whose brain activity indicates more richness. But that’s just a loopy thought experiment, right? Not exactly. One of the newest frontiers in the science of the mind is the attempt to measure consciousness’s “complexity,” or how diverse and integrated electrical activity is across the brain. Philosophers and neuroscientists alike hypothesize that more complex brain activity signifies “richer” experiences. The idea of measuring complexity stems from information theory — a mathematical approach to understanding how information is stored, communicated, and processed — which doesn’t provide wonderfully intuitive examples of what more richness actually means. Unless you’re a computer person. “If you tried to upload the content onto a hard drive, it’s how much memory you’d need to be able to store the experience you’re having,” Adam Barrett, a professor of machine learning and data science at the University of Sussex, told me. Another approach to understanding richness is to look at how it changes in different mental states. 
Recent studies have found that measures of complexity are lowest in patients under general anesthesia, higher in ordinary wakefulness, and higher still in psychedelic trips, which can notoriously turn even the most mundane experiences — say, my view of the parking lot outside my office window — into profound and meaningful encounters.
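In practice, "complexity" in studies like these is often estimated by asking how compressible the brain signal is: binarize it, then count how many novel patterns a Lempel-Ziv-style parse encounters. The sketch below is a simplified stand-in (an LZ78-style phrase count on a mean-thresholded sequence; the function names and thresholding choice are invented), not the exact pipeline used in the anesthesia or psychedelic studies.

```python
import random

def lz_phrase_count(bits: str) -> int:
    """LZ78-style parse: scan left to right, counting each phrase the
    first time it is seen. More novel phrases means less compressible,
    i.e. higher "complexity"."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def binarize(signal):
    """Threshold a signal around its mean, a crude stand-in for the
    binarization applied to EEG channels in complexity measures."""
    mean = sum(signal) / len(signal)
    return "".join("1" if x > mean else "0" for x in signal)

random.seed(0)
flat = [0.0] * 1000                              # a monotonous "signal"
noisy = [random.random() for _ in range(1000)]   # a diverse one

low = lz_phrase_count(binarize(flat))
high = lz_phrase_count(binarize(noisy))
# The diverse signal parses into many more novel phrases than the flat one.
```

Intuitively, this is Barrett's hard-drive metaphor made literal: the flat signal compresses to almost nothing, while the diverse one needs many more distinct phrases to describe.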

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29049 - Posted: 12.16.2023

By Amanda Gefter On a February morning in 1935, a disoriented homing pigeon flew into the open window of an unoccupied room at the Hotel New Yorker. It had a band around its leg, but where it came from, or was meant to be headed, no one could say. While management debated what to do, a maid rushed to the 33rd floor and knocked at the door of the hotel’s most infamous denizen: Nikola Tesla. The 78-year-old inventor quickly volunteered to take in the homeless pigeon. “Dr. Tesla … dropped work on a new electrical project, lest his charge require some little attention,” reported The New York Times. “The man who recently announced the discovery of an electrical death-beam, powerful enough to destroy 10,000 airplanes at a swoop, carefully spread towels on his window ledge and set down a little cup of seed.” Nikola Tesla—the Serbian-American scientist famous for designing the alternating current motor and the Tesla coil—had, for years, regularly been spotted skulking through the nighttime streets of midtown Manhattan, feeding the birds at all hours. In the dark, he’d sound a low whistle, and from the gloom, hordes of pigeons would flock to the old man, perching on his outstretched arms. He was known to keep baskets in his room as nests, along with caches of homemade seed mix, and to leave his windows perpetually open so the birds could come and go. Once, he was arrested for trying to lasso an injured homing pigeon in the plaza of St. Patrick’s Cathedral, and, from his holding cell in the 34th Street precinct, had to convince the officers that he was—or had been—one of the most famous inventors in the world. It had been years since he’d produced a successful invention. He was gaunt and broke—living off of debt and good graces—having been kicked out of a string of hotels, a trail of pigeon droppings and unpaid rent in his wake. He had no family or close friends, except for the birds. © 2023 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29034 - Posted: 12.09.2023

By Erin Garcia de Jesús A new brain-monitoring device aims to be the Goldilocks of anesthesia delivery, dispensing drugs in just the right dose. No physician wants a patient to wake up during surgery — nor do patients. So anesthesiologists often give more drug than necessary to keep patients sedated during medical procedures or while on lifesaving machines like ventilators. But anesthetics can sometimes be harmful when given in excess, says David Mintz, an anesthesiologist at Johns Hopkins University. For instance, elderly people with cognitive conditions like dementia or age-related cognitive decline may be at higher risk of post-surgical confusion. Studies also hint that long periods of use in young children might cause behavioral problems. “The less we give of them, the better,” Mintz says. An automated anesthesia delivery system could help doctors find the right drug dose. The new device monitored rhesus macaques’ brain activity and supplied a common anesthetic called propofol in doses that were automatically adjusted every 20 seconds. Fluctuating doses ensured the animals received just enough drug — not too much or too little — to stay sedated for 125 minutes, researchers reported October 31 in PNAS Nexus. The study is a step toward devising and testing a system that would work for people. © Society for Science & the Public 2000–2023.
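The closed-loop idea in the study — measure the brain, compare against a sedation target, adjust the infusion on a fixed cycle — is the classic feedback-control pattern. The sketch below is a generic proportional-integral loop on a made-up one-compartment drug model (every constant, name, and unit here is invented for illustration); the actual system used a far more sophisticated controller driven by the monkeys' recorded neural activity.

```python
TARGET = 1.0        # desired sedation marker (arbitrary units)
KP, KI = 1.0, 0.1   # controller gains, chosen only for illustration

def simulate(steps):
    """One tick per dosing interval (20 s in the study): the drug
    effect decays by 10% each tick, and the current infusion rate
    adds to it. A PI controller picks the next dose from the error
    between target and measured level."""
    level, integral = 0.0, 0.0
    for _ in range(steps):
        error = TARGET - level
        integral += error                              # accumulated error
        dose = max(0.0, KP * error + KI * integral)    # no negative infusion
        level = 0.9 * level + 0.1 * dose               # toy drug dynamics
    return level

# After enough cycles, the fluctuating doses settle the level at the target.
final_level = simulate(200)
```

The integral term is what removes the steady-state gap a purely proportional dose would leave: it keeps nudging the infusion until the measured level actually sits on the target, which is the "not too much, not too little" behavior the article describes.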

Related chapters from BN: Chapter 14: Biological Rhythms, Sleep, and Dreaming; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 10: Biological Rhythms and Sleep; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 29021 - Posted: 11.26.2023

By Emily Cataneo It’s 1922. You’re a scientist presented with a hundred youths who, you’re told, will grow up to lead conventional adult lives — with one exception. In 40 years, one of the one hundred is going to become impulsive and criminal. You run blood tests on the subjects and discover nothing that indicates that one of them will go off the rails in four decades. And yet sure enough, 40 years later, one bad egg has started shoplifting and threatening strangers. With no physical evidence to explain his behavior, you conclude that this man has chosen to act out of his own free will. Now, imagine the same experiment starting in 2022. This time, you use the blood samples to sequence everyone’s genome. In one, you find a mutation that codes for something called tau protein in the brain and you realize that this individual will not become a criminal in 40 years out of choice, but rather due to dementia. It turns out he did not shoplift out of free will, but because of physical forces beyond his control. Now, take the experiment a step further. If a man opens fire in an elementary school and kills scores of children and teachers, should he be held responsible? Should he be reviled and punished? Or should observers, even the mourning families, accept that under the right circumstances, that shooter could have been them? Does the shooter have free will while the man with dementia does not? Can you explain why? These provocative, even disturbing questions about similar scenarios underlie two new books about whether humans have control over our personalities, opinions, actions, and fates. “Free Agents: How Evolution Gave Us Free Will,” by professor of genetics and neuroscience Kevin J. Mitchell, and “Determined: A Science of Life Without Free Will,” by biology and neurology professor Robert M. 
Sapolsky, both undertake the expansive task of using the tools of science to probe the question of whether we possess free will, a question with stark moral and existential implications for the way we structure human society.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29009 - Posted: 11.18.2023

By Dan Falk You’re thirsty so you reach for a glass of water. It’s either a freely chosen action or the inevitable result of the laws of nature, depending on who you ask. Do we have free will? The question is ancient—and vexing. Everyone seems to have pondered it, and many seem quite certain of the answer, which is typically either “yes” or “absolutely not.” One scientist in the “absolutely not” camp is Robert Sapolsky. In his new book, Determined: A Science of Life Without Free Will, the primatologist and Stanford professor of neurology spells out why we can’t possibly have free will. Why do we behave one way and not another? Why do we choose Brand A over Brand B, or vote for Candidate X over Candidate Y? Not because we have free will, but because every act and thought is the product of “cumulative biological and environmental luck.” Sapolsky tells readers that the “biology over which you had no control, interacting with the environment over which you had no control, made you you.” That is to say, “everything in your childhood, starting with how you were mothered within minutes of birth, was influenced by culture, which means as well by the centuries of ecological factors that influenced what kind of culture your ancestors invented, and by the evolutionary pressures that molded the species you belong to.” [Photo caption: NO, WE DON’T: Robert Sapolsky on free will: “I have spent forever trying to understand where behavior comes from. And what you see is there’s absolutely no room for free will.” Photo courtesy of Christine Johnston.] Sapolsky brings the same combination of earthy directness and literary flourish that marked his earlier books, including Why Zebras Don’t Get Ulcers, about the biology of stress, to this latest work. To summarize his point of view in Determined, he writes, “Or as Maria sings in The Sound of Music, ‘Nothing comes from nothing, nothing ever could.’” The affable, bushy-bearded Sapolsky is now in his mid-60s.
During our recent interview over Zoom, I was on the lookout for any inconsistency; anything that might suggest that deep down he admits we really do make decisions, as many of us surely feel. But he was prepared and stuck to his guns. © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28987 - Posted: 11.04.2023

By Darren Incorvaia The idea of a chicken running around with its head cut off, inspired by a real-life story, may make it seem like the bird doesn’t have much going on upstairs. But Sonja Hillemacher, an animal behavior researcher at the University of Bonn in Germany, always knew that chickens were more than mindless sources of wings and nuggets. “They are way smarter than you think,” Ms. Hillemacher said. Now, in a study published in the journal PLOS One on Wednesday, Ms. Hillemacher and her colleagues say they have found evidence that roosters can recognize themselves in mirrors. In addition to shedding new light on chicken intellect, the researchers hope that their experiment can prompt re-evaluations of the smarts of other animals. The mirror test is a common, but contested, test of self-awareness. It was introduced by the psychologist Gordon Gallup in 1970. He housed chimpanzees with mirrors and then marked their faces with red dye. The chimps didn’t seem to notice until they could see their reflections, and then they began inspecting and touching the marked spot on their faces, suggesting that they recognized themselves in the mirror. The mirror test has since been used to assess self-recognition in many other species. But only a few — such as dolphins and elephants — have passed. After being piloted on primates, the mirror test was “somehow sealed in a nearly magical way as sacred,” said Onur Güntürkün, a neuroscientist at Ruhr University Bochum in Germany and an author of the study who worked with Ms. Hillemacher and Inga Tiemann, also at the University of Bonn. But different cognitive processes are active in different situations, and there’s no reason to think that the mirror test is accurate for animals with vastly different sensory abilities and social systems than what chimps have. The roosters failed the classic mirror test. 
When the team marked them with pink powder, the birds showed no inclination to inspect or touch the smudge in front of the mirror the way that Dr. Gallup’s chimps did. As an alternative, the team tested rooster self-awareness in a more fowl-friendly way. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28978 - Posted: 10.28.2023

By George Musser They call it the hard problem of consciousness, but a better term might be the impossible problem of consciousness. The whole point is that the qualitative aspects of our conscious experience, or “qualia,” are inexplicable. They slip through the explanatory framework of science, which is reductive: It explains things by breaking them down into parts and describing how they fit together. Subjective experience has an intrinsic je ne sais quoi that can’t be decomposed into parts or explained by relating one thing to another. Qualia can’t be grasped intellectually. They can only be experienced firsthand. For the past five years or so, I’ve been trying to untangle the cluster of theories that attempt to explain consciousness, traveling the world to interview neuroscientists, philosophers, artificial-intelligence researchers, and physicists—all of whom have something to say on the matter. Most duck the hard problem, either bracketing it until neuroscientists explain brain function more fully or accepting that consciousness has no deeper explanation and must be wired into the base level of reality. Although I made it a point to maintain an outsider’s view of science in my reporting, staying out of academic debates and finding value in every approach, I find both positions defensible but dispiriting. I cling to the intuition that consciousness must have some scientific explanation that we can achieve. But how? It’s hard to imagine how science could possibly expand its framework to accommodate the redness of red or the awfulness of fingernails on a chalkboard. But there is another option: to suppose that we are misconstruing our experience in some way. We think that it has intrinsic qualities, but maybe on closer inspection it doesn’t. Not that this is an easy position to take. Two leading theories of consciousness take a stab at it. 
Integrated Information Theory (IIT) says that the neural networks in our head are conscious since neurons act together in harmony—they form collective structures with properties beyond those of the individual cells. If so, subjective experience isn’t primitive and unanalyzable; in principle, you could follow the network’s transitions and read its mind. “What IIT tries to do is completely avoid any intrinsic quality in the traditional sense,” the father of IIT, Giulio Tononi, told me. © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28970 - Posted: 10.25.2023

By Hope Reese There is no free will, according to Robert Sapolsky, a biologist and neurologist at Stanford University and a recipient of the MacArthur Foundation “genius” grant. Dr. Sapolsky worked for decades as a field primatologist before turning to neuroscience, and he has spent his career investigating behavior across the animal kingdom and writing about it in books including “Behave: The Biology of Humans at Our Best and Worst” and “Monkeyluv, and Other Essays on Our Lives as Animals.” In his latest book, “Determined: A Science of Life Without Free Will,” Dr. Sapolsky confronts and refutes the biological and philosophical arguments for free will. He contends that we are not free agents, but that biology, hormones, childhood and life circumstances coalesce to produce actions that we merely feel were ours to choose. It’s a provocative claim, he concedes, but he would be content if readers simply began to question the belief, which is embedded in our cultural conversation. Getting rid of free will “completely strikes at our sense of identity and autonomy and where we get meaning from,” Dr. Sapolsky said, and this makes the idea particularly hard to shake. There are major implications, he notes: Absent free will, no one should be held responsible for their behavior, good or bad. Dr. Sapolsky sees this as “liberating” for most people, for whom “life has been about being blamed and punished and deprived and ignored for things they have no control over.” He spoke in a series of interviews about the challenges that free will presents and how he stays motivated without it. These conversations were edited and condensed for clarity. To most people, free will means being in charge of our actions. What’s wrong with that outlook? It’s a completely useless definition. When most people think they’re discerning free will, what they mean is somebody intended to do what they did: Something has just happened; somebody pulled the trigger. 
They understood the consequences and knew that alternative behaviors were available. But that doesn’t remotely begin to touch it, because you’ve got to ask: Where did that intent come from? That’s what happened a minute before, in the years before, and everything in between. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28967 - Posted: 10.17.2023