Links for Keyword: Consciousness

Links 1 - 20 of 324

By Kevin Mitchell It is often said that “the mind is what the brain does.” Modern neuroscience has indeed shown us that mental goings-on rely on and are in some sense entailed by neural goings-on. But the truth is that we have a poor handle on the nature of that relationship. One way to bridge that divide is to try to define the relationship between neural and mental representations. The basic premise of neuroscience is that patterns of neural activity carry some information — they are about something. But not all such patterns need be thought of as representations; many of them are just signals. Simple circuits such as the muscle stretch reflex or the eye-blink reflex, for example, are configured to respond to stimuli such as the lengthening of a muscle or a sudden bright light. But they don’t need to internally represent this information — or make that information available to other parts of the nervous system. They just need to respond to it. More complex information processing, by contrast, such as in our image-forming visual system, requires internal neural representation. By integrating signals from multiple photoreceptors, retinal ganglion cells carry information about patterns of light in the visual stimulus — particularly edges where the illumination changes from light to dark. This information is then made available to the thalamus and the cortical hierarchy, where additional processing goes on to extract higher- and higher-order features of the entire visual scene. Scientists have elucidated the logic of these hierarchical systems by studying the types of stimuli to which neurons are most sensitively tuned, known as “receptive fields.” If some neuron in an early cortical area responds selectively to, say, a vertical line in a certain part of the visual field, the inference is that when such a neuron is active, that is the information that it is representing. 
In this case, it is making that information available to the next level of the visual system — itself just a subsystem of the brain. © 2024 Simons Foundation
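The receptive-field logic described above can be put in a toy model (an illustration of the concept, not anything from the article): a neuron "tuned" to vertical edges behaves like a filter whose response is largest where a row of the image goes from dark to light.

```python
# Toy model (illustrative assumption, not from the article): a "vertical-edge
# detector" neuron modeled as a filter that compares illumination on either
# side of a pixel. Its response peaks where a row changes from dark to light.

def vertical_edge_response(image, row, col):
    """Crude receptive-field model: right neighbor minus left neighbor."""
    return image[row][col + 1] - image[row][col - 1]

# A tiny image: dark on the left, light on the right.
image = [
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]

# Responses along one row peak at the dark-to-light boundary.
responses = [vertical_edge_response(image, 1, c) for c in range(1, 5)]
```

The response vector is [0, 1, 1, 0]: the model "neuron" is silent over uniform regions and responds only at the edge, mirroring how retinal ganglion cells signal contrast rather than absolute light level.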

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29148 - Posted: 02.13.2024

By Christian Guay & Emery Brown What does it mean to be conscious? People have been thinking and writing about this question for millennia. Yet many things about the conscious mind remain a mystery, including how to measure and assess it. What is a unit of consciousness? Are there different levels of consciousness? What happens to consciousness during sleep, coma and general anesthesia? As anesthesiologists, we think about these questions often. We make a promise to patients every day that they will be disconnected from the outside world and their inner thoughts during surgery, retain no memories of the experience and feel no pain. In this way, general anesthesia has enabled tremendous medical advances, from microscopic vascular repairs to solid organ transplants. In addition to their tremendous impact on clinical care, anesthetics have emerged as powerful scientific tools to probe questions about consciousness. They allow us to induce profound and reversible changes in conscious states—and study brain responses during these transitions. But one of the challenges that anesthesiologists face is measuring the transition from one state to another. That’s because many of the approaches that exist interrupt or disrupt what we are trying to study. Essentially, assessing the system affects the system. In studies of human consciousness, determining whether someone is conscious can arouse the person being studied—confounding that very assessment. To address this challenge, we adapted a simple approach we call the breathe-squeeze method. It offers us a way to study changes in conscious state without interrupting those shifts. To understand this approach, it helps to consider some insights from studies of consciousness that have used anesthetics. For decades researchers have used electroencephalography (EEG) to observe electrical activity in the brains of people receiving various anesthetics. 
They can then analyze that activity to characterize patterns that are specific to various anesthetics, so-called anesthetic signatures. © 2024 SCIENTIFIC AMERICAN
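The idea of an EEG "anesthetic signature" can be illustrated with a minimal sketch (my example, not the authors' pipeline): propofol, for instance, is associated with strong frontal alpha (~10 Hz) rhythms, and spectral power at such a frequency is one simple feature a signature could be built from.

```python
import math

# Minimal sketch (an assumption for illustration, not the researchers'
# method): one building block of an "anesthetic signature" is spectral
# power in a frequency band, e.g. the ~10 Hz alpha rhythm under propofol.

def power_at(signal, fs, freq):
    """Naive DFT power of a real signal at a single frequency in Hz."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / n

fs = 100                              # sampling rate in Hz (assumed)
t = [i / fs for i in range(2 * fs)]   # two seconds of samples
eeg_like = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # synthetic 10 Hz "alpha"
```

For this synthetic trace, power at 10 Hz dominates power at other frequencies, which is the kind of contrast a signature-based analysis would exploit.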

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 10: Biological Rhythms and Sleep
Link ID: 29116 - Posted: 01.27.2024

By Mariana Lenharo Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything. The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration. Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between. The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects. The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room. Their struggle to get the collaboration off the ground is mirrored in wider fractures in the field. 
© 2024 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29106 - Posted: 01.18.2024

By Mariana Lenharo Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question. In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use? Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK and one of the authors of the comments. Nor did US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes. “With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up,” Mason says. Consciousness is one of them. The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29065 - Posted: 12.27.2023

By Oshan Jarow Sometimes when I’m looking out across the northern meadow of Brooklyn’s Prospect Park, or even the concrete parking lot outside my office window, I wonder if someone like Shakespeare or Emily Dickinson could have taken in the same view and seen more. I don’t mean making out blurry details or more objects in the scene. But through the lens of their minds, could they encounter the exact same world as me and yet have a richer experience? One way to answer that question, at least as a thought experiment, could be to compare the electrical activity inside our brains while gazing out upon the same scene, and to run some statistical analysis designed to actually tell us whose brain activity indicates more richness. But that’s just a loopy thought experiment, right? Not exactly. One of the newest frontiers in the science of the mind is the attempt to measure consciousness’s “complexity,” or how diverse and integrated electrical activity is across the brain. Philosophers and neuroscientists alike hypothesize that more complex brain activity signifies “richer” experiences. The idea of measuring complexity stems from information theory — a mathematical approach to understanding how information is stored, communicated, and processed — which doesn’t provide wonderfully intuitive examples of what more richness actually means. Unless you’re a computer person. “If you tried to upload the content onto a hard drive, it’s how much memory you’d need to be able to store the experience you’re having,” Adam Barrett, a professor of machine learning and data science at the University of Sussex, told me. Another approach to understanding richness is to look at how it changes in different mental states.
Recent studies have found that measures of complexity are lowest in patients under general anesthesia, higher in ordinary wakefulness, and higher still in psychedelic trips, which can notoriously turn even the most mundane experiences — say, my view of the parking lot outside my office window — into profound and meaningful encounters.
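The "hard drive" intuition has a concrete counterpart in studies like these, which often score brain signals with Lempel-Ziv complexity: binarize a recording, then count how many distinct patterns appear in it. A minimal sketch (a simplified dictionary-parse variant, written here for illustration):

```python
# Illustrative sketch of a complexity measure of the kind used in these
# studies: binarize a signal, then count the distinct "phrases" met while
# scanning it (a simplified Lempel-Ziv-style parse). Diverse signals
# produce more phrases, i.e. higher complexity.

def lz_complexity(bits: str) -> int:
    """Count phrases in a simple dictionary parse of a binary string."""
    seen = set()
    phrase = ""
    count = 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def binarize(signal) -> str:
    """Threshold a numeric sequence at its (upper) median."""
    median = sorted(signal)[len(signal) // 2]
    return "".join("1" if x > median else "0" for x in signal)
```

A flat, repetitive signal parses into few phrases while an irregular one parses into many, matching the reported ordering: low complexity under anesthesia, higher in wakefulness, higher still in psychedelic states.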

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29049 - Posted: 12.16.2023

By Amanda Gefter On a February morning in 1935, a disoriented homing pigeon flew into the open window of an unoccupied room at the Hotel New Yorker. It had a band around its leg, but where it came from, or was meant to be headed, no one could say. While management debated what to do, a maid rushed to the 33rd floor and knocked at the door of the hotel’s most infamous denizen: Nikola Tesla. The 78-year-old inventor quickly volunteered to take in the homeless pigeon. “Dr. Tesla … dropped work on a new electrical project, lest his charge require some little attention,” reported The New York Times. “The man who recently announced the discovery of an electrical death-beam, powerful enough to destroy 10,000 airplanes at a swoop, carefully spread towels on his window ledge and set down a little cup of seed.” Nikola Tesla—the Serbian-American scientist famous for designing the alternating current motor and the Tesla coil—had, for years, regularly been spotted skulking through the nighttime streets of midtown Manhattan, feeding the birds at all hours. In the dark, he’d sound a low whistle, and from the gloom, hordes of pigeons would flock to the old man, perching on his outstretched arms. He was known to keep baskets in his room as nests, along with caches of homemade seed mix, and to leave his windows perpetually open so the birds could come and go. Once, he was arrested for trying to lasso an injured homing pigeon in the plaza of St. Patrick’s Cathedral, and, from his holding cell in the 34th Street precinct, had to convince the officers that he was—or had been—one of the most famous inventors in the world. It had been years since he’d produced a successful invention. He was gaunt and broke—living off of debt and good graces—having been kicked out of a string of hotels, a trail of pigeon droppings and unpaid rent in his wake. He had no family or close friends, except for the birds. © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29034 - Posted: 12.09.2023

By Erin Garcia de Jesús A new brain-monitoring device aims to be the Goldilocks of anesthesia delivery, dispensing drugs in just the right dose. No physician wants a patient to wake up during surgery — nor do patients. So anesthesiologists often give more drug than necessary to keep patients sedated during medical procedures or while on lifesaving machines like ventilators. But anesthetics can sometimes be harmful when given in excess, says David Mintz, an anesthesiologist at Johns Hopkins University. For instance, elderly people with cognitive conditions like dementia or age-related cognitive decline may be at higher risk of post-surgical confusion. Studies also hint that long periods of use in young children might cause behavioral problems. “The less we give of them, the better,” Mintz says. An automated anesthesia delivery system could help doctors find the right drug dose. The new device monitored rhesus macaques’ brain activity and supplied a common anesthetic called propofol in doses that were automatically adjusted every 20 seconds. Fluctuating doses ensured the animals received just enough drug — not too much or too little — to stay sedated for 125 minutes, researchers reported October 31 in PNAS Nexus. The study is a step toward devising and testing a system that would work for people. © Society for Science & the Public 2000–2023.
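The closed-loop scheme described here (measure brain activity, adjust the propofol infusion every 20 seconds) can be sketched generically. The snippet below is a plain proportional-integral controller written for illustration; it is not the algorithm from the PNAS Nexus paper, and the gains and units are assumptions.

```python
# Generic closed-loop sketch (illustrative assumption, NOT the published
# algorithm): at every control tick (~20 s in the study), compare a
# measured sedation index against a target and adjust the infusion rate.

class SedationController:
    def __init__(self, target, kp=0.5, ki=0.1):
        self.target = target    # desired sedation index (arbitrary units)
        self.kp = kp            # proportional gain (assumed value)
        self.ki = ki            # integral gain (assumed value)
        self.integral = 0.0     # accumulated error across ticks

    def update(self, measured):
        """Return the infusion-rate adjustment for one measurement tick."""
        error = self.target - measured
        self.integral += error
        return self.kp * error + self.ki * self.integral
```

If the measured index falls below target (the animal is too lightly sedated), the adjustment is positive, calling for more drug; as the error shrinks, the adjustments shrink with it, which is how fluctuating doses keep delivering "just enough".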

Related chapters from BN: Chapter 14: Biological Rhythms, Sleep, and Dreaming; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 10: Biological Rhythms and Sleep; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 29021 - Posted: 11.26.2023

By Emily Cataneo It’s 1922. You’re a scientist presented with a hundred youths who, you’re told, will grow up to lead conventional adult lives — with one exception. In 40 years, one of the one hundred is going to become impulsive and criminal. You run blood tests on the subjects and discover nothing that indicates that one of them will go off the rails in four decades. And yet sure enough, 40 years later, one bad egg has started shoplifting and threatening strangers. With no physical evidence to explain his behavior, you conclude that this man has chosen to act out of his own free will. Now, imagine the same experiment starting in 2022. This time, you use the blood samples to sequence everyone’s genome. In one, you find a mutation in the gene that codes for something called tau protein in the brain, and you realize that this individual will not become a criminal in 40 years out of choice, but rather due to dementia. It turns out he did not shoplift out of free will, but because of physical forces beyond his control. Now, take the experiment a step further. If a man opens fire in an elementary school and kills scores of children and teachers, should he be held responsible? Should he be reviled and punished? Or should observers, even the mourning families, accept that under the right circumstances, that shooter could have been them? Does the shooter have free will while the man with dementia does not? Can you explain why? These provocative, even disturbing questions about similar scenarios underlie two new books about whether humans have control over our personalities, opinions, actions, and fates. “Free Agents: How Evolution Gave Us Free Will,” by professor of genetics and neuroscience Kevin J. Mitchell, and “Determined: A Science of Life Without Free Will,” by biology and neurology professor Robert M.
Sapolsky, both undertake the expansive task of using the tools of science to probe the question of whether we possess free will, a question with stark moral and existential implications for the way we structure human society.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29009 - Posted: 11.18.2023

By Dan Falk You’re thirsty so you reach for a glass of water. It’s either a freely chosen action or the inevitable result of the laws of nature, depending on who you ask. Do we have free will? The question is ancient—and vexing. Everyone seems to have pondered it, and many seem quite certain of the answer, which is typically either “yes” or “absolutely not.” One scientist in the “absolutely not” camp is Robert Sapolsky. In his new book, Determined: A Science of Life Without Free Will, the primatologist and Stanford professor of neurology spells out why we can’t possibly have free will. Why do we behave one way and not another? Why do we choose Brand A over Brand B, or vote for Candidate X over Candidate Y? Not because we have free will, but because every act and thought is the product of “cumulative biological and environmental luck.” Sapolsky tells readers that the “biology over which you had no control, interacting with the environment over which you had no control, made you you.” That is to say, “everything in your childhood, starting with how you were mothered within minutes of birth, was influenced by culture, which means as well by the centuries of ecological factors that influenced what kind of culture your ancestors invented, and by the evolutionary pressures that molded the species you belong to.” Sapolsky brings the same combination of earthy directness and literary flourish that marked his earlier books, including Why Zebras Don’t Get Ulcers, about the biology of stress, to this latest work. To summarize his point of view in Determined, he writes, “Or as Maria sings in The Sound of Music, ‘Nothing comes from nothing, nothing ever could.’” The affable, bushy-bearded Sapolsky is now in his mid-60s.
During our recent interview over Zoom, I was on the lookout for any inconsistency: anything that might suggest that deep down he admits we really do make decisions, as many of us surely feel. But he was prepared and stuck to his guns. © 2023 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28987 - Posted: 11.04.2023

By Darren Incorvaia The idea of a chicken running around with its head cut off, inspired by a real-life story, may make it seem like the bird doesn’t have much going on upstairs. But Sonja Hillemacher, an animal behavior researcher at the University of Bonn in Germany, always knew that chickens were more than mindless sources of wings and nuggets. “They are way smarter than you think,” Ms. Hillemacher said. Now, in a study published in the journal PLOS One on Wednesday, Ms. Hillemacher and her colleagues say they have found evidence that roosters can recognize themselves in mirrors. In addition to shedding new light on chicken intellect, the researchers hope that their experiment can prompt re-evaluations of the smarts of other animals. The mirror test is a common, but contested, test of self-awareness. It was introduced by the psychologist Gordon Gallup in 1970. He housed chimpanzees with mirrors and then marked their faces with red dye. The chimps didn’t seem to notice until they could see their reflections, and then they began inspecting and touching the marked spot on their faces, suggesting that they recognized themselves in the mirror. The mirror test has since been used to assess self-recognition in many other species. But only a few — such as dolphins and elephants — have passed. After being piloted on primates, the mirror test was “somehow sealed in a nearly magical way as sacred,” said Onur Güntürkün, a neuroscientist at Ruhr University Bochum in Germany and an author of the study who worked with Ms. Hillemacher and Inga Tiemann, also at the University of Bonn. But different cognitive processes are active in different situations, and there’s no reason to think that the mirror test is accurate for animals with vastly different sensory abilities and social systems than what chimps have. The roosters failed the classic mirror test. 
When the team marked them with pink powder, the birds showed no inclination to inspect or touch the smudge in front of the mirror the way that Dr. Gallup’s chimps did. As an alternative, the team tested rooster self-awareness in a more fowl-friendly way. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28978 - Posted: 10.28.2023

By George Musser They call it the hard problem of consciousness, but a better term might be the impossible problem of consciousness. The whole point is that the qualitative aspects of our conscious experience, or “qualia,” are inexplicable. They slip through the explanatory framework of science, which is reductive: It explains things by breaking them down into parts and describing how they fit together. Subjective experience has an intrinsic je ne sais quoi that can’t be decomposed into parts or explained by relating one thing to another. Qualia can’t be grasped intellectually. They can only be experienced firsthand. For the past five years or so, I’ve been trying to untangle the cluster of theories that attempt to explain consciousness, traveling the world to interview neuroscientists, philosophers, artificial-intelligence researchers, and physicists—all of whom have something to say on the matter. Most duck the hard problem, either bracketing it until neuroscientists explain brain function more fully or accepting that consciousness has no deeper explanation and must be wired into the base level of reality. Although I made it a point to maintain an outsider’s view of science in my reporting, staying out of academic debates and finding value in every approach, I find both positions defensible but dispiriting. I cling to the intuition that consciousness must have some scientific explanation that we can achieve. But how? It’s hard to imagine how science could possibly expand its framework to accommodate the redness of red or the awfulness of fingernails on a chalkboard. But there is another option: to suppose that we are misconstruing our experience in some way. We think that it has intrinsic qualities, but maybe on closer inspection it doesn’t. Not that this is an easy position to take. Two leading theories of consciousness take a stab at it. 
Integrated Information Theory (IIT) says that the neural networks in our head are conscious since neurons act together in harmony—they form collective structures with properties beyond those of the individual cells. If so, subjective experience isn’t primitive and unanalyzable; in principle, you could follow the network’s transitions and read its mind. “What IIT tries to do is completely avoid any intrinsic quality in the traditional sense,” the father of IIT, Giulio Tononi, told me. © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28970 - Posted: 10.25.2023

By Hope Reese There is no free will, according to Robert Sapolsky, a biologist and neurologist at Stanford University and a recipient of the MacArthur Foundation “genius” grant. Dr. Sapolsky worked for decades as a field primatologist before turning to neuroscience, and he has spent his career investigating behavior across the animal kingdom and writing about it in books including “Behave: The Biology of Humans at Our Best and Worst” and “Monkeyluv, and Other Essays on Our Lives as Animals.” In his latest book, “Determined: A Science of Life Without Free Will,” Dr. Sapolsky confronts and refutes the biological and philosophical arguments for free will. He contends that we are not free agents, but that biology, hormones, childhood and life circumstances coalesce to produce actions that we merely feel were ours to choose. It’s a provocative claim, he concedes, but he would be content if readers simply began to question the belief, which is embedded in our cultural conversation. Getting rid of free will “completely strikes at our sense of identity and autonomy and where we get meaning from,” Dr. Sapolsky said, and this makes the idea particularly hard to shake. There are major implications, he notes: Absent free will, no one should be held responsible for their behavior, good or bad. Dr. Sapolsky sees this as “liberating” for most people, for whom “life has been about being blamed and punished and deprived and ignored for things they have no control over.” He spoke in a series of interviews about the challenges that free will presents and how he stays motivated without it. These conversations were edited and condensed for clarity. To most people, free will means being in charge of our actions. What’s wrong with that outlook? It’s a completely useless definition. When most people think they’re discerning free will, what they mean is somebody intended to do what they did: Something has just happened; somebody pulled the trigger. 
They understood the consequences and knew that alternative behaviors were available. But that doesn’t remotely begin to touch it, because you’ve got to ask: Where did that intent come from? That’s what happened a minute before, in the years before, and everything in between. © 2023 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28967 - Posted: 10.17.2023

By Marco Giancotti I’m lying down in a white cylinder barely wider than my body, surrounded on all sides by a mass of sophisticated machinery the size of a small camper van. It’s an fMRI machine, one of the technological marvels of modern neuroscience. Two small inflatable cushions squeeze my temples, keeping my head still. “We are ready to begin the next batch of exercises,” I hear Dr. Horikawa’s gentle voice saying. We’re underground, in one of the laboratories of Tokyo University’s Faculty of Medicine, Hongo Campus. “Do you feel like proceeding?” “Yes, let’s go,” I answer. The machine sets in motion again. A powerful current grows inside the cryogenically cooled wires that coil around me, showering my head with radio waves, knocking the hydrogen atoms inside my head off their original spin axis, and measuring the rate at which the axis recovers afterward. To the sensors around me, I’m now as transparent as a glass of water. Every tiny change of blood flow anywhere inside my brain is being watched and recorded in 3-D. A few seconds pass, then a synthetic female voice speaks into my ears over the electronic clamor: “top hat.” I close my eyes and I imagine a top hat. A few seconds later a beep tells me I should rate the quality of my mental picture, which I do with a controller in my hand. The voice speaks again: “fire extinguisher,” and I repeat the routine. Next is “butterfly,” then “camel,” then “snowmobile,” and so on, for about 10 minutes, while the system monitors the activation of my brain synapses. For most people, this should be a rather simple exercise, perhaps even satisfying. For me, it’s a considerable strain, because I don’t “see” any of those things. For each and every one of the prompts, I rate my mental image “0” on a 0 to 5 scale, because as soon as I close my eyes, what I see are not everyday objects, animals, and vehicles, but the dark underside of my eyelids. I can’t willingly form the faintest of images in my mind. 
And, although it isn’t the subject of the current experiment, I also can’t conjure sounds, smells, or any other kind of sensory stimulation inside my head. I have what is called “aphantasia,” the absence of voluntary imagination of the senses. I know what a top hat is. I can describe its main characteristics. I can even draw an above-average impression of one on a piece of paper for you. But I can’t visualize it mentally. What’s wrong with me? © 2023 NautilusNext Inc.,

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28945 - Posted: 10.05.2023

By Anil Seth Earlier this month, the consciousness science community erupted into chaos. An open letter, signed by 124 researchers—some specializing in consciousness and others not—made the provocative claim that one of the most widely discussed theories in the field, Integrated Information Theory (IIT), should be considered “pseudoscience.” The uproar that followed sent consciousness social media into a doom spiral of accusation and recrimination, with the fallout covered in Nature, New Scientist, and elsewhere. Calling something pseudoscience is pretty much the strongest criticism one can make of a theory. It’s a move that should never be taken lightly, especially when more than 100 influential scientists and philosophers do it all at once. The open letter justified the charge primarily on the grounds that IIT has “commitments” to panpsychism—the idea that consciousness is fundamental and ubiquitous—and that the theory “as a whole” may not be empirically testable. A subsequent piece by one of the lead authors of the letter, Hakwan Lau, reframed the charge somewhat: that the claims made for IIT by its proponents and the wider media are not supported by empirical evidence. The brainchild of neuroscientist Giulio Tononi, IIT has been around for quite some time. Back in the late 1990s, Tononi published a paper in Science with the Nobel Laureate Gerald Edelman, linking consciousness to mathematical measures of complexity. This paper, which made a lasting impression on me, sowed the seeds of what later became IIT. Tononi published his first outline of the theory itself in 2004 and it has been evolving ever since, with the latest version—IIT 4.0—appearing earlier this year. The theory’s counterintuitive and deeply mathematical nature has always attracted controversy and criticism—including from myself and my colleagues—but it has certainly become prominent in consciousness science. 
A survey conducted at the main conference in the field—the annual meeting of the Association for the Scientific Study of Consciousness—found that nearly half of respondents considered it “definitely promising” or “probably promising,” and researchers in the field regularly identify it as one of four main theoretical approaches to consciousness. (The philosopher Tim Bayne did just this in our recent review paper on theories of consciousness for Nature Reviews Neuroscience.) © 2023 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28936 - Posted: 09.29.2023

By Dan Falk More than 400 years ago, Galileo showed that many everyday phenomena—such as a ball rolling down an incline or a chandelier gently swinging from a church ceiling—obey precise mathematical laws. For this insight, he is often hailed as the founder of modern science. But Galileo recognized that not everything was amenable to a quantitative approach. Such things as colors, tastes and smells “are no more than mere names,” Galileo declared, for “they reside only in consciousness.” These qualities aren’t really out there in the world, he asserted, but exist only in the minds of creatures that perceive them. “Hence if the living creature were removed,” he wrote, “all these qualities would be wiped away and annihilated.” Since Galileo’s time the physical sciences have leaped forward, explaining the workings of everything from the tiniest quarks to the largest galaxy clusters. But explaining things that reside “only in consciousness”—the red of a sunset, say, or the bitter taste of a lemon—has proven far more difficult. Neuroscientists have identified a number of neural correlates of consciousness—brain states associated with specific mental states—but have not explained how matter forms minds in the first place. As philosopher David Chalmers asked: “How does the water of the brain turn into the wine of consciousness?” He famously dubbed this quandary the “hard problem” of consciousness. Scholars recently gathered to debate the problem at Marist College in Poughkeepsie, N.Y., during a two-day workshop focused on an idea known as panpsychism. The concept proposes that consciousness is a fundamental aspect of reality, like mass or electrical charge. The idea goes back to antiquity—Plato took it seriously—and has had some prominent supporters over the years, including psychologist William James and philosopher and mathematician Bertrand Russell. 
Lately it is seeing renewed interest, especially following the 2019 publication of philosopher Philip Goff’s book Galileo’s Error, which argues forcefully for the idea. Goff, of the University of Durham in England, organized the recent event along with Marist philosopher Andrei Buckareff, and it was funded through a grant from the John Templeton Foundation. In a small lecture hall with floor-to-ceiling windows overlooking the Hudson River, roughly two dozen scholars probed the possibility that perhaps it’s consciousness all the way down.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28928 - Posted: 09.27.2023

Mariana Lenharo A letter, signed by 124 scholars and posted online last week, has caused an uproar in the consciousness research community. It claims that a prominent theory describing what makes someone or something conscious — called the integrated information theory (IIT) — should be labelled “pseudoscience”. Since its publication on 15 September in the preprint repository PsyArXiv, the letter has some researchers arguing over the label and others worried it will increase polarization in a field that has grappled with issues of credibility in the past. “I think it’s inflammatory to describe IIT as pseudoscience,” says neuroscientist Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, adding that he disagrees with the label. “IIT is a theory, of course, and therefore may be empirically wrong,” says neuroscientist Christof Koch, a meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington, and a proponent of the theory. But he says that it makes its assumptions — for example, that consciousness has a physical basis and can be mathematically measured — very clear. There are dozens of theories that seek to understand consciousness — everything that a human or non-human experiences, including what they feel, see and hear — as well as its underlying neural foundations. IIT has often been described as one of the central theories, alongside others, such as global neuronal workspace theory (GNW), higher-order thought theory and recurrent processing theory. It proposes that consciousness emerges from the way information is processed within a ‘system’ (for instance, networks of neurons or computer circuits), and that systems that are more interconnected, or integrated, have higher levels of consciousness. 
Hakwan Lau, a neuroscientist at the RIKEN Center for Brain Science in Wako, Japan, and one of the authors of the letter, says that some researchers in the consciousness field are uncomfortable with what they perceive as a discrepancy between IIT’s scientific merit and the considerable attention it receives from the popular media because of how it is promoted by advocates. “Has IIT become a leading theory because of academic acceptance first, or is it because of the popular noise that kind of forced the academics to give it acknowledgement?” Lau asks. © 2023 Springer Nature Limited
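The intuition behind “integration”, that a highly interconnected system as a whole carries information beyond what its parts carry separately, can be illustrated with a toy measure. The sketch below computes total correlation (the sum of each unit’s entropy minus the joint entropy) for two tiny two-unit systems. This is an illustrative stand-in only, not IIT’s actual Φ, which is defined quite differently; all names in the code are hypothetical.

```python
# Toy illustration of "integration" as total correlation:
# sum of marginal entropies minus joint entropy. Units that vary
# independently score 0; units whose states are coupled score higher.
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of a list of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def total_correlation(states):
    """states: list of equal-length tuples, one tuple per observed joint state."""
    joint_h = entropy(states)
    marginal_h = sum(entropy([s[i] for s in states]) for i in range(len(states[0])))
    return marginal_h - joint_h

# Two units that always agree: maximally coupled
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two units that vary independently: no integration
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(total_correlation(coupled))      # 1.0 bit
print(total_correlation(independent))  # 0.0 bits
```

The coupled pair scores a full bit of integration because knowing one unit fixes the other, whereas the independent pair scores zero even though each unit, taken alone, is just as variable.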

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28918 - Posted: 09.21.2023

Mariana Lenharo Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”. Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were? To answer this, a group of 19 neuroscientists, philosophers and computer scientists has come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California. The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated”. Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. 
“And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28893 - Posted: 08.30.2023

By Elizabeth Finkel Science routinely puts forward theories, then batters them with data till only one is left standing. In the fledgling science of consciousness, a dominant theory has yet to emerge. More than 20 are still taken seriously. It’s not for want of data. Ever since Francis Crick, the co-discoverer of DNA’s double helix, legitimized consciousness as a topic for study more than three decades ago, researchers have used a variety of advanced technologies to probe the brains of test subjects, tracing the signatures of neural activity that could reflect consciousness. The resulting avalanche of data should have flattened at least the flimsier theories by now. Five years ago, the Templeton World Charity Foundation initiated a series of “adversarial collaborations” to coax the overdue winnowing to begin. This past June saw the results from the first of these collaborations, which pitted two high-profile theories against each other: global neuronal workspace theory (GNWT) and integrated information theory (IIT). Neither emerged as the outright winner. The results, announced like the outcome of a sporting event at the 26th meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, were also used to settle a 25-year bet between Crick’s longtime collaborator, the neuroscientist Christof Koch of the Allen Institute for Brain Science, and the philosopher David Chalmers of New York University, who coined the term “the hard problem” to challenge the presumption that we can explain the subjective feeling of consciousness by analyzing the circuitry of the brain. Nevertheless, Koch proclaimed, “It’s a victory for science.” But was it? All Rights Reserved © 2023

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28887 - Posted: 08.26.2023

By Elizabeth Finkel In 2021, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know? Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT. None is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.” Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.” Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 28881 - Posted: 08.24.2023

Max Kozlov Dead in California but alive in New Jersey: that was the status of 13-year-old Jahi McMath after physicians in Oakland, California, declared her brain dead in 2013, after complications from a tonsillectomy. Unhappy with the care that their daughter received and unwilling to remove life support, McMath’s family moved with her to New Jersey, where the law allowed them to lodge a religious objection to the declaration of brain death and keep McMath connected to life-support systems for another four and a half years. Prompted by such legal discrepancies and a growing number of lawsuits around the United States, a group of neurologists, physicians, lawyers and bioethicists is attempting to harmonize state laws surrounding the determination of death. They say that imprecise language in existing laws — as well as research done since the laws were passed — threatens to undermine public confidence in how death is defined worldwide. “It doesn’t really make a lot of sense,” says Ariane Lewis, a neurocritical care clinician at NYU Langone Health in New York City. “Death is something that should be a set, finite thing. It shouldn’t be something that’s left up to interpretation.” Since 2021, a committee in the Uniform Law Commission (ULC), a non-profit organization in Chicago, Illinois, that drafts model legislation for states to adopt, has been revising its recommendation for the legal determination of death. The drafting committee hopes to clarify the definition of brain death, determine whether consent is required to test for it, specify how to handle family objections and provide guidance on how to incorporate future changes to medical standards. The broader membership of the ULC will offer feedback on the first draft of the revised law at a meeting on 26 July. After members vote on it, the text could be ready for state legislatures to consider by the middle of next year. 
But as the ULC revision process has progressed, clinicians who were once eager to address these issues have become increasingly worried. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 14: Attention and Higher Cognition
Link ID: 28853 - Posted: 07.22.2023