Chapter 14. Attention and Higher Cognition


By Saugat Bolakhe

Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way.

Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations.

The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid-1990s.

Keyword: Learning & Memory; Attention
Link ID: 28900 - Posted: 09.07.2023

Mariana Lenharo

Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”.

Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were? To answer this, a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California.

The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated”. Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. “And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds.

Keyword: Consciousness
Link ID: 28893 - Posted: 08.30.2023

By Maria Temming

When Christopher Mazurek realizes he’s dreaming, it’s always the small stuff that tips him off. The first time it happened, Mazurek was a freshman at Northwestern University in Evanston, Ill. In the dream, he found himself in a campus dining hall. It was winter, but Mazurek wasn’t wearing his favorite coat. “I realized that, OK, if I don’t have the coat, I must be dreaming,” Mazurek says. That epiphany rocked the dream like an earthquake. “Gravity shifted, and I was flung down a hallway that seemed to go on for miles,” he says. “My left arm disappeared, and then I woke up.”

Most people rarely if ever realize that they’re dreaming while it’s happening, what’s known as lucid dreaming. But some enthusiasts have cultivated techniques to become self-aware in their sleep and even wrest some control over their dream selves and settings. Mazurek, 24, says that he’s gotten better at molding his lucid dreams since that first whirlwind experience, sometimes taking them as opportunities to try flying or say hi to deceased family members. Other lucid dreamers have used their personal virtual realities to plumb their subconscious minds for insights or feast on junk food without real-world consequences.

But now, scientists have a new job for lucid dreamers: to explore their dreamscapes and report out in real time. Dream research has traditionally relied on reports collected after someone wakes up. But people often wake with only spotty, distorted memories of what they dreamed. The dreamers can’t say exactly when events occurred, and they certainly can’t tailor their dreams to specific scientific studies.

Keyword: Sleep; Consciousness
Link ID: 28891 - Posted: 08.30.2023

By Elizabeth Finkel

Science routinely puts forward theories, then batters them with data till only one is left standing. In the fledgling science of consciousness, a dominant theory has yet to emerge. More than 20 are still taken seriously. It’s not for want of data. Ever since Francis Crick, the co-discoverer of DNA’s double helix, legitimized consciousness as a topic for study more than three decades ago, researchers have used a variety of advanced technologies to probe the brains of test subjects, tracing the signatures of neural activity that could reflect consciousness. The resulting avalanche of data should have flattened at least the flimsier theories by now.

Five years ago, the Templeton World Charity Foundation initiated a series of “adversarial collaborations” to coax the overdue winnowing to begin. This past June saw the results from the first of these collaborations, which pitted two high-profile theories against each other: global neuronal workspace theory (GNWT) and integrated information theory (IIT). Neither emerged as the outright winner.

The results, announced like the outcome of a sporting event at the 26th meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, were also used to settle a 25-year bet between Crick’s longtime collaborator, the neuroscientist Christof Koch of the Allen Institute for Brain Science, and the philosopher David Chalmers of New York University, who coined the term “the hard problem” to challenge the presumption that we can explain the subjective feeling of consciousness by analyzing the circuitry of the brain. Nevertheless, Koch proclaimed, “It’s a victory for science.” But was it?

Keyword: Consciousness; Attention
Link ID: 28887 - Posted: 08.26.2023

By Miryam Naddaf

It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer. During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions.

“When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels.

Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac.

HBP directors hope to bring this understanding a step closer with a virtual platform — called EBRAINS — that was created as part of the project. EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments. “Today, we have all the tools in hand to build a real digital brain twin,” says Viktor Jirsa, a neuroscientist at Aix-Marseille University in France and an HBP board member. But the funding for this offshoot is still uncertain. And at a time when huge, expensive brain projects are in high gear elsewhere, scientists in Europe are frustrated that their version is winding down. “We were probably one of the first ones to initiate this wave of interest in the brain,” says Jorge Mejias, a computational neuroscientist at the University of Amsterdam, who joined the HBP in 2019. Now, he says, “everybody’s rushing, we don’t have time to just take a nap”.

Keyword: Brain imaging; Robotics
Link ID: 28884 - Posted: 08.26.2023

By Elizabeth Finkel

In 2021, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know?

Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT. None is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.”

Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.”

Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.

Keyword: Consciousness; Robotics
Link ID: 28881 - Posted: 08.24.2023

Geneva Abdul

The so-called “brain fog” symptom associated with long Covid is comparable to ageing 10 years, researchers have suggested. In a study by King’s College London, researchers investigated the impact of Covid-19 on memory and found cognitive impairment highest in individuals who had tested positive and had more than three months of symptoms. The study, published on Friday in a clinical journal from The Lancet, also found that the symptoms in affected individuals stretched to almost two years since initial infection.

“The fact remains that two years on from their first infection, some people don’t feel fully recovered and their lives continue to be impacted by the long-term effects of the coronavirus,” said Claire Steves, a professor of ageing and health at King’s College. “We need more work to understand why this is the case and what can be done to help.”

An estimated two million people living in the UK were experiencing self-reported long Covid – symptoms continuing for more than four weeks after infection – as of January 2023, according to the 2023 government census. Commonly reported symptoms included fatigue, difficulty concentrating, shortness of breath and muscle aches.

The study included more than 5,100 participants from the Covid Symptom Study Biobank, recruited through a smartphone app. Through 12 cognitive tests measuring speed and accuracy, researchers examined working memory, attention, reasoning and motor control across two testing periods in 2021 and 2022.

Keyword: Learning & Memory; Attention
Link ID: 28854 - Posted: 07.22.2023

Max Kozlov

Dead in California but alive in New Jersey: that was the status of 13-year-old Jahi McMath after physicians in Oakland, California, declared her brain dead in 2013, after complications from a tonsillectomy. Unhappy with the care that their daughter received and unwilling to remove life support, McMath’s family moved with her to New Jersey, where the law allowed them to lodge a religious objection to the declaration of brain death and keep McMath connected to life-support systems for another four and a half years.

Prompted by such legal discrepancies and a growing number of lawsuits around the United States, a group of neurologists, physicians, lawyers and bioethicists is attempting to harmonize state laws surrounding the determination of death. They say that imprecise language in existing laws — as well as research done since the laws were passed — threatens to undermine public confidence in how death is defined worldwide. “It doesn’t really make a lot of sense,” says Ariane Lewis, a neurocritical care clinician at NYU Langone Health in New York City. “Death is something that should be a set, finite thing. It shouldn’t be something that’s left up to interpretation.”

Since 2021, a committee in the Uniform Law Commission (ULC), a non-profit organization in Chicago, Illinois, that drafts model legislation for states to adopt, has been revising its recommendation for the legal determination of death. The drafting committee hopes to clarify the definition of brain death, determine whether consent is required to test for it, specify how to handle family objections and provide guidance on how to incorporate future changes to medical standards. The broader membership of the ULC will offer feedback on the first draft of the revised law at a meeting on 26 July. After members vote on it, the text could be ready for state legislatures to consider by the middle of next year. But as the ULC revision process has progressed, clinicians who were once eager to address these issues have become increasingly worried.

Keyword: Consciousness
Link ID: 28853 - Posted: 07.22.2023

Jon Hamilton

Dr. Josef Parvizi remembers meeting a man with epilepsy whose seizures were causing some very unusual symptoms. "He came to my clinic and said, 'My sense of self is changing,'" says Parvizi, a professor of neurology at Stanford University. The man told Parvizi that he felt "like an observer to conversations that are happening in my mind" and that "I just feel like I'm floating in space."

Parvizi and a team of researchers would eventually trace the man's symptoms to a "sausage-looking piece of brain" called the anterior precuneus. This area, nestled between the brain's two hemispheres, appears critical to a person's sense of inhabiting their own body, or bodily self, the team recently reported in the journal Neuron. The finding could help researchers develop forms of anesthesia that use electrical stimulation instead of drugs. It could also help explain the antidepressant effects of mind-altering drugs like ketamine.

It took Parvizi's team years of research to discover the importance of this obscure bit of brain tissue. In 2019, when the man first came to Stanford's Comprehensive Epilepsy Program, Parvizi thought his symptoms were caused by seizures in the posteromedial cortex, an area toward the back of the brain. This area includes a brain network involved in the narrative self, a sort of internal autobiography that helps us define who we are. Parvizi's team figured that the same network must be responsible for the bodily self too. "Everybody thought, 'Well, maybe all kinds of selves are being decoded by the same system,'" he says.

Keyword: Attention; Consciousness
Link ID: 28846 - Posted: 07.06.2023

By Anil Seth

In 1870, Alfred Russel Wallace wagered £500—a huge sum in those days—that he could prove the flat-Earther John Hampden wrong. Wallace duly did so, but the aggrieved Hampden never paid up. Since then, a lively history of scientific wagers has ensued—many of them instigated by Stephen Hawking. Just last month in New York, the most famous recent wager was settled: a 25-year-old bet over one of the last great mysteries in science and philosophy. The bettors were neuroscientist Christof Koch and philosopher David Chalmers, both known for their pioneering work on the nature of consciousness. Chalmers won. Koch paid up.

Back in the late 1990s, consciousness science was full of renewed promise. Koch—a natural optimist—believed that 25 years was more than enough time for scientists to uncover the neural correlates of consciousness: those patterns of brain activity that underlie each and every one of our conscious experiences. Chalmers, a philosopher and therefore something of a pessimist by profession, demurred. In 1998, the pair staked a crate of fine wine on the outcome. The bet was finally called at the annual meeting of the Association for the Scientific Study of Consciousness in New York a couple of weeks ago. Koch graciously handed Chalmers a bottle of Madeira on the conference stage.

While much more is known about consciousness today than in the ’90s, its true neural correlates—and indeed a consensus theory of consciousness—still elude us. What helped resolve the wager was the outcome, or rather the lack of a decisive outcome, of an “adversarial collaboration” organized by a consortium called COGITATE. Adversarial collaborations encourage researchers from different theoretical camps to jointly design experiments that can distinguish between their theories. In this case, the theories in question were integrated information theory (IIT), the brainchild of Giulio Tononi, and global neuronal workspace theory (GNWT), championed by Stanislas Dehaene. The two scientists made predictions, based on their respective theories, about what kinds of brain activity would be recorded in an experiment in which participants looked at a series of images—but neither predicted outcome fully played out.

Keyword: Consciousness
Link ID: 28845 - Posted: 07.06.2023

By Carl Zimmer

On a muggy June night in Greenwich Village, more than 800 neuroscientists, philosophers and curious members of the public packed into an auditorium. They came for the first results of an ambitious investigation into a profound question: What is consciousness?

To kick things off, two friends — David Chalmers, a philosopher, and Christof Koch, a neuroscientist — took the stage to recall an old bet. In June 1998, they had gone to a conference in Bremen, Germany, and ended up talking late one night at a local bar about the nature of consciousness. For years, Dr. Koch had collaborated with Francis Crick, a biologist who shared a Nobel Prize for uncovering the structure of DNA, on a quest for what they called the “neural correlate of consciousness.” They believed that every conscious experience we have — gazing at a painting, for example — is associated with the activity of certain neurons essential for the awareness that comes with it.

Dr. Chalmers liked the concept, but he was skeptical that they could find such a neural marker any time soon. Scientists still had too much to learn about consciousness and the brain, he figured, before they could have a reasonable hope of finding it. Dr. Koch wagered his friend that scientists would find a neural correlate of consciousness within 25 years. Dr. Chalmers took the bet. The prize would be a few bottles of fine wine.

Recalling the bet from the auditorium stage, Dr. Koch admitted that it had been fueled by drinks and enthusiasm. “When you’re young, you’ve got to believe things will be simple,” he said.

Keyword: Consciousness
Link ID: 28839 - Posted: 07.01.2023

By John Horgan

A neuroscientist clad in gold and red and a philosopher sheathed in black took the stage before a packed, murmuring auditorium at New York University on Friday night. The two men were grinning, especially the philosopher. They were here to settle a bet made in the late 1990s on one of science’s biggest questions: How does a brain, a lump of matter, generate subjective conscious states such as the blend of anticipation and nostalgia I felt watching these guys? Before I reveal their bet’s resolution, let me take you through its twisty backstory, which reveals why consciousness remains a topic of such fascination and frustration to anyone with even the slightest intellectual leaning.

I first saw Christof Koch, the neuroscientist, and David Chalmers, the philosopher, butt heads in 1994 at a now legendary conference in Tucson, Ariz., called Toward a Scientific Basis for Consciousness. Koch was a star of the meeting. Together with biophysicist Francis Crick, he had been proclaiming in Scientific American and elsewhere that consciousness, which philosophers have wrestled with for millennia, was scientifically tractable. Just as Crick and geneticist James Watson solved heredity by decoding DNA’s double helix, scientists would crack consciousness by discovering its neural underpinnings, or “correlates.” Or so Crick and Koch claimed. They even identified a possible basis for consciousness: brain cells firing in synchrony 40 times per second.

Not everyone in Tucson was convinced. Chalmers, younger and then far less well known than Koch, argued that neither 40-hertz oscillations nor any other strictly physical process could account for why perceptions are accompanied by conscious sensations, such as the crushing boredom evoked by a jargony lecture. I have a vivid memory of the audience perking up when Chalmers called consciousness “the hard problem.” That was the first time I heard that now famous phrase.

Keyword: Consciousness
Link ID: 28836 - Posted: 06.28.2023

By Jordan Kinard

Long the fixation of religions, philosophy and literature the world over, the conscious experience of dying has recently received increasingly significant attention from science. This comes as medical advances extend the ability to keep the body alive, steadily prying open a window into the ultimate locked room: the last living moments of a human mind.

“Around 1959 humans discovered a method to restart the heart in people who would have died, and we called this CPR,” says Sam Parnia, a critical care physician at NYU Langone Health. Parnia has studied people’s recollections after being revived from cardiac arrest—phenomena that he refers to as “recalled experiences surrounding death.” Before CPR techniques were developed, cardiac arrest was basically synonymous with death. But now doctors can revive some people up to 20 minutes or more after their heart has stopped beating. Furthermore, Parnia says, many brain cells remain somewhat intact for hours to days postmortem—challenging our notions of a rigid boundary between life and death.

Advancements in medical technology and neuroscience, as well as shifts in researchers’ perspectives, are revolutionizing our understanding of the dying process. Research over the past decade has demonstrated a surge in brain activity in human and animal subjects undergoing cardiac arrest. Meanwhile large surveys are documenting the seemingly inexplicable periods of lucidity that hospice workers and grieving families often report witnessing in people with dementia who are dying. Poet Dylan Thomas famously admonished his readers, “Do not go gentle into that good night. Rage, rage against the dying of the light.” But as more resources are devoted to the study of death, it is becoming increasingly clear that dying is not the simple dimming of one’s internal light of awareness but rather an incredibly active process in the brain.

Keyword: Attention; Development of the Brain
Link ID: 28820 - Posted: 06.14.2023

By Claudia Lopez Lloreda

Of all of COVID-19’s symptoms, one of the most troubling is “brain fog.” Victims report headaches, trouble concentrating, and forgetfulness. Now, researchers have shown that SARS-CoV-2 can cause brain cells to fuse together, disrupting their communication. Although the study was only done in cells in a lab dish, some scientists say it could help explain one of the pandemic’s most confounding symptoms. “This is a first important step,” says Stefan Lichtenthaler, a biochemist at the German Center for Neurodegenerative Diseases who was not involved with the work.

Researchers already knew that SARS-CoV-2 could cause certain cells to fuse together. The lungs of patients who die from severe COVID-19 are often riddled with large, multicellular structures called syncytia, which scientists believe may contribute to the respiratory symptoms of the disease. Like other viruses, SARS-CoV-2 may incite cells to fuse to help it spread across an organ without having to infect new cells.

To see whether such cell fusion might happen in brain cells, Massimo Hilliard, a neuroscientist at the University of Queensland, and his colleagues first genetically engineered two populations of mouse neurons: One expressed a red fluorescent molecule, and the other a green fluorescent molecule. If the two fused in a lab dish, they would show up as bright yellow under the microscope. That’s just what the researchers saw when they added SARS-CoV-2 to a dish containing both types of cells, they report today in Science Advances. The same fusion happened in human brain organoids, so-called minibrains that are created from stem cells.

The key appears to be angiotensin-converting enzyme 2 (ACE2), the protein expressed on the surface of mammalian cells that SARS-CoV-2 is known to target. The virus uses a surface protein called spike to bind to ACE2, triggering the virus to fuse with a cell and release its genetic material inside. The spike protein made by infected cells, it seems, can also engage ACE2 on neighboring cells and trigger cell-to-cell fusion. When the team engineered neurons to express the spike protein, only cells that also expressed ACE2 were able to fuse with each other. The findings parallel previous work in lung cells: The ACE2 receptor seems to be critical in mediating their fusion during SARS-CoV-2 infection.

Keyword: Neuroimmunology; Attention
Link ID: 28818 - Posted: 06.14.2023

By Steven Strogatz

Neuroscience has made progress in deciphering how our brains think and perceive our surroundings, but a central feature of cognition is still deeply mysterious: namely, that many of our perceptions and thoughts are accompanied by the subjective experience of having them. Consciousness, the name we give to that experience, can’t yet be explained — but science is at least beginning to understand it. In this episode, the consciousness researcher Anil Seth and host Steven Strogatz discuss why our perceptions can be described as a “controlled hallucination,” how consciousness played into the internet sensation known as “the dress,” and how people at home can help researchers catalog the full range of ways that we experience the world.

Steven Strogatz (00:03): I’m Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we’re going to be discussing the mystery of consciousness. The mystery being that when your brain cells fire in certain patterns, it actually feels like something. It might feel like jealousy, or a toothache, or the memory of your mother’s face, or the scent of her favorite perfume. But other patterns of brain activity don’t really feel like anything at all. Right now, for instance, I’m probably forming some memories somewhere deep in my brain. But the process of that memory formation is imperceptible to me. I can’t feel it. It doesn’t give rise to any sort of internal subjective experience at all. In other words, I’m not conscious of it.

(00:54) So how does consciousness happen? How is it related to physics and biology? Are animals conscious? What about plants? Or computers, could they ever be conscious? And what is consciousness exactly? My guest today, Dr. Anil Seth, studies consciousness in his role as the co-director of the Sussex Center for Consciousness Science at the University of Sussex, near Brighton, England. The Center brings together all sorts of disciplinary specialists, from neuroscientists to mathematicians to experts in virtual reality, to study the conscious experience. Dr. Seth is also the author of the book Being You: A New Science of Consciousness. He joins us from studios in Brighton, England. Anil, thanks for being here.

Keyword: Consciousness
Link ID: 28812 - Posted: 06.03.2023

By Yasemin Saplakoglu

Is this the real life? Is this just fantasy? Those aren’t just lyrics from the Queen song “Bohemian Rhapsody.” They’re also the questions that the brain must constantly answer while processing streams of visual signals from the eyes and purely mental pictures bubbling out of the imagination.

Brain scan studies have repeatedly found that seeing something and imagining it evoke highly similar patterns of neural activity. Yet for most of us, the subjective experiences they produce are very different. “I can look outside my window right now, and if I want to, I can imagine a unicorn walking down the street,” said Thomas Naselaris, an associate professor at the University of Minnesota. The street would seem real and the unicorn would not. “It’s very clear to me,” he said. The knowledge that unicorns are mythical barely plays into that: A simple imaginary white horse would seem just as unreal.

So “why are we not constantly hallucinating?” asked Nadine Dijkstra, a postdoctoral fellow at University College London. A study she led, recently published in Nature Communications, provides an intriguing answer: The brain evaluates the images it is processing against a “reality threshold.” If the signal passes the threshold, the brain thinks it’s real; if it doesn’t, the brain thinks it’s imagined. Such a system works well most of the time because imagined signals are typically weak. But if an imagined signal is strong enough to cross the threshold, the brain takes it for reality.

Keyword: Attention
Link ID: 28803 - Posted: 05.27.2023

By Robert Martone

Neurological conditions can release a torrent of new creativity in a few people as if opening some mysterious floodgate. Auras of migraine and epilepsy may have influenced a long list of artists, including Pablo Picasso, Vincent van Gogh, Edvard Munch, Giorgio de Chirico, Claude Monet and Georges Seurat. Traumatic brain injury (TBI) can result in original thinking and newfound artistic drive. Emergent creativity is also a rare feature of Parkinson’s disease. But this burst of creative ability is especially true of frontotemporal dementia (FTD).

Although a few rare cases of FTD are linked to improvements in verbal creativity, such as greater poetic gifts and increased wordplay and punning, enhanced creativity in the visual arts is an especially notable feature of the condition. Fascinatingly, this burst of creativity indicates that the potential to create may rest dormant in some of us, only to be unleashed by a disease that also causes a loss of verbal abilities. The emergence of a vibrant creative spark in the face of devastating neurological disease speaks to the human brain’s remarkable potential and resilience. A new study published in JAMA Neurology examines the roots of this phenomenon and provides insight into a possible cause. As specific brain areas diminish in FTD, the researchers find, they release their inhibition, or control, of other regions that support artistic expression.

Frontotemporal dementia is relatively rare—affecting about 60,000 people in the U.S.—and distinct from the far more common Alzheimer’s disease, a form of dementia in which memory deficits predominate. FTD is named for the two brain regions that can degenerate in this disease, specifically the frontal and temporal lobes.

Keyword: Alzheimers; Attention
Link ID: 28797 - Posted: 05.27.2023

By Yasemin Saplakoglu

Memories are shadows of the past but also flashlights for the future. Our recollections guide us through the world, tune our attention and shape what we learn later in life. Human and animal studies have shown that memories can alter our perceptions of future events and the attention we give them. “We know that past experience changes stuff,” said Loren Frank, a neuroscientist at the University of California, San Francisco. “How exactly that happens isn’t always clear.”

A new study published in the journal Science Advances now offers part of the answer. Working with snails, researchers examined how established memories made the animals more likely to form new long-term memories of related future events that they might otherwise have ignored. The simple mechanism that they discovered did this by altering a snail’s perception of those events.

The researchers took the phenomenon of how past learning influences future learning “down to a single cell,” said David Glanzman, a cell biologist at the University of California, Los Angeles who was not involved in the study. He called it an attractive example “of using a simple organism to try to get understanding of behavioral phenomena that are fairly complex.” Although snails are fairly simple creatures, the new insight brings scientists a step closer to understanding the neural basis of long-term memory in higher-order animals like humans.

Though we often aren’t aware of the challenge, long-term memory formation is “an incredibly energetic process,” said Michael Crossley, a senior research fellow at the University of Sussex and the lead author of the new study. Such memories depend on our forging more durable synaptic connections between neurons, and brain cells need to recruit a lot of molecules to do that. To conserve resources, a brain must therefore be able to distinguish when it’s worth the cost to form a memory and when it’s not. That’s true whether it’s the brain of a human or the brain of a “little snail on a tight energetic budget,” he said.

Keyword: Learning & Memory; Attention
Link ID: 28787 - Posted: 05.18.2023

By Laura Sanders

Like Dumbledore’s wand, a scan can pull long strings of stories straight out of a person’s brain — but only if that person cooperates. This “mind-reading” feat, described May 1 in Nature Neuroscience, has a long way to go before it can be used outside of sophisticated laboratories. But the result could ultimately lead to seamless devices that help people who can’t talk or otherwise communicate easily. The research also raises privacy concerns about unwelcome neural eavesdropping (SN: 2/11/21).

“I thought it was fascinating,” says Gopala Anumanchipalli, a neural engineer at the University of California, Berkeley who wasn’t involved in the study. “It’s like, ‘Wow, now we are here already,’” he says. “I was delighted to see this.”

As opposed to implanted devices that have shown recent promise, the new system requires no surgery (SN: 11/15/22). And unlike other external approaches, it produces continuous streams of words instead of having a more constrained vocabulary.

For the new study, three people lay inside a bulky MRI machine for at least 16 hours each. They listened to stories, mostly from The Moth podcast, while functional MRI scans detected changes in blood flow in the brain. These changes are proxies for brain activity, albeit slow and imperfect measures. With this neural data in hand, computational neuroscientists Alexander Huth and Jerry Tang of the University of Texas at Austin and colleagues were able to match patterns of brain activity to certain words and ideas. The approach relied on a language model that was built with GPT, one of the forerunners that enabled today’s AI chatbots (SN: 4/12/23).

Keyword: Brain imaging; Consciousness
Link ID: 28769 - Posted: 05.03.2023

By Oliver Whang

Think of the words whirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could listen in. On Monday, scientists from the University of Texas, Austin, made another step in that direction. In a study published in the journal Nature Neuroscience, the researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions in the brain.

Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write while just thinking of writing. But the new language decoder is one of the first to not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening onscreen.

“This isn’t just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at meaning, something about the idea of what’s happening. And the fact that that’s possible is very exciting.”

The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard.

Keyword: Brain imaging; Consciousness
Link ID: 28768 - Posted: 05.03.2023