Chapter 18. Attention and Higher Cognition




By Dan Falk More than 400 years ago, Galileo showed that many everyday phenomena—such as a ball rolling down an incline or a chandelier gently swinging from a church ceiling—obey precise mathematical laws. For this insight, he is often hailed as the founder of modern science. But Galileo recognized that not everything was amenable to a quantitative approach. Such things as colors, tastes and smells “are no more than mere names,” Galileo declared, for “they reside only in consciousness.” These qualities aren’t really out there in the world, he asserted, but exist only in the minds of creatures that perceive them. “Hence if the living creature were removed,” he wrote, “all these qualities would be wiped away and annihilated.” Since Galileo’s time the physical sciences have leaped forward, explaining the workings of everything from the tiniest quarks to the largest galaxy clusters. But explaining things that reside “only in consciousness”—the red of a sunset, say, or the bitter taste of a lemon—has proven far more difficult. Neuroscientists have identified a number of neural correlates of consciousness—brain states associated with specific mental states—but have not explained how matter forms minds in the first place. As philosopher David Chalmers asked: “How does the water of the brain turn into the wine of consciousness?” He famously dubbed this quandary the “hard problem” of consciousness. Scholars recently gathered to debate the problem at Marist College in Poughkeepsie, N.Y., during a two-day workshop focused on an idea known as panpsychism. The concept proposes that consciousness is a fundamental aspect of reality, like mass or electrical charge. The idea goes back to antiquity—Plato took it seriously—and has had some prominent supporters over the years, including psychologist William James and philosopher and mathematician Bertrand Russell. Lately it is seeing renewed interest, especially following the 2019 publication of philosopher Philip Goff’s book Galileo’s Error, which argues forcefully for the idea. Goff, of the University of Durham in England, organized the recent event along with Marist philosopher Andrei Buckareff, and it was funded through a grant from the John Templeton Foundation. In a small lecture hall with floor-to-ceiling windows overlooking the Hudson River, roughly two dozen scholars probed the possibility that perhaps it’s consciousness all the way down.

Keyword: Consciousness; Attention
Link ID: 28928 - Posted: 09.27.2023

Mariana Lenharo A letter, signed by 124 scholars and posted online last week, has caused an uproar in the consciousness research community. It claims that a prominent theory describing what makes someone or something conscious — called the integrated information theory (IIT) — should be labelled “pseudoscience”. Since its publication on 15 September in the preprint repository PsyArXiv, the letter has some researchers arguing over the label and others worried it will increase polarization in a field that has grappled with issues of credibility in the past. “I think it’s inflammatory to describe IIT as pseudoscience,” says neuroscientist Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, adding that he disagrees with the label. “IIT is a theory, of course, and therefore may be empirically wrong,” says neuroscientist Christof Koch, a meritorious investigator at the Allen Institute for Brain Science in Seattle, Washington, and a proponent of the theory. But he says that it makes its assumptions — for example, that consciousness has a physical basis and can be mathematically measured — very clear. There are dozens of theories that seek to understand consciousness — everything that a human or non-human experiences, including what they feel, see and hear — as well as its underlying neural foundations. IIT has often been described as one of the central theories, alongside others, such as global neuronal workspace theory (GNW), higher-order thought theory and recurrent processing theory. It proposes that consciousness emerges from the way information is processed within a ‘system’ (for instance, networks of neurons or computer circuits), and that systems that are more interconnected, or integrated, have higher levels of consciousness. Hakwan Lau, a neuroscientist at Riken Center for Brain Science in Wako, Japan, and one of the authors of the letter, says that some researchers in the consciousness field are uncomfortable with what they perceive as a discrepancy between IIT’s scientific merit and the considerable attention it receives from the popular media because of how it is promoted by advocates. “Has IIT become a leading theory because of academic acceptance first, or is it because of the popular noise that kind of forced the academics to give it acknowledgement?”, Lau asks. © 2023 Springer Nature Limited

Keyword: Consciousness
Link ID: 28918 - Posted: 09.21.2023

Kimberlee D'Ardenne Dopamine seems to be having a moment in the zeitgeist. You may have read about it in the news, seen viral social media posts about “dopamine hacking” or listened to podcasts about how to harness what this molecule is doing in your brain to improve your mood and productivity. But recent neuroscience research suggests that popular strategies to control dopamine are based on an overly narrow view of how it functions. Dopamine is one of the brain’s neurotransmitters – tiny molecules that act as messengers between neurons. It is known for its role in tracking your reaction to rewards such as food, sex, money or answering a question correctly. There are many kinds of dopamine neurons located in the uppermost region of the brainstem that manufacture and release dopamine throughout the brain. Whether neuron type affects the function of the dopamine it produces has been an open question. Recently published research reports a relationship between neuron type and dopamine function, and one type of dopamine neuron has an unexpected function that will likely reshape how scientists, clinicians and the public understand this neurotransmitter. Dopamine is involved with more than just pleasure. Dopamine is famous for the role it plays in reward processing, an idea that dates back at least 50 years. Dopamine neurons monitor the difference between the rewards you thought you would get from a behavior and what you actually got. Neuroscientists call this difference a reward prediction error. Eating dinner at a restaurant that just opened and looks likely to be nothing special shows reward prediction errors in action. If your meal is very good, that results in a positive reward prediction error, and you are likely to return and order the same meal in the future. Each time you return, the reward prediction error shrinks until it eventually reaches zero when you fully expect a delicious dinner. But if your first meal was terrible, that results in a negative reward prediction error, and you probably won’t go back to the restaurant. Dopamine neurons communicate reward prediction errors to the brain through their firing rates and patterns of dopamine release, which the brain uses for learning. They fire in two ways. © 2010–2023, The Conversation US, Inc.
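
As a minimal sketch of the reward-prediction-error idea described above, the restaurant example can be written as a simple delta-rule update in which the expectation moves toward the outcome by a fraction of the error on each visit. This is only an illustration of the general concept, not the model used in the research the article describes; the learning rate and reward values below are invented.

    # Toy sketch of a reward prediction error (RPE) update; numbers are illustrative.
    def update_expectation(expected, received, learning_rate=0.3):
        rpe = received - expected            # positive if the outcome beats the expectation
        return expected + learning_rate * rpe, rpe

    expected_reward = 2.0                    # low expectations for the new restaurant
    for visit in range(1, 6):
        received_reward = 8.0                # the meal turns out to be very good
        expected_reward, rpe = update_expectation(expected_reward, received_reward)
        print(f"visit {visit}: prediction error = {rpe:+.2f}, "
              f"new expectation = {expected_reward:.2f}")
    # The prediction error shrinks toward zero as the good meal becomes fully expected.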

Keyword: Drug Abuse; Learning & Memory
Link ID: 28917 - Posted: 09.21.2023

By Jim Davies Think of what you want to eat for dinner this weekend. What popped into mind? Pizza? Sushi? Clam chowder? Why did those foods (or whatever foods you imagined) appear in your consciousness and not something else? Psychologists have long held that when we are making a decision about a particular category of thing, we tend to bring to mind items that are typical or common in our culture or everyday lives, or ones we value the most. On this view, whatever foods you conjured up are likely ones that you eat often, or love to eat. Sounds intuitive. But a recent paper published in Cognition suggests it’s more complicated than that. Tracey Mills, a research assistant working at MIT, led the study along with Jonathan Phillips, a cognitive scientist and philosopher at Dartmouth College. They put over 2,000 subjects, recruited online, through a series of seven experiments that allowed them to test a novel approach for understanding which ideas within a category will pop into our consciousness—and which won’t. In this case, they had subjects think about zoo animals, holidays, jobs, kitchen appliances, chain restaurants, sports, and vegetables. What they found is that what makes a particular thing come to mind—such as a lion when one is considering zoo animals—is determined not by how valuable or familiar it is, but by where it lies in a multidimensional idea grid that could be said to resemble a kind of word cloud. “Under the hypothesis we argue for,” Mills and Phillips write, “the process of calling members of a category to mind might be modeled as a search through feature space, weighted toward certain features that are relevant for that category.” Historical “value” just happens to be one dimension that is particularly relevant when one is talking about dinner, but is less relevant for categories such as zoo animals or, say, crimes, they write. © 2023 NautilusNext Inc., All rights reserved.
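As a purely hypothetical sketch of the weighted feature-space search described above, category members can be scored by how strongly they match the features that matter for the category and then sampled in proportion to that score. The animals, feature dimensions, weights and the softmax-style choice rule below are invented for illustration; they are not the authors' stimuli or model.

    # Hypothetical sketch: which zoo animals "come to mind" under a weighted feature-space search.
    import math
    import random

    zoo_animals = {                      # invented feature vectors: (size, ferocity, exhibit typicality)
        "lion":     (0.8, 0.9, 0.9),
        "elephant": (1.0, 0.4, 0.9),
        "meerkat":  (0.1, 0.2, 0.5),
        "pigeon":   (0.1, 0.1, 0.1),
    }
    category_weights = (0.5, 0.8, 1.0)   # how much each feature dimension matters for "zoo animal"

    def relevance(features, weights):
        return sum(f * w for f, w in zip(features, weights))

    scores = {name: relevance(feats, category_weights) for name, feats in zoo_animals.items()}
    total = sum(math.exp(s) for s in scores.values())
    probabilities = {name: math.exp(s) / total for name, s in scores.items()}

    # Items that score high on the category-relevant dimensions are the most likely to be retrieved.
    print(sorted(probabilities.items(), key=lambda item: -item[1]))
    print(random.choices(list(probabilities), weights=list(probabilities.values()), k=3))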

Keyword: Attention; Learning & Memory
Link ID: 28910 - Posted: 09.16.2023

Nicola Davis Science correspondent Whether it’s seeing Jesus in burnt toast, a goofy grin in the grooves of a cheese grater, or simply the man in the moon, humans have long perceived faces in unlikely places. Now researchers say the tendency may not be fixed in adults: it appears to be enhanced in women who have just given birth. The scientists suggest the finding could be down to postpartum women having higher levels of oxytocin, colloquially referred to as the “love” or “trust” hormone because of its role in social bonding. “These data, collected online, suggest that our sensitivity to face-like patterns is not fixed and may change throughout adulthood,” the team write. Writing in the journal Biology Letters, researchers from Australia’s University of Queensland and the University of the Sunshine Coast describe how they set out to investigate whether the propensity to see faces in inanimate objects – a phenomenon known as face pareidolia – changes during life. Previous research has suggested that when humans are given oxytocin, their ability to recognise certain emotions in faces increases. As a result, the team wanted to explore if the hormone could play a role in how sensitive individuals are towards seeing faces in inanimate objects. The researchers used an online platform to recruit women, with participants asked if they were pregnant or had just given birth – the latter being a period when oxytocin levels are generally increased. The women were each shown 320 images in a random order online and asked to rate on an 11-point scale how easily they could see a face. While 32 of the images were of human faces, 256 were of inanimate objects with patterns that could be said to resemble a face, and 32 depicted inanimate objects with no such facial patterns. The team gathered data from 84 pregnant women, 79 women who had given birth in the past year, and 216 women who did not report being pregnant or having recently had a baby. © 2023 Guardian News & Media Limited

Keyword: Sexual Behavior; Attention
Link ID: 28906 - Posted: 09.13.2023

By Saugat Bolakhe Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way. Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations. The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid 1990s. All Rights Reserved © 2023
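A deliberately crude sketch of the complementary-learning-systems idea described above: one store keeps individual episodes verbatim while another only slowly averages over them, so predictable regularities end up summarized in the slow system. The class names, learning rate and example data are invented for illustration; this is not the authors' neural-network model.

    # Crude sketch: fast one-shot episodic storage vs. slow integration of regularities.
    class Hippocampus:
        def __init__(self):
            self.episodes = []                     # one-shot, high-fidelity storage

        def encode(self, experience):
            self.episodes.append(experience)

    class Neocortex:
        def __init__(self, learning_rate=0.05):
            self.generalization = 0.0              # slowly updated summary of experience
            self.learning_rate = learning_rate

        def integrate(self, value):
            self.generalization += self.learning_rate * (value - self.generalization)

    hippocampus, neocortex = Hippocampus(), Neocortex()
    breakfast_times = [7.0, 7.1, 6.9, 7.0, 7.2]    # a predictable daily regularity (invented data)
    for t in breakfast_times:
        hippocampus.encode(("breakfast", t))       # every episode is kept individually
        neocortex.integrate(t)                     # the regularity is extracted only gradually

    print(len(hippocampus.episodes), round(neocortex.generalization, 2))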

Keyword: Learning & Memory; Attention
Link ID: 28900 - Posted: 09.07.2023

Mariana Lenharo Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”. Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were? To answer this, a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository, ahead of peer review. The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California. The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated”. Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. “And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds. © 2023 Springer Nature Limited

Keyword: Consciousness
Link ID: 28893 - Posted: 08.30.2023

By Maria Temming When Christopher Mazurek realizes he’s dreaming, it’s always the small stuff that tips him off. The first time it happened, Mazurek was a freshman at Northwestern University in Evanston, Ill. In the dream, he found himself in a campus dining hall. It was winter, but Mazurek wasn’t wearing his favorite coat. “I realized that, OK, if I don’t have the coat, I must be dreaming,” Mazurek says. That epiphany rocked the dream like an earthquake. “Gravity shifted, and I was flung down a hallway that seemed to go on for miles,” he says. “My left arm disappeared, and then I woke up.” Most people rarely if ever realize that they’re dreaming while it’s happening, what’s known as lucid dreaming. But some enthusiasts have cultivated techniques to become self-aware in their sleep and even wrest some control over their dream selves and settings. Mazurek, 24, says that he’s gotten better at molding his lucid dreams since that first whirlwind experience, sometimes taking them as opportunities to try flying or say hi to deceased family members. Other lucid dreamers have used their personal virtual realities to plumb their subconscious minds for insights or feast on junk food without real-world consequences. But now, scientists have a new job for lucid dreamers: to explore their dreamscapes and report out in real time. Dream research has traditionally relied on reports collected after someone wakes up. But people often wake with only spotty, distorted memories of what they dreamed. The dreamers can’t say exactly when events occurred, and they certainly can’t tailor their dreams to specific scientific studies. © Society for Science & the Public 2000–2023.

Keyword: Sleep; Consciousness
Link ID: 28891 - Posted: 08.30.2023

By Elizabeth Finkel Science routinely puts forward theories, then batters them with data till only one is left standing. In the fledgling science of consciousness, a dominant theory has yet to emerge. More than 20 are still taken seriously. It’s not for want of data. Ever since Francis Crick, the co-discoverer of DNA’s double helix, legitimized consciousness as a topic for study more than three decades ago, researchers have used a variety of advanced technologies to probe the brains of test subjects, tracing the signatures of neural activity that could reflect consciousness. The resulting avalanche of data should have flattened at least the flimsier theories by now. Five years ago, the Templeton World Charity Foundation initiated a series of “adversarial collaborations” to coax the overdue winnowing to begin. This past June saw the results from the first of these collaborations, which pitted two high-profile theories against each other: global neuronal workspace theory (GNWT) and integrated information theory (IIT). Neither emerged as the outright winner. The results, announced like the outcome of a sporting event at the 26th meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, were also used to settle a 25-year bet between Crick’s longtime collaborator, the neuroscientist Christof Koch of the Allen Institute for Brain Science, and the philosopher David Chalmers of New York University, who coined the term “the hard problem” to challenge the presumption that we can explain the subjective feeling of consciousness by analyzing the circuitry of the brain. Nevertheless, Koch proclaimed, “It’s a victory for science.” But was it? All Rights Reserved © 2023

Keyword: Consciousness; Attention
Link ID: 28887 - Posted: 08.26.2023

By Miryam Naddaf It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer. During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions. “When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels. Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac. HBP directors hope to bring this understanding a step closer with a virtual platform — called EBRAINS — that was created as part of the project. EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments. “Today, we have all the tools in hand to build a real digital brain twin,” says Viktor Jirsa, a neuroscientist at Aix-Marseille University in France and an HBP board member. But the funding for this offshoot is still uncertain. And at a time when huge, expensive brain projects are in high gear elsewhere, scientists in Europe are frustrated that their version is winding down. “We were probably one of the first ones to initiate this wave of interest in the brain,” says Jorge Mejias, a computational neuroscientist at the University of Amsterdam, who joined the HBP in 2019. Now, he says, “everybody’s rushing, we don’t have time to just take a nap”.

Keyword: Brain imaging; Robotics
Link ID: 28884 - Posted: 08.26.2023

By Elizabeth Finkel In 2021, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know? Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT. None is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.” Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.” Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.

Keyword: Consciousness; Robotics
Link ID: 28881 - Posted: 08.24.2023

Geneva Abdul The so-called “brain fog” symptom associated with long Covid is comparable to ageing 10 years, researchers have suggested. In a study by King’s College London, researchers investigated the impact of Covid-19 on memory and found cognitive impairment highest in individuals who had tested positive and had more than three months of symptoms. The study, published on Friday in a clinical journal from The Lancet, also found the symptoms in affected individuals stretched to almost two years since initial infection. “The fact remains that two years on from their first infection, some people don’t feel fully recovered and their lives continue to be impacted by the long-term effects of the coronavirus,” said Claire Steves, a professor of ageing and health at King’s College. “We need more work to understand why this is the case and what can be done to help.” An estimated two million people living in the UK were experiencing self-reported long Covid – symptoms continuing for more than four weeks since infection – as of January 2023, according to the 2023 government census. Commonly reported symptoms included fatigue, difficulty concentrating, shortness of breath and muscle aches. The study included more than 5,100 participants from the Covid Symptom Study Biobank, recruited through a smartphone app. Through 12 cognitive tests measuring speed and accuracy, researchers examined working memory, attention, reasoning and motor control between two periods of 2021 and 2022. © 2023 Guardian News & Media Limited

Keyword: Learning & Memory; Attention
Link ID: 28854 - Posted: 07.22.2023

Max Kozlov Dead in California but alive in New Jersey: that was the status of 13-year-old Jahi McMath after physicians in Oakland, California, declared her brain dead in 2013, after complications from a tonsillectomy. Unhappy with the care that their daughter received and unwilling to remove life support, McMath’s family moved with her to New Jersey, where the law allowed them to lodge a religious objection to the declaration of brain death and keep McMath connected to life-support systems for another four and a half years. Prompted by such legal discrepancies and a growing number of lawsuits around the United States, a group of neurologists, physicians, lawyers and bioethicists is attempting to harmonize state laws surrounding the determination of death. They say that imprecise language in existing laws — as well as research done since the laws were passed — threatens to undermine public confidence in how death is defined worldwide. “It doesn’t really make a lot of sense,” says Ariane Lewis, a neurocritical care clinician at NYU Langone Health in New York City. “Death is something that should be a set, finite thing. It shouldn’t be something that’s left up to interpretation.” Since 2021, a committee in the Uniform Law Commission (ULC), a non-profit organization in Chicago, Illinois, that drafts model legislation for states to adopt, has been revising its recommendation for the legal determination of death. The drafting committee hopes to clarify the definition of brain death, determine whether consent is required to test for it, specify how to handle family objections and provide guidance on how to incorporate future changes to medical standards. The broader membership of the ULC will offer feedback on the first draft of the revised law at a meeting on 26 July. After members vote on it, the text could be ready for state legislatures to consider by the middle of next year. But as the ULC revision process has progressed, clinicians who were once eager to address these issues have become increasingly worried. © 2023 Springer Nature Limited

Keyword: Consciousness
Link ID: 28853 - Posted: 07.22.2023

Jon Hamilton Dr. Josef Parvizi remembers meeting a man with epilepsy whose seizures were causing some very unusual symptoms. "He came to my clinic and said, 'My sense of self is changing,'" says Parvizi, a professor of neurology at Stanford University. The man told Parvizi that he felt "like an observer to conversations that are happening in my mind" and that "I just feel like I'm floating in space." Parvizi and a team of researchers would eventually trace the man's symptoms to a "sausage-looking piece of brain" called the anterior precuneus. This area, nestled between the brain's two hemispheres, appears critical to a person's sense of inhabiting their own body, or bodily self, the team recently reported in the journal Neuron. The finding could help researchers develop forms of anesthesia that use electrical stimulation instead of drugs. It could also help explain the antidepressant effects of mind-altering drugs like ketamine. It took Parvizi's team years of research to discover the importance of this obscure bit of brain tissue. In 2019, when the man first came to Stanford's Comprehensive Epilepsy Program, Parvizi thought his symptoms were caused by seizures in the posteromedial cortex, an area toward the back of the brain. This area includes a brain network involved in the narrative self, a sort of internal autobiography that helps us define who we are. Parvizi's team figured that the same network must be responsible for the bodily self too. "Everybody thought, 'Well, maybe all kinds of selves are being decoded by the same system,'" he says. © 2023 npr

Keyword: Attention; Consciousness
Link ID: 28846 - Posted: 07.06.2023

By Anil Seth In 1870, Alfred Russel Wallace wagered £500—a huge sum in those days—that he could prove the flat-Earther John Hampden wrong. Wallace duly did so, but the aggrieved Hampden never paid up. Since then, a lively history of scientific wagers has ensued—many of them instigated by Stephen Hawking. Just last month in New York, the most famous recent wager was settled: a 25-year-old bet over one of the last great mysteries in science and philosophy. The bettors were neuroscientist Christof Koch and philosopher David Chalmers, both known for their pioneering work on the nature of consciousness. Chalmers won. Koch paid up. Back in the late 1990s, consciousness science was full of renewed promise. Koch—a natural optimist—believed that 25 years was more than enough time for scientists to uncover the neural correlates of consciousness: those patterns of brain activity that underlie each and every one of our conscious experiences. Chalmers, a philosopher and therefore something of a pessimist by profession, demurred. In 1998, the pair staked a crate of fine wine on the outcome. The bet was finally called at the annual meeting of the Association for the Scientific Study of Consciousness in New York a couple of weeks ago. Koch graciously handed Chalmers a bottle of Madeira on the conference stage. While much more is known about consciousness today than in the ’90s, its true neural correlates—and indeed a consensus theory of consciousness—still elude us. What helped resolve the wager was the outcome, or rather the lack of a decisive outcome, of an “adversarial collaboration” organized by a consortium called COGITATE. Adversarial collaborations encourage researchers from different theoretical camps to jointly design experiments that can distinguish between their theories. In this case, the theories in question were integrated information theory (IIT), the brainchild of Giulio Tononi, and the neuronal global workspace theory (GWT), championed by Stanislas Dehaene. The two scientists made predictions, based on their respective theories, about what kinds of brain activity would be recorded in an experiment in which participants looked at a series of images—but neither predicted outcome fully played out. © 2023 NautilusNext Inc.

Keyword: Consciousness
Link ID: 28845 - Posted: 07.06.2023

By Carl Zimmer On a muggy June night in Greenwich Village, more than 800 neuroscientists, philosophers and curious members of the public packed into an auditorium. They came for the first results of an ambitious investigation into a profound question: What is consciousness? To kick things off, two friends — David Chalmers, a philosopher, and Christof Koch, a neuroscientist — took the stage to recall an old bet. In June 1998, they had gone to a conference in Bremen, Germany, and ended up talking late one night at a local bar about the nature of consciousness. For years, Dr. Koch had collaborated with Francis Crick, a biologist who shared a Nobel Prize for uncovering the structure of DNA, on a quest for what they called the “neural correlate of consciousness.” They believed that every conscious experience we have — gazing at a painting, for example — is associated with the activity of certain neurons essential for the awareness that comes with it. Dr. Chalmers liked the concept, but he was skeptical that they could find such a neural marker any time soon. Scientists still had too much to learn about consciousness and the brain, he figured, before they could have a reasonable hope of finding it. Dr. Koch wagered his friend that scientists would find a neural correlate of consciousness within 25 years. Dr. Chalmers took the bet. The prize would be a few bottles of fine wine. Recalling the bet from the auditorium stage, Dr. Koch admitted that it had been fueled by drinks and enthusiasm. “When you’re young, you’ve got to believe things will be simple,” he said. © 2023 The New York Times Company

Keyword: Consciousness
Link ID: 28839 - Posted: 07.01.2023

By John Horgan A neuroscientist clad in gold and red and a philosopher sheathed in black took the stage before a packed, murmuring auditorium at New York University on Friday night. The two men were grinning, especially the philosopher. They were here to settle a bet made in the late 1990s on one of science’s biggest questions: How does a brain, a lump of matter, generate subjective conscious states such as the blend of anticipation and nostalgia I felt watching these guys? Before I reveal their bet’s resolution, let me take you through its twisty backstory, which reveals why consciousness remains a topic of such fascination and frustration to anyone with even the slightest intellectual leaning. I first saw Christof Koch, the neuroscientist, and David Chalmers, the philosopher, butt heads in 1994 at a now legendary conference in Tucson, Ariz., called Toward a Scientific Basis for Consciousness. Koch was a star of the meeting. Together with biophysicist Francis Crick, he had been proclaiming in Scientific American and elsewhere that consciousness, which philosophers have wrestled with for millennia, was scientifically tractable. Just as Crick and geneticist James Watson solved heredity by decoding DNA’s double helix, scientists would crack consciousness by discovering its neural underpinnings, or “correlates.” Or so Crick and Koch claimed. They even identified a possible basis for consciousness: brain cells firing in synchrony 40 times per second. Not everyone in Tucson was convinced. Chalmers, younger and then far less well known than Koch, argued that neither 40-hertz oscillations nor any other strictly physical process could account for why perceptions are accompanied by conscious sensations, such as the crushing boredom evoked by a jargony lecture. I have a vivid memory of the audience perking up when Chalmers called consciousness “the hard problem.” That was the first time I heard that now famous phrase.

Keyword: Consciousness
Link ID: 28836 - Posted: 06.28.2023

By Jordan Kinard Long the fixation of religions, philosophy and literature the world over, the conscious experience of dying has recently received increasingly significant attention from science. This comes as medical advances extend the ability to keep the body alive, steadily prying open a window into the ultimate locked room: the last living moments of a human mind. “Around 1959 humans discovered a method to restart the heart in people who would have died, and we called this CPR,” says Sam Parnia, a critical care physician at NYU Langone Health. Parnia has studied people’s recollections after being revived from cardiac arrest—phenomena that he refers to as “recalled experiences surrounding death.” Before CPR techniques were developed, cardiac arrest was basically synonymous with death. But now doctors can revive some people up to 20 minutes or more after their heart has stopped beating. Furthermore, Parnia says, many brain cells remain somewhat intact for hours to days postmortem—challenging our notions of a rigid boundary between life and death. Advancements in medical technology and neuroscience, as well as shifts in researchers’ perspectives, are revolutionizing our understanding of the dying process. Research over the past decade has demonstrated a surge in brain activity in human and animal subjects undergoing cardiac arrest. Meanwhile large surveys are documenting the seemingly inexplicable periods of lucidity that hospice workers and grieving families often report witnessing in people with dementia who are dying. Poet Dylan Thomas famously admonished his readers, “Do not go gentle into that good night. Rage, rage against the dying of the light.” But as more resources are devoted to the study of death, it is becoming increasingly clear that dying is not the simple dimming of one’s internal light of awareness but rather an incredibly active process in the brain. © 2023 Scientific American,

Keyword: Attention; Development of the Brain
Link ID: 28820 - Posted: 06.14.2023

By Claudia Lopez Lloreda Of all of COVID-19’s symptoms, one of the most troubling is “brain fog.” Victims report headaches, trouble concentrating, and forgetfulness. Now, researchers have shown that SARS-CoV-2 can cause brain cells to fuse together, disrupting their communication. Although the study was only done in cells in a lab dish, some scientists say it could help explain one of the pandemic’s most confounding symptoms. “This is a first important step,” says Stefan Lichtenthaler, a biochemist at the German Center for Neurodegenerative Diseases who was not involved with the work. Researchers already knew that SARS-CoV-2 could cause certain cells to fuse together. The lungs of patients who die from severe COVID-19 are often riddled with large, multicellular structures called syncytia, which scientists believe may contribute to the respiratory symptoms of the disease. Like other viruses, SARS-CoV-2 may incite cells to fuse to help it spread across an organ without having to infect new cells. To see whether such cell fusion might happen in brain cells, Massimo Hilliard, a neuroscientist at the University of Queensland, and his colleagues first genetically engineered two populations of mouse neurons: One expressed a red fluorescent molecule, and the other a green fluorescent molecule. If the two fused in a lab dish, they would show up as bright yellow under the microscope. That’s just what the researchers saw when they added SARS-CoV-2 to a dish containing both types of cells, they report today in Science Advances. The same fusion happened in human brain organoids, so-called minibrains that are created from stem cells. The key appears to be angiotensin-converting enzyme 2 (ACE2), the protein expressed on the surface of mammalian cells that SARS-CoV-2 is known to target. The virus uses a surface protein called spike to bind to ACE2, triggering the virus to fuse to a cell and release its genetic material inside. Seemingly, the spike protein in infected cells may also make other ACE2 on a cell trigger fusion to a neighboring cell. When the team engineered neurons to express the spike protein, only cells that also expressed ACE2 were able to fuse with each other. The findings parallel previous work in lung cells: The ACE2 receptor seems to be critical in mediating their fusion during SARS-CoV-2 infection.

Keyword: Neuroimmunology; Attention
Link ID: 28818 - Posted: 06.14.2023

By Steven Strogatz Neuroscience has made progress in deciphering how our brains think and perceive our surroundings, but a central feature of cognition is still deeply mysterious: namely, that many of our perceptions and thoughts are accompanied by the subjective experience of having them. Consciousness, the name we give to that experience, can’t yet be explained — but science is at least beginning to understand it. In this episode, the consciousness researcher Anil Seth and host Steven Strogatz discuss why our perceptions can be described as a “controlled hallucination,” how consciousness played into the internet sensation known as “the dress,” and how people at home can help researchers catalog the full range of ways that we experience the world. Steven Strogatz (00:03): I’m Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we’re going to be discussing the mystery of consciousness. The mystery being that when your brain cells fire in certain patterns, it actually feels like something. It might feel like jealousy, or a toothache, or the memory of your mother’s face, or the scent of her favorite perfume. But other patterns of brain activity don’t really feel like anything at all. Right now, for instance, I’m probably forming some memories somewhere deep in my brain. But the process of that memory formation is imperceptible to me. I can’t feel it. It doesn’t give rise to any sort of internal subjective experience at all. In other words, I’m not conscious of it. (00:54) So how does consciousness happen? How is it related to physics and biology? Are animals conscious? What about plants? Or computers, could they ever be conscious? And what is consciousness exactly? My guest today, Dr. Anil Seth, studies consciousness in his role as the co-director of the Sussex Center for Consciousness Science at the University of Sussex, near Brighton, England. The Center brings together all sorts of disciplinary specialists, from neuroscientists to mathematicians to experts in virtual reality, to study the conscious experience. Dr. Seth is also the author of the book Being You: A New Science of Consciousness. He joins us from studios in Brighton, England. Anil, thanks for being here. All Rights Reserved © 2023

Keyword: Consciousness
Link ID: 28812 - Posted: 06.03.2023