Chapter 14. Attention and Higher Cognition
By Chris Berdik

WHAT I LEFT OUT is a recurring feature in which book authors are invited to share anecdotes and narratives that, for whatever reason, did not make it into their final manuscripts. In this installment, journalist Chris Berdik shares a story that didn’t make it into his recent book “Clamor: How Noise Took Over the World and How We Can Take It Back” (Norton, 272 pages).

Yale psychiatrist Albert Powers didn’t know what to expect as he strolled among the tarot card readers, astrologers, and crystal vendors at the psychic fair held at the Best Western outside North Haven, Connecticut, on a cloudy November Saturday in 2014. At his clinic, Powers worked with young people, mostly teenagers, who had started hearing voices. His patients and their families were worried that the voices might be precursors of a psychotic disorder such as schizophrenia. Sometimes, they were. But Powers also knew that lots of people occasionally heard voices — between 7 and 15 percent of the population, according to studies — and about 75 percent of those people lived otherwise normal lives.

He wanted to study high-functioning voice hearers, and a gathering of psychics seemed like a good place to find them. If clinicians could better distinguish voice hearers who develop psychosis and lose touch with reality from those who don’t, he thought, then maybe he could help steer more patients down a healthier path.

Powers introduced himself to the fair’s organizer and explained the sort of person he hoped to find. The organizer directed him to a nearby table where he met a smiley, middle-aged medium. The woman had a day job as an emergency services dispatcher, but the voices made frequent appearances in her daily life, and her side hustle was communicating with the dead.
Keyword: Schizophrenia; Attention
Link ID: 29809 - Posted: 05.28.2025
Dobromir Rahnev

Is it possible to upload the consciousness of your mind into a computer? – Amreen, age 15, New Delhi, India

The concept, cool yet maybe a little creepy, is known as mind uploading. Think of it as a way to create a copy of your brain, a transmission of your mind and consciousness into a computer. There you would live digitally, perhaps forever. You’d have an awareness of yourself, you’d retain your memories and still feel like you. But you wouldn’t have a body.

Within that simulated environment, you could do anything you do in real life – eating, driving a car, playing sports. You could also do things impossible in the real world, like walking through walls, flying like a bird or traveling to other planets. The only limit is what science can realistically simulate.

Doable? Theoretically, mind uploading should be possible. Still, you may wonder how it could happen. After all, researchers have barely begun to understand the brain. Yet science has a track record of turning theoretical possibilities into reality. Just because a concept seems terribly, unimaginably difficult doesn’t mean it’s impossible. Consider that science took humankind to the Moon, sequenced the human genome and eradicated smallpox. Those things too were once considered unlikely.

As a brain scientist who studies perception, I fully expect mind uploading to one day be a reality. But as of today, we’re nowhere close. © 2010–2025, The Conversation US, Inc.
Keyword: Consciousness; Robotics
Link ID: 29803 - Posted: 05.24.2025
By Sara Novak

Just a few weeks after they hatch, baby male zebra finches begin to babble, spending much of the day testing their vocal cords. Dad helps out, singing to his hatchlings during feedings, so that the babies can internalize his tune, the same mating refrain shared by all male zebra finches. Soon, these tiny Australian birds begin to rehearse the song itself, repeating it up to 10,000 times a day, without any clear reward other than their increasing perfection of the melody.

The baby birds’ painstaking devotion to mastering their song led Duke University neuroscientist Richard Mooney and his Duke colleague John Pearson to wonder whether the birds could help us better understand the nature of self-directed learning. In humans, language and musical expression are thought to be self-directed—spontaneous, adaptive and intrinsically reinforced.

In a study recently published in Nature, the scientists tracked the brain signals and levels of dopamine, a neurotransmitter involved in reward and movement, in the brains of five male baby zebra finches while they were singing. They also measured song quality for each rendition the birds sang, in terms of both pitch and vigor, as well as the quality of song performance relative to the bird’s age. What they found is that dopamine levels in the baby birds’ brains closely matched the birds’ performance of the song, suggesting it plays a central role in the learning process.

Scientists have long known that learning that is powered by external rewards, such as grades, praise or sugary treats, is driven by dopamine—which is thought to chart the differences between expected and experienced rewards. But while they have suspected that self-directed learning is likewise guided by dopamine, it had been difficult to test that hypothesis until now. © 2025 NautilusNext Inc.
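The idea that dopamine "charts the differences between expected and experienced rewards" can be written as one line of arithmetic: the prediction error is the experienced reward minus the expected one, and the expectation is nudged toward reality on each trial. A minimal sketch (an illustration of the textbook reward-prediction-error rule, not the study's actual model; the learning rate and reward values here are assumptions):

```python
# Classic reward-prediction-error update (illustrative sketch).
# The error term plays the role attributed to dopamine: it is large
# when outcomes beat expectations and shrinks as learning proceeds.

def update_expectation(expected, reward, learning_rate=0.1):
    """Return (prediction_error, new_expectation) after one trial."""
    prediction_error = reward - expected          # dopamine-like signal
    new_expectation = expected + learning_rate * prediction_error
    return prediction_error, new_expectation

# Repeated trials with a constant reward of 1.0: the expectation
# climbs toward 1 and the prediction error decays toward 0.
expected = 0.0
for _ in range(100):
    error, expected = update_expectation(expected, reward=1.0)
```

After enough trials the error signal nearly vanishes, which is why a fully learned, fully predicted reward is thought to evoke little dopamine response.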
Keyword: Attention; Sexual Behavior
Link ID: 29800 - Posted: 05.24.2025
By Gina Kolata

Dr. Geoffrey Manley, a neurosurgeon at the University of California, San Francisco, wants the medical establishment to change the way it deals with brain injuries. His work is motivated in part by what happened to a police officer he treated in 2002, just after completing his medical training.

The man arrived at the emergency room unconscious, in a coma. He had been in a terrible car crash while pursuing a criminal. Two days later, Dr. Manley’s mentor said it was time to tell the man’s family there was no hope. His life support should be withdrawn. He should be allowed to die. Dr. Manley resisted. The patient’s brain oxygen levels were encouraging.

Seven days later the policeman was still in a coma. Dr. Manley’s mentor again pressed him to talk to the man’s family about withdrawing life support. Again, Dr. Manley resisted. Ten days after the accident, the policeman began to come out of his coma. Three years later he was back at work and was named San Francisco Police Officer of the Month. In 2010, he was Police Officer of the Year.

“That case, and another like it,” Dr. Manley said, “changed my practice.”

But little has changed in the world of traumatic brain injuries since Dr. Manley’s patient woke up. Assessments of how severely patients are injured and who will recover are pretty much the same. As a result, some patients are told they “just” have a concussion and then have trouble getting care for recurring symptoms like memory lapses or headaches. And some patients, like that policeman, have their life support withdrawn when they might have recovered.

Now, though, Dr. Manley and 93 others from 14 countries are proposing a new way to evaluate patients. They published their classification system Tuesday in the journal Lancet Neurology. © 2025 The New York Times Company
Keyword: Brain Injury/Concussion; Consciousness
Link ID: 29798 - Posted: 05.21.2025
By Mac Shine

The brain is an endlessly dynamic machine; it can wake you from sleep, focus your attention, spark a memory or help you slam on the brakes while driving. But what makes this precision possible? How can the brain dial up just the right amount of alertness or inhibition, and only when it’s needed?

A new study, out today in Nature, may have found part of the answer in an unlikely place: a cluster of small, largely overlooked inhibitory neurons nestled next to one of the brain’s most powerful arousal hubs, the locus coeruleus (LC). Led by Michael R. Bruchas, a neuroscientist at the University of Washington, the study is a tour de force in neural sleuthing, employing methods ranging from viral tracing and electrophysiology to imaging and behavior to map an elusive cell population known as the pericoeruleus.

In a world where we’re constantly being pinged, alerted, nudged and notified, the ability to not react—to gate our arousal and filter our responses—may be one of the brain’s most underappreciated superpowers. Here I discuss the results with Bruchas—and what he and his team found is remarkable.

Far from being a passive neighbor to the LC, the pericoeruleus appears to act as a kind of micromanager of arousal, selectively inhibiting different subgroups of LC neurons depending on the behavioral context. If the LC is like a floodlight that bathes the brain in noradrenaline—raising alertness, sharpening perception and mobilizing attention—then the pericoeruleus may be the finely tuned lens that directs where and when that light shines. It’s a subtle but powerful form of control, and one that challenges traditional views of how the LC operates.

For decades, the LC has been thought of primarily as a global broadcaster: When it fires, it releases norepinephrine widely across the cortex, preparing the brain for action. But this new work is the latest in a recent line of inquiry challenging that picture—suggesting that the system is more complex and nuanced than previously thought. “We’re beginning to see that the locus coeruleus doesn’t just flood the brain with arousal – it targets specific outputs, and the pericoeruleus plays a key role in gating that process,” said Li Li, one of the co-first authors of the paper and a former postdoctoral researcher in Bruchas’ lab, now assistant professor of anesthesiology at Seattle Children’s Hospital. © 2025 Simons Foundation
Keyword: Attention
Link ID: 29787 - Posted: 05.14.2025
By Asher Elbein

True friends, most people would agree, are there for each other. Sometimes that means offering emotional support. Sometimes it means helping each other move. And if you’re a superb starling — a flamboyant, chattering songbird native to the African savanna — it means stuffing bugs down the throats of your friends’ offspring, secure in the expectation that they’ll eventually do the same for yours.

Scientists have long known that social animals usually put blood relatives first. But for a study published Wednesday in the journal Nature, researchers crunched two decades of field data to show that unrelated members of a superb starling flock often help each other raise chicks, trading assistance to one another over years in a behavior that was not previously known. “We think that these reciprocal helping relationships are a way to build ties,” said Dustin Rubenstein, a professor of ecology at Columbia University and an author of the paper.

Superb starlings are distinctive among animals that breed cooperatively, said Alexis Earl, a biologist at Cornell University and an author of the paper. Their flocks mix family groups with immigrants from other groups. New parents rely on up to 16 helpers, which bring chicks extra food and help run off predators.

Dr. Rubenstein’s lab has maintained a 20-year field study of the species that included 40 breeding seasons. It has recorded thousands of interactions between hundreds of the chattering birds and collected DNA to examine their genetic relationships. When Dr. Earl, then a graduate student in the lab, began crunching the data, she and her colleagues weren’t shocked to see that birds largely helped relatives, the way an aunt or uncle may swoop in to babysit and give parents a break. © 2025 The New York Times Company
Keyword: Evolution; Emotions
Link ID: 29780 - Posted: 05.10.2025
By Carl Zimmer

Consciousness may be a mystery, but that doesn’t mean that neuroscientists don’t have any explanations for it. Far from it. “In the field of consciousness, there are already so many theories that we don’t need more theories,” said Oscar Ferrante, a neuroscientist at the University of Birmingham.

If you’re looking for a theory to explain how our brains give rise to subjective, inner experiences, you can check out Adaptive Resonance Theory. Or consider Dynamic Core Theory. Don’t forget First Order Representational Theory, not to mention semantic pointer competition theory. The list goes on: A 2021 survey identified 29 different theories of consciousness.

Dr. Ferrante belongs to a group of scientists who want to lower that number, perhaps even down to just one. But they face a steep challenge, thanks to how scientists often study consciousness: Devise a theory, run experiments to build evidence for it, and argue that it’s better than the others. “We are not incentivized to kill our own ideas,” said Lucia Melloni, a neuroscientist at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

Seven years ago, Dr. Melloni and 41 other scientists embarked on a major study on consciousness that she hoped would break this pattern. Their plan was to bring together two rival groups to design an experiment to see how well both theories did at predicting what happens in our brains during a conscious experience. The team, called the Cogitate Consortium, published its results on Wednesday in the journal Nature. But along the way, the study became subject to the same sharp-elbowed conflicts they had hoped to avoid.

Dr. Melloni and a group of like-minded scientists began drawing up plans for their study in 2018. They wanted to try an approach known as adversarial collaboration, in which scientists with opposing theories join forces with neutral researchers. The team chose two theories to test. © 2025 The New York Times Company
Keyword: Consciousness
Link ID: 29773 - Posted: 05.03.2025
By Anil Seth

On stage in New York a couple of years ago, noted neuroscientist Christof Koch handed a very nice bottle of Madeira wine to philosopher David Chalmers. Chalmers had won a quarter-century-long bet about consciousness—or at least our understanding of it. The philosopher had challenged the neuroscientist in 1998—with a crate of fine wine on the line—that in 25 years, science would still not have located the seat of consciousness in the brain. The philosopher was right.

But not without an extraordinary—and revealing—effort on the part of consciousness researchers and theorists. Backing up that concession were the results of a long and thorough “adversarial collaboration” that compared two leading theories about consciousness, testing each with rigorous experimental data. Now we finally learn more about the details of this work in a new paper in the journal Nature.

Nicknamed COGITATE, the collaboration pitted “global neuronal workspace theory” (GNWT)—an idea advocated by cognitive neuroscientist Stanislas Dehaene, which associates consciousness with the broadcast of information throughout large swathes of the brain—against “integrated information theory” (IIT)—the idea from neuroscientist Giulio Tononi, which identifies consciousness with the intrinsic cause-and-effect power of brain networks.

The adversarial collaboration involved the architects of both theories sitting down together, along with other researchers who would lead and execute the project (hats off to them), to decide on experiments that could potentially distinguish between the theories—ideally supporting one and challenging the other. Deciding on the theory-based predictions, and on experiments good enough to test them, was never going to be easy. In consciousness research, it is especially hard since—as philosopher Tim Bayne and I noted—theories often make different assumptions, and attempt to explain different things even if, on the face of it, they are all theories of “consciousness.” © 2025 NautilusNext Inc.
Keyword: Consciousness
Link ID: 29772 - Posted: 05.03.2025
By Allison Parshall

Where in the brain does consciousness originate? Theories abound, but neuroscientists still haven’t coalesced around one explanation, largely because it’s such a hard question to probe with the scientific method. Unlike other phenomena studied by science, consciousness cannot be observed externally. “I observe your behavior. I observe your brain, if I do an intracranial EEG [electroencephalography] study. But I don’t ever observe your experience,” says Robert Chis-Ciure, a postdoctoral researcher studying consciousness at the University of Sussex in England.

Scientists have landed on two leading theories to explain how consciousness emerges: integrated information theory, or IIT, and global neuronal workspace theory, or GNWT. These frameworks couldn’t be more different—they rest on different assumptions, draw from different fields of science and may even define consciousness in different ways, explains Anil K. Seth, a consciousness researcher at the University of Sussex. To compare them directly, researchers organized a group of 12 laboratories called the Cogitate Consortium to test the theories’ predictions against each other in a large brain-imaging study. The result, published in full on Wednesday in Nature, was effectively a draw and raised far more questions than it answered.

The preliminary findings were posted to the preprint server bioRxiv in 2023. And only a few months later, a group of scholars publicly called IIT “pseudoscience” and attempted to excise it from the field. As the dust settles, leading consciousness researchers say that the Cogitate results point to a way forward for understanding how consciousness arises—no matter what theory eventually comes out on top. “We all are very good at constructing castles in the sky” with abstract ideas, says Chis-Ciure, who was not involved in the new study. “But with data, you make those more grounded.” © 2025 SCIENTIFIC AMERICAN
Keyword: Consciousness
Link ID: 29771 - Posted: 05.03.2025
By Yasemin Saplakoglu

In 1943, a pair of neuroscientists were trying to describe how the human nervous system works when they accidentally laid the foundation for artificial intelligence. In their mathematical framework for how systems of cells can encode and process information, Warren McCulloch and Walter Pitts argued that each brain cell, or neuron, could be thought of as a logic device: It either turns on or it doesn’t. A network of such “all-or-none” neurons, they wrote, can perform simple calculations through true or false statements.

“They were actually, in a sense, describing the very first artificial neural network,” said Tomaso Poggio of the Massachusetts Institute of Technology, who is one of the founders of computational neuroscience.

McCulloch and Pitts’ framework laid the groundwork for many of the neural networks that underlie the most powerful AI systems. These algorithms, built to recognize patterns in data, have become so competent at complex tasks that their products can seem eerily human. ChatGPT’s text is so conversational and personal that some people are falling in love. Image generators can create pictures so realistic that it can be hard to tell when they’re fake. And deep learning algorithms are solving scientific problems that have stumped humans for decades. These systems’ abilities are part of the reason the AI vocabulary is so rich in language from human thought, such as intelligence, learning and hallucination.

But there is a problem: The initial McCulloch and Pitts framework is “complete rubbish,” said the science historian Matthew Cobb of the University of Manchester, who wrote the book The Idea of the Brain: The Past and Future of Neuroscience. “Nervous systems aren’t wired up like that at all.” When you poke at even the most general comparison between biological and artificial intelligence — that both learn by processing information across layers of networked nodes — their similarities quickly crumble. © 2025 Simons Foundation
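The all-or-none neuron McCulloch and Pitts described is simple enough to write down directly: it fires (outputs 1) only when the weighted sum of its binary inputs reaches a threshold. A minimal sketch (illustrative; the function names, unit weights and thresholds here are my own choices, not details from the 1943 paper):

```python
# A McCulloch-Pitts "all-or-none" neuron: binary inputs, a threshold,
# and a true/false output.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights, the same neuron computes different logic functions
# depending only on its threshold: 2 gives AND, 1 gives OR.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)
```

Chaining such units gives the "simple calculations through true or false statements" the article describes, which is all the original framework claimed; the mismatch Cobb points to is with real neurons, not with the logic.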
Keyword: Consciousness; Robotics
Link ID: 29770 - Posted: 05.03.2025
By Matt Richtel

So sharp are partisan divisions these days that it can seem as if people are experiencing entirely different realities. Maybe they actually are, according to Leor Zmigrod, a neuroscientist and political psychologist at Cambridge University. In a new book, “The Ideological Brain: The Radical Science of Flexible Thinking,” Dr. Zmigrod explores the emerging evidence that brain physiology and biology help explain not just why people are prone to ideology but how they perceive and share information.

What is ideology?

It’s a narrative about how the world works and how it should work. This potentially could be the social world or the natural world. But it’s not just a story: It has really rigid prescriptions for how we should think, how we should act, how we should interact with other people. An ideology condemns any deviation from its prescribed rules.

You write that rigid thinking can be tempting. Why is that?

Ideologies satisfy the need to try to understand the world, to explain it. And they satisfy our need for connection, for community, for just a sense that we belong to something. There’s also a resource question. Exploring the world is really cognitively expensive, and just exploiting known patterns and rules can seem to be the most efficient strategy. Also, many people argue — and many ideologies will try to tell you — that adhering to rules is the only good way to live and to live morally. I actually come at it from a different perspective: Ideologies numb our direct experience of the world. They narrow our capacity to adapt to the world, to understand evidence, to distinguish between credible evidence and not credible evidence. Ideologies are rarely, if ever, good.

In the book, you describe research showing that ideological thinkers can be less reliable narrators. Can you explain?

Remarkably, we can observe this effect in children. In the 1940s, Else Frenkel-Brunswik, a psychologist at the University of California, Berkeley, interviewed hundreds of children and tested their levels of prejudice and authoritarianism, like whether they championed conformity and obedience or play and imagination. When children were told a story about new pupils at a fictional school and asked to recount the story later, there were significant differences in what the most prejudiced children remembered, as opposed to the most liberal children. © 2025 The New York Times Company
Keyword: Emotions; Attention
Link ID: 29737 - Posted: 04.09.2025
Alexandra Topping

The benefits of taking drugs for attention deficit hyperactivity disorder outweigh the impact of increases in blood pressure and heart rate, according to a new study. An international team of researchers led by scientists from the University of Southampton found the majority of children taking ADHD medication experienced small increases in blood pressure and pulse rates, but that the drugs had “overall small effects”. They said the study’s findings highlighted the need for “careful monitoring”.

Prof Samuele Cortese, the senior lead author of the study, from the University of Southampton, said the risks and benefits of taking any medication had to be assessed together, but for ADHD drugs the risk-benefit ratio was “reassuring”. “We found an overall small increase in blood pressure and pulse for the majority of children taking ADHD medications,” he said. “Other studies show clear benefits in terms of reductions in mortality risk and improvement in academic functions, as well as a small increased risk of hypertension, but not other cardiovascular diseases. Overall, the risk-benefit ratio is reassuring for people taking ADHD medications.”

About 3 to 4% of adults and 5% of children in the UK are believed to have ADHD, a neurodevelopmental disorder with symptoms including impulsiveness, disorganisation and difficulty focusing, according to the National Institute for Health and Care Excellence (Nice). Doctors can prescribe stimulants, such as methylphenidate, of which the best-known brand is Ritalin. Other stimulant medications used to treat ADHD include lisdexamfetamine and dexamfetamine. Non-stimulant drugs include atomoxetine, a selective norepinephrine reuptake inhibitor, and guanfacine. © 2025 Guardian News & Media Limited
Keyword: ADHD; Drug Abuse
Link ID: 29734 - Posted: 04.09.2025
By Smriti Mallapaty

Neuroscientists have observed for the first time how structures deep in the brain are activated when the brain becomes aware of its own thoughts, known as conscious perception. The brain is constantly bombarded with sights, sounds and other stimuli, but people are only ever aware of a sliver of the world around them — the taste of a piece of chocolate or the sound of someone’s voice, for example.

Researchers have long known that the outer layer of the brain, called the cerebral cortex, plays a part in this experience of being aware of specific thoughts. The involvement of deeper brain structures has been much harder to elucidate, because they can be accessed only with invasive surgery. Designing experiments to test the concept in animals is also tricky. But studying these regions would allow researchers to broaden their theories of consciousness beyond the brain’s outer wrapping, say researchers.

“The field of consciousness studies has evoked a lot of criticism and scepticism because this is a phenomenon that is so hard to study,” says Liad Mudrik, a neuroscientist at Tel Aviv University in Israel. But scientists have increasingly been using systematic and rigorous methods to investigate consciousness, she says.

Aware or not

In a study published in Science today, Mingsha Zhang, a neuroscientist at Beijing Normal University, focused on the thalamus. This region at the centre of the brain is involved in processing sensory information and working memory, and is thought to have a role in conscious perception. The participants were already undergoing therapy for severe and persistent headaches, for which they had thin electrodes implanted deep in their brains. This allowed Zhang and his colleagues to study their brain signals and measure conscious awareness. © 2025 Springer Nature Limited
Keyword: Consciousness
Link ID: 29731 - Posted: 04.05.2025
By Christina Caron

Health Secretary Robert F. Kennedy Jr. has often criticized prescription stimulants, such as Adderall, that are primarily used to treat attention deficit hyperactivity disorder. “We have damaged this entire generation,” he said last year during a podcast, referring to the number of children taking psychiatric medications. “We have poisoned them.” In February, the “Make America Healthy Again” commission, led by Mr. Kennedy, announced plans to evaluate the “threat” posed by drugs like prescription stimulants.

But are they a threat? And if so, to whom? Like many medications, prescription stimulants have potential side effects, and there are people who misuse them. Yet these drugs are also considered some of the most effective and well-researched treatments that psychiatry has to offer, said Dr. Jeffrey H. Newcorn, the director of the Division of A.D.H.D. and Learning Disorders at the Icahn School of Medicine at Mount Sinai in New York. Here are some answers to common questions and concerns about stimulants.

What are prescription stimulants?

Prescription stimulants are drugs that help change the way the brain works by increasing the communication among neurons. They are divided into two classes: methylphenidates (like Ritalin, Focalin and Concerta) and amphetamines (like Vyvanse and Adderall). © 2025 The New York Times Company
Keyword: ADHD; Drug Abuse
Link ID: 29723 - Posted: 04.02.2025
By Christina Caron

On TikTok, misinformation about attention deficit hyperactivity disorder can be tricky to spot, according to a new study. The study, published on Wednesday in the journal PLOS One, found that fewer than 50 percent of the claims made in some of the most popular A.D.H.D. videos on TikTok offered information that matched diagnostic criteria or professional treatment recommendations for the disorder. And, the researchers found, even study participants who had already been diagnosed with A.D.H.D. had trouble discerning which information was most reliable.

About half of the TikTok creators included in the study were using the platform to sell products, such as fidget spinners, or services like coaching. None of them were licensed mental health professionals.

The lack of nuance is concerning, said Vasileia Karasavva, a Ph.D. student in clinical psychology at the University of British Columbia in Vancouver and the lead author of the study. If TikTok creators talk about difficulty concentrating, she added, they don’t typically mention that the symptom is not specific to A.D.H.D. or that it could also be a manifestation of a different mental disorder, like depression or anxiety. “The last thing we want to do is discourage people from expressing how they’re feeling, what they’re experiencing and finding community online,” Ms. Karasavva said. “At the same time, it might be that you self-diagnose with something that doesn’t apply to you, and then you don’t get the help that you actually need.”

Ms. Karasavva’s results echo those of a 2022 study that also analyzed 100 popular TikTok videos about A.D.H.D. and found that half of them were misleading. “The data are alarming,” said Stephen P. Hinshaw, a professor of psychology and an expert in A.D.H.D. at the University of California, Berkeley, who was not involved in either study. The themes of the videos might easily resonate with viewers, he added, but “accurate diagnosis takes access, time and money.” © 2025 The New York Times Company
Keyword: ADHD
Link ID: 29714 - Posted: 03.22.2025
By Claudia López Lloreda

For a neuroscientist, the opportunity to record single neurons in people doesn’t knock every day. It is so rare, in fact, that after 14 years of waiting by the door, Florian Mormann says he has recruited just 110 participants—all with intractable epilepsy. All participants had electrodes temporarily implanted in their brains to monitor their seizures.

But the slow work to build this cohort is starting to pay off for Mormann, a group leader at the University of Bonn, and for other researchers taking a similar approach, according to a flurry of studies published in the past year. For instance, certain neurons selectively respond not only to particular scents but also to the words and images associated with them, Mormann and his colleagues reported in October. Other neurons help to encode stimuli, form memories and construct representations of the world, recent work from other teams reveals. Cortical neurons encode specific information about the phonetics of speech, two independent teams reported last year. Hippocampal cells contribute to working memory and map out time in novel ways, two other teams discovered last year, and some cells in the region encode information related to a person’s changing knowledge about the world, a study published in August found.

These studies offer the chance to answer questions about human brain function that remain challenging to answer using animal models, says Ziv Williams, associate professor of neurosurgery at Harvard Medical School, who led one of the teams that worked on speech phonetics. “Concept cells,” he notes by way of example, such as those Mormann identified, or the “Jennifer Aniston” neurons famously described in a 2005 study, have proved elusive in the monkey brain. © 2025 Simons Foundation
Keyword: Attention; Learning & Memory
Link ID: 29709 - Posted: 03.19.2025
By Kelly Servick

New York City—A recent meeting here on consciousness started from a relatively uncontroversial premise: A newly fertilized human egg isn’t conscious, and a preschooler is, so consciousness must emerge somewhere in between. But the gathering, sponsored by New York University (NYU), quickly veered into more unsettled territory. At the Infant Consciousness Conference from 28 February to 1 March, researchers explored when and how consciousness might arise, and how to find out. They also considered hints from recent brain imaging studies that the capacity for consciousness could emerge before birth, toward the end of gestation.

“Fetal consciousness would have been a less central topic at a meeting like this a few years ago,” says Claudia Passos-Ferreira, a bioethicist at NYU who co-organized the gathering. The conversation has implications for how best to care for premature infants, she says, and intersects with thorny issues such as abortion. “Whatever you claim about this, there are some moral implications.”

How to define consciousness is itself the subject of debate. “Each of us might have a slightly different definition,” neuroscientist Lorina Naci of Trinity College Dublin acknowledged at the meeting before describing how she views consciousness—as the capacity to have an experience or a subjective point of view. There’s also vigorous debate about where consciousness arises in the brain and what types of neural activity define it. That makes it hard to agree on specific markers of consciousness in beings—such as babies—that can’t talk about their experience. Further complicating the picture, the nature of consciousness could be different for infants than adults, researchers noted at the meeting. And it may emerge gradually rather than all at once, on different timescales for different individuals.
Keyword: Consciousness; Development of the Brain
Link ID: 29703 - Posted: 03.12.2025
By Mark Humphries

There are many ways neuroscience could end. Prosaically, society may just lose interest. Of all the ways we can use our finite resources, studying the brain has only recently become one; it may one day return to dust. Other things may take precedence, like feeding the planet or preventing an asteroid strike. Or neuroscience may end as an incidental byproduct, one of the consequences of war or of thoughtlessly disassembling a government or of being sideswiped by a chunk of space rock.

We would prefer it to end on our own terms. We would like neuroscience to end when we understand the brain. Which raises the obvious question: Is this possible? For the answer to be yes, three things need to be true: that there is a finite amount of stuff to know, that stuff is physically accessible and that we understand all the stuff we obtain. But each of these we can reasonably doubt.

The existence of a finite amount of knowledge is not a given. Some arguments suggest that an infinite amount of knowledge is not only possible but inevitable. Physicist David Deutsch proposes the seemingly innocuous idea that knowledge grows when we find a good explanation for a phenomenon, an explanation whose details are hard to vary without changing its predictions and hence breaking it as an explanation. Bad explanations are those whose details can be varied without consequence. Ancient peoples attributing the changing seasons to the gods is a bad explanation, for those gods and their actions can be endlessly varied without altering the existence of four seasons occurring in strict order. Our attributing the changing seasons to the Earth’s tilt in its orbit of the sun is a good explanation, for if we omit the tilt, we lose the four seasons and the opposite patterns of seasons in the Northern and Southern hemispheres. A good explanation means we have nailed down some property of the universe sufficiently well that something can be built upon it.

© 2025 Simons Foundation
Keyword: Consciousness
Link ID: 29702 - Posted: 03.12.2025
By Felicity Nelson

A region in the brainstem, called the median raphe nucleus, contains neurons that control perseverance and exploration. Credit: K H Fung/Science Photo Library

Whether mice persist with a task, explore new options or give up comes down to the activity of three types of neuron in the brain. In experiments, researchers at University College London (UCL) were able to control the three behaviours by switching the neurons on and off in a part of the animals’ brainstem called the median raphe nucleus. The findings are reported in Nature today [1].

“It’s quite remarkable that manipulation of specific neural subtypes in the median raphe nucleus mediates certain strategic behaviours,” says neuroscientist Roger Marek at the Queensland Brain Institute in Brisbane, Australia, who was not involved in the work.

Whether these behaviours are controlled in the same way in humans needs to be confirmed, but if they are, this could be relevant to certain neuropsychiatric conditions that are associated with imbalances in the three behavioural strategies, says Sonja Hofer, a co-author of the paper and a systems neuroscientist at UCL. For instance, an overly high drive to persist with familiar actions and repetitive behaviours can be observed in people with obsessive–compulsive disorder and autism, she says. Conversely, pathological disengagement and lack of motivation are symptoms of major depressive disorder, and an excessive drive to explore and inability to persevere with a task is seen in attention deficit hyperactivity disorder.

“It could be that changes in the firing rate of specific median raphe cell types could contribute to certain aspects of these conditions,” says Hofer.

© 2025 Springer Nature Limited
Keyword: Attention
Link ID: 29696 - Posted: 03.08.2025
By Ingrid Wickelgren

After shuffling the cards in a standard 52-card deck, Alex Mullen, a three-time world memory champion, can memorize their order in under 20 seconds. As he flips through the cards, he takes a mental walk through a house. At each point in his journey — the mailbox, front door, staircase and so on — he attaches a card. To recall the cards, he relives the trip.

This technique, called “method of loci” or “memory palace,” is effective because it mirrors the way the brain naturally constructs narrative memories: Mullen’s memory for the card order is built on the scaffold of a familiar journey. We all do something similar every day, as we use familiar sequences of events, such as the repeated steps that unfold during a meal at a restaurant or a trip through the airport, as a home for specific details — an exceptional appetizer or an object flagged at security. The general narrative makes the noteworthy features easier to recall later. “You are taking these details and connecting them to this prior knowledge,” said Christopher Baldassano, a cognitive neuroscientist at Columbia University. “We think this is how you create your autobiographical memories.”

Psychologists empirically introduced this theory some 50 years ago, but proof of such scaffolds in the brain was missing. Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events. He believes that the brain builds a rich library of scripts for expected scenarios — restaurant or airport, business deal or marriage proposal — over a person’s lifetime. These standardized scripts, and departures from them, influence how and how well we remember specific instances of these event types, his lab has found.

And recently, in a paper published in Current Biology in fall 2024, they showed that individuals can select a dominant script for a complex, real-world event — for example, while watching a marriage proposal in a restaurant, we might opt, subconsciously, for either a proposal or a restaurant script — which determines what details we remember.

© 2025 Simons Foundation
Keyword: Learning & Memory; Attention
Link ID: 29685 - Posted: 02.26.2025