Links for Keyword: Attention

Links 1 - 20 of 736

By Diana Kwon The ability to conjure pictures in the mind’s eye enables us to remember the past and imagine the future. It also allows us to plan, navigate and create works of art. In a study published April 9 in Science, researchers report that imagining an object reactivates some of the same neurons involved in seeing it in the first place, providing new insight into how mental imagery is produced in the brain. Previous research had hinted that the neurons involved in perceiving and imagining images overlapped. These studies used various methods, such as asking participants to view and then imagine pictures while lying in a functional MRI scanner, to show that the same brain regions were involved in these processes. But whether the same individual neurons were involved remained an open question, says Ueli Rutishauser, a neuroscientist at Cedars-Sinai Medical Center in Los Angeles. Because measuring neuronal activity requires electrodes in the brain, Rutishauser and colleagues studied 16 adults with epilepsy who had already had electrodes temporarily implanted into their brains to identify the origin of their seizures. Participants viewed hundreds of images from five categories — faces, text, plants, animals and everyday objects — while researchers recorded activity from over 700 neurons in the ventral temporal cortex, a region involved in representing visual objects. Of those, about 450 selectively responded to individual categories. Machine learning then revealed that 80 percent of those category-responsive neurons were selective to specific visual features within the images. © Society for Science & the Public 2000–2026.

Related chapters from BN: Chapter 10: Vision: From Eye to Brain; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 7: Vision: From Eye to Brain; Chapter 14: Attention and Higher Cognition
Link ID: 30195 - Posted: 04.11.2026

By Mac Shine The brain is arguably the most complex object in the known universe, and neuroscience—the discipline charged with understanding it—has grown to match that complexity. Today, the field spans everything from the molecular choreography of a single synapse to the large-scale network dynamics that give rise to conscious experience. It is simultaneously one of the most exciting and most disorienting fields to work in. The conceptual map that connects our different subfields hasn’t been written yet. But a new study published in Aperture Neuro in February takes a remarkable step toward drawing that map. Led by Mario Senden, a computational neuroscientist at Maastricht University, the work applies state-of-the-art text embedding and community detection algorithms to nearly half a million neuroscience abstracts published between 1999 and 2023. It carves the literature into 175 distinct research clusters, characterizing each one along dimensions ranging from spatial scale to theoretical orientation. What emerges is a portrait of a discipline that is, in many ways, healthier than it might appear from the inside. Despite its staggering diversity—clusters range from AMPA receptor trafficking to the neural underpinnings of consciousness—the field is remarkably well integrated; the vast majority of research communities actively draw on and feed into one another. The clusters devoted to resting-state functional MRI dynamics and to the molecular mechanisms of hippocampal plasticity emerge as two of the field’s great intellectual hubs, providing conceptual and methodological scaffolding for dozens of downstream communities. But the map also has its fault lines. Microscale and macroscale research communities operate in two largely separate epistemic worlds, divided by spatial scale and by the training trajectories that produce different kinds of neuroscientists. Temporal scales are integrated only pairwise, never holistically. And perhaps most provocatively: Not a single cluster in the entire 175-cluster solution is organized around a theoretical framework. The Bayesian brain, the free energy principle and predictive coding are common targets of empirical science, yet none of them anchor their own research community. Theory, it seems, is something neuroscience does around the edges of the phenomena it is really interested in. © 2026 Simons Foundation
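
The general approach described above (embed each abstract as a vector, link similar abstracts, then let community detection carve the resulting graph into research clusters) can be illustrated with a minimal sketch. This is not the pipeline from the Aperture Neuro study; the embedding model, the graph construction, and the toy abstracts below are assumptions chosen purely for illustration.

```python
# Minimal conceptual sketch: embed abstracts, build a similarity-weighted graph,
# and detect communities. Not the authors' pipeline; the model choice and the
# toy abstracts are illustrative assumptions.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.metrics.pairwise import cosine_similarity
from networkx.algorithms.community import greedy_modularity_communities
import networkx as nx

abstracts = [
    "AMPA receptor trafficking during hippocampal long-term potentiation ...",
    "Resting-state functional MRI dynamics across large-scale cortical networks ...",
    "Predictive coding accounts of perceptual inference in visual cortex ...",
    # the real study analyzed nearly half a million abstracts from 1999-2023
]

# 1) Embed each abstract as a dense vector.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, for illustration only
embeddings = model.encode(abstracts)

# 2) Build a graph whose edge weights are pairwise similarities between abstracts.
similarity = cosine_similarity(embeddings)
graph = nx.Graph()
graph.add_nodes_from(range(len(abstracts)))
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        graph.add_edge(i, j, weight=float(similarity[i, j]))

# 3) Community detection partitions the graph into "research clusters".
for k, members in enumerate(greedy_modularity_communities(graph, weight="weight")):
    print(f"cluster {k}: abstracts {sorted(members)}")
```

On a corpus of the size described above, each resulting community would then be characterized along dimensions such as spatial scale and methods to produce the kind of map the study reports.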

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 1: Cells and Structures: The Anatomy of the Nervous System; Chapter 14: Attention and Higher Cognition
Link ID: 30192 - Posted: 04.08.2026

Carlo Iacono Everyone is panicking about the death of reading. The statistics look damning: the share of Americans who read for pleasure on an average day has fallen by more than 40 per cent over the past 20 years, according to research published in iScience this year. The OECD calls the 2022 decline in educational outcomes ‘unprecedented’ across developed nations. In the OECD’s latest adult-skills survey, Denmark and Finland were the only participating countries where average literacy proficiency improved over the past decade. Your nephew speaks in TikTok references. Democracy itself apparently hangs by the thread of our collective attention span. This narrative has a seductive simplicity. Screens are destroying civilisation. Children can no longer think. We are witnessing the twilight of the literate mind. A recent Substack essay by James Marriott proclaimed the arrival of a ‘post-literate society’ and invited us to accept this as a fait accompli. (Marriott does also write for The Times.) The diagnosis is familiar: technology has fundamentally degraded our capacity for sustained thought, and there’s nothing to be done except write elegiac essays from a comfortable distance. I spend my working life in a university library, watching how people actually engage with information. What I observe doesn’t match this narrative. Not because the problems aren’t real, but because the diagnosis is wrong. The declinist position rests on a category error: treating ‘screen culture’ as a unified phenomenon with inherent cognitive properties. As if the same device that delivers algorithmically curated rage-bait and also the complete works of Shakespeare is itself the problem rather than how we decide to use it. © Aeon Media Group Ltd. 2012-2026.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 30132 - Posted: 02.21.2026

Elizabeth Quill Think about your breakfast this morning. Can you imagine the pattern on your coffee mug? The sheen of the jam on your half-eaten toast? Most of us can call up such pictures in our minds. We can visualize the past and summon images of the future. But for an estimated 4% of people, this mental imagery is weak or absent. When researchers ask them to imagine something familiar, they might have a concept of what it is, and words and associations might come to mind, but they describe their mind’s eye as dark or even blank. Systems neuroscientist Mac Shine at the University of Sydney, Australia, first realized that his mental experience differed in this way in 2013. He and his colleagues were trying to understand how certain types of hallucination come about, and were discussing the vividness of mental imagery. “When I close my eyes, there’s absolutely nothing there,” Shine recalls telling his colleagues. They immediately asked him what he was talking about. “Whoa. What’s going on?” Shine thought. Neither he nor his colleagues had realized how much variation there is in the experiences people have when they close their eyes. This moment of revelation is common to many people who don’t form mental images. They report that they might never have thought about this aspect of their inner life if not for a chance conversation, a high-school psychology class or an article they stumbled across. Although scientists have known for more than a century that mental imagery varies between people, the topic received a surge of attention when, a decade ago, an influential paper coined the term aphantasia to describe the experience of people with no mental imagery. © 2026 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 30107 - Posted: 02.04.2026

By Amy X. Wang Alice, fumbling through Wonderland, comes across a mushroom. One bite of it shrinks her down in size. Chowing on the other side makes her swell up, huge, taller than the treetops. Urgently, Alice sets to work “nibbling first at one and then at the other, and growing sometimes taller and sometimes shorter,” until finally she succeeds in “bringing herself down to her usual height” — whereupon everything feels “quite strange.” Is this Lewis Carroll’s 1865 fantasy tale or … the average body-conscious, improvement-obsessed 2026 Whole Foods shopper? Mushrooms, long venerated in literature as dark transformative forces, have become Goopified. Nowadays, you can chug “adaptogenic mushroom coffee,” slurp “functional mushroom cocoa,” doze off with “mushroom sleep drops” or ingest/imbibe any number of other tinctures in the billion-dollar fungal supplements market that promise to fine-tune, or even totally recalibrate, the self. The latest and hottest items in this booming new retail category are mushroom gummies, gushed over by wellness influencers, spilling out from supermarket shelves right there next to your standard cough drops and protein bars. Fungi have aided medical advances like antibiotics and statins, it’s true, and certain species have shown promising results in fighting Parkinson’s or cancer — but what these pastel gumdrops proffer is a broader, more elliptical “cellular well-being.” The mystique feels intentional on product-makers’ part: Like Carroll’s baffled heroine, maybe you’re meant to be in a bit of thrall to the mysterious, almighty mushroom — lurching through Wonderland, charmed and confused by design. After all, you wonder, what are these ancient, alien creatures, growing in the secret dark? Hippocrates was supposedly using them to cauterize wounds around the 5th century B.C.E. In the Super Mario video games, mushrooms might give you extra lives; in HBO’s “The Last of Us,” they bring about the ruin of human civilization. © 2026 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 4: Development of the Brain
Link ID: 30102 - Posted: 01.31.2026

By Pria Anand I loved literature before I loved medicine, and as a medical student, I often found that my textbooks left me cold, their medical jargon somehow missing the point of profound diseases able to rewrite a person’s life and identity. I was born, I decided, a century too late: I found the stories I craved, not in contemporary textbooks, but in outdated case reports, 18th- and 19th-century descriptions of how the diseases I was studying might shape the life of a single patient. These reports were alive with vivid details: how someone’s vision loss affected their golf game or their smoking habit, their work or their love life. They were all tragedies: Each ended with an autopsy, a patient’s brain dissected to discover where, exactly, the problem lay, to inch closer to an understanding of the geography of the soul. To write these case studies, neurologists awaited the deaths and brains of living patients, robbing their subjects of the ability to choose what would become of their own bodies—the ability to write the endings of their own stories—after they had already been sapped of agency by their illnesses. Among these case reports was one from a forbidding state hospital in the north of Moscow: the story of a 19th-century Russian journalist referred to simply as “a learned man.” The journalist suffered a type of alcoholic dementia because of the brandy he often drank to cure his writer’s block, and he developed a profound amnesia. He could not remember where he was or why. He could win a game of checkers but would forget that he had even played the minute the game ended. In the place of these lost memories, the journalist’s imagination spun elaborate narratives; he believed he had written an article when in fact he had barely begun to conceive it before he became sick, would describe the prior day’s visit to a far-off place when in actuality he had been too weak to get out of bed, and maintained that some of his possessions—kept in a hospital safe—had been taken from him as part of an elaborate heist. Sacks’ journals suggest he injected his own experiences into the stories of his patients. © 2026 NautilusNext Inc.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 30089 - Posted: 01.21.2026

By Claudia López Lloreda A new commentary calls into question a 2024 paper that described a universal pattern of cortical brain oscillations. But that team has provided a more expansive analysis in response and stands by its original conclusions. Both articles were published today in “Matters Arising” in Nature Neuroscience. Ultimately, the back-and-forth suggests that a frequency “motif” may exist, but it may not be as general as the original study proposed, says Aitor Morales-Gregorio, a postdoctoral researcher at Charles University, who was not involved with any of the work. “The [2024] conclusions are way too optimistic about how general and how universal this principle might be.” The 2024 study identified a brain-wave motif in 14 cortical areas in macaques: Alpha and beta rhythms predominated in the deeper layers, whereas gamma bands appeared in the more superficial layers. Because this motif also showed up in marmosets and humans, the researchers speculated that it may be a universal mechanism for cortical computation in primates. “Results typically come with a level of variability, of noise, of uncertainty,” says 2024 study investigator Diego Mendoza-Halliday, assistant professor of neuroscience at the University of Pittsburgh. But this pattern “was just there the whole time, at all times, in many, many of the recordings.” The team leveraged the findings to create an algorithm that detects Layer 4 of the cortex. But the pattern is “by no means universal,” according to the new commentary, which found the motif in about 60 percent of the recordings in an independent monkey dataset. Further, the algorithm trained to identify Layer 4 of the cortex is unreliable, the commentary shows. © 2025 Simons Foundation

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 14: Attention and Higher Cognition
Link ID: 30044 - Posted: 12.13.2025

By Nora Bradford Here are three words: pine, crab, sauce. There’s a fourth word that combines with each of the others to create another common word. What is it? When the answer finally comes to you, it’ll likely feel instantaneous. You might even say “Aha!” This kind of sudden realization is known as insight, and a research team recently uncovered how the brain produces it, which suggests why insightful ideas tend to stick in our memory. Maxi Becker, a cognitive neuroscientist at Duke University, first got interested in insight after reading the landmark 1962 book The Structure of Scientific Revolutions by the historian and philosopher of science Thomas Kuhn. “He describes how some ideas are so powerful that they can completely shift the way an entire field thinks,” she said. “That got me wondering: How does the brain come up with those kinds of ideas? How can a single thought change how we see the world?” Such moments of insight are written across history. According to the Roman architect and engineer Vitruvius, in the third century BCE the Greek mathematician Archimedes suddenly exclaimed “Eureka!” after he slid into a bathtub and saw the water level rise by an amount equal to his submerged volume (although this tale may be apocryphal). In the 17th century, according to lore, Sir Isaac Newton had a breakthrough in understanding gravity after an apple fell on his head. In the early 1900s, Einstein came to a sudden realization that “if a man falls freely, he would not feel his weight,” which led him to his theory of relativity, as he later described in a lecture. Insights are not limited to geniuses: We have these cognitive experiences all the time when solving riddles or dealing with social or intellectual problems. They are distinct from analytical problem-solving, such as the process of doing formulaic algebra, in which you arrive at a solution slowly and gradually as if you’re getting warmer. Instead, insights often follow periods of confusion. You never feel as if you’re getting warmer; rather, you go from cold to hot, seemingly in an instant. Or, as the neuropsychologist Donald Hebb, known for his work building neurobiological models of learning, wrote in the 1940s, sometimes “learning occurs as a single jump, an all-or-none affair.” © 2025 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 30004 - Posted: 11.08.2025

By Claudia López Lloreda The process of making a decision engages neurons across the entire brain, according to a new mouse dataset created by an international collaboration. “Many, many areas are recruited even for what are arguably rather simple decisions,” says Anne Churchland, professor of neurobiology at the University of California, Los Angeles, and one of the founding members of the collaboration, called the International Brain Laboratory (IBL). The canonical model suggests that the activity underlying vision-dependent decisions goes from the visual thalamus to the primary visual cortex and association areas, and then possibly to the frontal cortex, Churchland says. But the new findings suggest that “maybe there’s more parallel processing and less of a straightforward circuit than we thought.” Churchland and other scientists established the IBL in 2017 out of frustration with small-scale studies of decision-making that analyzed only one or two brain regions at a time. The IBL aimed to study how the brain integrates information and makes a decision at scale. “We came together as a large group with the realization that a large team effort could be transformative in these questions that had been kind of stymieing all of us,” Churchland says. After years of standardizing their methods and instrumentation across the 12 participating labs, the IBL team constructed a brain-wide map of neural activity in mice as they completed a decision-making task. That map, published today in Nature, reveals that the activity associated with choices and motor actions shows up widely across the brain. The same is true for the activity underlying decisions based on prior knowledge, according to a companion paper by the same team, also published today in Nature. © 2025 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29918 - Posted: 09.06.2025

By Dan Falk I’ve been fascinated by time for as long as I can remember. In my undergraduate physics classes, time always lurked in the background—it was the “t” that the professors sprinkled into their equations—but it was never quite clear what time actually was. Years later, I wrote a book about time, but even with chapters on Newton and Einstein, and a solid dose of philosophy, something was missing. For starters, we know clocks and watches work, but how do we tell time? If you’re watching network TV and a commercial break begins, you know you have time to use the bathroom or perhaps make a sandwich—in fact, you can probably arrange to be back in front of the TV just as the ads are ending. What makes you so good at judging these intervals of time? I figured that Dean Buonomano, being a neuroscientist, might have some of the answers. Buonomano is known for developing the idea that the key mechanism is not a single clock-like structure in the brain but rather networks of neurons working together, known as “neural dynamics.” But as Buonomano sees it, the brain does much more than keep track of time; in fact, it might be said to create it. It’s thanks to our brains that we feel time’s “flow,” even though nothing in physics points to such a flow out there in the world. Perhaps even more crucially, the brain allows us to engage in “mental time travel”—the ability to recall past events and imagine future happenings. This capability, he argues, was essential in shaping humanity’s path from the African savannah to today’s globe-spanning civilization. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29851 - Posted: 07.12.2025

Humberto Basilio Mindia Wichert has taken part in plenty of brain experiments as a cognitive-neuroscience graduate student at the Humboldt University of Berlin, but none was as challenging as one he faced in 2023. Inside a stark white room, he stared at a flickering screen that flashed a different image every 10 seconds. His task was to determine what familiar object appeared in each image. But, at least at first, the images looked like nothing more than a jumble of black and white patches. “I’m very competitive with myself,” says Wichert. “I felt really frustrated.” Cognitive neuroscientist Maxi Becker, now at Duke University in Durham, North Carolina, chose the images in an attempt to spark a fleeting mental phenomenon that people often experience but can’t control or fully explain. Study participants puzzling out what is depicted in the images — known as Mooney images, after a researcher who published a set of them in the 1950s — can’t rely on analytical thinking. Instead, the answer must arrive all at once, like a flash of lightning in the dark. Becker asked some of the participants to view the images while lying inside a functional magnetic resonance imaging (fMRI) scanner, so she could track tiny shifts in blood flow corresponding to brain activity. She hoped to determine which regions produce ‘aha!’ moments. Over the past two decades, scientists studying such moments of insight — also known as eureka moments — have used the tools of neuroscience to reveal which regions of the brain are active and how they interact when discovery strikes. They’ve refined the puzzles they use to trigger insight and the measurements they take, in an attempt to turn a self-reported, subjective experience into something that can be documented and rigorously studied. This foundational work has led to new questions, including why some people are more insightful than others, what mental states could encourage insight and how insight might boost memory. © 2025 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29844 - Posted: 06.28.2025

You’ve just gotten home from an exhausting day. All you want to do is put your feet up and zone out to whatever is on television. Though the inactivity may feel like a well-earned rest, your brain is not just chilling. In fact, it is using nearly as much energy as it did during your stressful activity, according to recent research. Sharna Jamadar, a neuroscientist at Monash University in Australia, and her colleagues reviewed research from her lab and others around the world to estimate the metabolic cost of cognition — that is, how much energy it takes to power the human brain. Surprisingly, they concluded that effortful, goal-directed tasks use only 5% more energy than restful brain activity. In other words, we use our brain just a small fraction more when engaging in focused cognition than when the engine is idling. It often feels as though we allocate our mental energy through strenuous attention and focus. But the new research builds on a growing understanding that the majority of the brain’s function goes to maintenance. While many neuroscientists have historically focused on active, outward cognition, such as attention, problem-solving, working memory and decision-making, it’s becoming clear that beneath the surface, our background processing is a hidden hive of activity. Our brains regulate our bodies’ key physiological systems, allocating resources where they’re needed as we consciously and subconsciously react to the demands of our ever-changing environments. “There is this sentiment that the brain is for thinking,” said Jordan Theriault, a neuroscientist at Northeastern University who was not involved in the new analysis. “Where, metabolically, [the brain’s function is] mostly spent on managing your body, regulating and coordinating between organs, managing this expensive system which it’s attached to, and navigating a complicated external environment.” © 2025 Simons Foundation.
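
To put the 5% figure in rough perspective, here is a back-of-envelope calculation. The whole-body and brain power figures are commonly cited estimates, not numbers reported in the article, and are used only as assumptions for the arithmetic.

```latex
% Back-of-envelope only; the ~100 W whole-body resting rate and the brain's
% ~20% share are commonly cited estimates, not values from the study.
P_{\text{brain, rest}} \approx 0.20 \times 100\ \text{W} = 20\ \text{W},
\qquad
\Delta P_{\text{task}} \approx 0.05 \times 20\ \text{W} \approx 1\ \text{W}
```

On those assumptions, focused, goal-directed thinking adds only on the order of a single watt to what the brain already spends at rest.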

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 29825 - Posted: 06.07.2025

Nicola Davis Science correspondent Whether it is doing sums or working out what to text your new date, some tasks produce a furrowed brow. Now scientists say they have come up with a device to monitor such effort: an electronic tattoo, stuck to the forehead. The researchers say the device could prove valuable among pilots, healthcare workers and other professions where managing mental workload is crucial to preventing catastrophes. “For this kind of high-demand and high-stake scenario, eventually we hope to have this real-time mental workload decoder that can give people some warning and alert so that they can self-adjust, or they can ask AI or a co-worker to offload some of their work,” said Dr Nanshu Lu, an author of the research from the University of Texas at Austin, adding the device may not only help workers avoid serious mistakes but also protect their health. Writing in the journal Device, Lu and colleagues describe how using questionnaires to investigate mental workload is problematic, not least because people are poor at objectively judging cognitive effort and because questionnaires are usually administered after a task. Meanwhile, existing electroencephalography (EEG) and electrooculography (EOG) devices, which can be used to assess mental workload by measuring brain waves and eye movements respectively, are wired, bulky and prone to erroneous measurements arising from movements. By contrast, the “e-tattoo” is a lightweight, flexible, wireless device. © 2025 Guardian News & Media Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 11: Emotions, Aggression, and Stress
Link ID: 29815 - Posted: 05.31.2025

By Laura Dattaro One of Clay Holroyd’s most highly cited papers is a null result. In 2005, he tested a theory he had proposed about a brain response to unexpected rewards and disappointments, but the findings—now cited more than 600 times—didn’t match his expectations, he says. In the years since, other researchers have run similar tests, many of which contradicted Holroyd’s results. But in 2021, EEGManyLabs announced that it would redo Holroyd’s original experiment across 13 labs. In their replication effort, the researchers increased the sample size from 17 to 370 people. The results—the first from EEGManyLabs—published in January in Cortex, failed to replicate the null result, effectively confirming Holroyd’s theory. “Fundamentally, I thought that maybe it was a power issue,” says Holroyd, a cognitive neuroscientist at Ghent University. “Now this replication paper quite nicely showed that it was a power issue.” The two-decade tale demonstrates why pursuing null findings and replications—the focus of this newsletter—is so important. Holroyd’s 2002 theory proposed that previously observed changes in dopamine associated with unexpectedly positive or negative results cause neural responses that can be measured with EEG. The more surprising a result, he posited, the larger the response. To test the idea, Holroyd and his colleagues used a gambling-like task in which they told participants the odds of correctly identifying which of four choices would lead to a 10-cent reward. In reality, the reward was random. When participants received no reward, their neural reaction to the negative result was equally strong regardless of which odds they had been given, contradicting the theory. © 2025 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29814 - Posted: 05.31.2025

By Sara Novak Just a few weeks after they hatch, baby male zebra finches begin to babble, spending much of the day testing their vocal cords. Dad helps out, singing to his hatchlings during feedings, so that the babies can internalize his tune, the same mating refrain shared by all male zebra finches. Soon, these tiny Australian birds begin to rehearse the song itself, repeating it up to 10,000 times a day, without any clear reward other than their increasing perfection of the melody. The baby birds’ painstaking devotion to mastering their song led Duke University neuroscientist Richard Mooney and his Duke colleague John Pearson to wonder whether the birds could help us better understand the nature of self-directed learning. In humans, language and musical expression are thought to be self-directed—spontaneous, adaptive and intrinsically reinforced. In a study recently published in Nature, the scientists tracked the brain signals and levels of dopamine, a neurotransmitter involved in reward and movement, in the brains of five male baby zebra finches while they were singing. They also measured song quality for each rendition the birds sang, in terms of both pitch and vigor, as well as the quality of song performance relative to the bird’s age. What they found is that dopamine levels in the baby birds’ brains closely matched the birds’ performance of the song, suggesting it plays a central role in the learning process. Scientists have long known that learning that is powered by external rewards, such as grades, praise or sugary treats, is driven by dopamine—which is thought to chart the differences between expected and experienced rewards. But while they have suspected that self-directed learning is likewise guided by dopamine, it had been difficult to test that hypothesis until now. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 15: Language and Lateralization
Link ID: 29800 - Posted: 05.24.2025

By Mac Shine The brain is an endlessly dynamic machine; it can wake you from sleep, focus your attention, spark a memory or help you slam on the brakes while driving. But what makes this precision possible? How can the brain dial up just the right amount of alertness or inhibition, and only when it’s needed? A new study, out today in Nature, may have found part of the answer in an unlikely place: a cluster of small, largely overlooked inhibitory neurons nestled next to one of the brain’s most powerful arousal hubs, the locus coeruleus (LC). Led by Michael R. Bruchas, a neuroscientist at the University of Washington, the study is a tour de force in neural sleuthing, employing methods ranging from viral tracing and electrophysiology to imaging and behavior to map an elusive cell population known as the pericoeruleus. In a world where we’re constantly being pinged, alerted, nudged and notified, the ability to not react—to gate our arousal and filter our responses—may be one of the brain’s most underappreciated superpowers. Here I discuss the results with Bruchas—and what he and his team found is remarkable. Far from being a passive neighbor to the LC, the pericoeruleus appears to act as a kind of micromanager of arousal, selectively inhibiting different subgroups of LC neurons depending on the behavioral context. If the LC is like a floodlight that bathes the brain in noradrenaline—raising alertness, sharpening perception and mobilizing attention—then the pericoeruleus may be the finely-tuned lens that directs where and when that light shines. It’s a subtle but powerful form of control, and one that challenges traditional views of how the LC operates. For decades, the LC has been thought of primarily as a global broadcaster: When it fires, it releases norepinephrine widely across the cortex, preparing the brain for action. But this new work is the latest in a recent line of inquiry that has challenged this simplicity—suggesting that the system is more complex and nuanced than previously thought. “We’re beginning to see that the locus coeruleus doesn’t just flood the brain with arousal – it targets specific outputs, and the pericoeruleus plays a key role in gating that process,” said Li Li, one of the co-first authors of the paper and a former postdoctoral researcher in Bruchas’ lab, now assistant professor of anesthesiology at Seattle Children’s Hospital. © 2025 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29787 - Posted: 05.14.2025

By Claudia López Lloreda For a neuroscientist, the opportunity to record single neurons in people doesn’t knock every day. It is so rare, in fact, that after 14 years of waiting by the door, Florian Mormann says he has recruited just 110 participants—all with intractable epilepsy. All participants had electrodes temporarily implanted in their brains to monitor their seizures. But the slow work to build this cohort is starting to pay off for Mormann, a group leader at the University of Bonn, and for other researchers taking a similar approach, according to a flurry of studies published in the past year. For instance, certain neurons selectively respond not only to particular scents but also to the words and images associated with them, Mormann and his colleagues reported in October. Other neurons help to encode stimuli, form memories and construct representations of the world, recent work from other teams reveals. Cortical neurons encode specific information about the phonetics of speech, two independent teams reported last year. Hippocampal cells contribute to working memory and map out time in novel ways, two other teams discovered last year, and some cells in the region encode information related to a person’s changing knowledge about the world, a study published in August found. These studies offer the chance to answer questions about human brain function that remain challenging to answer using animal models, says Ziv Williams, associate professor of neurosurgery at Harvard Medical School, who led one of the teams that worked on speech phonetics. “Concept cells,” he notes by way of example, such as those Mormann identified, or the “Jennifer Aniston” neurons famously described in a 2005 study, have proved elusive in the monkey brain. © 2025 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29709 - Posted: 03.19.2025

By Felicity Nelson A region in the brainstem, called the median raphe nucleus, contains neurons that control perseverance and exploration. Whether mice persist with a task, explore new options or give up comes down to the activity of three types of neuron in the brain. In experiments, researchers at University College London (UCL) were able to control the three behaviours by switching the neurons on and off in a part of the animals’ brainstem called the median raphe nucleus. The findings are reported in Nature today. “It’s quite remarkable that manipulation of specific neural subtypes in the median raphe nucleus mediates certain strategic behaviours,” says neuroscientist Roger Marek at the Queensland Brain Institute in Brisbane, Australia, who was not involved in the work. Whether these behaviours are controlled in the same way in humans needs to be confirmed, but if they are, this could be relevant to certain neuropsychiatric conditions that are associated with imbalances in the three behavioural strategies, says Sonja Hofer, a co-author of the paper and a systems neuroscientist at UCL. For instance, an overly high drive to persist with familiar actions and repetitive behaviours can be observed in people with obsessive–compulsive disorder and autism, she says. Conversely, pathological disengagement and lack of motivation are symptoms of major depressive disorder, and an excessive drive to explore and inability to persevere with a task is seen in attention deficit hyperactivity disorder. “It could be that changes in the firing rate of specific median raphe cell types could contribute to certain aspects of these conditions,” says Hofer. © 2025 Springer Nature Limited

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29696 - Posted: 03.08.2025

Nell Greenfieldboyce People are constantly looking at the behavior of others and coming up with ideas about what might be going on in their heads. Now, a new study of bonobos adds to evidence that they might do the same thing. Specifically, some bonobos were more likely to point to the location of a treat when they knew that a human companion was not aware of where it had been hidden, according to a study which appears in the Proceedings of the National Academy of Sciences. The findings add to a long-running debate about whether humans have a unique ability to imagine and understand the mental states of others. Some researchers say this kind of "theory of mind" may be practiced more widely in the animal kingdom, and potentially watching it in action was quite the experience. "It's quite surreal. I mean, I've worked with primates for quite some years now and you never get used to it," says Luke Townrow, a PhD student at Johns Hopkins University. "We found evidence that they are tailoring their communication based on what I know." Hmmm, where is the grape? To see what bonobos might know about what humans around them know, Townrow worked with Chris Krupenye of Johns Hopkins University to devise a simple experiment. "It's always a challenge for us, that animals don't speak, so we can't just ask them what they're thinking. We have to come up with creative, experimental designs that allow them to express their knowledge," says Krupenye. © 2025 npr

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29658 - Posted: 02.05.2025

By Kristel Tjandra Close your eyes and picture an apple—what do you see? Most people will conjure up a vivid image of the fruit, but for the roughly one in 100 individuals with aphantasia, nothing will appear in the mind’s eye at all. Now, scientists have discovered that in people with this inability to form mental images, visual processing areas of the brain still light up when they try to do so. The study, published today in Current Biology, suggests aphantasia is not caused by a complete deficit in visual processing, as researchers have previously proposed. Visual brain areas are still active when aphantasic people are asked to imagine—but that activity doesn’t translate into conscious experience. The work offers new clues about the neurological differences underlying this little-explored condition. The study authors “take a very strong, mechanistic approach,” says Sarah Shomstein, a vision scientist at George Washington University who was not involved in the study. “It was asking the right questions and using the right methods.” Some scientists suspect aphantasia may be caused by a malfunction in the primary visual cortex, the first area in the brain to process images. “Typically, primary cortex is thought to be the engine of visual perception,” says Joel Pearson, a neuroscientist at the University of New South Wales Sydney who co-led the study. “If you don’t have activity there, you’re not going to have perceptual consciousness.” To see what was going on in this region in aphantasics, the team used functional magnetic resonance imaging to measure the brain activity of 14 people with aphantasia and 18 neurotypical controls as they repeatedly saw two simple patterns, made up of either green vertical lines or red horizontal lines. They then repeated the experiment, this time asking participants to simply imagine the two images.

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 10: Vision: From Eye to Brain
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 7: Vision: From Eye to Brain
Link ID: 29624 - Posted: 01.11.2025