Links for Keyword: Learning & Memory
By Marta Hill Every year, black-capped chickadees perform an impressive game of hide-and-seek. These highly visual birds cache tens of thousands of surplus food morsels and then recover them during leaner times. Place cells in the hippocampus may help the birds keep track of their hidden bounty, according to a study published 11 June in Nature. The cells activate not only when a bird visits a food stash but also when it looks at the stash from far away, the study shows. “What is really profound about the work is it’s trying to unpack how it is that we’re able to combine visual information, which is based on where we currently are in the world, with our understanding of the space around us and how we can navigate it,” says Nick Turk-Browne, professor of psychology and director of the Wu Tsai Institute at Yale University, who was not involved in the study. With each gaze shift, the hippocampus first predicts what the bird is about to see and then reacts to what it actually sees, the study shows. “It really fits beautifully into this picture of this dual role for the system in representing actual and representing possible,” says Loren Frank, professor of physiology and psychiatry at the University of California, San Francisco, who was not involved in the work. The findings help explain how the various functions of the hippocampus—navigation, perception, learning and memory—work together, Turk-Browne adds. “If we can have a smart, abstract representation of place that doesn’t depend on actually physically being there, then you can imagine how this can be used to construct memories.” © 2025 Simons Foundation
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29827 - Posted: 06.14.2025
By Sydney Wyatt Donald Hebb famously proposed in 1949 that when neurons fire together, the synaptic connections between them strengthen, forming the basis for long-term memories. That theory—which held up in experiments in rat hippocampal slice cultures—has shaped how researchers understand synaptic plasticity ever since. But a new computational modeling study adds to mounting evidence that Hebbian plasticity does not always explain how changing neuronal connections enable learning. Rather, behavioral timescale synaptic plasticity (BTSP), which can strengthen synapses even when neurons fire out of sync, better captures the changes seen in CA1 hippocampal cells as mice learn to navigate a new environment, the study suggests. Hebbian spike-timing-dependent plasticity occurs when a neuron fires just ahead of one it synapses onto, leading to a stronger connection between the two cells. BTSP, on the other hand, relies on a complex spike, or a burst of action potentials, in the postsynaptic cell, which triggers a calcium signal that travels across the dendritic arbor. The signal strengthens synaptic connections with the presynaptic cell that were active within seconds of that spike, causing larger changes in synaptic strength. BTSP helps hippocampal cells establish their place fields, the positions at which they fire, previous work suggests. But it was unclear whether it also contributes to learning, says Mark Sheffield, associate professor of neurobiology at the University of Chicago, who led the new study. The new findings suggest that it does—challenging how researchers traditionally think about plasticity mechanisms in the hippocampus, says Jason Shepherd, associate professor of neurobiology at the University of Utah, who was not involved in the research. “The classic rules of plasticity that we have been sort of thinking about for decades may not be actually how the brain works, and that’s a big deal.” © 2025 Simons Foundation
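The contrast between the two rules is easy to state in code. The toy sketch below is not the study's model: the exponential forms, amplitudes and time constants are illustrative assumptions, chosen only to show that STDP cares about millisecond-scale spike order while BTSP tolerates gaps of seconds.

```python
import numpy as np

def stdp_update(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Hebbian spike-timing-dependent plasticity (toy form).
    dt_ms = t_post - t_pre. Pre-before-post (dt_ms > 0) potentiates,
    post-before-pre depresses; the window spans tens of milliseconds."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

def btsp_update(dt_s, a=0.5, tau_s=2.0):
    """Behavioral timescale synaptic plasticity (toy form).
    A postsynaptic complex spike potentiates any presynaptic input
    active within seconds of it, regardless of spike order."""
    return a * np.exp(-abs(dt_s) / tau_s)

# An input active 1.5 s before a complex spike gains essentially
# nothing under STDP but is strongly potentiated under BTSP.
print(stdp_update(1500.0))  # ~0: far outside the STDP window
print(btsp_update(-1.5))    # substantial change despite the gap
```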
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29810 - Posted: 05.28.2025
By Sydney Wyatt The red nucleus—a pale pink brainstem structure that coordinates limb movements in quadruped animals—also projects to brain areas that shape reward-motivated and action-based movements in people, according to a new functional imaging study. The finding suggests the region, like the cerebral cortex, took on a more complex role over the course of evolution. Many researchers had assumed that brainstem structures remained stuck in evolutionarily ancient roles, says Joan Baizer, professor of physiology and biophysics at the University at Buffalo. Activity in the red nucleus, a structure that emerged once animals began to use limbs for walking, coordinates the speed and accuracy of those movements in rats and helps to control posture in monkeys, previous electrophysiological recordings have shown. And in nonhuman primates, neurons in the red nucleus project to the motor cortex and spinal cord, anatomical studies have demonstrated, seemingly confirming the area’s role in motor function. By contrast, the human red nucleus primarily connects to cortical and subcortical regions involved in action control, reward and motivated behavior, the new work reveals. “If this is such a motor structure, why isn’t it projecting to the spinal cord? That doesn’t really fit with our notion of what this structure is supposed to be doing,” says study investigator Samuel Krimmel, a postdoctoral fellow in Nico Dosenbach’s lab. The new imaging suggests that, at least in people, the neural underpinnings of motivated movement—previously considered to be the role of higher-order brain areas—reach “all the way down into the brainstem,” says Dosenbach, professor of neurology at Washington University School of Medicine, who led the work. The findings were published last month in Nature Communications. © 2025 Simons Foundation
Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 5: The Sensorimotor System
Link ID: 29790 - Posted: 05.17.2025
By Ajdina Halilovic When Todd Sacktor was about to turn 3, his 4-year-old sister died of leukemia. “An empty bedroom next to mine. A swing set with two seats instead of one,” he said, recalling the lingering traces of her presence in the house. “There was this missing person — never spoken of — for which I had only one memory.” That memory, faint but enduring, was set in the downstairs den of their home. A young Sacktor asked his sister to read him a book, and she brushed him off: “Go ask your mother.” Sacktor glumly trudged up the stairs to the kitchen. It’s remarkable that, more than 60 years later, Sacktor remembers this fleeting childhood moment at all. The astonishing nature of memory is that every recollection is a physical trace, imprinted into brain tissue by the molecular machinery of neurons. How the essence of a lived moment is encoded and later retrieved remains one of the central unanswered questions in neuroscience. Sacktor became a neuroscientist in pursuit of an answer. At the State University of New York Downstate in Brooklyn, he studies the molecules involved in maintaining the neuronal connections underlying memory. The question that has always held his attention was first articulated in 1984 by the famed biologist Francis Crick: How can memories persist for years, even decades, when the body’s molecules degrade and are replaced in a matter of days, weeks or, at most, months? In 2024, working alongside a team that included his longtime collaborator André Fenton, a neuroscientist at New York University, Sacktor offered a potential explanation in a paper published in Science Advances. The researchers discovered that a persistent bond between two proteins is associated with the strengthening of synapses, which are the connections between neurons. Synaptic strengthening is thought to be fundamental to memory formation. As these proteins degrade, new ones take their place in a connected molecular swap that maintains the bond’s integrity and, therefore, the memory. © 2025 Simons Foundation
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29784 - Posted: 05.11.2025
By Elise Cutts Food poisoning isn’t an experience you’re likely to forget — and now, scientists know why. A study published April 2 in Nature has unraveled neural circuitry in mice that makes food poisoning so memorable. “We’ve all experienced food poisoning at some point … And not only is it terrible in the moment, but it leads us to not eat those foods again,” says Christopher Zimmerman of Princeton University. Luckily, developing a distaste for foul food doesn’t take much practice — one ill-fated encounter with an undercooked enchilada or contaminated hamburger is enough, even if it takes hours or days for symptoms to set in. The same is true for other animals, making food poisoning one of the best ways to study how our brains connect events separated in time, says neuroscientist Richard Palmiter of the University of Washington in Seattle. Mice usually need an immediate reward or punishment to learn something, Palmiter says; even just a minute’s delay between cause (say, pulling a lever) and effect (getting a treat) is enough to prevent mice from learning. Not so for food poisoning. Despite substantial delays, their brains have no trouble associating an unfamiliar food in the past with tummy torment in the present. Researchers knew that a brain region called the amygdala represents flavors and decides whether or not they’re gross. Palmiter’s group had also shown that the gut tells the brain it’s feeling icky by activating specific “alarm” neurons, called CGRP neurons. “They respond to everything that’s bad,” Palmiter says. © Society for Science & the Public 2000–2025.
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 29756 - Posted: 04.23.2025
William Wright & Takaki Komiyama Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades. But how does your brain achieve this incredible feat? In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn. Learning in the brain The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much like how computers use binary code to carry data. These electrical pulses are communicated with other neurons through connections between them called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, where it then integrates all these signals to generate its own electrical pulses. It is the collective activity of these electrical pulses across specific groups of neurons that form the representations of different information and experiences within the brain. For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain. In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear. © 2010–2025, The Conversation US, Inc.
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29754 - Posted: 04.23.2025
By RJ Mackenzie New footage documents microglia pruning synapses at high resolution and in real time. The recordings, published in January, add a new twist to a convoluted debate about the range of these cells’ responsibilities. Microglia are the brain’s resident immune cells. For about a decade, some have also credited them with pruning excess synaptic connections during early brain development. But that idea was based on static images showing debris from destroyed synapses within the cells—which left open the possibility that microglia clean up after neurons do the actual pruning. In the January movies, though, a microglial cell expressing a green fluorescent protein clearly reaches out a ghostly green tentacle to a budding presynapse on a neuron and lifts it away, leaving the neighboring blue axon untouched. “Their imaging is superb,” says Amanda Sierra, a researcher at the Achucarro Basque Center for Neuroscience, who was not involved in the work. But “one single video, or even two single videos, however beautiful they are, are not sufficient evidence that this is the major mechanism of synapse elimination,” she says. In the new study, researchers isolated microglia and neurons from mice and grew them in culture with astrocytes, labeling the microglia, synapses and axons with different fluorescent dyes. Their approach ensured that the microglia formed ramified processes—thin, branching extensions that don’t form when they are cultured in isolation, says Ryuta Koyama, director of the Department of Translational Neurobiology at Japan’s National Center of Neurology and Psychiatry, who led the work. “People now know that ramified processes of microglia are really necessary to pick up synapses,” he says. “In normal culture systems, you can’t find ramified processes. They look amoeboid.” © 2025 Simons Foundation
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29720 - Posted: 03.27.2025
By Ari Daniel Tristan Yates has no doubt about her first memory, even if it is a little fuzzy. "I was about three and a half in Callaway Gardens in Georgia," she recalls, "just running around with my twin sister trying to pick up Easter eggs." But she has zero memories before that, which is typical. This amnesia of our babyhood is pretty much the rule. "We have memories from what happened earlier today and memories from what happened earlier last week and even from a few years ago," says Yates, who's a cognitive neuroscientist at Columbia University. "But all of us lack memories from our infancy." Is that because we don't make memories when we're babies, or is there something else responsible? Now, in new research published by Yates and her colleagues in the journal Science, they propose that babies are able to form memories, even if they become inaccessible later in life. These results might reveal something crucial about the earliest moments of our development. "That's the time when we learn who our parents are, that's when we learn language, that's when we learn how to walk," Yates says. "What happens in your brain in the first two years of life is magnificent," says Nick Turk-Browne, a cognitive neuroscientist at Yale University. "That's the period of by far the greatest plasticity across your whole life span. And better understanding how your brain learns and remembers in infancy lays the foundation for everything you know and do for the rest of your life." © 2025 npr
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29715 - Posted: 03.22.2025
By Laura Sanders There are countless metaphors for memory. It’s a leaky bucket, a steel trap, a file cabinet, words written in sand. But one of the most evocative — and neuroscientifically descriptive — invokes Lego bricks. A memory is like a Lego tower. It’s built from the ground up, then broken down, put away in bins and rebuilt in a slightly different form each time it’s taken out. This metaphor is beautifully articulated by psychologists Ciara Greene and Gillian Murphy in their new book, Memory Lane. Perhaps the comparison speaks to me because I have watched my kids create elaborate villages of Lego bricks, only for them to be dismantled, put away (after much nagging) and reconstructed, always with a similar overall structure but with minor and occasionally major changes. These villages’ blueprints are largely stable, but also fluid and flexible, subject to the material whims of the builders at any point in time. Memory works this way, too, Greene and Murphy propose. Imagine your own memory lane as a series of buildings, modified in ways both small and big each time you call them to mind. “As we walk down Memory Lane, the buildings we pass — our memories of individual events — are under constant reconstruction,” Greene and Murphy write. In accessible prose, the book covers a lot of ground, from how we form memories to how delicate those memories really are. Readers may find it interesting (or perhaps upsetting) to learn how bad we all are at remembering why we did something, from trivial choices, like buying an album, to consequential ones, such as a yes or no vote on an abortion referendum. People change their reasoning — or at least, their memories of their reasoning — on these sorts of events all the time. © Society for Science & the Public 2000–2025
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29712 - Posted: 03.22.2025
By Tim Vernimmen On a rainy day in July 2024, Tim Bliss and Terje Lømo are in the best of moods, chuckling and joking over brunch, occasionally pounding the table to make a point. They’re at Lømo’s house near Oslo, Norway, where they’ve met to write about the late neuroscientist Per Andersen, in whose lab they conducted groundbreaking experiments more than 50 years ago. The duo only ever wrote one research paper together, in 1973, but that work is now considered a turning point in the study of learning and memory. Published in the Journal of Physiology, it was the first demonstration that when a neuron — a cell that receives and sends signals throughout the nervous system — signals to another neuron frequently enough, the second neuron will later respond more strongly to new signals, not for just seconds or minutes, but for hours. It would take decades to fully understand the implications of their research, but Bliss and Lømo had discovered something momentous: a phenomenon called long-term potentiation, or LTP, which researchers now know is fundamental to the brain’s ability to learn and remember. Today, scientists agree that LTP plays a major role in the strengthening of neuronal connections, or synapses, that allow the brain to adjust in response to experience. And growing evidence suggests that LTP may also be crucially involved in a variety of problems, including memory deficits and pain disorders. Bliss and Lømo never wrote another research article together. In fact, they would soon stop working on LTP — Bliss for about a decade, Lømo for the rest of his life. Although the researchers knew they had discovered something important, at first the paper “didn’t make a big splash,” Bliss says. By the early 1970s, neuroscientist Eric Kandel had demonstrated that some simple forms of learning can be explained by chemical changes in synapses — at least in a species of sea slug. But scientists didn’t yet know if such findings applied to mammals, or if they could explain more complex and enduring types of learning, such as the formation of memories that may last for years.
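Their core observation, that a brief burst of high-frequency input leaves the postsynaptic response enhanced long afterward, can be caricatured in a few lines. This is a toy rule with an invented induction threshold and gain, not Bliss and Lømo's analysis:

```python
# Toy LTP rule: a presynaptic train above an induction threshold
# produces a lasting increase in synaptic weight; low-frequency
# test pulses leave the weight unchanged. Numbers are illustrative.
def deliver_train(weight, rate_hz, duration_s,
                  ltp_threshold_hz=50.0, gain=0.02):
    if rate_hz >= ltp_threshold_hz:          # tetanus: induce LTP
        n_spikes = int(rate_hz * duration_s)
        weight *= (1.0 + gain) ** n_spikes   # persistent strengthening
    return weight

w = 1.0
w = deliver_train(w, rate_hz=1.0, duration_s=60.0)   # baseline testing
print(w)                                             # 1.0: no change
w = deliver_train(w, rate_hz=100.0, duration_s=1.0)  # brief 100 Hz tetanus
print(w)                                             # ~7.2: potentiated, and it stays
```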
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29694 - Posted: 03.05.2025
By Ingrid Wickelgren After shuffling the cards in a standard 52-card deck, Alex Mullen, a three-time world memory champion, can memorize their order in under 20 seconds. As he flips through the cards, he takes a mental walk through a house. At each point in his journey — the mailbox, front door, staircase and so on — he attaches a card. To recall the cards, he relives the trip. This technique, called “method of loci” or “memory palace,” is effective because it mirrors the way the brain naturally constructs narrative memories: Mullen’s memory for the card order is built on the scaffold of a familiar journey. We all do something similar every day, as we use familiar sequences of events, such as the repeated steps that unfold during a meal at a restaurant or a trip through the airport, as a home for specific details — an exceptional appetizer or an object flagged at security. The general narrative makes the noteworthy features easier to recall later. “You are taking these details and connecting them to this prior knowledge,” said Christopher Baldassano, a cognitive neuroscientist at Columbia University. “We think this is how you create your autobiographical memories.” Psychologists empirically introduced this theory some 50 years ago, but proof of such scaffolds in the brain was missing. Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events. He believes that the brain builds a rich library of scripts for expected scenarios — restaurant or airport, business deal or marriage proposal — over a person’s lifetime. These standardized scripts, and departures from them, influence how and how well we remember specific instances of these event types, his lab has found. And recently, in a paper published in Current Biology in fall 2024, they showed that individuals can select a dominant script for a complex, real-world event — for example, while watching a marriage proposal in a restaurant, we might opt, subconsciously, for either a proposal or a restaurant script — which determines what details we remember. © 2025 Simons Foundation
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29685 - Posted: 02.26.2025
By Michael S. Rosenwald In early February, Vishvaa Rajakumar, a 20-year-old Indian college student, won the Memory League World Championship, an online competition pitting people against one another with challenges like memorizing the order of 80 random numbers faster than most people can tie a shoelace. The renowned neuroscientist Eleanor Maguire, who died in January, studied mental athletes like Mr. Rajakumar and found that many of them used the ancient Roman “method of loci,” a memorization trick also known as the “memory palace.” The technique takes several forms, but it generally involves visualizing a large house and assigning memories to rooms. Mentally walking through the house fires up the hippocampus, the seahorse-shaped engine of memory deep in the brain that consumed Dr. Maguire’s career. We asked Mr. Rajakumar about his strategies of memorization. His answers, lightly edited and condensed for clarity, are below. Q. How do you prepare for the Memory League World Championship? Hydration is very important because it helps your brain. When you memorize things, you usually sub-vocalize, and it helps to have a clear throat. Let’s say you’re reading a book. You’re not reading it out loud, but you are vocalizing within yourself. If you don’t drink a lot of water, your speed will be a bit low. If you drink a lot of water, it will be more and more clear and you can read it faster. Q. What does your memory palace look like? Let’s say my first location is my room where I sleep. My second location is the kitchen. And the third location is my hall. The fourth location is my veranda. Another location is my bathroom. Let’s say I am memorizing a list of words. Let’s say 10 words. What I do is, I take a pair of words, make a story out of them and place them in a location. And I take the next two words. I make a story out of them. I place them in the second location. The memory palace will help you to remember the sequence. © 2025 The New York Times Company
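His procedure maps naturally onto a small data structure: an ordered route of locations, each holding a story built from a pair of items. A sketch with made-up locations and words:

```python
# Sketch of a memory palace: a fixed walking route through familiar
# locations, each storing a vivid story that encodes a pair of words.
loci = ["bedroom", "kitchen", "hall", "veranda", "bathroom"]
words = ["anchor", "violin", "cactus", "umbrella", "lantern",
         "tiger", "mirror", "rocket", "honey", "ladder"]

palace = {}
for locus, (w1, w2) in zip(loci, zip(words[0::2], words[1::2])):
    palace[locus] = f"a {w1} tangled up with a {w2}"  # the invented story

# Recall: walk the route in order; the fixed sequence of loci
# restores the original sequence of word pairs.
for locus in loci:
    print(f"{locus}: {palace[locus]}")
```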
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29673 - Posted: 02.15.2025
By Angie Voyles Askham Identifying what a particular neuromodulator does in the brain—let alone how such molecules interact—has vexed researchers for decades. Dopamine agonists increase reward-seeking, whereas serotonin agonists decrease it, for example, suggesting that the two neuromodulators act in opposition. And yet, neurons in the brain’s limbic regions release both chemicals in response to a reward (and also to a punishment), albeit on different timescales, electrophysiological recordings have revealed, pointing to a complementary relationship. This dual response suggests that the interplay between dopamine and serotonin may be important for learning. But no tools existed to simultaneously manipulate the neuromodulators and test their respective roles in a particular area of the brain—at least, not until now—says Robert Malenka, professor of psychiatry and behavioral sciences at Stanford University. As it turns out, serotonin and dopamine join forces in the nucleus accumbens during reinforcement learning, according to a new study Malenka led, yet they act in opposition: dopamine as a gas pedal and serotonin as a brake on signaling that a stimulus is rewarding. The mice he and his colleagues studied learned faster and performed more reliably when the team optogenetically pressed on the animals’ dopamine “gas” as they simultaneously eased off the serotonin “brake.” “It adds a very rich and beguiling picture of the interaction between dopamine and serotonin,” says Peter Dayan, director of computational neuroscience at the Max Planck Institute for Biological Cybernetics. In 2002, Dayan proposed a different framework for how dopamine and serotonin might work in opposition, but he was not involved in the new study. The new work “partially recapitulates” that 2002 proposal, Dayan adds, “but also poses many more questions.” © 2025 Simons Foundation
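One way to read the gas-and-brake metaphor is as two opposing gains on a reward-learning update. The sketch below is a conceptual toy inspired by that metaphor, not the study's circuit model; the update rule and every parameter are assumptions:

```python
# Toy value learning with two opposing neuromodulatory gains:
# dopamine scales the reward prediction error up (gas pedal) and
# serotonin scales it down (brake). Purely illustrative.
def update_value(value, reward, dopamine=1.0, serotonin=1.0, base_lr=0.1):
    prediction_error = reward - value
    effective_lr = base_lr * dopamine / serotonin  # gas up, brake off
    return value + effective_lr * prediction_error

v = 0.0
for trial in range(20):  # pressing the gas while easing the brake
    v = update_value(v, reward=1.0, dopamine=2.0, serotonin=0.5)
print(round(v, 3))       # value converges quickly, i.e. faster learning
```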
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29672 - Posted: 02.15.2025
By Michael S. Rosenwald Eleanor Maguire, a cognitive neuroscientist whose research on the human hippocampus — especially those belonging to London taxi drivers — transformed the understanding of memory, revealing that a key structure in the brain can be strengthened like a muscle, died on Jan. 4 in London. She was 54. Her death, at a hospice facility, was confirmed by Cathy Price, her colleague at the U.C.L. Queen Square Institute of Neurology. Dr. Maguire was diagnosed with spinal cancer in 2022 and had recently developed pneumonia. Working for 30 years in a small, tight-knit lab, Dr. Maguire obsessed over the hippocampus — a seahorse-shaped engine of memory deep in the brain — like a meticulous, relentless detective trying to solve a cold case. An early pioneer of using functional magnetic resonance imaging (f.M.R.I.) on living subjects, Dr. Maguire was able to look inside human brains as they processed information. Her studies revealed that the hippocampus can grow, and that memory is not a replay of the past but rather an active reconstructive process that shapes how people imagine the future. “She was absolutely one of the leading researchers of her generation in the world on memory,” Chris Frith, an emeritus professor of neuropsychology at University College London, said in an interview. “She changed our understanding of memory, and I think she also gave us important new ways of studying it.” In 1995, while she was a postdoctoral fellow in Dr. Frith’s lab, she was watching television one evening when she stumbled on “The Knowledge,” a quirky film about prospective London taxi drivers memorizing the city’s 25,000 streets to prepare for a three-year-long series of licensing tests. Dr. Maguire, who said she rarely drove because she feared never arriving at her destination, was mesmerized. “I am absolutely appalling at finding my way around,” she once told The Daily Telegraph. “I wondered, ‘How are some people so bloody good and I am so terrible?’” In the first of a series of studies, Dr. Maguire and her colleagues scanned the brains of taxi drivers while quizzing them about the shortest routes between various destinations in London. © 2025 The New York Times Company
Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29671 - Posted: 02.15.2025
By Yasemin Saplakoglu Imagine you’re on a first date, sipping a martini at a bar. You eat an olive and patiently listen to your date tell you about his job at a bank. Your brain is processing this scene, in part, by breaking it down into concepts. Bar. Date. Martini. Olive. Bank. Deep in your brain, neurons known as concept cells are firing. You might have concept cells that fire for martinis but not for olives. Or ones that fire for bars — perhaps even that specific bar, if you’ve been there before. The idea of a “bank” also has its own set of concept cells, maybe millions of them. And there, in that dimly lit bar, you’re starting to form concept cells for your date, whether you like him or not. Those cells will fire when something reminds you of him. Concept neurons fire for their concept no matter how it is presented: in real life or a photo, in text or speech, on television or in a podcast. “It’s more abstract, really different from what you’re seeing,” said Elizabeth Buffalo, a neuroscientist at the University of Washington. For decades, neuroscientists mocked the idea that the brain could have such intense selectivity, down to the level of an individual neuron: How could there be one or more neurons for each of the seemingly countless concepts we engage with over a lifetime? “It’s inefficient. It’s not economic,” people broadly agreed, according to the neurobiologist Florian Mormann at the University of Bonn. But when researchers identified concept cells in the early 2000s, the laughter started to fade. Over the past 20 years, they have established that concept cells not only exist but are critical to the way the brain abstracts and stores information. New studies, including one recently published in Nature Communications, have suggested that they may be central to how we form and retrieve memory. © 2025 Simons Foundation
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29639 - Posted: 01.22.2025
Rachael Elward & Lauren Ford Severance, which imagines a world where a person’s work and personal lives are surgically separated, will soon return to Apple TV+ for a second season. While the concept of this gripping piece of science fiction is far-fetched, it touches on some interesting neuroscience. Can a person’s mind really be surgically split in two? Remarkably, “split-brain” patients have existed since the 1940s. To control epilepsy symptoms, these patients underwent surgery to separate the left and right hemispheres. Similar surgeries still happen today. Later research on this type of surgery showed that the separated hemispheres of split-brain patients could process information independently. This raises the uncomfortable possibility that the procedure creates two separate minds living in one brain. In season one of Severance, Helly R (Britt Lower) experienced a conflict between her “innie” (the side of her mind that remembered her work life) and her “outie” (the side outside of work). Similarly, there is evidence of a conflict between the two hemispheres of real split-brain patients. When speaking with split-brain patients, you are usually communicating with the left hemisphere of the brain, which controls speech. However, some patients can communicate from their right hemisphere by writing, for example, or arranging Scrabble letters. A young patient was asked what job he would like in the future. His left hemisphere chose an office job making technical drawings. His right hemisphere, however, arranged letters to spell “automobile racer”. Split-brain patients have also reported “alien hand syndrome”, where one of their hands is perceived to be moving of its own volition. These observations suggest that two separate conscious “people” may coexist in one brain and may have conflicting goals. In Severance, however, both the innie and the outie have access to speech. This is one indicator that the fictional “severance procedure” must involve a more complex separation of the brain’s networks. © 2010–2025, The Conversation US, Inc.
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29635 - Posted: 01.18.2025
By Anna Victoria Molofsky Twenty years ago, a remarkable discovery upended our understanding of the range of elements that can shape neuronal function: A team in Europe demonstrated that enzymatic digestion of the extracellular matrix (ECM)—a latticework of proteins that surrounds all brain cells—could restore plasticity to the visual cortex even after the region’s “critical period” had ended. Other studies followed, showing that ECM digestion could also alter learning in the hippocampus and other brain circuits. These observations established that proteins outside neurons can control synaptic plasticity. We now know that up to 20 percent of the brain is extracellular space, filled with hundreds of ECM proteins—a “matrisome” that plays multiple roles, including modulating synaptic function and myelin formation. ECM genes in the human brain are different from those in other species, suggesting that the proteins they encode could be part of what makes our brains unique and keeps them healthy. In a large population study that examined blood protein biomarkers of organ aging, posted as a preprint on bioRxiv last year, for example, the presence of ECM proteins was most highly correlated with a youthful brain. Matrisome proteins are also dysregulated in astrocytes from people at high risk for Alzheimer’s disease, another study showed. Despite the influence of these proteins and the ongoing work of a few dedicated researchers, however, the ECM field has not caught on. I would challenge a room full of neuroscientists to name one protein in the extracellular matrix. To this day, the only ECM components most neuroscientists have heard of are “perineuronal nets”—structures that play an important role in stabilizing synapses but make up just a tiny fraction of the matrisome. A respectable scientific journal, covering its own paper that identified a critical impact of ECM, called it “brain goop.” © 2025 Simons Foundation
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29633 - Posted: 01.18.2025
By Laura Sanders Recovery from PTSD comes with key changes in the brain’s memory system, a new study finds. These differences were found in the brains of 19 people who developed post-traumatic stress disorder after the 2015 terrorist attacks in Paris — and then recovered over the following years. The results, published January 8 in Science Advances, point to the complexity of PTSD, but also to ways that brains can reshape themselves as they recover. With memory tasks and brain scans, the study provides a cohesive look at the recovering brain, says cognitive neuroscientist Vishnu Murty of the University of Oregon in Eugene. “It’s pulled together a lot of pieces that were floating around in the field.” On the night of November 13, 2015, terrorists attacked a crowded stadium, a theater and restaurants in Paris. In the years after, PTSD researchers were able to study some of the people who endured that trauma. Just over half the 100 people who volunteered for the study had PTSD initially. Of those, 34 still had the disorder two to three years later; 19 had recovered by two to three years. People who developed PTSD showed differences in how their brains handled intrusive memories, laboratory-based tests of memory revealed. Participants learned pairs of random words and pictures — a box of tissues with the word “work,” for example. PTSD involves pairs of associated stimuli too, though in much more complicated ways. A certain smell or sound, for instance, can be linked with the memory of trauma. © Society for Science & the Public 2000–2025.
Related chapters from BN: Chapter 15: Emotions, Aggression, and Stress; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 11: Emotions, Aggression, and Stress; Chapter 13: Memory and Learning
Link ID: 29622 - Posted: 01.11.2025
By McKenzie Prillaman A peek into living tissue from human hippocampi, a brain region crucial for memory and learning, revealed relatively few cell-to-cell connections for the vast number of nerve cells. But signals sent via those sparse connections proved extremely reliable and precise, researchers report December 11 in Cell. One seahorse-shaped hippocampus sits deep within each hemisphere of the mammalian brain. In each hippocampus’s CA3 area, humans have about 1.7 million nerve cells called pyramidal cells. This subregion is thought to be the most internally connected part of the brain in mammals. But much information about nerve cells in this structure has come from studies in mice, which have only 110,000 pyramidal cells in each CA3 subregion. Previously discovered differences between mouse and human hippocampi hinted that animals with more nerve cells may have fewer connections — or synapses — between them, says cellular neuroscientist Peter Jonas of the Institute of Science and Technology Austria in Klosterneuburg. To see if this held true, he and his colleagues examined tissue taken with consent from eight patients who underwent brain surgery to treat epilepsy. Recording electrical activity from human pyramidal cells in the CA3 area suggested that about 10 synapses existed for every 800 cell pairs tested. In mice, that concentration roughly tripled. Despite the relatively scant nerve cell connections in humans, those cells showed steady and robust activity when sending signals to one another — unlike mouse pyramidal cells. © Society for Science & the Public 2000–2025
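The reported counts imply a concrete connection probability, worked out below. The mouse figure simply follows the article's "roughly tripled" phrasing, and the final pair count is a back-of-the-envelope estimate, not a number from the paper:

```python
# Connection probability implied by the reported counts.
human_p = 10 / 800     # ~10 connections per 800 cell pairs tested
mouse_p = 3 * human_p  # "roughly tripled" in mice
print(f"human CA3 pairwise connectivity: {human_p:.2%}")  # 1.25%
print(f"mouse CA3 pairwise connectivity: {mouse_p:.2%}")  # 3.75%

# Even at that low rate, 1.7 million pyramidal cells per human CA3
# imply an enormous absolute number of connections.
cells = 1_700_000
ordered_pairs = cells * (cells - 1)
print(f"expected connections: {human_p * ordered_pairs:.2e}")  # ~3.6e10
```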
Related chapters from BN: Chapter 17: Learning and Memory; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29616 - Posted: 01.08.2025
By Traci Watson New clues have emerged in the mystery of how the brain avoids ‘catastrophic forgetting’ — the distortion and overwriting of previously established memories when new ones are created. A research team has found that, at least in mice, the brain processes new and old memories in separate phases of sleep, which might prevent mixing between the two. Assuming that the finding is confirmed in other animals, “I put all my money that this segregation will also occur in humans”, says György Buzsáki, a systems neuroscientist at New York University in New York City. That’s because memory is an evolutionarily ancient system, says Buzsáki, who was not part of the research team but once supervised the work of some of its members. The work was published on Wednesday in Nature. Scientists have long known that, during sleep, the brain ‘replays’ recent experiences: the same neurons involved in an experience fire in the same order. This mechanism helps to solidify the experience as a memory and prepare it for long-term storage. To study brain function during sleep, the research team exploited a quirk of mice: their eyes are partially open during some stages of slumber. The team monitored one eye in each mouse as it slept. During a deep phase of sleep, the researchers observed the pupils shrink and then return to their original, larger size repeatedly, with each cycle lasting roughly one minute. Neuron recordings showed that most of the brain’s replay of experiences took place when the animals’ pupils were small. That led the scientists to wonder whether pupil size and memory processing are linked. To find out, they enlisted a technique called optogenetics, which uses light to either trigger or suppress the electrical activity of genetically engineered neurons in the brain. First, they trained engineered mice to find a sweet treat hidden on a platform. Immediately after these lessons, as the mice slept, the authors used optogenetics to reduce bursts of neuronal firing that have been linked to replay. They did so during both the small-pupil and large-pupil stages of sleep. © 2025 Springer Nature Limited
Related chapters from BN: Chapter 14: Biological Rhythms, Sleep, and Dreaming; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 10: Biological Rhythms and Sleep; Chapter 13: Memory and Learning
Link ID: 29615 - Posted: 01.04.2025