Chapter 17. Learning and Memory
By Marta Hill Every year, black-capped chickadees perform an impressive game of hide-and-seek. These highly visual birds cache tens of thousands of surplus food morsels and then recover them during leaner times. Place cells in the hippocampus may help the birds keep track of their hidden bounty, according to a study published 11 June in Nature. The cells activate not only when a bird visits a food stash but also when it looks at the stash from far away, the study shows. “What is really profound about the work is it’s trying to unpack how it is that we’re able to combine visual information, which is based on where we currently are in the world, with our understanding of the space around us and how we can navigate it,” says Nick Turk-Browne, professor of psychology and director of the Wu Tsai Institute at Yale University, who was not involved in the study. With each gaze shift, the hippocampus first predicts what the bird is about to see and then reacts to what it actually sees, the study shows. “It really fits beautifully into this picture of this dual role for the system in representing actual and representing possible,” says Loren Frank, professor of physiology and psychiatry at the University of California, San Francisco, who was not involved in the work. The findings help explain how the various functions of the hippocampus—navigation, perception, learning and memory—work together, Turk-Browne adds. “If we can have a smart, abstract representation of place that doesn’t depend on actually physically being there, then you can imagine how this can be used to construct memories.” © 2025 Simons Foundation
Keyword: Learning & Memory
Link ID: 29827 - Posted: 06.14.2025
By Laura Dattaro One of Clay Holroyd’s most highly cited papers is a null result. In 2005, he tested a theory he had proposed about a brain response to unexpected rewards and disappointments, but the findings—now cited more than 600 times—didn’t match his expectations, he says. In the years since, other researchers have run similar tests, many of which contradicted Holroyd’s results. But in 2021, EEGManyLabs announced that it would redo Holroyd’s original experiment across 13 labs. In their replication effort, the researchers increased the sample size from 17 to 370 people. The results—the first from EEGManyLabs—published in January in Cortex, failed to replicate the null result, effectively confirming Holroyd’s theory. “Fundamentally, I thought that maybe it was a power issue,” says Holroyd, a cognitive neuroscientist at Ghent University. “Now this replication paper quite nicely showed that it was a power issue.” The two-decade tale demonstrates why pursuing null findings and replications—the focus of this newsletter—is so important. Holroyd’s 2002 theory proposed that previously observed changes in dopamine associated with unexpectedly positive or negative results cause neural responses that can be measured with EEG. The more surprising a result, he posited, the larger the response. To test the idea, Holroyd and his colleagues used a gambling-like task in which they told participants the odds of correctly identifying which of four choices would lead to a 10-cent reward. In reality, the reward was random. When participants received no reward, their neural reaction to the negative result was equally strong regardless of which odds they had been given, contradicting the theory. © 2025 Simons Foundation
Keyword: Attention; Learning & Memory
Link ID: 29814 - Posted: 05.31.2025
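The jump from 17 to 370 participants is, at bottom, a point about statistical power: a small sample has little chance of detecting a small effect. A minimal sketch of that relationship, using a normal approximation to a two-sided one-sample test (the effect size and alpha here are illustrative assumptions, not figures from the Cortex paper):

```python
import math

def power_one_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample test for effect size d
    (Cohen's d) with n participants, via the normal approximation."""
    z_crit = 1.959964  # two-sided critical z for alpha = 0.05
    ncp = d * math.sqrt(n)  # noncentrality parameter grows with sqrt(n)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))  # normal CDF
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# With a hypothetical small effect (d = 0.2), power rises from
# roughly 0.13 at n = 17 to roughly 0.97 at n = 370.
for n in (17, 370):
    print(n, round(power_one_sample(0.2, n), 2))
```

At the original sample size, even a real effect would usually be missed, which is exactly the "power issue" Holroyd describes.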
By Sydney Wyatt Donald Hebb famously proposed in 1949 that when neurons fire together, the synaptic connections between them strengthen, forming the basis for long-term memories. That theory—which held up in experiments in rat hippocampal slice cultures—has shaped how researchers understand synaptic plasticity ever since. But a new computational modeling study adds to mounting evidence that Hebbian plasticity does not always explain how changing neuronal connections enable learning. Rather, behavioral timescale synaptic plasticity (BTSP), which can strengthen synapses even when neurons fire out of sync, better captures the changes seen in CA1 hippocampal cells as mice learn to navigate a new environment, the study suggests. Hebbian spike-timing-dependent plasticity occurs when a neuron fires just ahead of one it synapses onto, leading to a stronger connection between the two cells. BTSP, on the other hand, relies on a complex spike, or a burst of action potentials, in the postsynaptic cell, which triggers a calcium signal that travels across the dendritic arbor. The signal strengthens synaptic connections with the presynaptic cell that were active within seconds of that spike, causing larger changes in synaptic strength. BTSP helps hippocampal cells establish their place fields, the positions at which they fire, previous work suggests. But it was unclear whether it also contributes to learning, says Mark Sheffield, associate professor of neurobiology at the University of Chicago, who led the new study. The new findings suggest that it does—challenging how researchers traditionally think about plasticity mechanisms in the hippocampus, says Jason Shepherd, associate professor of neurobiology at the University of Utah, who was not involved in the research. “The classic rules of plasticity that we have been sort of thinking about for decades may not be actually how the brain works, and that’s a big deal.” © 2025 Simons Foundation
Keyword: Learning & Memory
Link ID: 29810 - Posted: 05.28.2025
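The timescale contrast the item describes, spike-timing-dependent plasticity operating over tens of milliseconds versus BTSP's seconds-wide window around a complex spike, can be caricatured with two toy weight-update rules (the window widths and amplitudes are illustrative assumptions, not fitted values from the study):

```python
import math

def stdp_dw(dt, a=1.0, tau=0.02):
    """Toy STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt > 0), depress otherwise, with a ~20 ms
    eligibility window. Parameters are illustrative."""
    return a * math.exp(-dt / tau) if dt > 0 else -a * math.exp(dt / tau)

def btsp_dw(dt, a=1.0, tau=2.0):
    """Toy BTSP: a complex spike potentiates synapses that were active
    within seconds on either side of it, regardless of spike order."""
    return a * math.exp(-abs(dt) / tau)

# Presynaptic activity one full second before the postsynaptic event is
# invisible to STDP but sits well inside the BTSP window.
print(stdp_dw(1.0), btsp_dw(1.0))
```

The out-of-sync case is the key difference: only the BTSP rule assigns meaningful weight change to inputs active seconds away from the postsynaptic burst.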
By Ajdina Halilovic When Todd Sacktor was about to turn 3, his 4-year-old sister died of leukemia. “An empty bedroom next to mine. A swing set with two seats instead of one,” he said, recalling the lingering traces of her presence in the house. “There was this missing person — never spoken of — for which I had only one memory.” That memory, faint but enduring, was set in the downstairs den of their home. A young Sacktor asked his sister to read him a book, and she brushed him off: “Go ask your mother.” Sacktor glumly trudged up the stairs to the kitchen. It’s remarkable that, more than 60 years later, Sacktor remembers this fleeting childhood moment at all. The astonishing nature of memory is that every recollection is a physical trace, imprinted into brain tissue by the molecular machinery of neurons. How the essence of a lived moment is encoded and later retrieved remains one of the central unanswered questions in neuroscience. Sacktor became a neuroscientist in pursuit of an answer. At the State University of New York Downstate in Brooklyn, he studies the molecules involved in maintaining the neuronal connections underlying memory. The question that has always held his attention was first articulated in 1984 by the famed biologist Francis Crick: How can memories persist for years, even decades, when the body’s molecules degrade and are replaced in a matter of days, weeks or, at most, months? In 2024, working alongside a team that included his longtime collaborator André Fenton, a neuroscientist at New York University, Sacktor offered a potential explanation in a paper published in Science Advances. The researchers discovered that a persistent bond between two proteins is associated with the strengthening of synapses, which are the connections between neurons. Synaptic strengthening is thought to be fundamental to memory formation.
As these proteins degrade, new ones take their place in a connected molecular swap that maintains the bond’s integrity and, therefore, the memory. © 2025 Simons Foundation
Keyword: Learning & Memory
Link ID: 29784 - Posted: 05.11.2025
By Giorgia Guglielmi Newly formed memories change over the course of a night’s sleep, a new study in rats suggests. The results reveal that memory processing and consolidation are more complex and prolonged than previously understood, says study investigator Jozsef Csicsvari, professor of systems neuroscience at the Institute of Science and Technology Austria. Sleep has long been known to help consolidate memories, though most studies have tracked only a few hours of this process. The new work monitored memory-related brain activity patterns across almost an entire day—representing a significant step forward, says Lisa Genzel, associate professor of neuroscience at Radboud University, who wasn’t involved in the research. That’s “a heroic effort,” she says. Csicsvari and his team implanted wireless electrodes into the hippocampus of three rats and recorded neuronal activity as the animals learned to navigate a maze in search of hidden pieces of food, rested or slept for 16 to 20 hours after, and then revisited the same food locations the following day. The neurons that fired during learning became active again throughout the rest period, especially during sleep, the team found. This reactivation is a key part of memory consolidation, and it doesn’t just happen immediately after learning; instead, it continues for hours, the study shows. And while the animals slept, their brain activity patterns gradually shifted to resemble the post-sleep recall patterns—a change known as “representational drift” that likely helps the brain weave new information into what it already knows, Csicsvari says. Some neuron groups may be more involved than others in updating memories, the work showed. Some cell types remained stable, whereas others changed their activity. For example, hippocampal neurons called CA1 pyramidal cells showed distinct firing patterns during memory reactivation. And interneurons, too, appeared to play a supporting role, mirroring the changes in pyramidal cells.
The team published their findings in Neuron in March. © 2025 Simons Foundation
Keyword: Sleep; Learning & Memory
Link ID: 29777 - Posted: 05.07.2025
By Elise Cutts Food poisoning isn’t an experience you’re likely to forget — and now, scientists know why. A study published April 2 in Nature has unraveled neural circuitry in mice that makes food poisoning so memorable. “We’ve all experienced food poisoning at some point … And not only is it terrible in the moment, but it leads us to not eat those foods again,” says Christopher Zimmerman of Princeton University. Luckily, developing a distaste for foul food doesn’t take much practice — one ill-fated encounter with an undercooked enchilada or contaminated hamburger is enough, even if it takes hours or days for symptoms to set in. The same is true for other animals, making food poisoning one of the best ways to study how our brains connect events separated in time, says neuroscientist Richard Palmiter of the University of Washington in Seattle. Mice usually need an immediate reward or punishment to learn something, Palmiter says; even just a minute’s delay between cause (say, pulling a lever) and effect (getting a treat) is enough to prevent mice from learning. Not so for food poisoning. Despite substantial delays, their brains have no trouble associating an unfamiliar food in the past with tummy torment in the present. Researchers knew that a brain region called the amygdala represents flavors and decides whether or not they’re gross. Palmiter’s group had also shown that the gut tells the brain it’s feeling icky by activating specific “alarm” neurons, called CGRP neurons. “They respond to everything that’s bad,” Palmiter says. © Society for Science & the Public 2000–2025.
Keyword: Learning & Memory; Emotions
Link ID: 29756 - Posted: 04.23.2025
William Wright & Takaki Komiyama Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades. But how does your brain achieve this incredible feat? In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn. Learning in the brain The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much like how computers use binary code to carry data. These electrical pulses are communicated to other neurons through connections between them called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, where the cell then integrates all these signals to generate its own electrical pulses. It is the collective activity of these electrical pulses across specific groups of neurons that forms the representations of different information and experiences within the brain. For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain. In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear.
© 2010–2025, The Conversation US, Inc.
Keyword: Learning & Memory
Link ID: 29754 - Posted: 04.23.2025
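The decades-old picture the authors describe, in which a synapse strengthens in proportion to coincident activity on both sides of it, reduces to a one-line Hebbian update. A toy sketch (the weights, activities, and learning rate are made up for illustration):

```python
def hebbian_step(weights, pre, post, eta=0.1):
    """Strengthen each synapse in proportion to coincident pre- and
    postsynaptic activity: the classic 'fire together, wire together'."""
    return [w + eta * p * post for w, p in zip(weights, pre)]

weights = [0.5, 0.5, 0.5]
pre_activity = [1.0, 0.0, 1.0]   # inputs 0 and 2 are active
post_activity = 1.0              # the neuron fires

weights = hebbian_step(weights, pre_activity, post_activity)
print(weights)  # only the co-active synapses strengthen
```

The credit assignment problem the authors raise is visible even here: this rule treats every co-active synapse identically, with no way to single out the connections that actually mattered for the outcome.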
By Gayoung Lee edited by Allison Parshall Crows sometimes have a bad rap: they’re said to be loud and disruptive, and myths surrounding the birds tend to link them to death or misfortune. But crows deserve more love and charity, says Andreas Nieder, a neurophysiologist at the University of Tübingen in Germany. They not only can be incredibly cute, cuddly and social but also are extremely smart—especially when it comes to geometry, as Nieder has found. In a paper published on Friday in Science Advances, Nieder and his colleagues report that crows display an impressive aptitude for distinguishing shapes by using geometric irregularities as a cognitive cue. These crows could even discern quite subtle differences. For the experiment, the crows perched in front of a digital screen that, almost like a video game, displayed progressively more complex combinations of shapes. First, the crows were taught to peck at a certain shape for a reward. Then they were presented with that same shape among five others—for example, one star shape placed among five moon shapes—and were rewarded if they correctly picked the “outlier.” “Initially [the outlier] was very obvious,” Nieder says. But once the crows appeared to have familiarized themselves with how the “game” worked, Nieder and his team introduced more similar quadrilateral shapes to see if the crows would still be able to identify outliers. “And they could tell us, for instance, if they saw a figure that was just not a square, slightly skewed, among all the other squares,” Nieder says. “They really could do this spontaneously [and] discriminate the outlier shapes based on the geometric differences without us needing to train them additionally.” Even when the researchers stopped rewarding them with treats, the crows continued to peck the outliers. © 2024 SCIENTIFIC AMERICAN,
Keyword: Evolution; Intelligence
Link ID: 29741 - Posted: 04.12.2025
By Yasemin Saplakoglu Humans tend to put our own intelligence on a pedestal. Our brains can do math, employ logic, explore abstractions and think critically. But we can’t claim a monopoly on thought. Among a variety of nonhuman species known to display intelligent behavior, birds have been shown time and again to have advanced cognitive abilities. Ravens plan for the future, crows count and use tools, cockatoos open and pillage booby-trapped garbage cans, and chickadees keep track of tens of thousands of seeds cached across a landscape. Notably, birds achieve such feats with brains that look completely different from ours: They’re smaller and lack the highly organized structures that scientists associate with mammalian intelligence. “A bird with a 10-gram brain is doing pretty much the same as a chimp with a 400-gram brain,” said Onur Güntürkün, who studies brain structures at Ruhr University Bochum in Germany. “How is it possible?” Researchers have long debated the relationship between avian and mammalian intelligences. One possibility is that intelligence in vertebrates — animals with backbones, including mammals and birds — evolved once. In that case, both groups would have inherited the complex neural pathways that support cognition from a common ancestor: a lizardlike creature that lived 320 million years ago, when Earth’s continents were squished into one landmass. The other possibility is that the kinds of neural circuits that support vertebrate intelligence evolved independently in birds and mammals. It’s hard to track down which path evolution took, given that any trace of the ancient ancestor’s actual brain vanished in a geological blink. So biologists have taken other approaches — such as comparing brain structures in adult and developing animals today — to piece together how this kind of neurobiological complexity might have emerged. © 2025 Simons Foundation
Keyword: Intelligence; Evolution
Link ID: 29738 - Posted: 04.09.2025
By Rodrigo Pérez Ortega It’s clear a child’s early experiences can leave a lasting imprint on how their brain forms and functions. Now, a new study reveals how various environmental factors, including financial struggles and neighborhood safety, affect the quality of the brain’s white matter—the wiring that connects different brain regions—and in turn, a child’s cognitive abilities. The work, published today in the Proceedings of the National Academy of Sciences, also points to social factors that can boost resilience in a young brain. “It’s a really impressive, compelling paper about the long-term consequences of growing up in undersupported environments,” says John Gabrieli, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the study. White matter consists of nerve fibers facilitating communication between brain regions. They are sheathed in an insulating material called myelin that gives white matter its color. Much of the research to date on how the brain supports cognition has focused on gray matter, tissue mostly made of the cell bodies of neurons that process information, which shows up as gray on brain scans. But complex cognitive tasks are “a symphony of a network” formed by multiple brain areas, Gabrieli says. “And the white matter is what mediates that communication.” Previous studies have linked poverty and childhood trauma—among other adverse environments—with a lower quality of white matter in children and lower scores on cognitive tests. However, these studies included a small number of participants and only looked at one or a few environmental variables at a time. For a more complete picture, developmental neuroscientist Sofia Carozza at Brigham and Women’s Hospital and colleagues analyzed data from more than 9000 participants in the Adolescent Brain Cognitive Development (ABCD) Study.
Funded by the National Institutes of Health and established in 2015, ABCD is the largest longitudinal study of brain development in a representative group of U.S. children. Surveys of participants and their parents provide data on their home environment, including household income and parents’ level of education. At age 9 or 10, ABCD participants got a form of magnetic resonance imaging that measures the movement of water in the brain. From the strength of this directional signal, researchers can infer how robust and organized the bundles of white matter fibers are, and whether they have signs of deterioration or damage. © 2025 American Association for the Advancement of Science.
Keyword: Development of the Brain; Learning & Memory
Link ID: 29736 - Posted: 04.09.2025
By RJ Mackenzie New footage documents microglia pruning synapses at high resolution and in real time. The recordings, published in January, add a new twist to a convoluted debate about the range of these cells’ responsibilities. Microglia are the brain’s resident immune cells. For about a decade, some have also credited them with pruning excess synaptic connections during early brain development. But that idea was based on static images showing debris from destroyed synapses within the cells—which left open the possibility that microglia clean up after neurons do the actual pruning. In the January movies, though, a microglial cell expressing a green fluorescent protein clearly reaches out a ghostly green tentacle to a budding presynapse on a neuron and lifts it away, leaving the neighboring blue axon untouched. “Their imaging is superb,” says Amanda Sierra, a researcher at the Achucarro Basque Center for Neuroscience, who was not involved in the work. But “one single video, or even two single videos, however beautiful they are, are not sufficient evidence that this is the major mechanism of synapse elimination,” she says. In the new study, researchers isolated microglia and neurons from mice and grew them in culture with astrocytes, labeling the microglia, synapses and axons with different fluorescent dyes. Their approach ensured that the microglia formed ramified processes—thin, branching extensions that don’t form when they are cultured in isolation, says Ryuta Koyama, director of the Department of Translational Neurobiology at Japan’s National Center of Neurology and Psychiatry, who led the work. “People now know that ramified processes of microglia are really necessary to pick up synapses,” he says. “In normal culture systems, you can’t find ramified processes. They look amoeboid.” © 2025 Simons Foundation
Keyword: Learning & Memory; Glia
Link ID: 29720 - Posted: 03.27.2025
Ari Daniel Tristan Yates has no doubt about her first memory, even if it is a little fuzzy. "I was about three and a half in Callaway Gardens in Georgia," she recalls, "just running around with my twin sister trying to pick up Easter eggs." But she has zero memories before that, which is typical. This amnesia of our babyhood is pretty much the rule. "We have memories from what happened earlier today and memories from what happened earlier last week and even from a few years ago," says Yates, who's a cognitive neuroscientist at Columbia University. "But all of us lack memories from our infancy." Is that because we don't make memories when we're babies, or is there something else responsible? Now, in new research published by Yates and her colleagues in the journal Science, they propose that babies are able to form memories, even if they become inaccessible later in life. These results might reveal something crucial about the earliest moments of our development. "That's the time when we learn who our parents are, that's when we learn language, that's when we learn how to walk," Yates says. "What happens in your brain in the first two years of life is magnificent," says Nick Turk-Browne, a cognitive neuroscientist at Yale University. "That's the period of by far the greatest plasticity across your whole life span. And better understanding how your brain learns and remembers in infancy lays the foundation for everything you know and do for the rest of your life." © 2025 npr
Keyword: Learning & Memory; Development of the Brain
Link ID: 29715 - Posted: 03.22.2025
By Laura Sanders There are countless metaphors for memory. It’s a leaky bucket, a steel trap, a file cabinet, words written in sand. But one of the most evocative — and neuroscientifically descriptive — invokes Lego bricks. A memory is like a Lego tower. It’s built from the ground up, then broken down, put away in bins and rebuilt in a slightly different form each time it’s taken out. This metaphor is beautifully articulated by psychologists Ciara Greene and Gillian Murphy in their new book, Memory Lane. Perhaps the comparison speaks to me because I have watched my kids create elaborate villages of Lego bricks, only for them to be dismantled, put away (after much nagging) and reconstructed, always with a similar overall structure but with minor and occasionally major changes. These villages’ blueprints are largely stable, but also fluid and flexible, subject to the material whims of the builders at any point in time. Memory works this way, too, Greene and Murphy propose. Imagine your own memory lane as a series of buildings, modified in ways both small and big each time you call them to mind. “As we walk down Memory Lane, the buildings we pass — our memories of individual events — are under constant reconstruction,” Greene and Murphy write. In accessible prose, the book covers a lot of ground, from how we form memories to how delicate those memories really are. Readers may find it interesting (or perhaps upsetting) to learn how bad we all are at remembering why we did something, from trivial choices, like buying an album, to consequential ones, such as a yes or no vote on an abortion referendum. People change their reasoning — or at least, their memories of their reasoning — on these sorts of events all the time. © Society for Science & the Public 2000–2025
Keyword: Learning & Memory
Link ID: 29712 - Posted: 03.22.2025
By Claudia López Lloreda For a neuroscientist, the opportunity to record single neurons in people doesn’t knock every day. It is so rare, in fact, that after 14 years of waiting by the door, Florian Mormann says he has recruited just 110 participants—all with intractable epilepsy. All participants had electrodes temporarily implanted in their brains to monitor their seizures. But the slow work to build this cohort is starting to pay off for Mormann, a group leader at the University of Bonn, and for other researchers taking a similar approach, according to a flurry of studies published in the past year. For instance, certain neurons selectively respond not only to particular scents but also to the words and images associated with them, Mormann and his colleagues reported in October. Other neurons help to encode stimuli, form memories and construct representations of the world, recent work from other teams reveals. Cortical neurons encode specific information about the phonetics of speech, two independent teams reported last year. Hippocampal cells contribute to working memory and map out time in novel ways, two other teams discovered last year, and some cells in the region encode information related to a person’s changing knowledge about the world, a study published in August found. These studies offer the chance to answer questions about human brain function that remain challenging to answer using animal models, says Ziv Williams, associate professor of neurosurgery at Harvard Medical School, who led one of the teams that worked on speech phonetics. “Concept cells,” he notes by way of example, such as those Mormann identified, or the “Jennifer Aniston” neurons famously described in a 2005 study, have proved elusive in the monkey brain. © 2025 Simons Foundation
Keyword: Attention; Learning & Memory
Link ID: 29709 - Posted: 03.19.2025
By Angie Voyles Askham Synaptic plasticity in the hippocampus involves both strengthening relevant connections and weakening irrelevant ones. That sapping of synaptic links, called long-term depression (LTD), can occur through two distinct routes: the activity of either NMDA receptors or metabotropic glutamate receptors (mGluRs). The mGluR-dependent form of LTD, required for immediate translation of mRNAs at the synapse, appears to go awry in fragile X syndrome, a genetic condition that stems from loss of the protein FMRP and is characterized by intellectual disability and often autism. Possibly as a result, mice that model fragile X exhibit altered protein synthesis regulation in the hippocampus, an increase in dendritic spines and overactive neurons. Treatments for fragile X that focus on dialing down the mGluR pathway and tamping down protein synthesis at the synapse have shown success in quelling those traits in mice, but they have repeatedly failed in human clinical trials. But the alternative pathway—via the NMDA receptor—may provide better results, according to a new study. Signaling through the NMDA receptor subunit GluN2B can also decrease spine density and alleviate fragile-X-linked traits in mice, the work shows. “You don’t have to modulate the protein synthesis directly,” says Lynn Raymond, professor of psychiatry and chair in neuroscience at the University of British Columbia, who was not involved in the work. Instead, activation of part of the GluN2B subunit can indirectly shift the balance of mRNAs that are translated at the synapse. “It’s just another piece of the puzzle, but I think it’s a very important piece,” she says. Whether this insight will advance fragile X treatments remains to be seen, says Wayne Sossin, professor of neurology and neurosurgery at Montreal Neurological Institute-Hospital, who was not involved in the study. Multiple groups have cured fragile-X-like traits in mice by altering what happens at the synapse, he says. 
“Altering translation in a number of ways seems to change the balance that is off when you lose FMRP. And it’s not really clear how specific that is for FMRP.” © 2025 Simons Foundation
Keyword: Development of the Brain; Learning & Memory
Link ID: 29700 - Posted: 03.12.2025
By Tim Vernimmen On a rainy day in July 2024, Tim Bliss and Terje Lømo are in the best of moods, chuckling and joking over brunch, occasionally pounding the table to make a point. They’re at Lømo’s house near Oslo, Norway, where they’ve met to write about the late neuroscientist Per Andersen, in whose lab they conducted groundbreaking experiments more than 50 years ago. The duo only ever wrote one research paper together, in 1973, but that work is now considered a turning point in the study of learning and memory. Published in the Journal of Physiology, it was the first demonstration that when a neuron — a cell that receives and sends signals throughout the nervous system — signals to another neuron frequently enough, the second neuron will later respond more strongly to new signals, not for just seconds or minutes, but for hours. It would take decades to fully understand the implications of their research, but Bliss and Lømo had discovered something momentous: a phenomenon called long-term potentiation, or LTP, which researchers now know is fundamental to the brain’s ability to learn and remember. Today, scientists agree that LTP plays a major role in the strengthening of neuronal connections, or synapses, that allow the brain to adjust in response to experience. And growing evidence suggests that LTP may also be crucially involved in a variety of problems, including memory deficits and pain disorders. Bliss and Lømo never wrote another research article together. In fact, they would soon stop working on LTP — Bliss for about a decade, Lømo for the rest of his life. Although the researchers knew they had discovered something important, at first the paper “didn’t make a big splash,” Bliss says. By the early 1970s, neuroscientist Eric Kandel had demonstrated that some simple forms of learning can be explained by chemical changes in synapses — at least in a species of sea slug. 
But scientists didn’t yet know if such findings applied to mammals, or if they could explain more complex and enduring types of learning, such as the formation of memories that may last for years.
Keyword: Learning & Memory
Link ID: 29694 - Posted: 03.05.2025
By Ingrid Wickelgren After shuffling the cards in a standard 52-card deck, Alex Mullen, a three-time world memory champion, can memorize their order in under 20 seconds. As he flips through the cards, he takes a mental walk through a house. At each point in his journey — the mailbox, front door, staircase and so on — he attaches a card. To recall the cards, he relives the trip. This technique, called “method of loci” or “memory palace,” is effective because it mirrors the way the brain naturally constructs narrative memories: Mullen’s memory for the card order is built on the scaffold of a familiar journey. We all do something similar every day, as we use familiar sequences of events, such as the repeated steps that unfold during a meal at a restaurant or a trip through the airport, as a home for specific details — an exceptional appetizer or an object flagged at security. The general narrative makes the noteworthy features easier to recall later. “You are taking these details and connecting them to this prior knowledge,” said Christopher Baldassano, a cognitive neuroscientist at Columbia University. “We think this is how you create your autobiographical memories.” Psychologists empirically introduced this theory some 50 years ago, but proof of such scaffolds in the brain was missing. Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events. He believes that the brain builds a rich library of scripts for expected scenarios — restaurant or airport, business deal or marriage proposal — over a person’s lifetime. These standardized scripts, and departures from them, influence how and how well we remember specific instances of these event types, his lab has found.
And recently, in a paper published in Current Biology in fall 2024, they showed that individuals can select a dominant script for a complex, real-world event — for example, while watching a marriage proposal in a restaurant, we might opt, subconsciously, for either a proposal or a restaurant script — which determines what details we remember. © 2025 Simons Foundation
Keyword: Learning & Memory; Attention
Link ID: 29685 - Posted: 02.26.2025
By Michael S. Rosenwald In early February, Vishvaa Rajakumar, a 20-year-old Indian college student, won the Memory League World Championship, an online competition pitting people against one another with challenges like memorizing the order of 80 random numbers faster than most people can tie a shoelace.

The renowned neuroscientist Eleanor Maguire, who died in January, studied mental athletes like Mr. Rajakumar and found that many of them used the ancient Roman “method of loci,” a memorization trick also known as the “memory palace.” The technique takes several forms, but it generally involves visualizing a large house and assigning memories to rooms. Mentally walking through the house fires up the hippocampus, the seahorse-shaped engine of memory deep in the brain that consumed Dr. Maguire’s career.

We asked Mr. Rajakumar about his strategies of memorization. His answers, lightly edited and condensed for clarity, are below.

Q. How do you prepare for the Memory League World Championship?

Hydration is very important because it helps your brain. When you memorize things, you usually sub-vocalize, and it helps to have a clear throat. Let’s say you’re reading a book. You’re not reading it out loud, but you are vocalizing within yourself. If you don’t drink a lot of water, your speed will be a bit low. If you drink a lot of water, it will be more and more clear and you can read it faster.

Q. What does your memory palace look like?

Let’s say my first location is my room where I sleep. My second location is the kitchen. And the third location is my hall. The fourth location is my veranda. Another location is my bathroom. Let’s say I am memorizing a list of words. Let’s say 10 words. What I do is, I take a pair of words, make a story out of them and place them in a location. And I take the next two words. I make a story out of them. I place them in the second location. The memory palace will help you to remember the sequence. © 2025 The New York Times Company
Keyword: Learning & Memory; Attention
Link ID: 29673 - Posted: 02.15.2025
By Angie Voyles Askham Identifying what a particular neuromodulator does in the brain—let alone how such molecules interact—has vexed researchers for decades. Dopamine agonists increase reward-seeking, whereas serotonin agonists decrease it, for example, suggesting that the two neuromodulators act in opposition. And yet, neurons in the brain’s limbic regions release both chemicals in response to a reward (and also to a punishment), albeit on different timescales, electrophysiological recordings have revealed, pointing to a complementary relationship.

This dual response suggests that the interplay between dopamine and serotonin may be important for learning. But no tools existed to simultaneously manipulate the neuromodulators and test their respective roles in a particular area of the brain—at least, not until now—says Robert Malenka, professor of psychiatry and behavioral sciences at Stanford University.

As it turns out, serotonin and dopamine join forces in the nucleus accumbens during reinforcement learning, according to a new study Malenka led, yet they act in opposition: dopamine as a gas pedal and serotonin as a brake on signaling that a stimulus is rewarding. The mice he and his colleagues studied learned faster and performed more reliably when the team optogenetically pressed on the animals’ dopamine “gas” as they simultaneously eased off the serotonin “brake.”

“It adds a very rich and beguiling picture of the interaction between dopamine and serotonin,” says Peter Dayan, director of computational neuroscience at the Max Planck Institute for Biological Cybernetics. In 2002, Dayan proposed a different framework for how dopamine and serotonin might work in opposition, but he was not involved in the new study. The new work “partially recapitulates” that 2002 proposal, Dayan adds, “but also poses many more questions.” © 2025 Simons Foundation
Keyword: Learning & Memory
Link ID: 29672 - Posted: 02.15.2025
By Michael S. Rosenwald Eleanor Maguire, a cognitive neuroscientist whose research on the human hippocampus — especially those belonging to London taxi drivers — transformed the understanding of memory, revealing that a key structure in the brain can be strengthened like a muscle, died on Jan. 4 in London. She was 54.

Her death, at a hospice facility, was confirmed by Cathy Price, her colleague at the U.C.L. Queen Square Institute of Neurology. Dr. Maguire was diagnosed with spinal cancer in 2022 and had recently developed pneumonia.

Working for 30 years in a small, tight-knit lab, Dr. Maguire obsessed over the hippocampus — a seahorse-shaped engine of memory deep in the brain — like a meticulous, relentless detective trying to solve a cold case. An early pioneer of using functional magnetic resonance imaging (f.M.R.I.) on living subjects, Dr. Maguire was able to look inside human brains as they processed information. Her studies revealed that the hippocampus can grow, and that memory is not a replay of the past but rather an active reconstructive process that shapes how people imagine the future.

“She was absolutely one of the leading researchers of her generation in the world on memory,” Chris Frith, an emeritus professor of neuropsychology at University College London, said in an interview. “She changed our understanding of memory, and I think she also gave us important new ways of studying it.”

In 1995, while she was a postdoctoral fellow in Dr. Frith’s lab, she was watching television one evening when she stumbled on “The Knowledge,” a quirky film about prospective London taxi drivers memorizing the city’s 25,000 streets to prepare for a three-year-long series of licensing tests. Dr. Maguire, who said she rarely drove because she feared never arriving at her destination, was mesmerized. “I am absolutely appalling at finding my way around,” she once told The Daily Telegraph.
“I wondered, ‘How are some people so bloody good and I am so terrible?’” In the first of a series of studies, Dr. Maguire and her colleagues scanned the brains of taxi drivers while quizzing them about the shortest routes between various destinations in London. © 2025 The New York Times Company
Keyword: Learning & Memory
Link ID: 29671 - Posted: 02.15.2025