Links for Keyword: Learning & Memory

Links 1 - 20 of 1328

Peter Lukacs Popular wisdom holds we can ‘rewire’ our brains: after a stroke, after trauma, after learning a new skill, even with 10 minutes a day on the right app. The phrase is everywhere, offering something most of us want to believe: that when the brain suffers an assault, it can be restored with mechanical precision. But ‘rewiring’ is a risky metaphor. It borrows its confidence from engineering, where a faulty system can be repaired by swapping out the right component; it also smuggles that confidence into biology, where change is slower, messier and often incomplete. The phrase has become a cultural mantra that is easier to comprehend than the scientific term, neuroplasticity – the brain’s ability to change and form new neural connections throughout life. But what does it really mean to ‘rewire’ the brain? Is it a helpful shorthand for describing the remarkable plasticity of our nervous system or has it become a misleading oversimplification that distorts our grasp of science? After all, ‘rewiring your brain’ sounds like more than metaphor. It implies an engineering project: a system whose parts can be removed, replaced and optimised. The promise is both alluring and oddly mechanical. The metaphor actually did come from engineering. To an engineer, rewiring means replacing old and faulty circuits with new ones. As the vocabulary of technology crept into everyday life, it brought with it a new way of thinking about the human mind. Medical roots of the phrase trace back to 1912, when the British surgeon W Deane Butcher compared the body’s neural system to a house’s electrical wiring, describing how nerves connect to muscles much like wires connect appliances to a power source. By the 1920s, the Harvard psychologist Leonard Troland was referring to the visual system as ‘an extremely intricate telegraphic system’, reinforcing the comparison between brain function and electrical networks. © Aeon Media Group Ltd. 2012-2026.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 13: Memory and Learning
Link ID: 30108 - Posted: 02.04.2026

By Yasemin Saplakoglu On a remote island in the Indian Ocean, six closely watched bats took to the star-draped skies. As they flew across the seven-acre speck of land, devices implanted in their brains pinged data back to a group of sleepy-eyed neuroscientists monitoring them from below. The researchers were working to understand how these flying mammals, who have brains not unlike our own, develop a sense of direction while navigating a new environment. The research, published in Science, reported that the bats used a network of brain cells that informed their sense of direction around the island. Their “internal compass” was tuned by neither the Earth’s magnetic field nor the stars in the sky, but rather by landmarks that informed a mental map of the animal’s environment. These first-ever wild experiments in mammalian mapmaking confirm decades of lab results and support one of two competing theories about how an internal neural compass anchors itself to the environment. “Now we’re understanding a basic principle about how the mammalian brain works” under natural, real-world conditions, said the behavioral neuroscientist Paul Dudchenko, who studies spatial navigation at the University of Stirling in the United Kingdom and was not involved in the study. “It will be a paper people will be talking about for 50 years.” Follow-up experiments that haven’t yet been published show that other cells critical to navigation encode much more information in the wild than they do in the lab, emphasizing the need to test neurobiological theories in the real world. Neuroscientists believe that a similar internal compass, composed of neurons known as “head direction cells,” might also exist in the human brain — though they haven’t yet been located. If they are someday found, the mechanism could shed light on common sensations such as getting “turned around” and quickly reorienting oneself.
It might even explain why some of us are so bad at finding our way. © 2026 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 30094 - Posted: 01.24.2026

By Erin Garcia de Jesús A deck brush can be a good tool for the right task. Just ask Veronika, the Brown Swiss cow. Veronika uses both ends of a deck brush to scratch various parts of her body, researchers report January 19 in Current Biology. It’s the first reported tool use in a cow, a species that is often “cognitively underestimated,” the researchers say. Cows usually rub against trees, rocks or wooden planks to scratch, but Veronika’s handy tool allows her to reach parts of her body that she couldn’t otherwise, says Antonio Osuna-Mascaró, a cognitive biologist at the Messerli Research Institute of the University of Veterinary Medicine, Vienna. It’s unclear how the cow figured it out, but “somehow Veronika learned to use tools, and she’s doing something that other cows simply can’t.” Veronika, a pet cow that lives in a pasture on a small Austrian farm, picks up the brush by its handle with her tongue and twists her neck to place the brush where she needs it. Setting the brush in front of her in different orientations showed that she uses the hard, bristled end to target most areas, including the tough, thick skin on her back. She also uses the nonbristled end, slowly moving the handle over softer body parts such as her belly button and udder. Veronika uses the brush end to scratch large areas such as her thigh and back, and the handle to scratch more delicate areas such as her navel flap and anus. © Society for Science & the Public 2000–2026.

Related chapters from BN: Chapter 6: Evolution of the Brain and Behavior; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 30088 - Posted: 01.21.2026

Lynne Peeples Sometimes the hardest part of doing an unpleasant task is simply getting started — typing the first word of a long report, lifting a dirty dish on the top of an overfilled sink or removing clothes from an unused exercise machine. The obstacle isn’t necessarily a lack of interest in completing a task, but the brain’s resistance to taking the first step. Now, scientists might have identified the neural circuit behind this resistance, and a way to ease it. In a study published today in Current Biology, researchers describe a pathway in the brain that seems to act as a ‘motivation brake’, dampening the drive to begin a task. When the team selectively suppressed this circuit in macaque monkeys, goal-directed behaviour rebounded. “The change after this modulation was dramatic,” says study co-author Ken-ichi Amemori, a neuroscientist at Kyoto University in Japan. The motivation brake, which can be particularly stubborn for people with certain psychiatric conditions, such as schizophrenia and major depressive disorder, is distinct from the avoidance of tasks driven by risk aversion in anxiety disorders. Pearl Chiu, a computational psychiatrist at Virginia Tech in Roanoke, who was not involved in the study, says that understanding this difference is essential for developing new treatments and refining current ones. “Being able to restore motivation, that’s especially exciting,” she says. Previous work on task initiation has implicated a neural circuit connecting two parts of the brain known as the ventral striatum and ventral pallidum, both of which are involved in processing motivation and reward. But attempts to isolate the circuit’s role have fallen short. Electrical stimulation, for example, inadvertently activates downstream regions, affecting motivation, but also anxiety. © 2026 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 11: Emotions, Aggression, and Stress
Link ID: 30079 - Posted: 01.14.2026

By Holly Barker In early life, astrocytes help to mold neural pathways in response to the environment. In adulthood, however, those cells curb plasticity by secreting a protein that stabilizes circuits, according to a mouse study published last month in Nature. “It’s a new and unique take on the field,” says Ciaran Murphy-Royal, assistant professor of neuroscience at Montreal University, who was not involved in the study. Most research focuses on how glial cells drive plasticity but “not how they apply the brakes,” he says. Astrocytes promote synaptic remodeling during the development of sensory circuits by secreting factors and exerting physical control—in humans, a single astrocyte can clamp onto 2 million synapses, previous studies suggest. But the glial cells are also responsible for shutting down critical periods for vision and motor circuits in mice and fruit flies, respectively. It has been unclear whether this loss of plasticity can be reversed. Some evidence hints that modifying the neuronal environment—through matrix degradation or transplantation of young neurons—can rekindle flexibility in adult brains. The new findings confirm that in adulthood, plasticity is only dormant, rather than lost entirely, says Nicola Allen, professor of molecular neurobiology at the Salk Institute for Biological Studies and an investigator on the new paper. “Neurons don’t lose an intrinsic ability to remodel, but that process is controlled by secreted factors in the environment,” she says. Specifically, astrocytes orchestrate that dormancy by releasing CCN1, a protein that stabilizes circuits by prompting the maturation of inhibitory neurons and glial cells, Allen’s team found. The findings suggest that astrocytes have an active role in stabilizing adult brain circuits. © 2026 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 30069 - Posted: 01.07.2026

David Adam In a town on the shores of Lake Geneva sit clumps of living human brain cells for hire. These blobs, about the size of a grain of sand, can receive electrical signals and respond to them — much as computers do. Research teams from around the world can send the blobs tasks, in the hope that they will process the information and send a signal back. Welcome to the world of wetware, or biocomputers. In a handful of academic laboratories and companies, researchers are growing human neurons and trying to turn them into functional systems equivalent to biological transistors. These networks of neurons, they argue, could one day offer the power of a supercomputer without the outsized power consumption. The results so far are limited. But keen scientists are already buying or borrowing online access to these brain-cell processors — or even investing tens of thousands of dollars to secure their own models. Some want to use these biocomputers as straightforward replacements for ordinary computers, whereas others want to use them to study how brains work. “Trying to understand biological intelligence is a very interesting scientific problem,” says Benjamin Ward-Cherrier, a robotics researcher at the University of Bristol, UK, who rents time on the Swiss brain blobs. “And looking at it from the bottom up — with simple small versions of our brain and building those up — I think is a better way of doing it than top down.” Biocomputing advocates claim that these systems could one day rival the capability of artificial intelligence and the potential of quantum computers. Other researchers who work with human neurons are more sceptical of what’s possible. And they warn that hype — and the science-fictional allure of what are sometimes labelled brain-in-a-jar systems — could even be counterproductive. If the idea that these systems possess sentience and consciousness takes hold, there could be repercussions for the research community. © 2025 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 13: Memory and Learning
Link ID: 30010 - Posted: 11.12.2025

By Kevin Berger Steve Ramirez was feeling on top of the world in 2015. His father, Pedro Ramirez, had snuck into the United States in the 1980s to escape the civil war in El Salvador. Pedro Ramirez held jobs as a door-to-door salesman for tombstones, a janitor in a diner, and a technician in an animal lab. After years of ’round-the-clock work, Pedro Ramirez became a U.S. citizen. And here was his son, born in America, with a Ph.D. from the Massachusetts Institute of Technology, still in his 20s, being celebrated as one of the most exciting and promising neuroscientists in the country. Steve Ramirez had published research papers with his MIT mentor Xu Liu that reported how they used lasers to erase fear memories, spur positive memories, and even fabricate new memories in the brain. The experiments were only in mice. But they were impressive. Memories are made of networks of brain cells called engrams. The lasers targeted specific cells in engrams. Zap those cells and the whole engram was muted. The pair of neuroscientists gave a popular TED Talk on memory manipulation and were featured in international press stories that invariably mentioned that the plotlines of the movies Eternal Sunshine of the Spotless Mind and Inception could be real. Bad memories could be deleted. New memories could be implanted. One night in 2013 Ramirez and Liu were celebrating the publication of one of their papers in a jazz lounge at the top of the Prudential Building in Boston. The music was grooving, and the city below glittered like stars. Ramirez thought, I’ve never been so happy and so fully alive. In early 2015, Liu, age 37, died suddenly. There had been no warning signs. Ramirez had never had a friend like Liu. Liu opened his mind to experiences in science he couldn’t have imagined. Their relationship felt organic from Ramirez’s first day in the lab. Liu joked they would always have chemistry doing science together. Grief is when the future your brain plans for is cut off.
Ramirez’s thoughts of doing science without Liu became a trapdoor that landed him in a cellar of pain. © 2025 NautilusNext Inc.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 4: Development of the Brain
Link ID: 30007 - Posted: 11.12.2025

Katie Kavanagh Why are we able to remember emotional events so well? According to a study published today in Nature, a type of cell in the brain called an astrocyte is a key player in stabilizing memories for long-term recall. Astrocytes were thought to simply support neurons in creating the physical traces of memories in the brain, but the study found that they have a much more active role — and can even be directly triggered by repeated emotional experiences. The researchers behind the finding suggest that the cells could be a fresh target for treating memory conditions such as those associated with post-traumatic stress disorder and Alzheimer’s disease. “We provide an answer to the question of how a specific memory is stored for the long term,” says study co-author Jun Nagai, a neuroscientist at RIKEN Center for Brain Science in Wako, Japan. By studying astrocytes, Nagai said, the study identifies how the brain selectively filters important memories at the cellular level. Nagai and his colleagues focused on the question of memory stabilization: how a short-term memory becomes more permanent in the brain. Previous research had found physical traces of memories in neuronal networks in brain regions such as the hippocampus and amygdala. But it was unclear how these ‘engrams’ were stored in the brain as lasting memories after repeated exposure to the same stimulus. To dig deeper, the researchers developed a method for measuring activation patterns in astrocytes across the whole brain of a mouse as it completes a memory task. They measured the upregulation of a gene called Fos — an early marker of cell activity that is associated with the physical traces of memories in the brain. © 2025 Springer Nature Limited

Related chapters from BN: Chapter 15: Emotions, Aggression, and Stress; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 11: Emotions, Aggression, and Stress; Chapter 13: Memory and Learning
Link ID: 29975 - Posted: 10.18.2025

By Claire L. Evans In 1983, the octogenarian geneticist Barbara McClintock stood at the lectern of the Karolinska Institute in Stockholm. She was famously publicity averse — nearly a hermit — but it’s customary for people to speak when they’re awarded a Nobel Prize, so she delivered a halting account of the experiments that had led to her discovery, in the early 1950s, of how DNA sequences can relocate across the genome. Near the end of the speech, blinking through wire-framed glasses, she changed the subject, asking: “What does a cell know of itself?” McClintock had a reputation for eccentricity. Still, her question seemed more likely to come from a philosopher than a plant geneticist. She went on to describe lab experiments in which she had seen plant cells respond in a “thoughtful manner.” Faced with unexpected stress, they seemed to adjust in ways that were “beyond our present ability to fathom.” What does a cell know of itself? It would be the work of future biologists, she said, to find out. Forty years later, McClintock’s question hasn’t lost its potency. Some of those future biologists are now hard at work unpacking what “knowing” might mean for a single cell, as they hunt for signs of basic cognitive phenomena — like the ability to remember and learn — in unicellular creatures and nonneural human cells alike. Science has long taken the view that a multicellular nervous system is a prerequisite for such abilities, but new research is revealing that single cells, too, keep a record of their experiences for what appear to be adaptive purposes. In a provocative study published in Nature Communications late last year, the neuroscientist Nikolay Kukushkin and his mentor Thomas J. Carew at New York University showed that human kidney cells growing in a dish can “remember” patterns of chemical signals when they’re presented at regularly spaced intervals — a memory phenomenon common to all animals, but unseen outside the nervous system until now.
Kukushkin is part of a small but enthusiastic cohort of researchers studying “aneural,” or brainless, forms of memory. What does a cell know of itself? So far, their research suggests that the answer to McClintock’s question might be: much more than you think. © 2025 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29872 - Posted: 08.02.2025

By Marta Hill Every year, black-capped chickadees perform an impressive game of hide-and-seek. These highly visual birds cache tens of thousands of surplus food morsels and then recover them during leaner times. Place cells in the hippocampus may help the birds keep track of their hidden bounty, according to a study published 11 June in Nature. The cells activate not only when a bird visits a food stash but also when it looks at the stash from far away, the study shows. “What is really profound about the work is it’s trying to unpack how it is that we’re able to combine visual information, which is based on where we currently are in the world, with our understanding of the space around us and how we can navigate it,” says Nick Turk-Browne, professor of psychology and director of the Wu Tsai Institute at Yale University, who was not involved in the study. With each gaze shift, the hippocampus first predicts what the bird is about to see and then reacts to what it actually sees, the study shows. “It really fits beautifully into this picture of this dual role for the system in representing actual and representing possible,” says Loren Frank, professor of physiology and psychiatry at the University of California, San Francisco, who was not involved in the work. The findings help explain how the various functions of the hippocampus—navigation, perception, learning and memory—work together, Turk-Browne adds. “If we can have a smart, abstract representation of place that doesn’t depend on actually physically being there, then you can imagine how this can be used to construct memories.” © 2025 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29827 - Posted: 06.14.2025

By Sydney Wyatt Donald Hebb famously proposed in 1949 that when neurons fire together, the synaptic connections between them strengthen, forming the basis for long-term memories. That theory—which held up in experiments in rat hippocampal slice cultures—has shaped how researchers understand synaptic plasticity ever since. But a new computational modeling study adds to mounting evidence that Hebbian plasticity does not always explain how changing neuronal connections enable learning. Rather, behavioral timescale synaptic plasticity (BTSP), which can strengthen synapses even when neurons fire out of sync, better captures the changes seen in CA1 hippocampal cells as mice learn to navigate a new environment, the study suggests. Hebbian spike-timing-dependent plasticity occurs when a neuron fires just ahead of one it synapses onto, leading to a stronger connection between the two cells. BTSP, on the other hand, relies on a complex spike, or a burst of action potentials, in the postsynaptic cell, which triggers a calcium signal that travels across the dendritic arbor. The signal strengthens synaptic connections with the presynaptic cell that were active within seconds of that spike, causing larger changes in synaptic strength. BTSP helps hippocampal cells establish their place fields, the positions at which they fire, previous work suggests. But it was unclear whether it also contributes to learning, says Mark Sheffield, associate professor of neurobiology at the University of Chicago, who led the new study. The new findings suggest that it does—challenging how researchers traditionally think about plasticity mechanisms in the hippocampus, says Jason Shepherd, associate professor of neurobiology at the University of Utah, who was not involved in the research. “The classic rules of plasticity that we have been sort of thinking about for decades may not be actually how the brain works, and that’s a big deal.” © 2025 Simons Foundation
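The key difference between the two rules described above is their timing window: spike-timing-dependent plasticity operates on millisecond-scale spike order, while BTSP tolerates seconds of separation in either direction. A toy sketch makes the contrast concrete; the kernel shapes, amplitudes and time constants here are illustrative assumptions, not values from the study.

```python
# Toy comparison of two synaptic-plasticity rules (illustrative parameters).
# STDP: weight change depends on millisecond-scale spike-timing differences
#       and on spike order (pre-before-post potentiates, the reverse depresses).
# BTSP: a postsynaptic complex spike potentiates any presynaptic input active
#       within a seconds-wide window, regardless of spike order.
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change when the presynaptic spike leads (dt_ms > 0) or lags
    (dt_ms < 0) the postsynaptic spike by dt_ms milliseconds."""
    if dt_ms > 0:   # pre before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    else:           # post before pre: depression
        return -a_minus * math.exp(dt_ms / tau_ms)

def btsp_dw(dt_s, a=0.05, tau_s=2.0):
    """Weight change for a presynaptic input active dt_s seconds from the
    complex spike; symmetric in time and always potentiating."""
    return a * math.exp(-abs(dt_s) / tau_s)

# An input active 1.5 s before the complex spike is invisible to STDP
# (the exponential has decayed to essentially zero) but is still
# substantially potentiated under BTSP.
print(stdp_dw(1500.0))  # effectively zero
print(btsp_dw(1.5))     # still a sizeable positive change
```

This is why BTSP can strengthen synapses "even when neurons fire out of sync": inputs that were active seconds before or after the complex spike still fall inside its plasticity window.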

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29810 - Posted: 05.28.2025

By Sydney Wyatt The red nucleus—a pale pink brainstem structure that coordinates limb movements in quadruped animals—also projects to brain areas that shape reward-motivated and action-based movements in people, according to a new functional imaging study. The finding suggests the region, like the cerebral cortex, took on a more complex role over the course of evolution. Many researchers had assumed that brainstem structures remained stuck in evolutionarily ancient roles, says Joan Baizer, professor of physiology and biophysics at the University at Buffalo. Activity in the red nucleus, a structure that emerged once animals began to use limbs for walking, coordinates the speed and accuracy of those movements in rats and helps to control posture in monkeys, previous electrophysiological recordings have shown. And in nonhuman primates, neurons in the red nucleus project to the motor cortex and spinal cord, anatomical studies have demonstrated, seemingly confirming the area’s role in motor function. By contrast, the human red nucleus primarily connects to cortical and subcortical regions involved in action control, reward and motivated behavior, the new work reveals. “If this is such a motor structure, why isn’t it projecting to the spinal cord? That doesn’t really fit with our notion of what this structure is supposed to be doing,” says study investigator Samuel Krimmel, a postdoctoral fellow in Nico Dosenbach’s lab. The new imaging suggests that, at least in people, the neural underpinnings of motivated movement—previously considered to be the role of higher-order brain areas—reach “all the way down into the brainstem,” says Dosenbach, professor of neurology at Washington University School of Medicine, who led the work. The findings were published last month in Nature Communications. © 2025 Simons Foundation

Related chapters from BN: Chapter 11: Motor Control and Plasticity; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 5: The Sensorimotor System
Link ID: 29790 - Posted: 05.17.2025

By Ajdina Halilovic When Todd Sacktor was about to turn 3, his 4-year-old sister died of leukemia. “An empty bedroom next to mine. A swing set with two seats instead of one,” he said, recalling the lingering traces of her presence in the house. “There was this missing person — never spoken of — for which I had only one memory.” That memory, faint but enduring, was set in the downstairs den of their home. A young Sacktor asked his sister to read him a book, and she brushed him off: “Go ask your mother.” Sacktor glumly trudged up the stairs to the kitchen. It’s remarkable that, more than 60 years later, Sacktor remembers this fleeting childhood moment at all. The astonishing nature of memory is that every recollection is a physical trace, imprinted into brain tissue by the molecular machinery of neurons. How the essence of a lived moment is encoded and later retrieved remains one of the central unanswered questions in neuroscience. Sacktor became a neuroscientist in pursuit of an answer. At the State University of New York Downstate in Brooklyn, he studies the molecules involved in maintaining the neuronal connections underlying memory. The question that has always held his attention was first articulated in 1984 by the famed biologist Francis Crick: How can memories persist for years, even decades, when the body’s molecules degrade and are replaced in a matter of days, weeks or, at most, months? In 2024, working alongside a team that included his longtime collaborator André Fenton, a neuroscientist at New York University, Sacktor offered a potential explanation in a paper published in Science Advances. The researchers discovered that a persistent bond between two proteins is associated with the strengthening of synapses, which are the connections between neurons. Synaptic strengthening is thought to be fundamental to memory formation.
As these proteins degrade, new ones take their place in a connected molecular swap that maintains the bond’s integrity and, therefore, the memory. © 2025 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29784 - Posted: 05.11.2025

By Elise Cutts Food poisoning isn’t an experience you’re likely to forget — and now, scientists know why. A study published April 2 in Nature has unraveled neural circuitry in mice that makes food poisoning so memorable. “We’ve all experienced food poisoning at some point … And not only is it terrible in the moment, but it leads us to not eat those foods again,” says Christopher Zimmerman of Princeton University. Luckily, developing a distaste for foul food doesn’t take much practice — one ill-fated encounter with an undercooked enchilada or contaminated hamburger is enough, even if it takes hours or days for symptoms to set in. The same is true for other animals, making food poisoning one of the best ways to study how our brains connect events separated in time, says neuroscientist Richard Palmiter of the University of Washington in Seattle. Mice usually need an immediate reward or punishment to learn something, Palmiter says; even just a minute’s delay between cause (say, pulling a lever) and effect (getting a treat) is enough to prevent mice from learning. Not so for food poisoning. Despite substantial delays, their brains have no trouble associating an unfamiliar food in the past with tummy torment in the present. Researchers knew that a brain region called the amygdala represents flavors and decides whether or not they’re gross. Palmiter’s group had also shown that the gut tells the brain it’s feeling icky by activating specific “alarm” neurons, called CGRP neurons. “They respond to everything that’s bad,” Palmiter says. © Society for Science & the Public 2000–2025.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 29756 - Posted: 04.23.2025

William Wright & Takaki Komiyama Every day, people are constantly learning and forming new memories. When you pick up a new hobby, try a recipe a friend recommended or read the latest world news, your brain stores many of these memories for years or decades. But how does your brain achieve this incredible feat? In our newly published research in the journal Science, we have identified some of the “rules” the brain uses to learn. The human brain is made up of billions of nerve cells. These neurons conduct electrical pulses that carry information, much like how computers use binary code to carry data. These electrical pulses are communicated with other neurons through connections between them called synapses. Individual neurons have branching extensions known as dendrites that can receive thousands of electrical inputs from other cells. Dendrites transmit these inputs to the main body of the neuron, where it then integrates all these signals to generate its own electrical pulses. It is the collective activity of these electrical pulses across specific groups of neurons that form the representations of different information and experiences within the brain. For decades, neuroscientists have thought that the brain learns by changing how neurons are connected to one another. As new information and experiences alter how neurons communicate with each other and change their collective activity patterns, some synaptic connections are made stronger while others are made weaker. This process of synaptic plasticity is what produces representations of new information and experiences within your brain. In order for your brain to produce the correct representations during learning, however, the right synaptic connections must undergo the right changes at the right time. The “rules” that your brain uses to select which synapses to change during learning – what neuroscientists call the credit assignment problem – have remained largely unclear.
© 2010–2025, The Conversation US, Inc.
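The co-activity principle described above is often summarized by the classic Hebbian rule: the change in a synaptic weight is proportional to presynaptic activity times postsynaptic activity. A minimal sketch of that textbook rule, not the study's actual model (the weights, input pattern and learning rate are illustrative assumptions):

```python
# Minimal Hebbian plasticity sketch: synapses whose presynaptic input is
# co-active with the postsynaptic neuron are strengthened; silent inputs
# are left unchanged. All numbers are illustrative.

def neuron_output(weights, pre):
    """Linear integration of dendritic inputs, thresholded at zero."""
    total = sum(w * x for w, x in zip(weights, pre))
    return max(0.0, total)

def hebbian_step(weights, pre, post, lr=0.1):
    """One plasticity update: dw_i = lr * pre_i * post."""
    return [w + lr * x * post for w, x in zip(weights, pre)]

weights = [0.5, 0.5, 0.5]
pattern = [1.0, 1.0, 0.0]   # inputs 0 and 1 fire together; input 2 is silent

for _ in range(5):
    post = neuron_output(weights, pattern)
    weights = hebbian_step(weights, pattern, post)

# The co-active synapses grew while the silent one stayed put — the
# "credit" for the activity went only to inputs that participated.
print([round(w, 3) for w in weights])  # → [1.244, 1.244, 0.5]
```

In this toy case credit assignment is trivial, because activity cleanly separates the inputs; the open problem the authors describe is how real neurons select the right synapses when activity patterns are far messier.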

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29754 - Posted: 04.23.2025

By RJ Mackenzie New footage documents microglia pruning synapses at high resolution and in real time. The recordings, published in January, add a new twist to a convoluted debate about the range of these cells’ responsibilities. Microglia are the brain’s resident immune cells. For about a decade, some have also credited them with pruning excess synaptic connections during early brain development. But that idea was based on static images showing debris from destroyed synapses within the cells—which left open the possibility that microglia clean up after neurons do the actual pruning. In the January movies, though, a microglial cell expressing a green fluorescent protein clearly reaches out a ghostly green tentacle to a budding presynapse on a neuron and lifts it away, leaving the neighboring blue axon untouched. “Their imaging is superb,” says Amanda Sierra, a researcher at the Achucarro Basque Center for Neuroscience, who was not involved in the work. But “one single video, or even two single videos, however beautiful they are, are not sufficient evidence that this is the major mechanism of synapse elimination,” she says. In the new study, researchers isolated microglia and neurons from mice and grew them in culture with astrocytes, labeling the microglia, synapses and axons with different fluorescent dyes. Their approach ensured that the microglia formed ramified processes—thin, branching extensions that don’t form when they are cultured in isolation, says Ryuta Koyama, director of the Department of Translational Neurobiology at Japan’s National Center of Neurology and Psychiatry, who led the work. “People now know that ramified processes of microglia are really necessary to pick up synapses,” he says. “In normal culture systems, you can’t find ramified processes. They look amoeboid.” © 2025 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29720 - Posted: 03.27.2025

Ari Daniel Tristan Yates has no doubt about her first memory, even if it is a little fuzzy. "I was about three and a half in Callaway Gardens in Georgia," she recalls, "just running around with my twin sister trying to pick up Easter eggs." But she has zero memories before that, which is typical. This amnesia of our babyhood is pretty much the rule. "We have memories from what happened earlier today and memories from what happened earlier last week and even from a few years ago," says Yates, who's a cognitive neuroscientist at Columbia University. "But all of us lack memories from our infancy." Is that because we don't make memories when we're babies, or is there something else responsible? Now, in new research published by Yates and her colleagues in the journal Science, they propose that babies are able to form memories, even if they become inaccessible later in life. These results might reveal something crucial about the earliest moments of our development. "That's the time when we learn who our parents are, that's when we learn language, that's when we learn how to walk," Yates says. "What happens in your brain in the first two years of life is magnificent," says Nick Turk-Browne, a cognitive neuroscientist at Yale University. "That's the period of by far the greatest plasticity across your whole life span. And better understanding how your brain learns and remembers in infancy lays the foundation for everything you know and do for the rest of your life." © 2025 npr

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29715 - Posted: 03.22.2025

By Laura Sanders There are countless metaphors for memory. It’s a leaky bucket, a steel trap, a file cabinet, words written in sand. But one of the most evocative — and neuroscientifically descriptive — invokes Lego bricks. A memory is like a Lego tower. It’s built from the ground up, then broken down, put away in bins and rebuilt in a slightly different form each time it’s taken out. This metaphor is beautifully articulated by psychologists Ciara Greene and Gillian Murphy in their new book, Memory Lane. Perhaps the comparison speaks to me because I have watched my kids create elaborate villages of Lego bricks, only for them to be dismantled, put away (after much nagging) and reconstructed, always with a similar overall structure but with minor and occasionally major changes. These villages’ blueprints are largely stable, but also fluid and flexible, subject to the material whims of the builders at any point in time. Memory works this way, too, Greene and Murphy propose. Imagine your own memory lane as a series of buildings, modified in ways both small and big each time you call them to mind. “As we walk down Memory Lane, the buildings we pass — our memories of individual events — are under constant reconstruction,” Greene and Murphy write. In accessible prose, the book covers a lot of ground, from how we form memories to how delicate those memories really are. Readers may find it interesting (or perhaps upsetting) to learn how bad we all are at remembering why we did something, from trivial choices, like buying an album, to consequential ones, such as a yes or no vote on an abortion referendum. People change their reasoning — or at least, their memories of their reasoning — on these sorts of events all the time. © Society for Science & the Public 2000–2025

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29712 - Posted: 03.22.2025

By Tim Vernimmen On a rainy day in July 2024, Tim Bliss and Terje Lømo are in the best of moods, chuckling and joking over brunch, occasionally pounding the table to make a point. They’re at Lømo’s house near Oslo, Norway, where they’ve met to write about the late neuroscientist Per Andersen, in whose lab they conducted groundbreaking experiments more than 50 years ago. The duo only ever wrote one research paper together, in 1973, but that work is now considered a turning point in the study of learning and memory. Published in the Journal of Physiology, it was the first demonstration that when a neuron — a cell that receives and sends signals throughout the nervous system — signals to another neuron frequently enough, the second neuron will later respond more strongly to new signals, not for just seconds or minutes, but for hours. It would take decades to fully understand the implications of their research, but Bliss and Lømo had discovered something momentous: a phenomenon called long-term potentiation, or LTP, which researchers now know is fundamental to the brain’s ability to learn and remember. Today, scientists agree that LTP plays a major role in the strengthening of neuronal connections, or synapses, that allow the brain to adjust in response to experience. And growing evidence suggests that LTP may also be crucially involved in a variety of problems, including memory deficits and pain disorders. Bliss and Lømo never wrote another research article together. In fact, they would soon stop working on LTP — Bliss for about a decade, Lømo for the rest of his life. Although the researchers knew they had discovered something important, at first the paper “didn’t make a big splash,” Bliss says. By the early 1970s, neuroscientist Eric Kandel had demonstrated that some simple forms of learning can be explained by chemical changes in synapses — at least in a species of sea slug. But scientists didn’t yet know if such findings applied to mammals, or if they could explain more complex and enduring types of learning, such as the formation of memories that may last for years.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29694 - Posted: 03.05.2025

By Ingrid Wickelgren After shuffling the cards in a standard 52-card deck, Alex Mullen, a three-time world memory champion, can memorize their order in under 20 seconds. As he flips through the cards, he takes a mental walk through a house. At each point in his journey — the mailbox, front door, staircase and so on — he attaches a card. To recall the cards, he relives the trip. This technique, called “method of loci” or “memory palace,” is effective because it mirrors the way the brain naturally constructs narrative memories: Mullen’s memory for the card order is built on the scaffold of a familiar journey. We all do something similar every day, as we use familiar sequences of events, such as the repeated steps that unfold during a meal at a restaurant or a trip through the airport, as a home for specific details — an exceptional appetizer or an object flagged at security. The general narrative makes the noteworthy features easier to recall later. “You are taking these details and connecting them to this prior knowledge,” said Christopher Baldassano, a cognitive neuroscientist at Columbia University. “We think this is how you create your autobiographical memories.” Psychologists empirically introduced this theory some 50 years ago, but proof of such scaffolds in the brain was missing. Then, in 2018, Baldassano found it: neural fingerprints of narrative experience, derived from brain scans, that replay sequentially during standard life events. He believes that the brain builds a rich library of scripts for expected scenarios — restaurant or airport, business deal or marriage proposal — over a person’s lifetime. These standardized scripts, and departures from them, influence how and how well we remember specific instances of these event types, his lab has found. And recently, in a paper published in Current Biology in fall 2024, they showed that individuals can select a dominant script for a complex, real-world event — for example, while watching a marriage proposal in a restaurant, we might opt, subconsciously, for either a proposal or a restaurant script — which determines what details we remember. © 2025 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29685 - Posted: 02.26.2025