Links for Keyword: Learning & Memory

Links 1 - 20 of 1283

By Bob Holmes Like many of the researchers who study how people find their way from place to place, David Uttal is a poor navigator. “When I was 13 years old, I got lost on a Boy Scout hike, and I was lost for two and a half days,” recalls the Northwestern University cognitive scientist. And he’s still bad at finding his way around. The world is full of people like Uttal — and their opposites, the folks who always seem to know exactly where they are and how to get where they want to go. Scientists sometimes measure navigational ability by asking someone to point toward an out-of-sight location — or, more challenging, to imagine they are someplace else and point in the direction of a third location — and it’s immediately obvious that some people are better at it than others. “People are never perfect, but they can be as accurate as single-digit degrees off, which is incredibly accurate,” says Nora Newcombe, a cognitive psychologist at Temple University who coauthored a look at how navigational ability develops in the 2022 Annual Review of Developmental Psychology. But others, when asked to indicate the target’s direction, seem to point at random. “They have literally no idea where it is.” While it’s easy to show that people differ in navigational ability, it has proved much harder for scientists to explain why. There’s new excitement brewing in the navigation research world, though. By leveraging technologies such as virtual reality and GPS tracking, scientists have been able to watch hundreds, sometimes even millions, of people trying to find their way through complex spaces, and to measure how well they do. Though there’s still much to learn, the research suggests that to some extent, navigation skills are shaped by upbringing.
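The pointing measure described above can be made concrete with a short sketch. This is an illustration of how such a task might be scored (the scoring rule is an assumption, not taken from the article): the error is the smallest angle between the direction a participant points and the true bearing of the target.

```python
def angular_error(pointed_deg, true_deg):
    """Smallest absolute difference between two compass bearings, in degrees."""
    # Shift the difference into the range [-180, 180) before taking |.|,
    # so that bearings on either side of north compare correctly.
    diff = (pointed_deg - true_deg + 180) % 360 - 180
    return abs(diff)

# A good navigator, per Newcombe: single-digit degrees off.
print(angular_error(352, 357))   # → 5
# Wraparound is handled: pointing 10° when the target sits at 350° is 20° off.
print(angular_error(10, 350))    # → 20
```

The modulo trick is the standard way to avoid treating 359° and 1° as nearly opposite directions.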

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29255 - Posted: 04.13.2024

By Markham Heid The human hand is a marvel of nature. No other creature on Earth, not even our closest primate relatives, has hands structured quite like ours, capable of such precise grasping and manipulation. But we’re doing less intricate hands-on work than we used to. A lot of modern life involves simple movements, such as tapping screens and pushing buttons, and some experts believe our shift away from more complex hand activities could have consequences for how we think and feel. “When you look at the brain’s real estate — how it’s divided up, and where its resources are invested — a huge portion of it is devoted to movement, and especially to voluntary movement of the hands,” said Kelly Lambert, a professor of behavioral neuroscience at the University of Richmond in Virginia. Dr. Lambert, who studies effort-based rewards, said that she is interested in “the connection between the effort we put into something and the reward we get from it” and that she believes working with our hands might be uniquely gratifying. In some of her research on animals, Dr. Lambert and her colleagues found that rats that used their paws to dig up food had healthier stress hormone profiles and were better at problem solving compared with rats that were given food without having to dig. She sees some similarities in studies on people, which have found that a whole range of hands-on activities — such as knitting, gardening and coloring — are associated with cognitive and emotional benefits, including improvements in memory and attention, as well as reductions in anxiety and depression symptoms. These studies haven’t determined that hand involvement, specifically, deserves the credit. The researchers who looked at coloring, for example, speculated that it might promote mindfulness, which could be beneficial for mental health. Those who have studied knitting said something similar. 
“The rhythm and repetition of knitting a familiar or established pattern was calming, like meditation,” said Catherine Backman, a professor emeritus of occupational therapy at the University of British Columbia in Canada who has examined the link between knitting and well-being. © 2024 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 29231 - Posted: 04.02.2024

By Jake Buehler Much like squirrels, black-capped chickadees hide their food, keeping track of many thousands of little treasures wedged into cracks or holes in tree bark. When a bird returns to one of its many food caches, a particular set of nerve cells in the memory center of its brain gives a brief flash of activity. When the chickadee goes to another stash, a different combination of neurons lights up. These neural combinations act like bar codes, and identifying them may give key insights into how episodic memories — accounts of specific past events, like what you did on your birthday last year or where you’ve left your wallet — are encoded and recalled in the brain, researchers report March 29 in Cell. This kind of memory is challenging to study in animals, says Selmaan Chettih, a neuroscientist at Columbia University. “You can’t just ask a mouse what memories it formed today.” But chickadees’ very precise behavior provides a golden opportunity for researchers. Every time a chickadee makes a cache, it represents a single, well-defined moment logged in the hippocampus, a structure in the vertebrate brain vital for memory. To study the birds’ episodic memory, Chettih and his colleagues built a special arena made of 128 small, artificial storage sites, then inserted small probes into five chickadees’ brains to track the electrical activity of individual neurons, comparing that activity with detailed recordings of the birds’ body positions and behaviors. © Society for Science & the Public 2000–2024.
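As a loose illustration of the “bar code” idea — a toy sketch, not the study’s actual analysis — each caching event can be represented as a sparse population-activity vector, and two events can be compared with cosine similarity: revisiting a cache should evoke a pattern close to that cache’s code and far from the codes of other caches.

```python
import numpy as np

n_cells = 12
# Hand-built sparse "bar codes" for two caches (purely illustrative values).
cache_A = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0], dtype=float)
cache_B = np.array([0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1], dtype=float)

# On a revisit to cache A, the pattern reactivates with a little noise:
revisit_A = cache_A.copy()
revisit_A[0] = 0   # one cell drops out
revisit_A[2] = 1   # one extra cell fires

def cosine(u, v):
    """Cosine similarity between two population-activity vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The revisit pattern matches cache A's code far better than cache B's.
print(round(cosine(revisit_A, cache_A), 2), round(cosine(revisit_A, cache_B), 2))
# → 0.75 0.0
```

Identifying which stored code a new activity pattern most resembles is, in spirit, how such recordings are decoded.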

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29228 - Posted: 03.30.2024

By Angie Voyles Askham For Christopher Zimmerman, it was oysters: After a bout of nausea on a beach vacation, he could hardly touch the mollusks for months. For others, that gut-lurching trigger is white chocolate, margaritas or spicy cinnamon candy. Whatever the taste, most people know the feeling of not being able to stomach a food after it has caused—or seemed to cause—illness. That response helps us learn which foods are safe, making it essential for survival. But how the brain links an unpleasant gastric event to food consumed hours prior has long posed a mystery, says Zimmerman, who is a postdoctoral fellow in Ilana Witten’s lab at Princeton University. The time scale for this sort of conditioned food aversion is an order of magnitude different from other types of learning, which involve delays of only a few seconds, says Peter Dayan, director of computational neuroscience at the Max Planck Institute for Biological Cybernetics, who was not involved in the work. “You need to have something that bridges that gap in time” between eating and feeling ill, he says. A newly identified neuronal circuit can do just that. Neurons in the mouse brainstem that respond to drug-induced nausea reactivate a specific subset of cells in the animals’ central amygdala that encode information about a recently tasted food. And that reactivation happens with novel—but not familiar—flavors, according to work that Zimmerman presented at the annual COSYNE meeting in Lisbon last month. With new flavors, animals seem primed to recall a recent meal if they get sick, Zimmerman says. As he put it in his talk, “it suggests that the common phrase we associate with unexpected nausea, that ‘it must be something I ate,’ is literally built into the brain in the form of this evolutionarily hard-wired prior.” © 2024 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 29226 - Posted: 03.30.2024

By Max Kozlov Neurons (shown here in a coloured scanning electron micrograph) mend broken DNA during memory formation. Credit: Ted Kinsman/Science Photo Library When a long-term memory forms, some brain cells experience a rush of electrical activity so strong that it snaps their DNA. Then, an inflammatory response kicks in, repairing this damage and helping to cement the memory, a study in mice shows. The findings, published on 27 March in Nature, are “extremely exciting”, says Li-Huei Tsai, a neurobiologist at the Massachusetts Institute of Technology in Cambridge who was not involved in the work. They contribute to the picture that forming memories is a “risky business”, she says. Normally, breaks in both strands of the double helix DNA molecule are associated with diseases including cancer. But in this case, the DNA damage-and-repair cycle offers one explanation for how memories might form and last. It also suggests a tantalizing possibility: this cycle might be faulty in people with neurodegenerative diseases such as Alzheimer’s, causing a build-up of errors in a neuron’s DNA, says study co-author Jelena Radulovic, a neuroscientist at the Albert Einstein College of Medicine in New York City. This isn’t the first time that DNA damage has been associated with memory. In 2021, Tsai and her colleagues showed that double-stranded DNA breaks are widespread in the brain, and linked them with learning. To better understand the part these DNA breaks play in memory formation, Radulovic and her colleagues trained mice to associate a small electrical shock with a new environment, so that when the animals were once again put into that environment, they would ‘remember’ the experience and show signs of fear, such as freezing in place. Then the researchers examined gene activity in neurons in a brain area key to memory — the hippocampus. They found that some genes responsible for inflammation were active in a set of neurons four days after training.
Three weeks after training, the same genes were much less active. © 2024 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29223 - Posted: 03.28.2024

By Holly Barker Our understanding of memory is often summed up by a well-worn mantra: Neurons that fire together wire together. Put another way, when two brain cells simultaneously send out an impulse, their synapses strengthen, whereas connections between less active neurons slowly diminish. But there may be more to it, a new preprint suggests: To consolidate memories, synapses may also influence neighboring neurons by using a previously unknown means of communication. When synapses strengthen, they release a virus-like particle that weakens the surrounding cells’ connections, the new work shows. This novel form of plasticity may aid memory by helping some synapses to shout above the background neuronal hubbub, the researchers say. The mechanism involves the neuronal gene ARC, which is known to contribute to learning and memory and encodes a protein that assembles into virus-like capsids—protein shells that viruses use to package and spread their genetic material. ARC capsids enclose ARC messenger RNA and transfer it to nearby neurons, according to a 2018 study. This leads to an increase in ARC protein and, in turn, a decrease in the number of excitatory AMPA receptors at those cells’ synapses, the preprint shows. “ARC has this crazy virus-like biology,” says Jason Shepherd, associate professor of neurobiology at the University of Utah, who led the 2018 study and the new work. But how ARC capsids form and eject from neurons was unclear, he says. As it turns out, synaptic strengthening spurs ARC capsid release, according to the preprint. When neuronal connections strengthen, ARC capsids are packaged into vesicles, which then bubble out of neurons through their interactions with a protein called IRSp53. Surrounding cells absorb the vesicles containing ARC, which tamps down their synapses, the new work suggests. © 2024 Simons Foundation
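The “fire together, wire together” mantra in the opening sentence can be written down as a toy Hebbian update rule. This is a minimal sketch for illustration only (it models the classic mantra, not the ARC virus-like mechanism the preprint describes): synapses between co-active neurons grow, while synapses onto uncorrelated neurons grow far more slowly.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4                      # number of neurons in the toy network
W = np.zeros((n, n))       # synaptic weight matrix
eta = 0.1                  # learning rate

for _ in range(100):
    # Baseline: each neuron fires at random with probability 0.2 ...
    x = (rng.random(n) < 0.2).astype(float)
    # ... but neurons 0 and 1 always fire together (a correlated pair).
    x[0] = x[1] = 1.0
    # Hebbian update: weights grow in proportion to co-activity.
    W += eta * np.outer(x, x)
    np.fill_diagonal(W, 0.0)   # no self-connections

# The 0-1 synapse ends up much stronger than synapses onto uncorrelated cells.
print(W[0, 1], W[0, 2])
```

In this caricature, strengthening only ever accumulates; the ARC result in the article is interesting precisely because it adds a mechanism for *weakening* neighboring synapses, which a bare Hebbian rule lacks.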

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29209 - Posted: 03.23.2024

By Claudia López Lloreda Loss of smell, headaches, memory problems: COVID-19 can bring about a troubling storm of neurological symptoms that make everyday tasks difficult. Now new research adds to the evidence that inflammation in the brain might underlie these symptoms. Not all data point in the same direction. Some new studies suggest that SARS-CoV-2, the virus that causes COVID-19, directly infects brain cells. Those findings bolster the hypothesis that direct infection contributes to COVID-19-related brain problems. But the idea that brain inflammation is key has gotten fresh support: one study, for example, has identified specific brain areas prone to inflammation in people with COVID-19. “The whole body of literature is starting to come together a little bit more now and give us some more concrete answers,” says Nicola Fletcher, a neurovirologist at University College Dublin. Immunological storm When researchers started looking for a culprit for the brain problems caused by COVID-19, inflammation quickly became a key suspect. That’s because inflammation — the flood of immune cells and chemicals that the body releases against intruders — has been linked to the cognitive symptoms caused by other viruses, such as HIV. SARS-CoV-2 stimulates a strong immune response throughout the body, but it was unclear whether brain cells themselves contributed to this response and, if so, how. Helena Radbruch, a neuropathologist at the Charité – Berlin University Medicine, and her colleagues looked at brain samples from people who’d died of COVID-19. They didn’t find any cells infected with SARS-CoV-2. But they did find these people had more immune activity in certain brain areas than did people who died from other causes. This unusual activity was noticeable in regions such as the olfactory bulb, which is involved in smell, and the brainstem, which controls some bodily functions, such as breathing.
It was seen only in the brains of people who had died soon after catching the virus. © 2024 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29202 - Posted: 03.21.2024

By Julian E. Barnes New studies by the National Institutes of Health failed to find evidence of brain injury in scans or blood markers of the diplomats and spies who suffered symptoms of Havana syndrome, bolstering the conclusions of U.S. intelligence agencies about the strange health incidents. Spy agencies have concluded that the debilitating symptoms associated with Havana syndrome, including dizziness and migraines, are not the work of a hostile foreign power. They have not identified a weapon or device that caused the injuries, and intelligence analysts now believe the symptoms are most likely explained by environmental factors, existing medical conditions or stress. The lead scientist on one of the two new studies said that while the study was not designed to find a cause, the findings were consistent with those determinations. The authors said the studies are at odds with findings from researchers at the University of Pennsylvania, who found differences in brain scans of people with Havana syndrome symptoms and a control group. Dr. David Relman, a prominent scientist who has had access to the classified files involving the cases and representatives of people suffering from Havana syndrome, said the new studies were flawed. Many brain injuries are difficult to detect with scans or blood markers, he said. He added that the findings do not dispute that an external force, like a directed energy device, could have injured the current and former government workers. The studies were published in The Journal of the American Medical Association on Monday alongside an editorial by Dr. Relman that was critical of the findings. © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29196 - Posted: 03.19.2024

By Laura Dattaro Steven McCarroll just wanted to compare how different cell types express genes in people with and without schizophrenia. But when he sequenced the transcriptomes of more than 1 million cortical cells from 191 postmortem brains, what leapt out from the data went far beyond his simple case-control comparison: Astrocytes and neurons from all of the brains coordinate their expression of certain genes needed for healthy synapses, a relationship the team dubbed the Synaptic Neuron-and-Astrocyte Program (SNAP) and described in a paper published in Nature today. “The data led us to something much more exciting and surprising than what we set out to do,” says McCarroll, professor of biomedical science and genetics at Harvard Medical School. SNAP is an intricate dance, McCarroll and his colleagues found: The more a person’s neurons express synaptic genes, the more their astrocytes do too, but this coordination wanes in older people and in those with schizophrenia. Because astrocytes — a type of glial cell — and neurons are in constant communication and the findings are correlational, it’s unclear which cell type choreographs this dance. But other evidence suggests that astrocytes take the lead, says Stephen Quake, professor of bioengineering at Stanford University, who was not involved in McCarroll’s work. In mice trained to fear a foot shock, for example, neurons involved in memory formation express neurotensin, whereas astrocytes express a receptor for it, Quake and his colleagues reported last month in Nature. But when they inhibited the animals’ astrocytes during fear training, the mice performed worse on memory tests, suggesting those cells play an active role in long-term memory formation, Quake says — and govern the relationship McCarroll found. © 2024 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 13: Memory and Learning
Link ID: 29183 - Posted: 03.07.2024

By Erica Goode Authors don’t get to choose what’s going on in the world when their books are published. More than a few luckless writers ended up with a publication date of Sept. 11, 2001, or perhaps Nov. 8, 2016, the day Donald Trump was elected. But Charan Ranganath, the author of “Why We Remember: Unlocking Memory’s Power to Hold on to What Matters,” was more fortunate. His book went on sale last month, not long after the Department of Justice released a report describing President Joe Biden as an “elderly man with a poor memory” who, in interviews, was “struggling to remember events,” including the year that his son Beau died. BOOK REVIEW — “Why We Remember: Unlocking Memory’s Power to Hold on to What Matters,” by Charan Ranganath (Doubleday, 304 pages). The special counsel’s report immediately became a topic of intense discussion — disputed by the White House, seized on by many Republicans, analyzed by media commentators, and satirized by late-night television hosts. But for Ranganath, a psychologist and neuroscientist at the University of California, Davis, who for decades has been studying the workings of memory, the report’s release was a stroke of luck. His book, which dispels many widespread but wrongheaded assumptions about memory — including some to which special counsel Robert K. Hur appears to subscribe — could easily have been written as a corrective response. If Ranganath has a central message, it is that we are far too concerned about forgetting. Memory does not work like a recording device, preserving everything we have heard, seen, said, and done. Not remembering names or exact dates; having no recollection of the details of a conversation; being unable to recall where you left your glasses or your keys; or watching movies you saw in the past as if you are seeing them for the first time — these are not the symptoms of a failing brain.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29172 - Posted: 03.02.2024

Terry Gross When cognitive neuroscientist Charan Ranganath meets someone for the first time, he's often asked, "Why am I so forgetful?" But Ranganath says he's more interested in what we remember, rather than the things we forget. "We're not designed to carry tons and tons of junk with us. I don't know that anyone would want to remember every temporary password that they've ever had," he says. "I think what [the human brain is] designed for is to carry what we need and to deploy it rapidly when we need it." Ranganath directs the Dynamic Memory Lab at the University of California, Davis, where he's a professor of psychology and neuroscience. In the new book, Why We Remember, he writes about the fundamental mechanisms of memory — and why memories often change over time. Ranganath recently wrote an op-ed for The New York Times in which he reflected on President Biden's memory gaffes — and the role that memory plays in the current election cycle. "I'm just not in the position to say anything about the specifics of [either Biden or Trump's] memory problems," he says. "This is really more of an issue of people understanding what happens with aging. And, one of the nice things about writing this editorial is I got a lot of feedback from people who felt personally relieved by this because they're worried about their own memories." I think it would be a good idea to have a comprehensive physical and mental health evaluation that's fairly transparent. We certainly have transparency or seek transparency about other things like a candidate's finances, for instance. And obviously health is a very important factor. And I think at the end of the day, we'll still be in a position of saying, "OK, what's enough? What's the line between healthy and unhealthy?" But I think it's important to do because yes, as we get older we do have memory problems. ... © 2024 npr

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 13: Memory and Learning
Link ID: 29166 - Posted: 02.27.2024

By David Marchese Our memories form the bedrock of who we are. Those recollections, in turn, are built on one very simple assumption: This happened. But things are not quite so simple. “We update our memories through the act of remembering,” says Charan Ranganath, a professor of psychology and neuroscience at the University of California, Davis, and the author of the illuminating new book “Why We Remember.” “So it creates all these weird biases and infiltrates our decision making. It affects our sense of who we are.” Rather than being photo-accurate repositories of past experience, Ranganath argues, our memories function more like active interpreters, working to help us navigate the present and future. The implication is that who we are, and the memories we draw on to determine that, are far less fixed than you might think. “Our identities,” Ranganath says, “are built on shifting sand.” What is the most common misconception about memory? People believe that memory should be effortless, but their expectations for how much they should remember are totally out of whack with how much they’re capable of remembering. Another misconception is that memory is supposed to be an archive of the past. We expect that we should be able to replay the past like a movie in our heads. The problem with that assumption is that we don’t replay the past as it happened; we do it through a lens of interpretation and imagination. It’s exceptionally hard to answer the question of how much we can remember. What I’ll say is that we can remember an extraordinary amount of detail that would make you feel at times as if you have a photographic memory. We’re capable of these extraordinary feats. I would argue that we’re all everyday-memory experts, because we have this exceptional semantic memory (the memory of facts and knowledge about the world), which is the scaffold for episodic memory.
I know it sounds squirmy to say, “Well, I can’t answer the question of how much we remember,” but I don’t want readers to walk away thinking memory is all made up. © 2024 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 29134 - Posted: 02.06.2024

By Ashley Juavinett In the 2010 award-winning film “Inception,” Leonardo DiCaprio’s character and others run around multiple layers of someone’s consciousness, trying to implant an idea in the person’s mind. If you can plant something deep enough, the film suggests, you can make them believe it is their own idea. The film was billed as science fiction, but three years later, in 2013, researchers actually did this — in a mouse, at least. The work focused on the hippocampus, along with its closely interconnected structures, long recognized by scientists to hold our dearest memories. If you damage significant portions of just one region of your hippocampus, the dentate gyrus, you’ll lose the ability to form new memories. How these memories are stored, however, is still up for debate. One early but persistent idea posits that enduring changes in our neural circuitry, or “engrams,” may represent the physical traces of specific memories. An engram is sometimes thought of as a group of cells, along with their synaptic weights and connections throughout the brain. In sum, the engram is what DiCaprio’s character would have had to discreetly manipulate in his target. In 2012, a team in Susumu Tonegawa’s lab at the Massachusetts Institute of Technology (MIT) showed that you could mark the cells of a real memory engram and reactivate them later. Taking that work one step further, Steve Ramirez, Xu Liu and others in Tonegawa’s lab demonstrated the following year that you can implant a memory of something that never even happened. In doing so, they turned science fiction into reality, one tiny foot shock at a time. Published in Science, Ramirez and Liu’s study is a breath of fresh air, scientifically speaking. 
The abstract starts with one of the shortest sentences you’ll ever find in a scientific manuscript: “Memories can be unreliable.” The entire paper is extremely readable, and there is no shortage of related papers and review articles that you could give your students to read for additional context. © 2024 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 14: Attention and Higher Cognition
Link ID: 29131 - Posted: 02.06.2024

By Ellen Barry At the root of post-traumatic stress disorder, or PTSD, is a memory that cannot be controlled. It may intrude on everyday activity, thrusting a person into the middle of a horrifying event, or surface as night terrors or flashbacks. Decades of treatment of military veterans and sexual assault survivors have left little doubt that traumatic memories function differently from other memories. A group of researchers at Yale University and the Icahn School of Medicine at Mount Sinai set out to find empirical evidence of those differences. The team conducted brain scans of 28 people with PTSD while they listened to recorded narrations of their own memories. Some of the recorded memories were neutral, some were simply “sad,” and some were traumatic. The brain scans found clear differences, the researchers reported in a paper published on Thursday in the journal Nature Neuroscience. The people listening to the sad memories, which often involved the death of a family member, showed consistently high engagement of the hippocampus, part of the brain that organizes and contextualizes memories. When the same people listened to their traumatic memories — of sexual assaults, fires, school shootings and terrorist attacks — the hippocampus was not involved. “What it tells us is that the brain is in a different state in the two memories,” said Daniela Schiller, a neuroscientist at the Icahn School of Medicine at Mount Sinai and one of the authors of the study. She noted that therapies for PTSD often sought to help people organize their memory so they can view it as distant from the present. “Now we find something that potentially can explain it in the brain,” she said. “The brain doesn’t look like it’s in a state of memory; it looks like it is a state of present experience.” Indeed, the authors conclude in the paper, “traumatic memories are not experienced as …” © 2023 The New York Times Company

Related chapters from BN: Chapter 15: Emotions, Aggression, and Stress; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 11: Emotions, Aggression, and Stress; Chapter 13: Memory and Learning
Link ID: 29030 - Posted: 12.02.2023

By John Krakauer & Tamar Makin The human brain’s ability to adapt and change, known as neuroplasticity, has long captivated both the scientific community and the public imagination. It’s a concept that brings hope and fascination, especially when we hear extraordinary stories of, for example, blind individuals developing heightened senses that enable them to navigate through a cluttered room purely based on echolocation or stroke survivors miraculously regaining motor abilities once thought lost. For years, the notion that neurological challenges such as blindness, deafness, amputation or stroke lead to dramatic and significant changes in brain function has been widely accepted. These narratives paint a picture of a highly malleable brain that is capable of dramatic reorganization to compensate for lost functions. It’s an appealing notion: the brain, in response to injury or deficit, unlocks untapped potentials, rewires itself to achieve new capabilities and self-repurposes its regions to achieve new functions. This idea can also be linked with the widespread, though inherently false, myth that we only use 10 percent of our brain, suggesting that we have extensive neural reserves to lean on in times of need. But how accurate is this portrayal of the brain’s adaptive abilities to reorganize? Are we truly able to tap into reserves of unused brain potential following an injury, or have these captivating stories led to a misunderstanding of the brain’s true plastic nature? In a paper we wrote for the journal eLife, we delved into the heart of these questions, analyzing classical studies and reevaluating long-held beliefs about cortical reorganization and neuroplasticity. What we found offers a compelling new perspective on how the brain adapts to change and challenges some of the popularized notions about its flexible capacity for recovery. 
The roots of this fascination can be traced back to neuroscientist Michael Merzenich’s pioneering work, and it was popularized through books such as Norman Doidge’s The Brain That Changes Itself. Merzenich’s insights were built on the influential studies of Nobel Prize–winning neuroscientists David Hubel and Torsten Wiesel, who explored ocular dominance in kittens. © 2023 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 16: Psychopathology: Biological Basis of Behavior Disorders; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 12: Psychopathology: The Biology of Behavioral Disorders; Chapter 13: Memory and Learning
Link ID: 29019 - Posted: 11.22.2023

By Catherine Offord Close your eyes and picture yourself running an errand across town. You can probably imagine the turns you’d need to take and the landmarks you’d encounter. This ability to conjure such scenarios in our minds is thought to be crucial to humans’ capacity to plan ahead. But it may not be uniquely human: Rats also seem to be able to “imagine” moving through mental environments, researchers report today in Science. Rodents trained to navigate within a virtual arena could, in return for a reward, activate the same neural patterns they’d shown while navigating—even when they were standing still. That suggests rodents can voluntarily access mental maps of places they’ve previously visited. “We know humans carry around inside their heads representations of all kinds of spaces: rooms in your house, your friends’ houses, shops, libraries, neighborhoods,” says Sean Polyn, a psychologist at Vanderbilt University who was not involved in the research. “Just by the simple act of reminiscing, we can place ourselves in these spaces—to think that we’ve got an animal analog of that very human imaginative act is very impressive.” Researchers think humans’ mental maps are encoded in the hippocampus, a brain region involved in memory. As we move through an environment, cells in this region fire in particular patterns depending on our location. When we later revisit—or simply think about visiting—those locations, the same hippocampal signatures are activated. Rats also encode spatial information in the hippocampus. But it’s been impossible to establish whether they have a similar capacity for voluntary mental navigation because of the practical challenges of getting a rodent to think about a particular place on cue, says study author Chongxi Lai, who conducted the work while a graduate student and later a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. 
In their new study, Lai, along with Janelia neuroscientist Albert Lee and colleagues, found a way around this problem by developing a brain-machine interface that rewarded rats for navigating their surroundings using only their thoughts.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 28989 - Posted: 11.04.2023

By Jake Buehler A fruit bat hanging in the corner of a cave stirs; it is ready to move. It scans the space to look for a free perch and then takes flight, adjusting its membranous wings to angle an approach to a spot next to one of its fuzzy fellows. As it does so, neurological data lifted from its brain is broadcast to sensors installed in the cave’s walls. This is no balmy cave along the Mediterranean Sea. The group of Egyptian fruit bats is in Berkeley, California, navigating an artificial cave in a laboratory that researchers have set up to study the inner workings of the animals’ minds. The researchers had an idea: that as a bat navigates its physical environment, it’s also navigating a network of social relationships. They wanted to know whether the bats use the same or different parts of their brain to map these intersecting realities. In a new study published in Nature in August, the scientists revealed that these maps overlap. The brain cells informing a bat of its own location also encode details about other bats nearby — not only their location, but also their identities. The findings raise the intriguing possibility that evolution can program those neurons for multiple purposes to serve the needs of different species. The neurons in question are located in the hippocampus, a structure deep within the mammalian brain that is involved in the creation of long-term memories. A special population of hippocampal neurons, known as place cells, are thought to create an internal navigation system. First identified in the rat hippocampus in 1971 by the neuroscientist John O’Keefe, place cells fire when an animal is in a particular location; different place cells encode different places. This system helps animals determine where they are, where they need to go and how to get from here to there. 
In 2014, O’Keefe was awarded the Nobel Prize for his discovery of place cells, and over the last several decades they have been identified in multiple primate species, including humans. However, moving from place to place isn’t the only way an animal can experience a change in its surroundings. In your home, the walls and furniture mostly stay the same from day to day, said Michael Yartsev, who studies the neural basis of natural behavior at the University of California, Berkeley, and co-led the new work. But the social context of your living space could change quite regularly. © 2023 An editorially independent publication supported by the Simons Foundation.
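The place-cell scheme described above can be pictured with a toy sketch: each cell prefers one location in a small arena, and the identity of the most active cell reports where the animal is. This is purely illustrative (the class, the Gaussian firing profile and the 3-by-3 grid are invented for this sketch, not taken from the study).

```python
import math

class PlaceCell:
    """Toy place cell: fires most strongly near its preferred location."""
    def __init__(self, center, field_radius=1.0):
        self.center = center              # preferred (x, y) location
        self.field_radius = field_radius  # size of the place field

    def firing_rate(self, position):
        # Gaussian falloff: peak rate at the field center, near zero outside it.
        d2 = (position[0] - self.center[0]) ** 2 + (position[1] - self.center[1]) ** 2
        return math.exp(-d2 / (2 * self.field_radius ** 2))

# A small population tiling a 3x3 arena; together their activity encodes location.
cells = [PlaceCell((x, y)) for x in range(3) for y in range(3)]

def population_code(position):
    return [c.firing_rate(position) for c in cells]

# The most active cell identifies where the animal is.
rates = population_code((2.0, 1.0))
best = max(range(len(cells)), key=lambda i: rates[i])
```

In this sketch, standing at (2, 1) makes the cell whose field is centered there the most active one, which is the sense in which a population of place cells forms an internal map.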

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 28982 - Posted: 11.01.2023

By Clay Risen Endel Tulving, whose insights into the structure of human memory and the way we recall the past revolutionized the field of cognitive psychology, died on Sept. 11 in Mississauga, Ontario. He was 96. His daughters, Linda Tulving and Elo Tulving-Blais, said his death, at an assisted living home, was caused by complications of a stroke. Until Dr. Tulving began his pathbreaking work in the 1960s, most cognitive psychologists were more interested in understanding how people learn things than in how they retain and recall them. When they did think about memory, they often depicted it as one giant cerebral warehouse, packed higgledy-piggledy, with only a vague conception of how we retrieved those items. This, they asserted, was the realm of “the mind,” an untestable, almost philosophical construct. Dr. Tulving, who spent most of his career at the University of Toronto, first made his name with a series of clever experiments and papers, demonstrating how the mind organizes memories and how it uses contextual cues to retrieve them. Forgetting, he posited, was less about information loss than it was about the lack of cues to retrieve it. He established his legacy with a chapter in the 1972 book “Organization of Memory,” which he edited with Wayne Donaldson. In that chapter, he argued for a taxonomy of memory types. He started with two: procedural memory, which is largely unconscious and involves things like how to walk or ride a bicycle, and declarative memory, which is conscious and discrete. © 2023 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 28934 - Posted: 09.29.2023

By Veronique Greenwood In the dappled sunlit waters of Caribbean mangrove forests, tiny box jellyfish bob in and out of the shade. Box jellies are distinguished from true jellyfish in part by their complex visual system — the grape-size predators have 24 eyes. But like other jellyfish, they are brainless, controlling their cube-shaped bodies with a distributed network of neurons. That network, it turns out, is more sophisticated than you might assume. On Friday, researchers published a report in the journal Current Biology indicating that the box jellyfish species Tripedalia cystophora has the ability to learn. Because box jellyfish diverged from our part of the animal kingdom long ago, understanding their cognitive abilities could help scientists trace the evolution of learning. The tricky part about studying learning in box jellies was finding an everyday behavior that scientists could train the creatures to perform in the lab. Anders Garm, a biologist at the University of Copenhagen and an author of the new paper, said his team decided to focus on a swift about-face that box jellies execute when they are about to hit a mangrove root. These roots rise through the water like black towers, while the water around them appears pale by comparison. But the contrast between the two can change from day to day, as silt clouds the water and makes it more difficult to tell how far away a root is. How do box jellies tell when they are getting too close? “The hypothesis was, they need to learn this,” Dr. Garm said. “When they come back to these habitats, they have to learn, how is today’s water quality? How is the contrast changing today?” In the lab, researchers produced images of alternating dark and light stripes, representing the mangrove roots and water, and used them to line the insides of buckets about six inches wide. When the stripes were a stark black and white, representing optimum water clarity, box jellies never got close to the bucket walls.
With less contrast between the stripes, however, box jellies immediately began to run into them. This was the scientists’ chance to see if they would learn. © 2023 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 28925 - Posted: 09.23.2023

By Saugat Bolakhe Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way. Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations. The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. 
James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid-1990s. All Rights Reserved © 2023
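The two-system idea described above — a hippocampus-like system that encodes each new experience rapidly, and a neocortex-like system that integrates experiences slowly — can be caricatured in a few lines. This is a minimal sketch of the complementary-learning-systems intuition only; the study itself used artificial neural networks, and both classes and numbers here are invented for illustration.

```python
class FastLearner:
    """Hippocampus-like caricature: one-shot storage; the latest experience
    replaces the running estimate entirely."""
    def __init__(self):
        self.estimate = 0.0

    def learn(self, x):
        self.estimate = x

class SlowLearner:
    """Neocortex-like caricature: gradual integration; each experience nudges
    the estimate a small step, so stable regularities dominate."""
    def __init__(self, rate=0.1):
        self.estimate = 0.0
        self.rate = rate

    def learn(self, x):
        self.estimate += self.rate * (x - self.estimate)

fast, slow = FastLearner(), SlowLearner()

# A predictable quantity (value 1.0, like a routine breakfast) followed by
# one unusual outlier (5.0, like that unique drink at one party).
for x in [1.0] * 50 + [5.0]:
    fast.learn(x)
    slow.learn(x)
```

After this stream, the fast system holds the one-off experience exactly, while the slow system stays close to the recurring regularity — the kind of division of labor the theory proposes when sorting memories by their future usefulness.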

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 28900 - Posted: 09.07.2023