Links for Keyword: Learning & Memory

Links 1 - 20 of 1279

By Max Kozlov When a long-term memory forms, some brain cells experience a rush of electrical activity so strong that it snaps their DNA. Then, an inflammatory response kicks in, repairing this damage and helping to cement the memory, a study in mice shows. The findings, published on 27 March in Nature, are “extremely exciting”, says Li-Huei Tsai, a neurobiologist at the Massachusetts Institute of Technology in Cambridge who was not involved in the work. They contribute to the picture that forming memories is a “risky business”, she says. Normally, breaks in both strands of the double helix DNA molecule are associated with diseases including cancer. But in this case, the DNA damage-and-repair cycle offers one explanation for how memories might form and last. It also suggests a tantalizing possibility: this cycle might be faulty in people with neurodegenerative diseases such as Alzheimer’s, causing a build-up of errors in a neuron’s DNA, says study co-author Jelena Radulovic, a neuroscientist at the Albert Einstein College of Medicine in New York City. This isn’t the first time that DNA damage has been associated with memory. In 2021, Tsai and her colleagues showed that double-stranded DNA breaks are widespread in the brain, and linked them with learning. To better understand the part these DNA breaks play in memory formation, Radulovic and her colleagues trained mice to associate a small electrical shock with a new environment, so that when the animals were once again put into that environment, they would ‘remember’ the experience and show signs of fear, such as freezing in place. Then the researchers examined gene activity in neurons in a brain area key to memory — the hippocampus. They found that some genes responsible for inflammation were active in a set of neurons four days after training. Three weeks after training, the same genes were much less active. © 2024 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29223 - Posted: 03.28.2024

By Holly Barker Our understanding of memory is often summed up by a well-worn mantra: Neurons that fire together wire together. Put another way, when two brain cells simultaneously send out an impulse, their synapses strengthen, whereas connections between less active neurons slowly diminish. But there may be more to it, a new preprint suggests: To consolidate memories, synapses may also influence neighboring neurons by using a previously unknown means of communication. When synapses strengthen, they release a virus-like particle that weakens the surrounding cells’ connections, the new work shows. This novel form of plasticity may aid memory by helping some synapses to shout above the background neuronal hubbub, the researchers say. The mechanism involves the neuronal gene ARC, which is known to contribute to learning and memory and encodes a protein that assembles into virus-like capsids—protein shells that viruses use to package and spread their genetic material. ARC capsids enclose ARC messenger RNA and transfer it to nearby neurons, according to a 2018 study. This leads to an increase in ARC protein and, in turn, a decrease in the number of excitatory AMPA receptors at those cells’ synapses, the preprint shows. “ARC has this crazy virus-like biology,” says Jason Shepherd, associate professor of neurobiology at the University of Utah, who led the 2018 study and the new work. But how ARC capsids form and eject from neurons was unclear, he says. As it turns out, synaptic strengthening spurs ARC capsid release, according to the preprint. When neuronal connections strengthen, ARC capsids are packaged into vesicles, which then bubble out of neurons through their interactions with a protein called IRSp53. Surrounding cells absorb the vesicles containing ARC, which tamps down their synapses, the new work suggests. © 2024 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29209 - Posted: 03.23.2024

By Claudia López Lloreda Loss of smell, headaches, memory problems: COVID-19 can bring about a troubling storm of neurological symptoms that make everyday tasks difficult. Now new research adds to the evidence that inflammation in the brain might underlie these symptoms. Not all data point in the same direction. Some new studies suggest that SARS-CoV-2, the virus that causes COVID-19, directly infects brain cells. Those findings bolster the hypothesis that direct infection contributes to COVID-19-related brain problems. But the idea that brain inflammation is key has gotten fresh support: one study, for example, has identified specific brain areas prone to inflammation in people with COVID-19. “The whole body of literature is starting to come together a little bit more now and give us some more concrete answers,” says Nicola Fletcher, a neurovirologist at University College Dublin. When researchers started looking for a culprit for the brain problems caused by COVID-19, inflammation quickly became a key suspect. That’s because inflammation — the flood of immune cells and chemicals that the body releases against intruders — has been linked to the cognitive symptoms caused by other viruses, such as HIV. SARS-CoV-2 stimulates a strong immune response throughout the body, but it was unclear whether brain cells themselves contributed to this response and, if so, how. Helena Radbruch, a neuropathologist at the Charité – Berlin University Medicine, and her colleagues looked at brain samples from people who’d died of COVID-19. They didn’t find any cells infected with SARS-CoV-2. But they did find these people had more immune activity in certain brain areas than did people who died from other causes. This unusual activity was noticeable in regions such as the olfactory bulb, which is involved in smell, and the brainstem, which controls some bodily functions, such as breathing. It was seen only in the brains of people who had died soon after catching the virus. © 2024 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29202 - Posted: 03.21.2024

By Julian E. Barnes New studies by the National Institutes of Health failed to find evidence of brain injury in scans or blood markers of the diplomats and spies who suffered symptoms of Havana syndrome, bolstering the conclusions of U.S. intelligence agencies about the strange health incidents. Spy agencies have concluded that the debilitating symptoms associated with Havana syndrome, including dizziness and migraines, are not the work of a hostile foreign power. They have not identified a weapon or device that caused the injuries, and intelligence analysts now believe the symptoms are most likely explained by environmental factors, existing medical conditions or stress. The lead scientist on one of the two new studies said that while the study was not designed to find a cause, the findings were consistent with those determinations. The authors said the studies are at odds with findings from researchers at the University of Pennsylvania, who found differences in brain scans of people with Havana syndrome symptoms and a control group. Dr. David Relman, a prominent scientist who has had access to the classified files involving the cases and representatives of people suffering from Havana syndrome, said the new studies were flawed. Many brain injuries are difficult to detect with scans or blood markers, he said. He added that the findings do not dispute that an external force, like a directed energy device, could have injured the current and former government workers. The studies were published in The Journal of the American Medical Association on Monday alongside an editorial by Dr. Relman that was critical of the findings. © 2024 The New York Times Company

Related chapters from BN: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 14: Attention and Higher Cognition; Chapter 13: Memory and Learning
Link ID: 29196 - Posted: 03.19.2024

By Laura Dattaro Steven McCarroll just wanted to compare how different cell types express genes in people with and without schizophrenia. But when he sequenced the transcriptomes of more than 1 million cortical cells from 191 postmortem brains, what leapt out from the data went far beyond his simple case-control comparison: Astrocytes and neurons from all of the brains coordinate their expression of certain genes needed for healthy synapses, a relationship the team dubbed the Synaptic Neuron-and-Astrocyte Program (SNAP) and described in a paper published in Nature today. “The data led us to something much more exciting and surprising than what we set out to do,” says McCarroll, professor of biomedical science and genetics at Harvard Medical School. SNAP is an intricate dance, McCarroll and his colleagues found: The more a person’s neurons express synaptic genes, the more their astrocytes do too, but this coordination wanes in older people and in those with schizophrenia. Because astrocytes — a type of glial cell — and neurons are in constant communication and the findings are correlational, it’s unclear which cell type choreographs this dance. But other evidence suggests that astrocytes take the lead, says Stephen Quake, professor of bioengineering at Stanford University, who was not involved in McCarroll’s work. In mice trained to fear a foot shock, for example, neurons involved in memory formation express neurotensin, whereas astrocytes express a receptor for it, Quake and his colleagues reported last month in Nature. But when they inhibited the animals’ astrocytes during fear training, the mice performed worse on memory tests, suggesting those cells play an active role in long-term memory formation, Quake says — and govern the relationship McCarroll found. © 2024 Simons Foundation

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29183 - Posted: 03.07.2024

By Erica Goode Authors don’t get to choose what’s going on in the world when their books are published. More than a few luckless writers ended up with a publication date of Sept. 11, 2001, or perhaps Nov. 8, 2016, the day Donald Trump was elected. But Charan Ranganath, the author of “Why We Remember: Unlocking Memory’s Power to Hold on to What Matters,” was more fortunate. His book went on sale last month, not long after the Department of Justice released a report describing President Joe Biden as an “elderly man with a poor memory” who, in interviews, was “struggling to remember events,” including the year that his son Beau died. BOOK REVIEW — “Why We Remember: Unlocking Memory’s Power to Hold on to What Matters,” by Charan Ranganath (Doubleday, 304 pages). The special counsel’s report immediately became a topic of intense discussion — disputed by the White House, seized on by many Republicans, analyzed by media commentators, and satirized by late-night television hosts. But for Ranganath, a psychologist and neuroscientist at the University of California, Davis, who for decades has been studying the workings of memory, the report’s release was a stroke of luck. His book, which dispels many widespread but wrongheaded assumptions about memory — including some to which special counsel Robert K. Hur appears to subscribe — could easily have been written as a corrective response. If Ranganath has a central message, it is that we are far too concerned about forgetting. Memory does not work like a recording device, preserving everything we have heard, seen, said, and done. Not remembering names or exact dates; having no recollection of the details of a conversation; being unable to recall where you left your glasses or your keys; or watching movies you saw in the past as if you are seeing them for the first time — these are not the symptoms of a failing brain.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29172 - Posted: 03.02.2024

Terry Gross When cognitive neuroscientist Charan Ranganath meets someone for the first time, he's often asked, "Why am I so forgetful?" But Ranganath says he's more interested in what we remember, rather than the things we forget. "We're not designed to carry tons and tons of junk with us. I don't know that anyone would want to remember every temporary password that they've ever had," he says. "I think what [the human brain is] designed for is to carry what we need and to deploy it rapidly when we need it." Ranganath directs the Dynamic Memory Lab at the University of California, Davis, where he's a professor of psychology and neuroscience. In the new book, Why We Remember, he writes about the fundamental mechanisms of memory — and why memories often change over time. Ranganath recently wrote an op-ed for The New York Times in which he reflected on President Biden's memory gaffes — and the role that memory plays in the current election cycle. "I'm just not in the position to say anything about the specifics of [either Biden or Trump's] memory problems," he says. "This is really more of an issue of people understanding what happens with aging. And, one of the nice things about writing this editorial is I got a lot of feedback from people who felt personally relieved by this because they're worried about their own memories." On whether candidates should undergo health evaluations, Ranganath adds: I think it would be a good idea to have a comprehensive physical and mental health evaluation that's fairly transparent. We certainly have transparency or seek transparency about other things like a candidate's finances, for instance. And obviously health is a very important factor. And I think at the end of the day, we'll still be in a position of saying, "OK, what's enough? What's the line between healthy and unhealthy?" But I think it's important to do because yes, as we get older we do have memory problems. ... © 2024 npr

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29166 - Posted: 02.27.2024

By David Marchese Our memories form the bedrock of who we are. Those recollections, in turn, are built on one very simple assumption: This happened. But things are not quite so simple. “We update our memories through the act of remembering,” says Charan Ranganath, a professor of psychology and neuroscience at the University of California, Davis, and the author of the illuminating new book “Why We Remember.” “So it creates all these weird biases and infiltrates our decision making. It affects our sense of who we are.” Rather than being photo-accurate repositories of past experience, Ranganath argues, our memories function more like active interpreters, working to help us navigate the present and future. The implication is that who we are, and the memories we draw on to determine that, are far less fixed than you might think. “Our identities,” Ranganath says, “are built on shifting sand.” What is the most common misconception about memory? People believe that memory should be effortless, but their expectations for how much they should remember are totally out of whack with how much they’re capable of remembering. Another misconception is that memory is supposed to be an archive of the past. We expect that we should be able to replay the past like a movie in our heads. The problem with that assumption is that we don’t replay the past as it happened; we do it through a lens of interpretation and imagination. It’s exceptionally hard to answer the question of how much we can remember. What I’ll say is that we can remember an extraordinary amount of detail that would make you feel at times as if you have a photographic memory. We’re capable of these extraordinary feats. I would argue that we’re all everyday-memory experts, because we have this exceptional semantic memory (the term for the memory of facts and knowledge about the world), which is the scaffold for episodic memory. I know it sounds squirmy to say, “Well, I can’t answer the question of how much we remember,” but I don’t want readers to walk away thinking memory is all made up. © 2024 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 29134 - Posted: 02.06.2024

By Ashley Juavinett In the 2010 award-winning film “Inception,” Leonardo DiCaprio’s character and others run around multiple layers of someone’s consciousness, trying to implant an idea in the person’s mind. If you can plant something deep enough, the film suggests, you can make them believe it is their own idea. The film was billed as science fiction, but three years later, in 2013, researchers actually did this — in a mouse, at least. The work focused on the hippocampus, along with its closely interconnected structures, long recognized by scientists to hold our dearest memories. If you damage significant portions of just one region of your hippocampus, the dentate gyrus, you’ll lose the ability to form new memories. How these memories are stored, however, is still up for debate. One early but persistent idea posits that enduring changes in our neural circuitry, or “engrams,” may represent the physical traces of specific memories. An engram is sometimes thought of as a group of cells, along with their synaptic weights and connections throughout the brain. In sum, the engram is what DiCaprio’s character would have had to discreetly manipulate in his target. In 2012, a team in Susumu Tonegawa’s lab at the Massachusetts Institute of Technology (MIT) showed that you could mark the cells of a real memory engram and reactivate them later. Taking that work one step further, Steve Ramirez, Xu Liu and others in Tonegawa’s lab demonstrated the following year that you can implant a memory of something that never even happened. In doing so, they turned science fiction into reality, one tiny foot shock at a time. Published in Science, Ramirez and Liu’s study is a breath of fresh air, scientifically speaking. The abstract starts with one of the shortest sentences you’ll ever find in a scientific manuscript: “Memories can be unreliable.” The entire paper is extremely readable, and there is no shortage of related papers and review articles that you could give your students to read for additional context. © 2024 Simons Foundation

Related chapters from BN: Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 14: Attention and Higher Cognition
Link ID: 29131 - Posted: 02.06.2024

By Ellen Barry At the root of post-traumatic stress disorder, or PTSD, is a memory that cannot be controlled. It may intrude on everyday activity, thrusting a person into the middle of a horrifying event, or surface as night terrors or flashbacks. Decades of treatment of military veterans and sexual assault survivors have left little doubt that traumatic memories function differently from other memories. A group of researchers at Yale University and the Icahn School of Medicine at Mount Sinai set out to find empirical evidence of those differences. The team conducted brain scans of 28 people with PTSD while they listened to recorded narrations of their own memories. Some of the recorded memories were neutral, some were simply “sad,” and some were traumatic. The brain scans found clear differences, the researchers reported in a paper published on Thursday in the journal Nature Neuroscience. The people listening to the sad memories, which often involved the death of a family member, showed consistently high engagement of the hippocampus, part of the brain that organizes and contextualizes memories. When the same people listened to their traumatic memories — of sexual assaults, fires, school shootings and terrorist attacks — the hippocampus was not involved. “What it tells us is that the brain is in a different state in the two memories,” said Daniela Schiller, a neuroscientist at the Icahn School of Medicine at Mount Sinai and one of the authors of the study. She noted that therapies for PTSD often sought to help people organize their memory so they can view it as distant from the present. “Now we find something that potentially can explain it in the brain,” she said. “The brain doesn’t look like it’s in a state of memory; it looks like it is a state of present experience.” © 2023 The New York Times Company

Related chapters from BN: Chapter 15: Emotions, Aggression, and Stress; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 11: Emotions, Aggression, and Stress; Chapter 13: Memory and Learning
Link ID: 29030 - Posted: 12.02.2023

By John Krakauer & Tamar Makin The human brain’s ability to adapt and change, known as neuroplasticity, has long captivated both the scientific community and the public imagination. It’s a concept that brings hope and fascination, especially when we hear extraordinary stories of, for example, blind individuals developing heightened senses that enable them to navigate through a cluttered room purely based on echolocation or stroke survivors miraculously regaining motor abilities once thought lost. For years, the notion that neurological challenges such as blindness, deafness, amputation or stroke lead to dramatic and significant changes in brain function has been widely accepted. These narratives paint a picture of a highly malleable brain that is capable of dramatic reorganization to compensate for lost functions. It’s an appealing notion: the brain, in response to injury or deficit, unlocks untapped potentials, rewires itself to achieve new capabilities and self-repurposes its regions to achieve new functions. This idea can also be linked with the widespread, though inherently false, myth that we only use 10 percent of our brain, suggesting that we have extensive neural reserves to lean on in times of need. But how accurate is this portrayal of the brain’s adaptive abilities to reorganize? Are we truly able to tap into reserves of unused brain potential following an injury, or have these captivating stories led to a misunderstanding of the brain’s true plastic nature? In a paper we wrote for the journal eLife, we delved into the heart of these questions, analyzing classical studies and reevaluating long-held beliefs about cortical reorganization and neuroplasticity. What we found offers a compelling new perspective on how the brain adapts to change and challenges some of the popularized notions about its flexible capacity for recovery. The roots of this fascination can be traced back to neuroscientist Michael Merzenich’s pioneering work, and it was popularized through books such as Norman Doidge’s The Brain That Changes Itself. Merzenich’s insights were built on the influential studies of Nobel Prize–winning neuroscientists David Hubel and Torsten Wiesel, who explored ocular dominance in kittens. © 2023 SCIENTIFIC AMERICAN,

Related chapters from BN: Chapter 16: Psychopathology: Biological Basis of Behavior Disorders; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 12: Psychopathology: The Biology of Behavioral Disorders; Chapter 13: Memory and Learning
Link ID: 29019 - Posted: 11.22.2023

By Catherine Offord Close your eyes and picture yourself running an errand across town. You can probably imagine the turns you’d need to take and the landmarks you’d encounter. This ability to conjure such scenarios in our minds is thought to be crucial to humans’ capacity to plan ahead. But it may not be uniquely human: Rats also seem to be able to “imagine” moving through mental environments, researchers report today in Science. Rodents trained to navigate within a virtual arena could, in return for a reward, activate the same neural patterns they’d shown while navigating—even when they were standing still. That suggests rodents can voluntarily access mental maps of places they’ve previously visited. “We know humans carry around inside their heads representations of all kinds of spaces: rooms in your house, your friends’ houses, shops, libraries, neighborhoods,” says Sean Polyn, a psychologist at Vanderbilt University who was not involved in the research. “Just by the simple act of reminiscing, we can place ourselves in these spaces—to think that we’ve got an animal analog of that very human imaginative act is very impressive.” Researchers think humans’ mental maps are encoded in the hippocampus, a brain region involved in memory. As we move through an environment, cells in this region fire in particular patterns depending on our location. When we later revisit—or simply think about visiting—those locations, the same hippocampal signatures are activated. Rats also encode spatial information in the hippocampus. But it’s been impossible to establish whether they have a similar capacity for voluntary mental navigation because of the practical challenges of getting a rodent to think about a particular place on cue, says study author Chongxi Lai, who conducted the work while a graduate student and later a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. In their new study, Lai, along with Janelia neuroscientist Albert Lee and colleagues, found a way around this problem by developing a brain-machine interface that rewarded rats for navigating their surroundings using only their thoughts.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 28989 - Posted: 11.04.2023

By Jake Buehler A fruit bat hanging in the corner of a cave stirs; it is ready to move. It scans the space to look for a free perch and then takes flight, adjusting its membranous wings to angle an approach to a spot next to one of its fuzzy fellows. As it does so, neurological data lifted from its brain is broadcast to sensors installed in the cave’s walls. This is no balmy cave along the Mediterranean Sea. The group of Egyptian fruit bats is in Berkeley, California, navigating an artificial cave in a laboratory that researchers have set up to study the inner workings of the animals’ minds. The researchers had an idea: that as a bat navigates its physical environment, it’s also navigating a network of social relationships. They wanted to know whether the bats use the same or different parts of their brain to map these intersecting realities. In a new study published in Nature in August, the scientists revealed that these maps overlap. The brain cells informing a bat of its own location also encode details about other bats nearby — not only their location, but also their identities. The findings raise the intriguing possibility that evolution can program those neurons for multiple purposes to serve the needs of different species. The neurons in question are located in the hippocampus, a structure deep within the mammalian brain that is involved in the creation of long-term memories. A special population of hippocampal neurons, known as place cells, are thought to create an internal navigation system. First identified in the rat hippocampus in 1971 by the neuroscientist John O’Keefe, place cells fire when an animal is in a particular location; different place cells encode different places. This system helps animals determine where they are, where they need to go and how to get from here to there. In 2014, O’Keefe was awarded the Nobel Prize for his discovery of place cells, and over the last several decades they have been identified in multiple primate species, including humans. However, moving from place to place isn’t the only way an animal can experience a change in its surroundings. In your home, the walls and furniture mostly stay the same from day to day, said Michael Yartsev, who studies the neural basis of natural behavior at the University of California, Berkeley and co-led the new work. But the social context of your living space could change quite regularly. © 2023 An editorially independent publication supported by the Simons Foundation.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 28982 - Posted: 11.01.2023

By Clay Risen Endel Tulving, whose insights into the structure of human memory and the way we recall the past revolutionized the field of cognitive psychology, died on Sept. 11 in Mississauga, Ontario. He was 96. His daughters, Linda Tulving and Elo Tulving-Blais, said his death, at an assisted living home, was caused by complications of a stroke. Until Dr. Tulving began his pathbreaking work in the 1960s, most cognitive psychologists were more interested in understanding how people learn things than in how they retain and recall them. When they did think about memory, they often depicted it as one giant cerebral warehouse, packed higgledy-piggledy, with only a vague conception of how we retrieved those items. This, they asserted, was the realm of “the mind,” an untestable, almost philosophical construct. Dr. Tulving, who spent most of his career at the University of Toronto, first made his name with a series of clever experiments and papers, demonstrating how the mind organizes memories and how it uses contextual cues to retrieve them. Forgetting, he posited, was less about information loss than it was about the lack of cues to retrieve it. He established his legacy with a chapter in the 1972 book “Organization of Memory,” which he edited with Wayne Donaldson. In that chapter, he argued for a taxonomy of memory types. He started with two: procedural memory, which is largely unconscious and involves things like how to walk or ride a bicycle, and declarative memory, which is conscious and discrete. © 2023 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 28934 - Posted: 09.29.2023

By Veronique Greenwood In the dappled sunlit waters of Caribbean mangrove forests, tiny box jellyfish bob in and out of the shade. Box jellies are distinguished from true jellyfish in part by their complex visual system — the grape-size predators have 24 eyes. But like other jellyfish, they are brainless, controlling their cube-shaped bodies with a distributed network of neurons. That network, it turns out, is more sophisticated than you might assume. On Friday, researchers published a report in the journal Current Biology indicating that the box jellyfish species Tripedalia cystophora have the ability to learn. Because box jellyfish diverged from our part of the animal kingdom long ago, understanding their cognitive abilities could help scientists trace the evolution of learning. The tricky part about studying learning in box jellies was finding an everyday behavior that scientists could train the creatures to perform in the lab. Anders Garm, a biologist at the University of Copenhagen and an author of the new paper, said his team decided to focus on a swift about-face that box jellies execute when they are about to hit a mangrove root. These roots rise through the water like black towers, while the water around them appears pale by comparison. But the contrast between the two can change from day to day, as silt clouds the water and makes it more difficult to tell how far away a root is. How do box jellies tell when they are getting too close? “The hypothesis was, they need to learn this,” Dr. Garm said. “When they come back to these habitats, they have to learn, how is today’s water quality? How is the contrast changing today?” In the lab, researchers produced images of alternating dark and light stripes, representing the mangrove roots and water, and used them to line the insides of buckets about six inches wide. When the stripes were a stark black and white, representing optimum water clarity, box jellies never got close to the bucket walls. With less contrast between the stripes, however, box jellies immediately began to run into them. This was the scientists’ chance to see if they would learn. © 2023 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 28925 - Posted: 09.23.2023

By Saugat Bolakhe Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way. Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations. The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid 1990s. All Rights Reserved © 2023

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 28900 - Posted: 09.07.2023

By Alla Katsnelson Our understanding of animal minds is undergoing a remarkable transformation. Just three decades ago, the idea that a broad array of creatures have individual personalities was highly suspect in the eyes of serious animal scientists — as were such seemingly fanciful notions as fish feeling pain, bees appreciating playtime and cockatoos having culture. Today, though, scientists are rethinking the very definition of what it means to be sentient and seeing capacity for complex cognition and subjective experience in a great variety of creatures — even if their inner worlds differ greatly from our own. Such discoveries are thrilling, but they probably wouldn’t have surprised Charles Henry Turner, who died a century ago, in 1923. An American zoologist and comparative psychologist, he was one of the first scientists to systematically probe complex cognition in animals considered least likely to possess it. Turner primarily studied arthropods such as spiders and bees, closely observing them and setting up trailblazing experiments that hinted at cognitive abilities more complex than most scientists at the time suspected. Turner also explored differences in how individuals within a species behaved — a precursor of research today on what some scientists refer to as personality. Most of Turner’s contemporaries believed that “lowly” critters such as insects and spiders were tiny automatons, preprogrammed to perform well-defined functions. “Turner was one of the first, and you might say should be given the lion’s share of credit, for changing that perception,” says Charles Abramson, a comparative psychologist at Oklahoma State University in Stillwater who has done extensive biographical research on Turner and has been petitioning the US Postal Service for years to issue a stamp commemorating him. Turner also challenged the views that animals lacked the capacity for intelligent problem-solving and that they behaved based on instinct or, at best, learned associations, and that individual differences were just noisy data. But just as the scientific establishment of the time lacked the imagination to believe that animals other than human beings can have complex intelligence and subjectivity of experience, it also lacked the collective imagination to envision Turner, a Black scientist, as an equal among them. The hundredth anniversary of Turner’s death offers an opportunity to consider what we may have missed out on by their oversight. © 2023 Annual Reviews

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 28869 - Posted: 08.09.2023

By Yasemin Saplakoglu On warm summer nights, green lacewings flutter around bright lanterns in backyards and at campsites. The insects, with their veil-like wings, are easily distracted from their natural preoccupation with sipping on flower nectar, avoiding predatory bats and reproducing. Small clutches of the eggs they lay hang from long stalks on the underside of leaves and sway like fairy lights in the wind. The dangling ensembles of eggs are beautiful but also practical: They keep the hatching larvae from immediately eating their unhatched siblings. With sickle-like jaws that pierce their prey and suck them dry, lacewing larvae are “vicious,” said James Truman, a professor emeritus of development, cell and molecular biology at the University of Washington. “It’s like ‘Beauty and the Beast’ in one animal.” This Jekyll-and-Hyde dichotomy is made possible by metamorphosis, the phenomenon best known for transforming caterpillars into butterflies. In its most extreme version, complete metamorphosis, the juvenile and adult forms look and act like totally different species. Metamorphosis is not an exception in the animal kingdom; it’s almost a rule. More than 80% of the known animal species today, mainly insects, amphibians and marine invertebrates, undergo some form of metamorphosis or have complex, multistage life cycles. The process of metamorphosis presents many mysteries, but some of the most deeply puzzling ones center on the nervous system. At the center of this phenomenon is the brain, which must code for not one but multiple different identities. After all, the life of a flying, mate-seeking insect is very different from the life of a hungry caterpillar. For the past half-century, researchers have probed the question of how a network of neurons that encodes one identity — that of a hungry caterpillar or a murderous lacewing larva — shifts to encode an adult identity that encompasses a completely different set of behaviors and needs. Truman and his team have now learned how much metamorphosis reshuffles parts of the brain. In a recent study published in the journal eLife, they traced dozens of neurons in the brains of fruit flies going through metamorphosis. They found that, unlike the tormented protagonist of Franz Kafka’s short story “The Metamorphosis,” who awakes one day as a monstrous insect, adult insects likely can’t remember much of their larval life. Although many of the larval neurons in the study endured, the part of the insect brain that Truman’s group examined was dramatically rewired. That overhaul of neural connections mirrored a similarly dramatic shift in the behavior of the insects as they changed from crawling, hungry larvae to flying, mate-seeking adults. All Rights Reserved © 2023

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 28860 - Posted: 07.27.2023

Geneva Abdul The so-called “brain fog” symptom associated with long Covid is comparable to ageing 10 years, researchers have suggested. In a study by King’s College London, researchers investigated the impact of Covid-19 on memory and found that cognitive impairment was highest in individuals who had tested positive and had more than three months of symptoms. The study, which appeared on Friday in a clinical journal published by The Lancet, also found that in affected individuals the symptoms stretched to almost two years since initial infection. “The fact remains that two years on from their first infection, some people don’t feel fully recovered and their lives continue to be impacted by the long-term effects of the coronavirus,” said Claire Steves, a professor of ageing and health at King’s College. “We need more work to understand why this is the case and what can be done to help.” An estimated two million people living in the UK were experiencing self-reported long Covid – symptoms continuing for more than four weeks since infection – as of January 2023, according to the 2023 government census. Commonly reported symptoms included fatigue, difficulty concentrating, shortness of breath and muscle aches. The study included more than 5,100 participants from the Covid Symptom Study Biobank, recruited through a smartphone app. Through 12 cognitive tests measuring speed and accuracy, researchers examined working memory, attention, reasoning and motor control across two testing periods, in 2021 and 2022. © 2023 Guardian News & Media Limited

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 28854 - Posted: 07.22.2023

Lilly Tozer Injecting ageing monkeys with a ‘longevity factor’ protein can improve their cognitive function, a study reveals. The findings, published on 3 July in Nature Aging, could lead to new treatments for neurodegenerative diseases. It is the first time that restoring levels of klotho — a naturally occurring protein that declines in our bodies with age — has been shown to improve cognition in a primate. Previous research on mice had shown that injections of klotho can extend the animals’ lives and increase synaptic plasticity — the capacity to control communication between neurons, at junctions called synapses. “Given the close genetic and physiological parallels between primates and humans, this could suggest potential applications for treating human cognitive disorders,” says Marc Busche, a neurologist at the UK Dementia Research Institute group at University College London. The protein is named after the Greek goddess Clotho, one of the Fates, who spins the thread of life. The study involved testing the cognitive abilities of old rhesus macaques (Macaca mulatta), aged around 22 years on average, before and after a single injection of klotho. To do this, researchers used a behavioural experiment to test for spatial memory: the monkeys had to remember the location of an edible treat, placed in one of several wells by the investigator, after it was hidden from them. Study co-author Dena Dubal, a physician-researcher at the University of California, San Francisco, compares the test to recalling where you left your car in a car park, or remembering a sequence of numbers a couple of minutes after hearing it. Such tasks become harder with age. The monkeys performed significantly better in these tests after receiving klotho — before the injections they identified the correct wells around 45% of the time, compared with around 60% of the time after injection. The improvement was sustained for at least two weeks. Unlike in previous studies involving mice, relatively low doses of klotho were effective. This adds an element of complexity to the findings, which suggests a more nuanced mode of action than was previously thought, Busche says. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 28847 - Posted: 07.06.2023