Chapter 17. Learning and Memory




By Catherine Offord Close your eyes and picture yourself running an errand across town. You can probably imagine the turns you’d need to take and the landmarks you’d encounter. This ability to conjure such scenarios in our minds is thought to be crucial to humans’ capacity to plan ahead. But it may not be uniquely human: Rats also seem to be able to “imagine” moving through mental environments, researchers report today in Science. Rodents trained to navigate within a virtual arena could, in return for a reward, activate the same neural patterns they’d shown while navigating—even when they were standing still. That suggests rodents can voluntarily access mental maps of places they’ve previously visited. “We know humans carry around inside their heads representations of all kinds of spaces: rooms in your house, your friends’ houses, shops, libraries, neighborhoods,” says Sean Polyn, a psychologist at Vanderbilt University who was not involved in the research. “Just by the simple act of reminiscing, we can place ourselves in these spaces—to think that we’ve got an animal analog of that very human imaginative act is very impressive.” Researchers think humans’ mental maps are encoded in the hippocampus, a brain region involved in memory. As we move through an environment, cells in this region fire in particular patterns depending on our location. When we later revisit—or simply think about visiting—those locations, the same hippocampal signatures are activated. Rats also encode spatial information in the hippocampus. But it’s been impossible to establish whether they have a similar capacity for voluntary mental navigation because of the practical challenges of getting a rodent to think about a particular place on cue, says study author Chongxi Lai, who conducted the work while a graduate student and later a postdoc at the Howard Hughes Medical Institute’s Janelia Research Campus. 
In their new study, Lai, along with Janelia neuroscientist Albert Lee and colleagues, found a way around this problem by developing a brain-machine interface that rewarded rats for navigating their surroundings using only their thoughts.

Keyword: Learning & Memory; Attention
Link ID: 28989 - Posted: 11.04.2023

By Jake Buehler A fruit bat hanging in the corner of a cave stirs; it is ready to move. It scans the space to look for a free perch and then takes flight, adjusting its membranous wings to angle an approach to a spot next to one of its fuzzy fellows. As it does so, neurological data lifted from its brain is broadcast to sensors installed in the cave’s walls. This is no balmy cave along the Mediterranean Sea. The group of Egyptian fruit bats is in Berkeley, California, navigating an artificial cave in a laboratory that researchers have set up to study the inner workings of the animals’ minds. The researchers had an idea: that as a bat navigates its physical environment, it’s also navigating a network of social relationships. They wanted to know whether the bats use the same or different parts of their brain to map these intersecting realities. In a new study published in Nature in August, the scientists revealed that these maps overlap. The brain cells informing a bat of its own location also encode details about other bats nearby — not only their location, but also their identities. The findings raise the intriguing possibility that evolution can program those neurons for multiple purposes to serve the needs of different species. The neurons in question are located in the hippocampus, a structure deep within the mammalian brain that is involved in the creation of long-term memories. A special population of hippocampal neurons, known as place cells, are thought to create an internal navigation system. First identified in the rat hippocampus in 1971 by the neuroscientist John O’Keefe, place cells fire when an animal is in a particular location; different place cells encode different places. This system helps animals determine where they are, where they need to go and how to get from here to there. 
In 2014, O’Keefe was awarded the Nobel Prize for his discovery of place cells, and over the last several decades they have been identified in multiple primate species, including humans. However, moving from place to place isn’t the only way an animal can experience a change in its surroundings. In your home, the walls and furniture mostly stay the same from day to day, said Michael Yartsev, who studies the neural basis of natural behavior at the University of California, Berkeley and co-led the new work. But the social context of your living space could change quite regularly. © 2023 An editorially independent publication supported by the Simons Foundation.

Keyword: Learning & Memory
Link ID: 28982 - Posted: 11.01.2023

Anil Oza Scientists once considered sleep to be like a shade getting drawn over a window between the brain and the outside world: when the shade is closed, the brain stops reacting to outside stimuli. A study published on 12 October in Nature Neuroscience1 suggests that there might be periods during sleep when that shade is partially open. Depending on what researchers said to them, participants in the study would either smile or frown on cue in certain phases of sleep. “You’re not supposed to be able to do stuff while you sleep,” says Delphine Oudiette, a cognitive scientist at the Paris Brain Institute in France and a co-author of the study. Historically, the definition of sleep is that consciousness of your environment halts, she adds. “It means you don’t react to the external world.” Dream time A few years ago, however, Oudiette began questioning this definition after she and her team conducted an experiment in which they were able to communicate with people who are aware that they are dreaming while they sleep — otherwise known as lucid dreamers. During these people’s dreams, experimenters were able to ask questions and get responses through eye and facial-muscle movements2. Karen Konkoly, who was a co-author on that study and a cognitive scientist at Northwestern University in Evanston, Illinois, says that after that paper came out, “it was a big open question in our minds whether communication would be possible with non-lucid dreamers”. So Oudiette continued with the work. In her latest study, she and her colleagues observed 27 people with narcolepsy — characterized by daytime sleepiness and a high frequency of lucid dreams — and 22 people without the condition. While they were sleeping, participants were repeatedly asked to frown or smile. All of them responded accurately to at least 70% of these prompts. © 2023 Springer Nature Limited

Keyword: Sleep; Learning & Memory
Link ID: 28968 - Posted: 10.25.2023

By Benjamin Mueller Once their scalpels reach the edge of a brain tumor, surgeons are faced with an agonizing decision: cut away some healthy brain tissue to ensure the entire tumor is removed, or give the healthy tissue a wide berth and risk leaving some of the menacing cells behind. Now scientists in the Netherlands report using artificial intelligence to arm surgeons with knowledge about the tumor that may help them make that choice. The method, described in a study published on Wednesday in the journal Nature, involves a computer scanning segments of a tumor’s DNA and alighting on certain chemical modifications that can yield a detailed diagnosis of the type and even subtype of the brain tumor. That diagnosis, generated during the early stages of an hourslong surgery, can help surgeons decide how aggressively to operate, the researchers said. In the future, the method may also help steer doctors toward treatments tailored for a specific subtype of tumor. “It’s imperative that the tumor subtype is known at the time of surgery,” said Jeroen de Ridder, an associate professor in the Center for Molecular Medicine at UMC Utrecht, a Dutch hospital, who helped lead the study. “What we have now uniquely enabled is to allow this very fine-grained, robust, detailed diagnosis to be performed already during the surgery.”
© 2023 The New York Times Company

Keyword: Robotics; Intelligence
Link ID: 28958 - Posted: 10.12.2023

By Stephanie Pappas If you’ve ever awoken from a vivid dream only to find that you can’t remember the details by the end of breakfast, you’re not alone. People forget most of the dreams they have—though it is possible to train yourself to remember more of them. Dreaming happens mostly (though not always exclusively) during rapid eye movement (REM) sleep. During this sleep stage, brain activity looks similar to that in a waking brain, with some very important differences. Key among them: during REM sleep, the areas of the brain that transfer memories into long-term storage—as well as the long-term storage areas themselves—are relatively deactivated, says Deirdre Barrett, a dream researcher at Harvard Medical School and author of the book The Committee of Sleep (Oneiroi Press, 2001). This may be a side effect of REM’s role in memory consolidation, according to a 2019 study on mice in the journal Science. Short-term memory areas are active during REM sleep, but those only hang on to memories for about 30 seconds. “You have to wake up from REM sleep, generally, to recall a dream,” Barrett says. If, instead, you pass into the next stage of sleep without rousing, that dream will never enter long-term memory. REM sleep occurs about every 90 minutes, and it lengthens as the night drags on. The first REM cycle of the night is typically just a few minutes long, but by the end of an eight-hour night of sleep, a person has typically been in the REM stage for a good 20 minutes, Barrett says. That’s why the strongest correlation between any life circumstance and your memory of dreams is the number of hours you’ve slept. If you sleep only six hours, you’re getting less than half of the dream time of an eight-hour night, she says. Those final hours of sleep are the most important for dreaming. And people tend to remember the last dream of the night—the one just before waking. © 2023 Scientific American

Keyword: Sleep; Learning & Memory
Link ID: 28939 - Posted: 10.03.2023

By Clay Risen Endel Tulving, whose insights into the structure of human memory and the way we recall the past revolutionized the field of cognitive psychology, died on Sept. 11 in Mississauga, Ontario. He was 96. His daughters, Linda Tulving and Elo Tulving-Blais, said his death, at an assisted living home, was caused by complications of a stroke. Until Dr. Tulving began his pathbreaking work in the 1960s, most cognitive psychologists were more interested in understanding how people learn things than in how they retain and recall them. When they did think about memory, they often depicted it as one giant cerebral warehouse, packed higgledy-piggledy, with only a vague conception of how we retrieved those items. This, they asserted, was the realm of “the mind,” an untestable, almost philosophical construct. Dr. Tulving, who spent most of his career at the University of Toronto, first made his name with a series of clever experiments and papers, demonstrating how the mind organizes memories and how it uses contextual cues to retrieve them. Forgetting, he posited, was less about information loss than it was about the lack of cues to retrieve it. He established his legacy with a chapter in the 1972 book “Organization of Memory,” which he edited with Wayne Donaldson. In that chapter, he argued for a taxonomy of memory types. He started with two: procedural memory, which is largely unconscious and involves things like how to walk or ride a bicycle, and declarative memory, which is conscious and discrete. © 2023 The New York Times Company

Keyword: Learning & Memory
Link ID: 28934 - Posted: 09.29.2023

By Veronique Greenwood In the dappled sunlit waters of Caribbean mangrove forests, tiny box jellyfish bob in and out of the shade. Box jellies are distinguished from true jellyfish in part by their complex visual system — the grape-size predators have 24 eyes. But like other jellyfish, they are brainless, controlling their cube-shaped bodies with a distributed network of neurons. That network, it turns out, is more sophisticated than you might assume. On Friday, researchers published a report in the journal Current Biology indicating that the box jellyfish species Tripedalia cystophora has the ability to learn. Because box jellyfish diverged from our part of the animal kingdom long ago, understanding their cognitive abilities could help scientists trace the evolution of learning. The tricky part about studying learning in box jellies was finding an everyday behavior that scientists could train the creatures to perform in the lab. Anders Garm, a biologist at the University of Copenhagen and an author of the new paper, said his team decided to focus on a swift about-face that box jellies execute when they are about to hit a mangrove root. These roots rise through the water like black towers, while the water around them appears pale by comparison. But the contrast between the two can change from day to day, as silt clouds the water and makes it more difficult to tell how far away a root is. How do box jellies tell when they are getting too close? “The hypothesis was, they need to learn this,” Dr. Garm said. “When they come back to these habitats, they have to learn, how is today’s water quality? How is the contrast changing today?” In the lab, researchers produced images of alternating dark and light stripes, representing the mangrove roots and water, and used them to line the insides of buckets about six inches wide. When the stripes were a stark black and white, representing optimum water clarity, box jellies never got close to the bucket walls. 
With less contrast between the stripes, however, box jellies immediately began to run into them. This was the scientists’ chance to see if they would learn. © 2023 The New York Times Company

Keyword: Learning & Memory; Evolution
Link ID: 28925 - Posted: 09.23.2023

By Jim Davies Think of what you want to eat for dinner this weekend. What popped into mind? Pizza? Sushi? Clam chowder? Why did those foods (or whatever foods you imagined) appear in your consciousness and not something else? Psychologists have long held that when we are making a decision about a particular category of thing, we tend to bring to mind items that are typical or common in our culture or everyday lives, or ones we value the most. On this view, whatever foods you conjured up are likely ones that you eat often, or love to eat. Sounds intuitive. But a recent paper published in Cognition suggests it’s more complicated than that. Tracey Mills, a research assistant working at MIT, led the study along with Jonathan Phillips, a cognitive scientist and philosopher at Dartmouth College. They put over 2,000 subjects, recruited online, through a series of seven experiments that allowed them to test a novel approach for understanding which ideas within a category will pop into our consciousness—and which won’t. In this case, they had subjects think about zoo animals, holidays, jobs, kitchen appliances, chain restaurants, sports, and vegetables. What they found is that what makes a particular thing come to mind—such as a lion when one is considering zoo animals—is determined not by how valuable or familiar it is, but by where it lies in a multidimensional idea grid that could be said to resemble a kind of word cloud. “Under the hypothesis we argue for,” Mills and Phillips write, “the process of calling members of a category to mind might be modeled as a search through feature space, weighted toward certain features that are relevant for that category.” Historical “value” just happens to be one dimension that is particularly relevant when one is talking about dinner, but is less relevant for categories such as zoo animals or, say, crimes, they write. © 2023 NautilusNext Inc., All rights reserved.

Keyword: Attention; Learning & Memory
Link ID: 28910 - Posted: 09.16.2023

By Joanna Thompson Like many people, Mary Ann Raghanti enjoys potatoes loaded with butter. Unlike most people, however, she actually asked the question of why we love stuffing ourselves with fatty carbohydrates. Raghanti, a biological anthropologist at Kent State University, has researched the neurochemical mechanism behind that savory craving. As it turns out, a specific brain chemical may be one of the things that not only developed our tendency to overindulge in food, alcohol and drugs but also helped the human brain evolve to be unique from the brains of closely related species. A new study, led by Raghanti and published on September 11 in the Proceedings of the National Academy of Sciences USA, examined the activity of a particular neurotransmitter in a region of the brain that is associated with reward and motivation across several species of primates. The researchers found higher levels of that brain chemical—neuropeptide Y (NPY)—in humans, compared with our closest living relatives. That boost in the reward peptide could explain our love of high-fat foods, from pizza to poutine. The impulse to stuff ourselves with fats and sugars may have given our ancestors an evolutionary edge, allowing them to develop a larger and more complex brain. “I think this is a first bit of neurobiological insight into one of the most interesting things about us as a species,” says Robert Sapolsky, a neuroendocrinology researcher at Stanford University, who was not directly involved in the research but helped review the new paper. Neuropeptide Y is associated with “hedonic eating”—consuming food strictly to experience pleasure rather than to satisfy hunger. It drives individuals to seek out high-calorie foods, especially those rich in fat. Historically, though, NPY has been overlooked in favor of flashier “feel good” chemicals such as dopamine and serotonin. © 2023 Scientific American

Keyword: Obesity; Intelligence
Link ID: 28905 - Posted: 09.13.2023

By Saugat Bolakhe Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way. Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations. The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. 
James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid-1990s. All Rights Reserved © 2023

Keyword: Learning & Memory; Attention
Link ID: 28900 - Posted: 09.07.2023

By Astrid Landon In June 2015, Jeffrey Thelen’s parents noticed their son was experiencing problems with his memory. In the subsequent years, he would get lost driving to his childhood home, forget his cat had died, and fail to recognize his brother and sister. His parents wondered: Was electroconvulsive therapy to blame? Thelen had been regularly receiving the treatment to help with symptoms of severe depression, which he’d struggled with since high school. At 34 years old, he had tried medications, but hadn’t had a therapy plan. His primary care physician referred him to get an evaluation for ECT, which was then prescribed by a psychiatrist. Electroconvulsive therapy has been used to treat various mental illnesses since the late 1930s. The technique, which involves passing electrical currents through the brain to trigger a short seizure, has always had a somewhat torturous reputation. Yet it’s still in use, in a modified form of its original version. According to one commonly cited statistic, 100,000 Americans receive ECT annually — most often to ease symptoms of severe depression or bipolar disorder — although exact demographic data is scarce. For Thelen, the treatment appeared to relieve his depression symptoms somewhat, but he reported new headaches and concentration issues, in addition to the memory loss. Those claims are central to a lawsuit Thelen filed in 2020 against Somatics, LLC and Elektrika, Inc., manufacturers and suppliers of ECT devices, alleging that the companies failed to disclose — and even intentionally hid — risks associated with ECT, including “brain damage and permanent neurocognitive injuries.” Thelen’s legal team told Undark that they have since reached a resolution with Elektrika on confidential terms. With regard to Somatics, in June a jury found that the company failed to warn about risks associated with ECT, but could not conclude that there was a legal causation between that and Thelen’s memory loss. 
The following month, his lawyers filed a motion for a new trial. (In response to a request for comment, Conrad Swartz, one of Somatics’ co-founders, directed Undark to the company’s attorney, Sue Cole. Cole did not respond to multiple emails. Lawyers for Elektrika declined to comment.)

Keyword: Depression; Learning & Memory
Link ID: 28899 - Posted: 09.07.2023

By Alla Katsnelson Our understanding of animal minds is undergoing a remarkable transformation. Just three decades ago, the idea that a broad array of creatures have individual personalities was highly suspect in the eyes of serious animal scientists — as were such seemingly fanciful notions as fish feeling pain, bees appreciating playtime and cockatoos having culture. Today, though, scientists are rethinking the very definition of what it means to be sentient and seeing capacity for complex cognition and subjective experience in a great variety of creatures — even if their inner worlds differ greatly from our own. Such discoveries are thrilling, but they probably wouldn’t have surprised Charles Henry Turner, who died a century ago, in 1923. An American zoologist and comparative psychologist, he was one of the first scientists to systematically probe complex cognition in animals considered least likely to possess it. Turner primarily studied arthropods such as spiders and bees, closely observing them and setting up trailblazing experiments that hinted at cognitive abilities more complex than most scientists at the time suspected. Turner also explored differences in how individuals within a species behaved — a precursor of research today on what some scientists refer to as personality. Most of Turner’s contemporaries believed that “lowly” critters such as insects and spiders were tiny automatons, preprogrammed to perform well-defined functions. “Turner was one of the first, and you might say should be given the lion’s share of credit, for changing that perception,” says Charles Abramson, a comparative psychologist at Oklahoma State University in Stillwater who has done extensive biographical research on Turner and has been petitioning the US Postal Service for years to issue a stamp commemorating him. 
Turner also challenged the views that animals lacked the capacity for intelligent problem-solving and that they behaved based on instinct or, at best, learned associations, and that individual differences were just noisy data. But just as the scientific establishment of the time lacked the imagination to believe that animals other than human beings can have complex intelligence and subjectivity of experience, it also lacked the collective imagination to envision Turner, a Black scientist, as an equal among them. The hundredth anniversary of Turner’s death offers an opportunity to consider what we may have missed out on by their oversight. © 2023 Annual Reviews

Keyword: Learning & Memory; Evolution
Link ID: 28869 - Posted: 08.09.2023

By Yasemin Saplakoglu On warm summer nights, green lacewings flutter around bright lanterns in backyards and at campsites. The insects, with their veil-like wings, are easily distracted from their natural preoccupation with sipping on flower nectar, avoiding predatory bats and reproducing. Small clutches of the eggs they lay hang from long stalks on the underside of leaves and sway like fairy lights in the wind. The dangling ensembles of eggs are beautiful but also practical: They keep the hatching larvae from immediately eating their unhatched siblings. With sickle-like jaws that pierce their prey and suck them dry, lacewing larvae are “vicious,” said James Truman, a professor emeritus of development, cell and molecular biology at the University of Washington. “It’s like ‘Beauty and the Beast’ in one animal.” This Jekyll-and-Hyde dichotomy is made possible by metamorphosis, the phenomenon best known for transforming caterpillars into butterflies. In its most extreme version, complete metamorphosis, the juvenile and adult forms look and act like totally different species. Metamorphosis is not an exception in the animal kingdom; it’s almost a rule. More than 80% of the known animal species today, mainly insects, amphibians and marine invertebrates, undergo some form of metamorphosis or have complex, multistage life cycles. The process of metamorphosis presents many mysteries, but some of the most deeply puzzling ones center on the nervous system. At the center of this phenomenon is the brain, which must code for not one but multiple different identities. After all, the life of a flying, mate-seeking insect is very different from the life of a hungry caterpillar. For the past half-century, researchers have probed the question of how a network of neurons that encodes one identity — that of a hungry caterpillar or a murderous lacewing larva — shifts to encode an adult identity that encompasses a completely different set of behaviors and needs. 
Truman and his team have now learned how much metamorphosis reshuffles parts of the brain. In a recent study published in the journal eLife, they traced dozens of neurons in the brains of fruit flies going through metamorphosis. They found that, unlike the tormented protagonist of Franz Kafka’s short story “The Metamorphosis,” who awakes one day as a monstrous insect, adult insects likely can’t remember much of their larval life. Although many of the larval neurons in the study endured, the part of the insect brain that Truman’s group examined was dramatically rewired. That overhaul of neural connections mirrored a similarly dramatic shift in the behavior of the insects as they changed from crawling, hungry larvae to flying, mate-seeking adults. All Rights Reserved © 2023

Keyword: Learning & Memory
Link ID: 28860 - Posted: 07.27.2023

Geneva Abdul The so-called “brain fog” symptom associated with long Covid is comparable to ageing 10 years, researchers have suggested. In a study by King’s College London, researchers investigated the impact of Covid-19 on memory and found cognitive impairment highest in individuals who had tested positive and had more than three months of symptoms. The study, published on Friday in a clinical journal from The Lancet, also found the symptoms in affected individuals stretched to almost two years since initial infection. “The fact remains that two years on from their first infection, some people don’t feel fully recovered and their lives continue to be impacted by the long-term effects of the coronavirus,” said Claire Steves, a professor of ageing and health at King’s College. “We need more work to understand why this is the case and what can be done to help.” An estimated two million people living in the UK were experiencing self-reported long Covid – symptoms continuing for more than four weeks since infection – as of January 2023, according to the 2023 government census. Commonly reported symptoms included fatigue, difficulty concentrating, shortness of breath and muscle aches. The study included more than 5,100 participants from the Covid Symptom Study Biobank, recruited through a smartphone app. Through 12 cognitive tests measuring speed and accuracy, researchers examined working memory, attention, reasoning and motor controls between two periods of 2021 and 2022. © 2023 Guardian News & Media Limited

Keyword: Learning & Memory; Attention
Link ID: 28854 - Posted: 07.22.2023

Nicola Davis Science correspondent Taking part in activities such as chess, writing a journal, or educational classes in older age may help to reduce the risk of dementia, a study has suggested. According to the World Health Organization, more than 55 million people have the disease worldwide, most of them older people. However, experts have long emphasised that dementia is not an inevitable part of ageing, with being active, eating well and avoiding smoking among the lifestyle choices that can reduce risk. Now researchers have revealed fresh evidence that challenging the brain could also be beneficial. Writing in the journal JAMA Network Open, researchers in the US and Australia report how they used data from the Australian Aspree Longitudinal Study of Older Persons covering the period from 1 March 2010 to 30 November 2020. Participants in the study were over the age of 70, did not have a major cognitive impairment or cardiovascular disease when recruited between 2010 and 2014, and were assessed for dementia through regular study visits. In the first year, participants were asked about their social networks. They were also questioned on whether they undertook certain leisure activities or trips out to venues such as galleries or restaurants, and how frequently: never, rarely, sometimes, often or always. The team analysed data from 10,318 participants, taking into account factors such as age, sex, smoking status, education, socioeconomic status, and whether participants had other diseases such as diabetes. The results reveal that for activities such as writing letters or journals, taking educational classes or using a computer, increasing the frequency of participation by one category, for example from “sometimes” to “often”, was associated with an 11% drop in the risk of developing dementia over a 10-year period. Similarly, increased frequency of activities such as card games, chess or puzzle-solving was associated with a 9% reduction in dementia risk. 
© 2023 Guardian News & Media Limited

Keyword: Alzheimers; Learning & Memory
Link ID: 28851 - Posted: 07.19.2023

Lilly Tozer Injecting ageing monkeys with a ‘longevity factor’ protein can improve their cognitive function, a study reveals. The findings, published on 3 July in Nature Aging, could lead to new treatments for neurodegenerative diseases. It is the first time that restoring levels of klotho — a naturally occurring protein that declines in our bodies with age — has been shown to improve cognition in a primate. Previous research on mice had shown that injections of klotho can extend the animals’ lives and increase synaptic plasticity — the capacity to adjust communication between neurons at junctions called synapses. “Given the close genetic and physiological parallels between primates and humans, this could suggest potential applications for treating human cognitive disorders,” says Marc Busche, a neurologist at the UK Dementia Research Institute group at University College London. The protein is named after the Greek goddess Clotho, one of the Fates, who spins the thread of life. The study involved testing the cognitive abilities of old rhesus macaques (Macaca mulatta), aged around 22 years on average, before and after a single injection of klotho. To do this, researchers used a behavioural experiment to test for spatial memory: the monkeys had to remember the location of an edible treat, placed in one of several wells by the investigator, after it was hidden from them. Study co-author Dena Dubal, a physician-researcher at the University of California, San Francisco, compares the test to recalling where you left your car in a car park, or remembering a sequence of numbers a couple of minutes after hearing it. Such tasks become harder with age. The monkeys performed significantly better in these tests after receiving klotho — before the injections they identified the correct wells around 45% of the time, compared with around 60% of the time after injection. The improvement was sustained for at least two weeks. 
Unlike in previous studies involving mice, relatively low doses of klotho were effective. This adds an element of complexity to the findings and suggests a more nuanced mode of action than was previously thought, Busche says. © 2023 Springer Nature Limited

Keyword: Learning & Memory; Development of the Brain
Link ID: 28847 - Posted: 07.06.2023

Nicola Davis Taking a short nap during the day may help to protect the brain’s health as it ages, researchers have suggested after finding that the practice appears to be associated with larger brain volume. While previous research has suggested long naps could be an early symptom of Alzheimer’s disease, other work has revealed that a brief doze can improve people’s ability to learn. Now researchers say they have found evidence to suggest napping may help to protect against brain shrinkage. That is of interest, the team say, as brain shrinkage, a process that occurs with age, is accelerated in people with cognitive problems and neurodegenerative diseases, with some research suggesting this may be related to sleep problems. “In line with these studies, we found an association between habitual daytime napping and larger total brain volume, which could suggest that napping regularly provides some protection against neurodegeneration through compensating for poor sleep,” the researchers note. Writing in the journal Sleep Health, researchers at UCL and the University of the Republic in Uruguay report how they drew on data from the UK Biobank study that has collated genetic, lifestyle and health information from 500,000 people aged 40 to 69 at recruitment. The team used data from 35,080 Biobank participants to look at whether a combination of genetic variants that have previously been associated with self-reported habitual daytime napping are also linked to brain volume, cognition and other aspects of brain health. © 2023 Guardian News & Media Limited

Keyword: Sleep; Development of the Brain
Link ID: 28829 - Posted: 06.21.2023

Kerri Smith In a dimly lit laboratory in London, a brown mouse explores a circular tabletop, sniffing as it ambles about. Suddenly, silently, a shadow appears. In a split second, the mouse’s brain whirs with activity. Neurons in its midbrain start to fire, sensing the threat of a potential predator, and a cascade of activity in an adjacent region orders its body to choose a response — freeze to the spot in the hope of going undetected, or run for shelter, in this case a red acetate box stationed nearby. From the mouse’s perspective, this is life or death. But the shadow wasn’t cast by a predator. Instead, it is the work of neuroscientists in Tiago Branco’s lab, who have rigged up a plastic disc on a lever to provoke, and thereby study, the mouse’s escape behaviour. This is a rapid decision-making process that draws on sensory information, previous experience and instinct. Branco, a neuroscientist at the Sainsbury Wellcome Centre at University College London, has wondered about installing a taxidermied owl on a zip wire to create a more realistic experience. And his colleagues have more ideas — cutting the disc into a wingspan shape, for instance. “Having drones — that would also be very nice,” says Dario Campagner, a researcher in Branco’s lab. The set-up is part of a growing movement to step away from some of the lab experiments that neuroscientists have used for decades to understand the brain and behaviour. Such exercises — training an animal to use a lever or joystick to get a reward, for example, or watching it swim through a water maze — have established important principles of brain activity and organization. But they take days to months of training an animal to complete specific, idiosyncratic tasks. The end result, Branco says, is like studying a “professional athlete”; the brain might work differently in the messy, unpredictable real world. 
Mice didn’t evolve to operate a joystick. Meanwhile, many behaviours that come naturally — such as escaping a predator, or finding scarce food or a receptive mate — are extremely important for the animal, says Ann Kennedy, a theoretical neuroscientist at Northwestern University in Chicago, Illinois. They are “critical to survival, and under selective pressure”, she says. By studying these natural actions, scientists are hoping to glean lessons about the brain and behaviour that are more holistic and more relevant to everyday activity than ever before.

Keyword: Learning & Memory; Evolution
Link ID: 28822 - Posted: 06.14.2023

Kari Paul and Maanvi Singh Elon Musk’s brain-implant company Neuralink last week received regulatory approval to conduct the first clinical trial of its experimental device in humans. But the billionaire executive’s bombastic promotion of the technology, his leadership record at other companies and animal welfare concerns relating to Neuralink experiments have raised alarm. “I was surprised,” said Laura Cabrera, a neuroethicist at Penn State’s Rock Ethics Institute, about the decision by the US Food and Drug Administration to let the company go ahead with clinical trials. Musk’s erratic leadership at Twitter and his “move fast” techie ethos raise questions about Neuralink’s ability to responsibly oversee the development of an invasive medical device capable of reading brain signals, Cabrera argued. “Is he going to see a brain implant device as something that requires not just extra regulation, but also ethical consideration?” she said. “Or will he just treat this like another gadget?” Neuralink is far from the first or only company working on brain interface devices. For decades, research teams around the world have been exploring the use of implants and devices to treat conditions such as paralysis and depression. Already, thousands use neuroprosthetics like cochlear implants for hearing. But the broad scope of capabilities Musk is promising from the Neuralink device has garnered skepticism from experts. Neuralink entered the industry in 2016 and has designed a brain-computer interface (BCI) called the Link – an electrode-laden computer chip that can be sewn into the surface of the brain to connect it to external electronics – as well as a robotic device that implants the chip. © 2023 Guardian News & Media Limited

Keyword: Robotics; Learning & Memory
Link ID: 28816 - Posted: 06.07.2023

by Adam Kirsch Giraffes will eat courgettes if they have to, but they really prefer carrots. A team of researchers from Spain and Germany recently took advantage of this preference to investigate whether the animals are capable of statistical reasoning. In the experiment, a giraffe was shown two transparent containers holding a mixture of carrot and courgette slices. One container held mostly carrots, the other mostly courgettes. A researcher then took one slice from each container and offered them to the giraffe with closed hands, so it couldn’t see which vegetable had been selected. In repeated trials, the four test giraffes reliably chose the hand that had reached into the container with more carrots, showing they understood that the more carrots were in the container, the more likely it was that a carrot had been picked. Monkeys have passed similar tests, and human babies can do it at 12 months old. But giraffes’ brains are much smaller than primates’ relative to body size, so it was notable to see how well they grasped the concept. Such discoveries are becoming less surprising every year, however, as a flood of new research overturns longstanding assumptions about what animal minds are and aren’t capable of. A recent wave of popular books on animal cognition argue that skills long assumed to be humanity’s prerogative, from planning for the future to a sense of fairness, actually exist throughout the animal kingdom – and not just in primates or other mammals, but in birds, octopuses and beyond. In 2018, for instance, a team at the University of Buenos Aires found evidence that zebra finches, whose brains weigh half a gram, have dreams. Monitors attached to the birds’ throats found that when they were asleep, their muscles sometimes moved in exactly the same pattern as when they were singing out loud; in other words, they seemed to be dreaming about singing. © 2023 Guardian News & Media Limited
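The statistical reasoning described above can be made concrete with a small simulation. This is a minimal sketch, not the study's actual protocol: the container proportions (80/20) and the number of trials are hypothetical choices for illustration. It shows why a giraffe that always picks the hand drawn from the carrot-rich container ends up with a carrot at a rate matching that container's carrot proportion.

```python
import random

def draw_slice(container, rng):
    """Draw one slice at random from a container (a list of slice names)."""
    return rng.choice(container)

def simulate(trials=10_000, seed=0):
    """Fraction of trials in which the carrot-rich hand actually holds a carrot."""
    rng = random.Random(seed)
    # Hypothetical mixtures; the study's exact proportions may differ.
    mostly_carrots = ["carrot"] * 80 + ["courgette"] * 20
    mostly_courgettes = ["carrot"] * 20 + ["courgette"] * 80
    carrot_outcomes = 0
    for _ in range(trials):
        rich_hand = draw_slice(mostly_carrots, rng)
        draw_slice(mostly_courgettes, rng)  # the other, unchosen hand
        # The giraffe's winning strategy: pick the hand drawn from the
        # container holding more carrots.
        if rich_hand == "carrot":
            carrot_outcomes += 1
    return carrot_outcomes / trials

print(simulate())
```

Under the assumed 80/20 mixture, the returned fraction settles near 0.8, so choosing by container composition beats chance (0.5) substantially, which is the inference the test giraffes appeared to make.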

Keyword: Evolution; Learning & Memory
Link ID: 28808 - Posted: 05.31.2023