Links for Keyword: Learning & Memory



Links 1 - 20 of 1193

Jason Bruck

Human actions have taken a steep toll on whales and dolphins. Some studies estimate that small whale abundance, which includes dolphins, has fallen 87% since 1980, and thousands of whales die from rope entanglement annually. But humans also cause less obvious harm. Researchers have found changes in the stress levels, reproductive health and respiratory health of these animals, but this valuable data is extremely hard to collect.

To better understand how people influence the overall health of dolphins, my colleagues and I at Oklahoma State University’s Unmanned Systems Research Institute are developing a drone to collect samples from the spray that comes from their blowholes. Using these samples, we will learn more about these animals’ health, which can aid in their conservation.

Today, researchers wanting to measure wild dolphins’ health primarily use remote biopsy darting – where researchers use a small dart to collect a sample of tissue – or handle the animals in order to collect samples. These methods don’t physically harm the animals, but despite precautions, they can be disruptive and stressful for dolphins. Additionally, this process is challenging, time-consuming and expensive.

My current research focus is on dolphin perception – how they see, hear and sense the world. Using my experience, I am part of a team building a drone specifically designed to be an improvement over current sampling methods, both for dolphins and the researchers. Our goal is to develop a quiet drone that can fly into a dolphin’s blind spot and collect samples from the mucus mixed with the water and air that a dolphin sprays from its blowhole when it exhales. This spray is called the blow. Dolphins would experience less stress, and teams could collect more samples at less expense.

© 2010–2020, The Conversation US, Inc.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27342 - Posted: 07.02.2020

By Jack J. Lee

For some bottlenose dolphins, finding a meal may be about who you know. Dolphins often learn how to hunt from their mothers. But when it comes to at least one foraging trick, Indo-Pacific bottlenose dolphins in Western Australia’s Shark Bay pick up the behavior from their peers, researchers argue in a report published online June 25 in Current Biology. While previous studies have suggested that dolphins learn from peers, this study is the first to quantify the importance of social networks over other factors, says Sonja Wild, a behavioral ecologist at the University of Konstanz in Germany.

Cetaceans — dolphins, whales and porpoises — are known for using clever strategies to round up meals. Humpback whales (Megaptera novaeangliae) off Alaska sometimes use their fins and circular bubble nets to catch fish (SN: 10/15/19). At Shark Bay, Indo-Pacific bottlenose dolphins (Tursiops aduncus) use sea sponges to protect their beaks while rooting for food on the seafloor, a strategy the animals learn from their mothers (SN: 6/8/05).

These Shark Bay dolphins also use a more unusual tool-based foraging method called shelling. A dolphin will trap underwater prey in a large sea snail shell, poke its beak into the shell’s opening, lift the shell above the water’s surface and shake the contents into its mouth.

© Society for Science & the Public 2000–2020.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27328 - Posted: 06.26.2020

Natalie Dombois for Quanta Magazine

It’s not surprising that the fruit fly larva in the laboratory of Jimena Berni crawls across its large plate of agar in search of food. “A Drosophila larva is either eating or not eating, and if it’s not eating, it wants to eat,” she said. The surprise is that this larva can search for food at all. Owing to a suite of genetic tricks performed by Berni, it has no functional brain. In fact, the systems that normally relay sensations of touch and feedback from its muscles have also been shut down.

Berni, an Argentinian neuroscientist whose investigations of fruit fly nervous systems recently earned her a group leader position at the University of Sussex, is learning what the tiny cluster of neurons that directly controls the larva’s muscles does when it’s allowed to run free, entirely without input from the brain or senses. How does the animal forage when it’s cut off from information about the outside world?

The answer is that it moves according to a very particular pattern of random movements, a finding that thrilled Berni and her collaborator David Sims, a professor of marine ecology at the Marine Biological Association in Plymouth, U.K. For in its prowl for food, this insensate maggot behaves exactly like an animal Sims has studied for more than 25 years — a shark.

In neuroscience, the usual schema for considering behavior has it that the brain receives inputs, combines them with stored information, then decides what to do next. This corresponds to our own intuitions and experiences, because we humans are almost always responding to what we sense and remember. But for many creatures, useful information isn’t always available, and for them something else may also be going on. When searching their environment, sharks and a diverse array of other species, now including fruit fly larvae, sometimes default to the same pattern of movement, a specific type of random motion called a Lévy walk.

All Rights Reserved © 2020
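A Lévy walk of the kind described above can be sketched in a few lines of code: movement directions are uniform, but step lengths are drawn from a heavy-tailed power law, so clusters of short steps are punctuated by occasional very long ones. The sketch below is only illustrative; the exponent `mu`, the minimum step length of 1, and the function name are arbitrary choices, not values taken from the research.

```python
import math
import random

def levy_walk(n_steps, mu=2.0, seed=0):
    """Simulate a 2-D Levy walk: directions are uniform, and step lengths
    follow a power-law (Pareto-type) distribution p(l) ~ l**(-mu), l >= 1."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # Inverse-transform sampling: u in (0, 1] gives l = u**(-1/(mu - 1))
        u = 1.0 - rng.random()
        step = u ** (-1.0 / (mu - 1.0))
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        path.append((x, y))
    return path
```

Plotting such a path shows the characteristic mix of local searching and long relocations; a Brownian walk (steps of roughly constant length) lacks the long jumps.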

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 27301 - Posted: 06.13.2020

Ruth Williams

With their tiny brains and renowned ability to memorize nectar locations, honeybees are a favorite model organism for studying learning and memory. Such research has indicated that to form long-term memories—ones that last a day or more—the insects need to repeat a training experience at least three times. By contrast, short- and mid-term memories that last seconds to minutes and minutes to hours, respectively, need only a single learning experience.

Exceptions to this rule have been observed, however. For example, in some studies, bees formed long-lasting memories after a single learning event. Such results are often regarded as circumstantial anomalies, and the memories formed are not thought to require protein synthesis, a molecular feature of long-term memories encoded by repeated training, says Martin Giurfa of the University of Toulouse. But the anomalous findings, together with research showing that fruit flies and ants can form long-term memories after single experiences, piqued Giurfa’s curiosity. Was it possible that honeybees could reliably do the same, and if so, what molecular mechanisms were required?

Giurfa reasoned that the ability to form robust memories might depend on the particular type of bee and the experience. Within a honeybee colony, there are nurses, who clean the hive and feed the young; guards, who patrol and protect the hive; and foragers, who search for nectar. Whereas previous studies have tested bees en masse, Giurfa and his colleagues focused on foragers, tasking them with remembering an experience relevant to their role: an odor associated with a sugary reward.

© 1986–2020 The Scientist.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 15: Brain Asymmetry, Spatial Cognition, and Language
Link ID: 27272 - Posted: 06.01.2020

Diana Kwon

What if you could boost your brain’s processing capabilities simply by sticking electrodes onto your head and flipping a switch? Berkeley, California–based neurotechnology company Humm has developed a device that it claims serves that purpose. Their “bioelectric memory patch” is designed to enhance working memory—the type of short-term memory required to temporarily hold and process information—by noninvasively stimulating the brain.

In recent years, neurotechnology companies have unveiled direct-to-consumer (DTC) brain stimulation devices that promise a range of benefits, including enhancing athletic performance, increasing concentration, and reducing depression. Humm’s memory patch, which resembles a large, rectangular Band-Aid, is one such product. Users can stick the device to their forehead and toggle a switch to activate it. Electrodes within the patch generate transcranial alternating current stimulation (tACS), a method of noninvasively zapping the brain with oscillating waves of electricity. The company recommends 15 minutes of stimulation to give users up to “90 minutes of boosted learning” immediately after using the device. The product is set for public release in 2021.

Over the last year or so, Humm has generated much excitement among investors, consumers, and some members of the scientific community. In addition to raising several million dollars in venture capital funding, the company has drawn interest both from academic research labs and from the United States military. According to Humm cofounder and CEO Iain McIntyre, the US Air Force has ordered approximately 1,000 patches to use in a study at their training academy that is set to start later this year.

© 1986–2020 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27269 - Posted: 05.29.2020

Alejandra Manjarrez

The brain is a master of forming patterns, even when it involves events occurring at different times. Take the phenomenon of trace fear conditioning—scientists can get an animal to notice the relationship between a neutral stimulus and an aversive stimulus separated by a temporal chasm (the trace) of a few or even tens of seconds. While it’s a well-established protocol in neuroscience and psychology labs, the mechanism for how the brain bridges the time gap between two related stimuli in order to associate them is “one of the most enigmatic and highly investigated” questions, says Columbia University neuroscientist Attila Losonczy. If the first stimulus is finished, the information about its presence and identity “should be somehow maintained through some neuronal mechanism,” he explains, so it can be associated with the second stimulus coming later.

Losonczy and his colleagues have recently investigated how this might occur in a study published May 8 in Neuron. They measured the neural activity in the hippocampal CA1 region of the brain—known to be crucial for the formation of memories—of mice exposed to trace fear conditioning. The team found that associating the two events separated by time involved the activation of a subset of neurons that fired sparsely every time mice received the first stimulus and during the time gap that followed. The pattern emerged only after mice had learned to associate both stimuli.

The study highlights “the important question of how we link memories across time,” says Denise Cai, a neuroscientist at the Icahn School of Medicine at Mount Sinai who was not involved in the work. Studying the basic mechanisms of temporal association is critical for understanding how it goes wrong in disorders such as post-traumatic stress disorder (PTSD) or Alzheimer’s disease, she says.

© 1986–2020 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 16: Psychopathology: Biological Basis of Behavior Disorders
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 12: Psychopathology: The Biology of Behavioral Disorders
Link ID: 27254 - Posted: 05.18.2020

Sukanya Charuchandra

Even for Darold Treffert, an expert in the study of savants who has met around 300 people with conditions such as autism who possess extraordinary mental abilities, Kim Peek stood out from the pack. Treffert first spoke with Peek on the phone in the 1980s. Peek asked Treffert for his date of birth and then proceeded to recount historical events that had taken place on that day and during that week, Treffert says. This display of recall left Treffert with no doubt that Peek was a savant.

Peek’s abilities dazzled screenwriter Barry Morrow when the two men met in 1984 at a committee meeting of the Association for Retarded Citizens. Morrow went on to pen the script for the 1988 film Rain Man, basing Dustin Hoffman’s character on Peek.

The concept of savant syndrome dates back to 1887, when physician J. Langdon Down coined the term “idiot savant” for persons who showed low IQ but superlative artistic, musical, mathematical, or other skills. (At the time, the word “idiot” denoted low IQ and was not considered insulting.)

Nine months after Peek was born in 1951, a doctor told his family “that Kim was retarded, and they should put him in an institution and forget about him,” says Treffert. “Another doctor suggested a lobotomy, which fortunately they didn’t carry out.” Instead, his parents raised him at home in Utah, where he raced through books, memorizing them. Despite his feats of memory and other abilities, such as performing impressive calculations in his head, Peek never learned to carry out many everyday tasks, such as dressing himself. MRIs would later reveal that Peek had abnormalities in the left hemisphere of his brain and was missing his corpus callosum, the fiber bundle that carries communication between the two cerebral hemispheres.

© 1986–2020 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27251 - Posted: 05.18.2020

Diana Kwon

As Earth rotates around its axis, the organisms that inhabit its surface are exposed to daily cycles of darkness and light. In animals, light has a powerful influence on sleep, hormone release, and metabolism. Work by Takaomi Sakai, a neuroscientist at Tokyo Metropolitan University, and his team suggests that light may also be crucial for forming and maintaining long-term memories.

The puzzle of how memories persist in the brain has long been of interest to Sakai. Researchers had previously demonstrated, in both rodents and flies, that the production of new proteins is necessary for maintaining long-term memories, but Sakai wondered how this process persisted over several days given cells’ molecular turnover. Maybe, he thought, an environmental stimulus, such as the light-dark cycles, periodically triggered protein production to enable memory formation and storage.

Sakai and his colleagues conducted a series of experiments to see how constant darkness would affect the ability of Drosophila melanogaster to form long-term memories. Male flies exposed to light after interacting with an unreceptive female showed reduced courtship behaviors toward new female mates several days later, indicating they had remembered the initial rejection. Flies kept in constant darkness, however, continued their attempts to copulate. The team then probed the molecular mechanisms of these behaviors and discovered a pathway by which light activates cAMP response element-binding protein (CREB)—a transcription factor previously identified as important for forming long-term memories—within certain neurons found in the mushroom bodies, the memory center in fly brains.

© 1986–2020 The Scientist.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 10: Biological Rhythms and Sleep
Link ID: 27248 - Posted: 05.16.2020

Ashley Yeager

Nearly seven years ago, Sheena Josselyn and her husband Paul Frankland were talking with their two-year-old daughter and started to wonder why she could easily remember what happened over the last day or two but couldn’t recall events that had happened a few months before. Josselyn and Frankland, both neuroscientists at the Hospital for Sick Children Research Institute in Toronto, suspected that maybe neurogenesis, the creation of new neurons, could be involved in this sort of forgetfulness.

In humans and other mammals, neurogenesis happens in the hippocampus, a region of the brain involved in learning and memory, tying the generation of new neurons to the process of making memories. Josselyn and Frankland knew that in infancy, the brain makes a lot of new neurons, but that neurogenesis slows with age. Yet youngsters have more trouble making long-term memories than adults do, a notion that doesn’t quite jibe with the idea that the principal function of neurogenesis is memory formation.

To test the connection between neurogenesis and forgetting, the researchers put mice in a box and shocked their feet with an electric current, then returned the animals to their home cages and either let them stay sedentary or had them run on a wheel, an activity that boosts neurogenesis. Six weeks later, the researchers put the mice back in the box where they had received the shocks. There, the sedentary mice froze in fear, anticipating a shock, but the mice that had run on a wheel didn’t show signs of anxiety. It was as if the wheel-running mice had forgotten they’d been shocked before.

© 1986–2020 The Scientist.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27245 - Posted: 05.14.2020

Catherine Offord

No matter how he looked at the data, Albert Tsao couldn’t see a pattern. Over several weeks in 2007 and again in 2008, the 19-year-old undergrad trained rats to explore a small trial arena, chucking them pieces of tasty chocolate cereal by way of encouragement. He then recorded the activity of individual neurons in the animals’ brains as they scampered, one at a time, about that same arena. He hoped that the experiment would offer clues as to how the rats’ brains were forming memories, but “the data that it gave us was confusing,” he says. There wasn’t any obvious pattern to the animals’ neural output at all.

Then enrolled at Harvey Mudd College in California, Tsao was doing the project as part of a summer internship at the Kavli Institute for Systems Neuroscience in Norway, in a lab that focused on episodic memory—the type of long-term memory that allows humans and other mammals to recall personal experiences (or episodes), such as going on a first date or spending several minutes searching for chocolate. Neuroscientists suspected that the brain organizes these millions of episodes partly according to where they took place.

The Kavli Institute’s Edvard Moser and May-Britt Moser had recently made a breakthrough with the discovery of “grid cells,” neurons that generate a virtual spatial map of an area, firing whenever the animal crosses the part of the map that that cell represents. These cells, the Mosers reported, were situated in a region of rats’ brains called the medial entorhinal cortex (MEC) that projects many of its neurons into the hippocampus, the center of episodic memory formation. Inspired by the findings, Tsao had opted to study a region right next to the MEC called the lateral entorhinal cortex (LEC), which also feeds into the hippocampus.

© 1986–2020 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27232 - Posted: 05.05.2020

Amber Dance

A mouse finds itself in a box it’s never seen before. The walls are striped on one side, dotted on the other. The orange-like odor of acetophenone wafts from one end of the box, the spiced smell of carvone from the other. The mouse remembers that the orange smell is associated with something good. Although it may not recall the exact nature of the reward, the mouse heads toward the scent. Except this mouse has never smelled acetophenone in its life. Rather, the animal is responding to a false memory, implanted in its brain by neuroscientists at the Hospital for Sick Children in Toronto.

Sheena Josselyn, a coauthor on a 2019 Nature Neuroscience study reporting the results of the project, says the goal was not to confuse the rodent, but for the scientists to confirm their understanding of mouse memory. “If we really understand memory, we should be able to trick the brain into remembering something that never happened at all,” she explains. By simultaneously activating the neurons that sense acetophenone and those associated with reward, the researchers created the “memory” that the orange-y scent heralded good things.

Thanks to optogenetics, which uses a pulse of light to activate or deactivate neurons, Josselyn and other scientists are manipulating animal memories in all kinds of ways. Even before the Toronto team implanted false memories into mice, researchers were making rodents forget or recall an event with the flick of a molecular light switch. With every flash of light, they test their hypotheses about how these animals—and by extension, people—collect, store, and access past experiences. Scientists are also examining how memory formation and retrieval change with age, how those processes are altered in animal models of Alzheimer’s disease, and how accessing memories can influence an animal’s emotional state.

© 1986–2020 The Scientist.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 27228 - Posted: 05.02.2020

Diana Kwon

As rodents scuttle through a maze, scientists can observe the activity of their brains’ “inner GPS,” neurons that manage spatial orientation and navigation. This positioning system was revealed through two different discoveries, decades apart. In 1971, neuroscientist John O’Keefe found place cells, neurons that are consistently activated when rats are in a specific location, while observing the animals as they ran around an enclosure. More than thirty years later, neuroscientists May-Britt and Edvard Moser used a similar method to identify grid cells, neurons that fire at regular intervals as animals move, enabling them to keep track of navigational cues.

It was the early 2010s when neuroscientist Elizabeth Buffalo and her team at Emory University’s Yerkes National Primate Research Center in Atlanta started investigating what the brain’s GPS looks like in primates. While conducting memory tests by tracking the eye movements of primates viewing either familiar or unfamiliar images, the researchers began to wonder: Was this system also active in stationary animals? “They were moving their eyes as they were forming a memory of these pictures,” Buffalo says. “So we thought that maybe this eye movement exploration was something that primates do in an analogous way to how rodents explore as they move around a physical environment.”

One of Buffalo’s graduate students, Nathaniel Killian, put this hypothesis to the test. Working with monkeys, he placed electrodes into the entorhinal cortex—the brain region where grid cells are found in rodents—and recorded brain activity while the animals viewed images on a screen. One day, Killian came into a lab meeting with an announcement: he had found grid cells in the primate brain. Although it took many more months to complete additional experiments to validate the results, Buffalo remembers thinking during that meeting, “Wow, we’re seeing something really new.”

© 1986–2020 The Scientist
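The hexagonal firing map of an idealized grid cell is commonly modeled as the sum of three cosine gratings oriented 60 degrees apart. The toy model below illustrates that textbook idealization only; the `spacing` and `phase` parameters are arbitrary choices, not values from the recordings described above.

```python
import math

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): summing three
    cosine gratings 60 degrees apart yields a hexagonal firing map."""
    # Wave number giving grid vertices separated by `spacing`
    k = 4.0 * math.pi / (math.sqrt(3.0) * spacing)
    total = 0.0
    for i in range(3):
        theta = math.pi / 3.0 * i  # grating orientations: 0, 60, 120 degrees
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # Shift and scale the sum (range -1.5 to 3) to a rate between 0 and 1
    return (total + 1.5) / 4.5
```

Evaluating `grid_cell_rate` over a grid of positions and plotting the result shows the hexagonal lattice of firing fields that defines a grid cell.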

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27225 - Posted: 05.02.2020

By Asher Elbein

Rufous treepies, birds in the crow family native to South and Southeast Asia, usually eat insects, seeds or fruits. But some of them have learned to eat fire. Well, not exactly, but close. At a small temple in the Indian state of Gujarat, the caretakers regularly set out small votive candles made with clarified butter. The birds flit down to steal the candles, extinguish the butter-soaked wicks with a quick shake of their heads and then gulp them down.

This willingness to experiment with new foods and ways of foraging is an indicator of behavioral flexibility, and some scientists think it is evidence that certain species of birds might be less vulnerable to extinction. “The idea is that if a species has individuals that are capable of these novel behaviors, they’ll respond with changes in their behavior more easily than individuals from species that do not tend to produce novel behaviors like that,” said Louis Lefebvre, a professor at McGill University in Montreal and an author on the study. “The idea is pretty simple. The problem was to be able to test it in a convincing way.”

A team of researchers, led by Simon Ducatez of Spain’s Center for Research on Ecology and Forestry Applications and including Dr. Lefebvre, combed through 204 ornithological journals for mentions of novel behaviors and feeding innovations, comparing the number of sightings in each species with their risk of extinction. Their results were published this month in Nature Ecology & Evolution. Dr. Lefebvre said the approach provided backup to earlier cognition experiments he had led with wild-caught birds, such as testing their ability to figure out how to open boxes full of food.

© 2020 The New York Times Company

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27220 - Posted: 04.29.2020

By Simon Makin

Our recollection of events is usually not like a replay of digital video from a security camera—a passive observation that faithfully reconstructs the spatial and sensory details of everything that happened. More often memory segments what we experience into a string of discrete, connected events. For instance, you might remember that you went for a walk before lunch at a given time last week without recalling the soda bottle strewn on the sidewalk, the crow cawing in the oak tree in your yard or the chicken salad sandwich you ate upon your return. Your mind designates a mental basket for “walk” and a subsequent bin for “lunch” that, once accessed, make many of these finer details available. This arrangement raises the question of how the brain performs such categorization.

A new study by neuroscientist Susumu Tonegawa of the Massachusetts Institute of Technology and his colleagues claims to have discovered the neural processing that makes this organization of memory into discrete units possible. The work has implications for understanding how humans generalize knowledge, and it could aid efforts to develop AI systems that learn faster.

A brain region called the hippocampus is critical for memory formation and also seems to be involved in navigation. Neurons in the hippocampus called “place” cells selectively respond to being in specific locations, forming a cognitive map of the environment. Such spatial information is clearly important for “episodic” (autobiographical rather than factual) memory. But so, too, are other aspects of experience, such as changing sensory input.

There is evidence that neurons in the hippocampus encode sensory changes by altering the frequency at which they fire, a phenomenon termed “rate remapping.” According to research by neuroscientist Loren Frank of the University of California, San Francisco, and his colleagues, such changes may also encode information about where an animal has been and where it is going, enabling rate remapping to represent trajectories of travel.

© 2020 Scientific American
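A place cell’s location tuning is often idealized as a Gaussian firing field, and under rate remapping the field stays in the same place while its peak firing rate changes with sensory context. The sketch below is a toy illustration of that idea; the field center, width, and firing rates are made-up values, not parameters from the studies discussed here.

```python
import math

def place_cell_rate(pos, center=0.5, width=0.1, peak_rate=20.0):
    """Gaussian place-field model: firing rate peaks when the animal
    is at the field center and falls off with distance from it."""
    return peak_rate * math.exp(-((pos - center) ** 2) / (2.0 * width ** 2))

# Rate remapping: the field stays at the same location, but the peak
# firing rate changes with sensory context (e.g., cue A vs. cue B)
rate_context_a = place_cell_rate(0.5, peak_rate=20.0)  # 20 Hz at the center
rate_context_b = place_cell_rate(0.5, peak_rate=8.0)   # same place, lower rate
```

This contrasts with "global remapping," in which the field itself moves to a different location; in rate remapping only the firing rate carries the contextual information.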

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory, Learning, and Development
Link ID: 27187 - Posted: 04.14.2020

Anne Trafton | MIT News Office

Imagine you are meeting a friend for dinner at a new restaurant. You may try dishes you haven’t had before, and your surroundings will be completely new to you. However, your brain knows that you have had similar experiences — perusing a menu, ordering appetizers, and splurging on dessert are all things that you have probably done when dining out.

MIT neuroscientists have now identified populations of cells that encode each of these distinctive segments of an overall experience. These chunks of memory, stored in the hippocampus, are activated whenever a similar type of experience takes place, and are distinct from the neural code that stores detailed memories of a specific location. The researchers believe that this kind of “event code,” which they discovered in a study of mice, may help the brain interpret novel situations and learn new information by using the same cells to represent similar experiences.

“When you encounter something new, there are some really new and notable stimuli, but you already know quite a bit about that particular experience, because it’s a similar kind of experience to what you have already had before,” says Susumu Tonegawa, a professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory. Tonegawa is the senior author of the study, which appears today in Nature Neuroscience. Chen Sun, an MIT graduate student, is the lead author of the paper. New York University graduate student Wannan Yang and Picower Institute technical associate Jared Martin are also authors of the paper.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory, Learning, and Development; Chapter 14: Attention and Consciousness
Link ID: 27174 - Posted: 04.07.2020

Researchers at the National Institutes of Health have discovered in mice what they believe is the first known genetic mutation to improve cognitive flexibility—the ability to adapt to changing situations. The gene, KCND2, codes for a protein that regulates potassium channels, which control electrical signals that travel along neurons. The electrical signals stimulate chemical messengers that jump from neuron to neuron. The study, which appears in Nature Communications, was led by Dax Hoffman, Ph.D., chief of the Section on Neurophysiology at NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD).

The KCND2 protein, when modified by an enzyme, slows the generation of electrical impulses in neurons. The researchers found that altering a single base pair in the KCND2 gene enhanced the ability of the protein to dampen nerve impulses. Mice with this mutation performed better than mice without the mutation in a cognitive task. The task involved finding and swimming to a slightly submerged platform that had been moved to a new location. Mice with the mutation found the relocated platform much faster than their counterparts without the mutation.

The researchers plan to investigate whether the mutation will affect neural networks in the animals’ brains. They added that studying the gene and its protein may ultimately lead to insights on the nature of cognitive flexibility in people. It also may help improve understanding of epilepsy, schizophrenia, Fragile X syndrome, and autism spectrum disorder, all of which have been associated with other mutations in KCND2.

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 27148 - Posted: 03.30.2020

May-Britt Moser & Edvard Moser There was something of the Viking about Per Andersen. The intrepid and steadfast Norwegian was renowned for his attacks on the deepest puzzle of the brain: how its wiring and electrical activity give rise to behaviour and experience. When he was a student in the 1950s, most neuroscientists studied accessible parts of the mammalian nervous system — the junctions between nerves and muscles, say. Andersen worked on the cerebral cortex, which processes higher-level functions: perception, voluntary movement, planning and abstract thinking. His pioneering recordings of electrical activity in the hippocampus — a part of the cortex involved in memory — launched a new era in physiological understanding of the brain and laid the foundations of modern systems neuroscience. He died on 17 February, aged 90. In 1949, psychologist Donald Hebb predicted that learning might depend on repeated activity strengthening the connections — synapses — in networks of neurons. Andersen saw that this was the case in the hippocampus. As the effect was too fleeting to account directly for memory storage, he encouraged his student Terje Lømo to investigate. In 1973, in one of the greatest discoveries of twentieth-century neuroscience, Lømo and British visiting scholar Tim Bliss reported from Andersen’s laboratory that bursts of electrical stimulation at certain frequencies enhanced connectivity for hours or days. This phenomenon — long-term potentiation (LTP) — remains the main explanation for how we form and store memories (T. V. P. Bliss and T. Lømo J. Physiol. 232, 331–356; 1973). We met Andersen as students, in the late 1980s. Our work with him on LTP and animal learning found differences in function between regions of the hippocampus and demonstrated changes in connectivity related to behaviour. His hunch that we should record activity from single cells led to our discovery of specialized neurons in the cortex that support the sense of where the body is in space. 
The work was a direct result of his insight. © 2020 Springer Nature Limited

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 27130 - Posted: 03.21.2020

By R. Douglas Fields Our concepts of how the two and a half pounds of flabby flesh between our ears accomplish learning date to Ivan Pavlov’s classic experiments, where he found that dogs could learn to salivate at the sound of a bell. In 1949 psychologist Donald Hebb adapted Pavlov’s “associative learning rule” to explain how brain cells might acquire knowledge. Hebb proposed that when two neurons fire together, sending off impulses simultaneously, the connections between them—the synapses—grow stronger. When this happens, learning has taken place. In the dogs’ case, it would mean the brain now knows that the sound of a bell is followed immediately by the presence of food. This idea gave rise to an oft-quoted axiom: “Synapses that fire together wire together.” The theory proved sound, and the molecular details of how synapses change during learning have been worked out. But not everything we remember results from reward or punishment, and in fact, most experiences are forgotten. Even when synapses do fire together, they sometimes do not wire together. What we retain depends on our emotional response to an experience, how novel it is, where and when the event occurred, our level of attention and motivation during the event, and how we process these thoughts and feelings while asleep. A narrow focus on the synapse has given us a mere stick-figure conception of how learning and the memories it engenders work. It turns out that strengthening a synapse cannot produce a memory on its own, except for the most elementary reflexes in simple circuits. Vast changes throughout the expanse of the brain are necessary to create a coherent memory. 
Whether you are recalling last night’s conversation with dinner guests or using an acquired skill such as riding a bike, the activity of millions of neurons in many different regions of your brain must become linked to produce a coherent memory that interweaves emotions, sights, sounds, smells, event sequences and other stored experiences. Because learning encompasses so many elements of our experiences, it must incorporate different cellular mechanisms beyond the changes that occur in synapses. This recognition has led to a search for new ways to understand how information is transmitted, processed and stored in the brain to bring about learning. In the past 10 years neuroscientists have come to realize that the iconic “gray matter” that makes up the brain’s outer surface—familiar from graphic illustrations found everywhere, from textbooks to children’s cartoons—is not the only part of the organ involved in the inscription of a permanent record of facts and events for later recall and replay. It turns out that areas below the deeply folded, gray-colored surface also play a pivotal role in learning. © 2020 Scientific American
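Hebb’s rule, as described above, can be stated quantitatively: the strength of a synapse increases in proportion to the product of the activity of the two neurons it connects. The sketch below illustrates this idea in Python; the learning rate and activity values are illustrative choices, not figures from any study discussed here.

```python
# A minimal sketch of Hebb's rule ("synapses that fire together wire together"):
# a synaptic weight grows in proportion to the product of the pre- and
# postsynaptic neurons' simultaneous activity.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Strengthen a synapse when the pre- and postsynaptic neurons are co-active."""
    return weight + learning_rate * pre_activity * post_activity

# Repeated pairings of bell (pre) and food response (post) strengthen
# the connection, a toy analogue of Pavlov's conditioning.
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
# After five pairings the weight has grown to roughly 0.5.

# If the two neurons never fire together, the weight does not change.
w_idle = hebbian_update(0.0, pre_activity=1.0, post_activity=0.0)
```

Note that this simple rule only ever strengthens connections; as the article argues, it cannot by itself explain forgetting, emotional modulation, or the brain-wide coordination that real memories require.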

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 2: Cells and Structures: The Anatomy of the Nervous System
Link ID: 27114 - Posted: 03.12.2020

In a study of epilepsy patients, researchers at the National Institutes of Health monitored the electrical activity of thousands of individual brain cells, called neurons, as patients took memory tests. They found that the firing patterns of the cells that occurred when patients learned a word pair were replayed fractions of a second before they successfully remembered the pair. The study was part of an NIH Clinical Center trial for patients with drug-resistant epilepsy whose seizures cannot be controlled with drugs. “Memory plays a crucial role in our lives. Just as musical notes are recorded as grooves on a record, it appears that our brains store memories in neural firing patterns that can be replayed over and over again,” said Kareem Zaghloul, M.D., Ph.D., a neurosurgeon-researcher at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS) and senior author of the study published in Science. Dr. Zaghloul’s team has been recording electrical currents of drug-resistant epilepsy patients temporarily living with surgically implanted electrodes designed to monitor brain activity in the hopes of identifying the source of a patient’s seizures. This period also provides an opportunity to study neural activity during memory. In this study, his team examined the activity used to store memories of our past experiences, which scientists call episodic memories. In 1957, the case of epilepsy patient H.M. provided a breakthrough in memory research. H.M. could not remember new experiences after part of his brain was surgically removed to stop his seizures. Since then, research has pointed to the idea that episodic memories are stored, or encoded, as neural activity patterns that our brains replay when triggered by such things as the whiff of a familiar scent or the riff of a catchy tune. But exactly how this happens was unknown.

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 27098 - Posted: 03.06.2020

By Cindi May Music makes life better in so many ways. It elevates mood, reduces stress and eases pain. Music is heart-healthy, because it can lower blood pressure, reduce heart rate and decrease stress hormones in the blood. It also connects us with others and enhances social bonds. Music can even improve workout endurance and increase our enjoyment of challenging activities. The fact that music can make a difficult task more tolerable may be why students often choose to listen to it while doing their homework or studying for exams. But is listening to music the smart choice for students who want to optimize their learning? A new study by Manuel Gonzalez of Baruch College and John Aiello of Rutgers University suggests that for some students, listening to music is indeed a wise strategy, but for others, it is not. The effect of music on cognitive functioning appears not to be “one-size-fits-all” but to instead depend, in part, on your personality—specifically, on your need for external stimulation. People with a high requirement for such stimulation tend to get bored easily and to seek out external input. Those individuals often do worse, paradoxically, when listening to music while engaging in a mental task. People with a low need for external stimulation, on the other hand, tend to improve their mental performance with music. But other factors play a role as well. Gonzalez and Aiello took a fairly sophisticated approach to understanding the influence of music on intellectual performance, assessing not only listener personality but also manipulating the difficulty of the task and the complexity of the music. Whether students experience a perk or a penalty from music depends on the interplay of the personality of the learner, the mental task, and the music. © 2020 Scientific American

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 14: Attention and Consciousness
Link ID: 27093 - Posted: 03.05.2020