Links for Keyword: Learning & Memory



Links 1 - 20 of 1080

by Sarah DiGiulio Why is it that you can perfectly recite the words to *NSYNC’s “Bye Bye Bye,” but can’t remember the title of the new TV show you started watching on Netflix and wanted to tell your coworker about? We remember things because they stand out, because they relate to and can easily be integrated into our existing knowledge base, or because we retrieve, recount or use them repeatedly over time, explains Sean Kang, PhD, assistant professor in the Department of Education at Dartmouth College, whose research focuses on the cognitive psychology of learning and memory. “The average layperson trying to learn nuclear physics for the first time, for example, will probably find it very difficult to retain that information." That's because they likely don’t have existing knowledge to connect the new information to. And on a molecular level, neuroscientists suspect that a physical process must be completed to form a memory, and that not remembering something is the result of that process not happening, explains Blake Richards, DPhil, assistant professor in the Department of Biological Sciences and Fellow at the Canadian Institute for Advanced Research. Just as you make a physical change to a piece of paper when you write a grocery list on it, or a physical change somewhere in the magnetization of your hard drive when you save a file on a computer, a physical change happens in your brain when you store a memory or new information. © 2018 NBC UNIVERSAL

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 25085 - Posted: 06.14.2018

By Ruth Williams The sun’s ultraviolet (UV) radiation is a major cause of skin cancer, but it offers some health benefits too, such as boosting production of essential vitamin D and improving mood. Today (May 17), a report in Cell adds enhanced learning and memory to UV’s unexpected benefits. Researchers have discovered that, in mice, exposure to UV light activates a molecular pathway that increases production of the brain chemical glutamate, heightening the animals’ ability to learn and remember. “The subject is of strong interest, because it provides additional support for the recently proposed theory of ultraviolet light’s regulation of the brain and central neuroendocrine system,” dermatologist Andrzej Slominski of the University of Alabama, who was not involved in the research, writes in an email to The Scientist. “It’s an interesting and timely paper investigating the skin-brain connection,” notes skin scientist Martin Steinhoff of University College Dublin’s Center for Biomedical Engineering, who also did not participate in the research. “The authors make an interesting observation linking moderate UV exposure to . . . [production of] the molecule urocanic acid. They hypothesize that this molecule enters the brain, activates glutaminergic neurons through glutamate release, and that memory and learning are increased.” © 1986-2018 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 14: Biological Rhythms, Sleep, and Dreaming
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 10: Biological Rhythms and Sleep
Link ID: 25052 - Posted: 06.02.2018

By Matthew Hutson It's a Saturday morning in February, and Chloe, a curious 3-year-old in a striped shirt and leggings, is exploring the possibilities of a new toy. Her father, Gary Marcus, a developmental cognitive scientist at New York University (NYU) in New York City, has brought home some strips of tape designed to adhere Lego bricks to surfaces. Chloe, well-versed in Lego, is intrigued. But she has always built upward. Could she use the tape to build sideways or upside down? Marcus suggests building out from the side of a table. Ten minutes later, Chloe starts sticking the tape to the wall. "We better do it before Mama comes back," Marcus says in a singsong voice. "She won't be happy." (Spoiler: The wall paint suffers.) Implicit in Marcus's endeavor is an experiment. Could Chloe apply what she had learned about an activity to a new context? Within minutes, she has a Lego sculpture sticking out from the wall. "Papa, I did it!" she exclaims. In her adaptability, Chloe is demonstrating common sense, a kind of intelligence that, so far, computer scientists have struggled to reproduce. Marcus believes the field of artificial intelligence (AI) would do well to learn lessons from young thinkers like her. Researchers in machine learning argue that computers trained on mountains of data can learn just about anything—including common sense—with few, if any, programmed rules. These experts "have a blind spot, in my opinion," Marcus says. "It's a sociological thing, a form of physics envy, where people think that simpler is better." He says computer scientists are ignoring decades of work in the cognitive sciences and developmental psychology showing that humans have innate abilities—programmed instincts that appear at birth or in early childhood—that help us think abstractly and flexibly, like Chloe. He believes AI researchers ought to include such instincts in their programs. © 2018 American Association for the Advancement of Science.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 25026 - Posted: 05.26.2018

By Shawn Hayward Brenda Milner has collected her share of awards, prizes, honorary degrees and other recognitions throughout her amazing career, but there is something special about being recognized with top honours from the city and province she has called home since 1944, all within one week. On May 8, the Speaker of the National Assembly of Quebec, Jacques Chagnon, presented Milner with its Medal of Honour, along with seven other Quebecers, including McGill alumna Dr. Joanne Liu. The Medal of Honour is awarded to public figures from all walks of life who, through their career, their work or their social commitment, have earned the recognition of the Members of the National Assembly and the people of Quebec. Milner added to that recognition the title of Commander of the Order of Montreal, given to her by Mayor Valérie Plante during a ceremony at City Hall on May 14. The Order of Montreal was created on the city’s 375th anniversary to recognize women and men who have contributed in a remarkable way to the city’s development and reputation. There are three ranks in the Order, Commander being the highest. A celebrated researcher at the Montreal Neurological Institute and Hospital (The Neuro), Milner turns 100 years old on July 15. She is the Dorothy J. Killam Professor at The Neuro, and a professor in the Department of Neurology and Neurosurgery at McGill University. © 2018 McGill Reporter

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24984 - Posted: 05.17.2018

By Usha Lee McFarling, STAT UCLA neuroscientists reported Monday that they have transferred a memory from one animal to another via injections of RNA, a startling result that challenges the widely held view of where and how memories are stored in the brain. The finding from the lab of David Glanzman hints at the potential for new RNA-based treatments to one day restore lost memories and, if correct, could shake up the field of memory and learning. “It’s pretty shocking,” said Dr. Todd Sacktor, a neurologist and memory researcher at SUNY Downstate Medical Center in Brooklyn, N.Y. “The big picture is we’re working out the basic alphabet of how memories are stored for the first time.” He was not involved in the research, which was published in eNeuro, the online journal of the Society for Neuroscience. Many scientists are expected to view the research more cautiously. The work is in snails, animals that have proven a powerful model organism for neuroscience but whose simple brains work far differently than those of humans. The experiments will need to be replicated, including in animals with more complex brains. And the results fly in the face of a massive amount of evidence supporting the deeply entrenched idea that memories are stored through changes in the strength of connections, or synapses, between neurons. © 2018 Scientific American

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24979 - Posted: 05.15.2018

Laurel Hamers Sluggish memories might be captured via RNA. The molecule, when taken from one sea slug and injected into another, appeared to transfer a rudimentary memory between the two, a new study suggests. Most neuroscientists believe long-term memories are stored by strengthening connections between nerve cells in the brain (SN: 2/3/18, p. 22). But these results, reported May 14 in eNeuro, buoy a competing argument: that some types of RNA molecules, and not linkages between nerve cells, are key to long-term memory storage. “It’s a very controversial idea,” admits study coauthor David Glanzman, a neuroscientist at UCLA. When poked or prodded, some sea slugs (Aplysia californica) will reflexively pull their siphon, a water-filtering appendage, into their bodies. Using electric shocks, Glanzman and his colleagues sensitized sea slugs to have a longer-lasting siphon-withdrawal response — a very basic form of memory. The team extracted RNA from those slugs and injected it into slugs that hadn’t been sensitized. These critters then showed the same long-lasting response to touch as their shocked companions. RNA molecules come in a variety of flavors that carry out specialized jobs, so it’s not yet clear what kind of RNA may be responsible for the effect, Glanzman says. But he suspects that it’s one of the handful of RNA varieties that don’t carry instructions to make proteins, the typical job of most RNA. (Called noncoding RNAs, these molecules are often involved in manipulating genes’ activity.) © Society for Science & the Public 2000 - 2018.

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24978 - Posted: 05.15.2018

By Gretchen Reynolds Call them tip-of-the-tongue moments: those times we can’t quite call up the name or word that we know we know. These frustrating lapses are thought to be caused by a brief disruption in the brain’s ability to access a word’s sounds. We haven’t forgotten the word, and we know its meaning, but its formulation dances teasingly just beyond our grasp. Though these mental glitches are common throughout life, they become more frequent with age. Whether this is an inevitable part of growing older or somehow lifestyle-dependent is unknown. But because evidence already shows that physically fit older people have reduced risks for a variety of cognitive deficits, researchers recently looked into the relationship between aerobic fitness and word recall. For the study, whose results appeared last month in Scientific Reports, researchers at the University of Birmingham tested the lungs and tongues, figuratively speaking, of 28 older men and women at the school’s human-performance lab. Volunteers were between 60 and 80 and healthy, with no clinical signs of cognitive problems. Their aerobic capacities were measured by having them ride a specialized stationary bicycle to exhaustion; fitness levels among the subjects varied greatly. This group and a second set of volunteers in their 20s then sat at computers as word definitions flashed on the screens, prompting them to indicate whether they knew and could say the implied word. The vocabulary tended to be obscure — “decanter,” for example — because words rarely used are the hardest to summon quickly. To no one’s surprise, the young subjects experienced far fewer tip-of-the-tongue failures than the seniors, even though they had smaller vocabularies over all, according to other tests. Within the older group, the inability to identify and say the right words was strongly linked to fitness. The more fit someone was, the less likely he or she was to go through a “what’s that word again?” moment of mental choking. 
© 2018 The New York Times Company

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24974 - Posted: 05.15.2018

By Neuroskeptic A new paper in ACS Chemical Neuroscience pulls no punches in claiming that most of what we know about the neuroscience of learning is wrong: Dendritic Learning as a Paradigm Shift in Brain Learning. According to authors Shira Sardi and colleagues, the prevailing view, that learning takes place in the synapses, is mistaken. Instead, they say, ‘dendritic learning’ is how brain cells really store information. If a neuron is a tree, the dendrites are the branches, while the synapses are the leaves on the ends of those branches. Here’s how Sardi et al. explain their new theory: on one side is the idea of synaptic learning, which proposes that each synapse can independently adjust its strength; on the other is dendritic learning, the idea that each neuron has only a small number of adjustable units, corresponding to the main dendritic branches. The evidence for dendritic learning, Sardi et al. say, comes from experiments using cultured neurons in which they found that a) some neurons are more likely to fire when stimulated in certain places, suggesting that dendrites can vary in their excitability, and b) these (presumed) dendritic excitability levels are plastic: they can ‘learn’.
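The contrast between the two schemes can be made concrete with a toy model. The sketch below is illustrative only; the variable names and numbers are not from Sardi et al.'s paper. A synaptic learner has one adjustable weight per input, while a dendritic learner groups inputs into a few branches and can adjust only a per-branch gain:

```python
# Toy contrast between synaptic and dendritic learning, as described above.
# All names and numbers here are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 12          # synapses onto one model neuron
n_branches = 3         # main dendritic branches (few adjustable units)

x = rng.random(n_inputs)                     # presynaptic activity

# Synaptic learning: one adjustable weight per synapse (12 parameters).
w_syn = np.ones(n_inputs)
out_synaptic = w_syn @ x

# Dendritic learning: inputs are grouped by branch, and only the
# per-branch gains are adjustable (3 parameters).
branch_of = np.repeat(np.arange(n_branches), n_inputs // n_branches)
g_branch = np.ones(n_branches)
branch_sums = np.array([x[branch_of == b].sum() for b in range(n_branches)])
out_dendritic = g_branch @ branch_sums

# With all weights/gains set to 1 the two models produce the same output;
# they differ in how many independent parameters learning can modify.
print(out_synaptic, out_dendritic, w_syn.size, g_branch.size)
```

The point of the sketch is the parameter count: learning in the first model can tune twelve independent quantities, while in the second it can tune only three, one per main branch.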

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24968 - Posted: 05.12.2018

Maria Temming An artificial intelligence that navigates its environment much like mammals do could help solve a mystery about our own internal GPS. Equipped with virtual versions of specialized brain nerve cells called grid cells, the AI could easily solve and plan new routes through virtual mazes. That performance, described online May 9 in Nature, suggests the grid cells in animal brains play a critical role in path planning. “This is a big step forward” in understanding our own navigational neural circuitry, says Ingmar Kanitscheider, a computational neuroscientist at the University of Texas at Austin not involved in the work. The discovery that rats track their location with the help of grid cells, which project an imaginary hexagonal lattice onto an animal’s surroundings, earned a Norwegian research team the 2014 Nobel Prize in physiology or medicine (SN Online: 10/6/14). Neuroscientists suspected these cells, which have also been found in humans, might help not only give mammals an internal coordinate system, but also plan direct paths between points (SN Online: 8/5/13). To test that idea, neuroscientist Caswell Barry at University College London, along with colleagues at Google DeepMind, created an AI that contained virtual nerve cells, or neurons, whose activity resembled that of real grid cells. The researchers trained this AI to navigate virtual mazes by giving the system reward signals when it reached its destination. © Society for Science & the Public 2000 - 2018
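The hexagonal lattice that grid cells project onto the surroundings, mentioned above, is often idealized in textbook fashion as a sum of three plane waves whose directions are 60 degrees apart. The sketch below is that standard idealization, not code from the study:

```python
# A standard toy model of a grid cell's spatial firing pattern (not the
# study's network): summing three cosine gratings at 60-degree angles
# produces peaks on a hexagonal lattice.
import numpy as np

def grid_firing(x, y, spacing=1.0):
    """Idealized firing rate of one grid cell at position (x, y)."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)          # wavevector magnitude
    rate = 0.0
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):   # three wave directions
        rate += np.cos(k * (x * np.cos(theta) + y * np.sin(theta)))
    return rate / 3.0                               # normalize to [-1, 1]

# The firing field repeats: the rate at the origin matches the rate one
# grid spacing away along a lattice axis, and dips between peaks.
print(grid_firing(0, 0), grid_firing(0, 1.0), grid_firing(0, 0.5))
```

Sampling `grid_firing` over a 2-D area yields the hexagonally arranged firing bumps that characterize grid cells; the `spacing` parameter sets the distance between neighboring peaks.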

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24958 - Posted: 05.10.2018

Alison Abbott Scientists have used artificial intelligence (AI) to recreate the complex neural codes that the brain uses to navigate through space. The feat demonstrates how powerful AI algorithms can assist conventional neuroscience research to test theories about the brain’s workings, but the approach is not going to put neuroscientists out of work just yet, say the researchers. The computer program, details of which were published in Nature on 9 May, was developed by neuroscientists at University College London (UCL) and AI researchers at the London-based Google company DeepMind. It used a technique called deep learning, a type of AI inspired by the structures in the brain, to train a computer-simulated rat to track its position in a virtual environment. The program surprised the scientists by spontaneously generating hexagonal-shaped patterns of activity akin to those generated by navigational cells in the mammalian brain called grid cells. Grid cells have been shown in experiments with real rats to be fundamental to how an animal tracks its own position in space. What’s more, the simulated rat was able to use the grid-cell-like coding to navigate a virtual maze so well that it even learnt to take shortcuts. © 2018 Macmillan Publishers Limited

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24957 - Posted: 05.10.2018

By Diana Kwon Scientists have long attempted to understand where, and how, the brain stores memories. At the beginning of the 20th century, German scientist Richard Semon coined the term “engram” to describe the hypothetical physical representations of memories in the brain. Then, in the 1940s, Canadian psychologist Donald Hebb proposed that, when neurons encoded memories, connections, called synapses, between coactivated memory, or engram, cells were strengthened—a theory that was famously paraphrased as neurons that “fire together, wire together.” These two ideas have become the cornerstone of memory research—and in the decades since they first emerged, scientists have amassed evidence supporting them. “Donald Hebb suggested that it’s not engram cells that are the critical part of storing the memory, it’s the synapse between engram cells,” says Bong-Kiun Kaang, a neuroscientist at Seoul National University in South Korea. However, he adds, while there has been much indirect evidence that such a process underlies memory formation, such as studies showing long-term potentiation (the process through which two simultaneously activated neurons show enhanced connectivity), direct evidence has been lacking. One of the key issues has been the lack of tools to directly observe this process, Kaang says. To overcome this limitation, he and his colleagues injected a virus containing recombinant DNA—coding for different colors of fluorescent proteins for engram and non-engram cells—into the brains of mice. Using this technique, the team was able to pinpoint which type of cell had connected with a postsynaptic neuron. The endeavor wasn’t easy. Developing the method and getting it to work experimentally was a painstaking process that took almost a decade, Kaang tells The Scientist. So when his team finally managed to get promising results around two years ago, “we were very excited,” he says. © 1986-2018 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24916 - Posted: 04.28.2018

By Knvul Sheikh In the bare winter woods across North America, you can hear the clear whistles of Black-capped and Carolina Chickadees as they forage for food. The insects they normally love to eat are gone, so the birds must find seeds and stash them among the trees for later. The Black-capped Chickadee and its southern lookalike, the Carolina Chickadee, are like squirrels in this sense: well-known for their food-caching behavior. They’ve evolved sharp brains, with some parts that grow bigger in the winter, specifically so they can remember the location of hundreds to thousands of seeds. But in the narrow strip of land from Kansas to New Jersey where the two species overlap and mate, their offspring have a weaker memory, according to a new study published in Evolution last week. In a set of experiments, only 62.5 percent of hybrid chickadees were able to solve a puzzle to uncover their food, as opposed to 95 percent of normal chickadees. More importantly, the hybrids’ poor recall could hurt their ability to survive harsh winters. “These birds don’t migrate; they stay in their regions throughout the year, so winter survival is pretty important,” says Michael McQuillan, a biologist at Lehigh University who was the lead author of the research. “If the hybrids are less able to do this, or if they have worse memories, that could be really bad for them.” The trend could also explain why the blended birds haven’t evolved into a distinct species over time. Black-capped and Carolina Chickadees hybridize extensively—often to the chagrin of birders, who already have a hard time telling them apart. In general, hybridization is common: It occurs in about 10 percent of animal and 25 percent of plant species, McQuillan says. Many hybrids thrive, and in rare cases like the Golden-crowned Manakin and the Galapagos “Big Bird” finch, they can form stable new lineages.

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24832 - Posted: 04.07.2018

By Daniela Carulli In 1898, Camillo Golgi, an eminent Italian physician and pathologist, published a landmark paper on the structure of “nervous cells.” In addition to the organelle that still bears his name, the Golgi apparatus, he described “a delicate covering” surrounding neurons’ cell bodies and extending along their dendrites. That same year, another Italian researcher, Arturo Donaggio, observed that these coverings, now known as perineuronal nets (PNNs), had openings in them, through which, he correctly surmised, axon terminals from neighboring neurons make synapses. Since then, however, PNNs have been largely neglected by the scientific community—especially after Santiago Ramón y Cajal, a fierce rival of Golgi (who would later share the Nobel Prize with him), dismissed them as a histological artifact. It wasn’t until the 1970s, thanks to the improvement of histological techniques and the development of immunohistochemistry, that researchers confirmed the existence of PNNs around some types of neurons in the brain and spinal cord of many vertebrate species, including humans. Composed of extracellular matrix (ECM) molecules, PNNs form during postnatal development, marking the end of what’s known as the “critical period” of heightened brain plasticity. For a while after birth, the external environment has a profound effect on the wiring of neuronal circuits and, in turn, on the development of an organism’s skills and behaviors, such as language, sensory processing, and emotional traits. But during childhood and adolescence, neuronal networks become more fixed, allowing the individual to retain the acquired functions. Evidence gathered over the past 15 years suggests that PNNs contribute to this fixation in many brain areas, by stabilizing the existing contacts between neurons and repelling incoming axons. © 1986-2018 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 24815 - Posted: 04.03.2018

Rachel Ehrenberg BOSTON — Conflicting results on whether brain stimulation helps or hinders memory may be explained by the electrodes’ precise location: whether they’re tickling white matter or gray matter. New research on epilepsy patients suggests that stimulating a particular stretch of the brain’s white matter — tissue that transfers nerve signals around the brain — improves performance on memory tests. But stimulating the same region’s gray matter, which contains the brain’s nerve cells, seems to impair memory, Nanthia Suthana, a cognitive neuroscientist at UCLA, reported March 25 at a meeting of the Cognitive Neuroscience Society. A groundbreaking study by Suthana and colleagues, published in 2012 in the New England Journal of Medicine, found that people performed better on a memory task if their entorhinal cortex — a brain hub for memory and navigation — was given a low jolt of electricity during the task. But subsequent studies stimulating that area have had conflicting results. Follow-up work by Suthana suggests that activating the entorhinal cortex isn’t enough: Targeting a particular path of nerve fibers matters. “It’s a critical few millimeters that can make all the difference,” said Suthana. The research underscores the complexity of investigations of and potential treatments for memory loss, said Youssef Ezzyat, a neuroscientist at the University of Pennsylvania. Many variables seem to matter. Recent work by Ezzyat and colleagues found that the kind of brain activity during stimulation is also important, as is the precise timing of the stimulation (SN: 3/31/2018, p.16). © Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24788 - Posted: 03.27.2018

Shankar Vedantam Economic theory rests on a simple notion about humans: people are rational. They seek out the best information. They measure costs and benefits, and maximize pleasure and profit. This idea of the rational economic actor has been around for centuries. But about 50 years ago, two psychologists shattered these assumptions. They showed that people routinely walk away from good money. And they explained why, once we get in a hole, we often keep digging. The methods of these psychologists were as unusual as their insights. Instead of writing complex theorems, Daniel Kahneman and Amos Tversky spent hours together...talking. They came up with playful thought experiments. They laughed a lot. "We found our mistakes very funny," recalls Kahneman. "What was fun was finding yourself about to say something really stupid." The insights that Kahneman developed with Tversky, who passed away in 1996, transformed the way we understand the mind. That transformation also had philosophical implications. "The stories about the past are so good that they create an illusion that life is understandable, and they create an illusion that you can predict the future," Kahneman says. Daniel Kahneman won the Nobel prize in 2002, and over the past 99 episodes of Hidden Brain, we've drawn extensively on research inspired by his work. This week, we celebrate our 100th episode by interviewing Kahneman about judgment, memory, and the mind itself. He spoke with us before a live audience at NPR headquarters in Washington, D.C. © 2018 npr

Related chapters from BN8e: Chapter 18: Attention and Higher Cognition; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 14: Attention and Consciousness; Chapter 13: Memory, Learning, and Development
Link ID: 24744 - Posted: 03.13.2018

By CADE METZ SAN FRANCISCO — Machines are starting to learn tasks on their own. They are identifying faces, recognizing spoken words, reading medical scans and even carrying on their own conversations. All this is done through so-called neural networks, which are complex computer algorithms that learn tasks by analyzing vast amounts of data. But these neural networks create a problem that scientists are trying to solve: It is not always easy to tell how the machines arrive at their conclusions. On Tuesday, a team at Google took a small step toward addressing this issue with the unveiling of new research that offers the rough outlines of technology that shows how the machines are arriving at their decisions. “Even seeing part of how a decision was made can give you a lot of insight into the possible ways it can fail,” said Christopher Olah, a Google researcher. A growing number of A.I. researchers are now developing ways to better understand neural networks. Jeff Clune, a professor at the University of Wyoming who now works in the A.I. lab at the ride-hailing company Uber, called this “artificial neuroscience.” Understanding how these systems work will become more important as they make decisions now made by humans, like who gets a job and how a self-driving car responds to emergencies. First proposed in the 1950s, neural networks are meant to mimic the web of neurons in the brain. But that is a rough analogy. These algorithms are really series of mathematical operations, and each operation represents a neuron. Google’s new research aims to show — in a highly visual way — how these mathematical operations perform discrete tasks, like recognizing objects in photos. © 2018 The New York Times Company
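The article's description of a neural network as a series of mathematical operations, each representing a neuron, can be sketched in a few lines. The weights and inputs below are arbitrary illustrative values, not taken from any trained model:

```python
# Minimal illustration of "each operation represents a neuron": an
# artificial neuron is a weighted sum of its inputs passed through a
# nonlinearity. All numbers here are made up for illustration.
import math

def neuron(inputs, weights, bias):
    """One unit of a neural network: weighted sum, then a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # squash to (0, 1)

# A tiny two-layer "network": three input values feed two hidden
# neurons, whose outputs feed one output neuron.
x = [0.5, -1.0, 2.0]
hidden = [neuron(x, [0.1, 0.4, -0.2], 0.0),
          neuron(x, [-0.3, 0.2, 0.5], 0.1)]
y = neuron(hidden, [1.0, -1.0], 0.0)
print(round(y, 3))
```

Training such a network means nudging the weights and biases until the output matches labeled examples; the interpretability problem the article describes arises because a real network chains millions of these operations, making any single weight hard to explain.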

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24729 - Posted: 03.07.2018

Laura Sanders With fevers, chills and aches, the flu can pound the body. Some influenza viruses may hammer the brain, too. Months after being infected with influenza, mice had signs of brain damage and memory trouble, researchers report online February 26 in the Journal of Neuroscience. It’s unclear if people’s memories are affected in the same way as those of mice. But the new research adds to evidence suggesting that some body-wracking infections could also harm the human brain, says epidemiologist and neurologist Mitchell Elkind of Columbia University, who was not involved in the study. Obvious to anyone who has been waylaid by the flu, brainpower can suffer at the infection’s peak. But not much is known about any potential lingering effects on thinking or memory. “It hasn’t occurred to people that it might be something to test,” says neurobiologist Martin Korte of Technische Universität Braunschweig in Germany. The new study examined the effects of three types of influenza A — H1N1, the strain behind 2009’s swine flu outbreak; H7N7, a dangerous strain that only rarely infects people; and H3N2, the strain behind much of the 2017–2018 flu season misery (SN: 1/19/18, p. 12). Korte and colleagues shot these viruses into mice’s noses, and then looked for memory problems 30, 60 and 120 days later. A month after infection, the mice all appeared to have recovered and gained back weight. But those that had received H3N2 and H7N7 had trouble remembering the location of a hidden platform in a pool of water, the researchers found. Mice that received no influenza or the milder H1N1 virus performed normally at the task. © Society for Science & the Public 2000 - 2018

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 11: Emotions, Aggression, and Stress
Link ID: 24704 - Posted: 02.27.2018

Sarah Webb Four years ago, scientists from Google showed up on neuroscientist Steve Finkbeiner’s doorstep. The researchers were based at Google Accelerated Science, a research division in Mountain View, California, that aims to use Google technologies to speed scientific discovery. They were interested in applying ‘deep-learning’ approaches to the mountains of imaging data generated by Finkbeiner’s team at the Gladstone Institute of Neurological Disease in San Francisco, also in California. Deep-learning algorithms take raw features from an extremely large, annotated data set, such as a collection of images or genomes, and use them to create a predictive tool based on patterns buried inside. Once trained, the algorithms can apply that training to analyse other data, sometimes from wildly different sources. The technique can be used to “tackle really hard, tough, complicated problems, and be able to see structure in data — amounts of data that are just too big and too complex for the human brain to comprehend”, Finkbeiner says. He and his team produce reams of data using a high-throughput imaging strategy known as robotic microscopy, which they had developed for studying brain cells. But the team couldn’t analyse its data at the speed it acquired them, so Finkbeiner welcomed the opportunity to collaborate. “I can’t honestly say at the time that I had a clear grasp of what questions might be addressed with deep learning, but I knew that we were generating data at about twice to three times the rate we could analyse it,” he says. © 2018 Macmillan Publishers Limited,

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 1: Biological Psychology: Scope and Outlook
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 1: An Introduction to Brain and Behavior
Link ID: 24684 - Posted: 02.21.2018

By GRETCHEN REYNOLDS Exercise may help the brain to build durable memories, through good times and bad. Stress and adversity weaken the brain’s ability to learn and retain information, earlier research has found. But according to a remarkable new neurological study in mice, regular exercise can counteract those effects by bolstering communication between brain cells. Memory has long been considered a biological enigma, a medley of mental ephemera that has some basis in material existence. Memories are coded into brain cells in the hippocampus, the brain’s memory center. If our memories were not written into those cells, they would not be available for later, long-term recall, and every brain would be like that of Dory, the memory-challenged fish in “Finding Nemo.” But representations of experience are extremely complex, and aspects of most memories must be spread across multiple brain cells, neuroscientists have determined. These cells must be able to connect with one another, so that the memory, as a whole, stays intact. The connections between neurons, known as synapses, are composed of electrical and chemical signals that move from cell to cell, like notes passed in class. The signals can be relatively weak and sporadic or flow with vigor and frequency. In general, the stronger the messages between neurons, the sturdier and more permanent the memories they hold. Neuroscientists have known for some time that the potency of our synapses depends to some degree on how we live our lives. Lack of sleep, alcohol, diet and other aspects of our lifestyles, especially stress, may dampen the flow of messages between brain cells, while practice fortifies it. Repeat an action and the signals between the cells maintaining the memory of that action can strengthen. That is learning. © 2018 The New York Times Company

Related chapters from BN8e: Chapter 17: Learning and Memory; Chapter 11: Motor Control and Plasticity
Related chapters from MM:Chapter 13: Memory, Learning, and Development; Chapter 5: The Sensorimotor System
Link ID: 24683 - Posted: 02.21.2018

By Ashley Yeager Wandering through a maze with striped gray walls, a mouse searches for turns that will take it to a thirst-quenching reward. Although the maze seems real to the mouse, it is, in fact, a virtual world. Virtual reality (VR) has become a valuable tool to study brains and behaviors because researchers can precisely control sensory cues, correlating nerve-cell activity with specific actions. “It allows experiments that are not possible using real-world approaches,” neurobiologist Christopher Harvey of Harvard Medical School and colleagues wrote in 2016 in a commentary in Nature (533:324–25). Studies of navigation are perfect examples. Extraneous sounds, smells, tastes, and textures, along with internal information about balance and spatial orientation, combine with visual cues to help a mouse move through a maze. In a virtual environment, researchers can add or remove any of these sensory inputs to see how each affects nerve-cell firing and the neural patterns that underlie exploration and other behaviors. But there’s a catch. Many VR setups severely restrict how animals move, which can change nerve cells’ responses to sensory cues. As a result, some researchers have begun to build experimental setups that allow animals to move more freely in their virtual environments, while others have started using robots to aid animals in navigation or to simulate interactions with others of their kind. Here, The Scientist explores recent efforts in both arenas, which aim to develop a more realistic sense of how the brain interprets reality. © 1986-2018 The Scientist

Related chapters from BN8e: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory, Learning, and Development
Link ID: 24667 - Posted: 02.16.2018