Links for Keyword: Learning & Memory



Links 1 - 20 of 1217

By Carolyn Wilke Scientists have long sought to prevent sharp memories from dulling with age, but the problem remains stubborn. Now research published in Scientific Reports suggests virtual reality might help older people recall facts and events based on specific details. The study involved 42 healthy older adults from the San Francisco Bay Area. Half spent a dozen hours over four weeks playing a virtual-reality game called Labyrinth; they strapped on headsets and walked in place, roaming virtual neighborhoods while completing errands. The other half, in the control group, used electronic tablets to play games that did not require navigating or recalling details. After 15 sessions, the latter performed roughly the same as before on a long-term memory test based on picking out objects they had seen about an hour earlier. But the Labyrinth players’ scores rose, and they were less frequently tricked by objects that resembled ones they had viewed. Those improvements “brought them back up to the level of another group of younger adults who did the same memory tests,” says cognitive neuroscientist Peter Wais of the University of California, San Francisco. He and his colleagues designed the VR game, which they say likely stimulates the hippocampus—a brain area important for long-term memory. The team did not observe improvement on two other tests, which measured autobiographical memory and spatial memory capability. © 2021 Scientific American.

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 13: Memory and Learning
Link ID: 27853 - Posted: 06.16.2021

By Emily Underwood In the 1930s, neurosurgeon Wilder Penfield pioneered a daring new kind of cartography. As a stenographer took notes, he delicately touched an electrode to the exposed brains of his awake, consenting patients and asked what they felt as electrical current hit different areas. Penfield wanted to better predict which brain functions would be threatened when surgeons had to remove tumors or chunks of tissue that were triggering epileptic seizures. Stimulating adjacent brain regions, he found, produced sensations in corresponding body parts: hand, forearm, elbow. The result of his mapping was the iconic “homunculus”: a map on the brain’s wrinkled outer layer representing the surface of the body. Penfield then ventured into more mysterious territory. When he probed the insula, a deep fold of cortex, some patients felt nauseated or gassy; others belched or vomited. “My stomach is upset and I smell something like medicine,” one said. Penfield found those visceral signals harder to decipher than the brain’s map of the body’s surface. Brain regions responsible for different internal sensations seemed to overlap. Sensory regions were hard to distinguish from those that sent motor instructions such as telling the intestines to contract. Penfield once asked participants to swallow an electrode to detect changes in gut contractions while he stimulated their brains. But his map of the inner organs was blurry and ambiguous—and stayed that way for most of the next century. Decades later, scientists are starting to unravel how our wet, spongy, slippery organs talk to the brain and how the brain talks back. That two-way communication, known as interoception, encompasses a complex, bodywide system of nerves and hormones. Much recent exploration has focused on the vagus nerve: a massive, meandering network of more than 100,000 fibers that travel from nearly every internal organ to the base of the brain and back again. 
© 2021 American Association for the Advancement of Science.

Related chapters from BN: Chapter 13: Homeostasis: Active Regulation of the Internal Environment; Chapter 17: Learning and Memory
Related chapters from MM: Chapter 9: Homeostasis: Active Regulation of the Internal Environment; Chapter 13: Memory and Learning
Link ID: 27850 - Posted: 06.11.2021

By Ben Guarino and Frances Stead Sellers In the coronavirus pandemic’s early weeks, in neuropathology departments around the world, scientists wrestled with a question: Should they cut open the skulls of patients who died of covid-19 and extract their brains? Autopsy staff at Columbia University in New York were hesitant. Sawing into bone creates dust, and the Centers for Disease Control and Prevention had issued a warning about the bodies of covid patients — airborne debris from autopsies could be an infectious hazard. But as more patients were admitted and more began to die, researchers decided to “make all the efforts we could to start collecting the brain tissue,” Columbia neuropathologist Peter D. Canoll said. In March 2020, in an isolation room, the Columbia team extracted a brain from a patient who had died of severe covid-19, the illness caused by the coronavirus. During the next months, they would examine dozens more. Saw met skull elsewhere, too. In Germany, scientists autopsied brains — even though medical authorities recommended against doing that. Researchers were searching the brain for damage — and for the virus itself. At the pandemic’s start, understanding how the virus affected the nervous system was largely a mystery. S. Andrew Josephson, chair of neurology at the University of California at San Francisco and editor in chief of the academic journal JAMA Neurology, said, “We had hundreds of submissions of ‘I saw one case of X.’” It was difficult to understand whether single cases had any relationship to covid at all. Patients reported visual and auditory disturbances, vertigo and tingling sensations, among other perplexing symptoms. Some lost their sense of smell, or their vision became distorted. Weeks or months after the initial onset of symptoms, some remained convinced, even after a mild bout of the coronavirus, that they had persistent “brain fog.”

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 27845 - Posted: 06.08.2021

By Jason S. Tsukahara, Alexander P. Burgoyne, Randall W. Engle It has been said that “the eyes are the window to the soul,” but new research suggests that they may be a window to the brain as well. Our pupils respond to more than just the light. They indicate arousal, interest or mental exhaustion. Pupil dilation is even used by the FBI to detect deception. Now work conducted in our laboratory at the Georgia Institute of Technology suggests that baseline pupil size is closely related to individual differences in intelligence. The larger the pupils, the higher the intelligence, as measured by tests of reasoning, attention and memory. In fact, across three studies, we found that the difference in baseline pupil size between people who scored the highest on the cognitive tests and those who scored the lowest was large enough to be detected by the unaided eye. We first uncovered this surprising relationship while studying differences in the amount of mental effort people used to complete memory tasks. We used pupil dilations as an indicator of effort, a technique psychologist Daniel Kahneman popularized in the 1960s and 1970s. When we discovered a relationship between baseline pupil size and intelligence, we weren’t sure if it was real or what it meant. Intrigued, we conducted several large-scale studies in which we recruited more than 500 people aged 18 to 35 from the Atlanta community. We measured participants’ pupil size using an eye tracker, a device that captures the reflection of light off the pupil and cornea using a high-powered camera and computer. We measured participants’ pupils at rest while they stared at a blank computer screen for up to four minutes. All the while, the eye tracker was recording. Using the tracker, we then calculated each participant’s average pupil size. © 2021 Scientific American

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 27844 - Posted: 06.08.2021

By Jackie Rocheleau It’s an attractive idea: By playing online problem-solving, matching and other games for a few minutes a day, people can improve such mental abilities as reasoning, verbal skills and memory. But whether these games deliver on those promises is up for debate. “For every study that finds some evidence, there’s an equal number of papers that find no evidence,” says Bobby Stojanoski, a cognitive neuroscientist at Western University in Ontario (SN: 3/8/17; SN: 5/9/17). Now, in perhaps the biggest real-world test of these programs, Stojanoski and colleagues pitted more than 1,000 people who regularly use brain trainers against around 7,500 people who don’t do the mini brain workouts. There was little difference between how both groups performed on a series of tests of their thinking abilities, suggesting that brain training doesn’t live up to its name, the scientists report in the April Journal of Experimental Psychology: General. “They put brain training to the test,” says Elizabeth Stine-Morrow, a cognitive aging scientist at the University of Illinois at Urbana-Champaign. While the study doesn’t show why brain trainers aren’t seeing benefits, it does show there is no link “between the amount of time spent with the brain training programs and cognition,” Stine-Morrow says. “That was pretty cool.” © Society for Science & the Public 2000–2021

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 27830 - Posted: 05.27.2021

By Nicholas Bakalar Long-term exposure to air pollution has many health consequences, including accelerating brain aging and increasing the risk for dementia. Now new research suggests that short-term exposure to polluted air, even at levels generally considered “acceptable,” may impair mental ability in the elderly. Scientists studied 954 men, average age 69, living in the greater Boston area. The men were tested at the start of the study and several times over the next 28 days using the Mini-Mental State Examination, or MMSE, a widely used test of cognitive ability. The test includes simple questions like “What year is this?” and “What season is it?” and requires tasks like counting backward by sevens from 100. Correctly answering fewer than 25 of its 30 questions suggests mild dementia. Over the month, the researchers measured air levels of what’s known as PM 2.5, particles of soot and other fine particulate matter with a diameter of up to 2.5 microns, small enough to enter the lungs and move into the bloodstream. There is no safe level of PM 2.5, but the Environmental Protection Agency considers air acceptable when it is under 12 micrograms per cubic meter. During the testing period, PM 2.5 levels in Boston averaged 10.77. Higher PM 2.5 was consistently associated with lower test scores. In weeks with the highest levels of air pollution, the men were 63 percent more likely to score below 25 on the MMSE than in weeks with the lowest levels. The study, in Nature Aging, adjusted for age, B.M.I., coronary heart disease, diabetes, alcohol consumption, smoking, high blood pressure and other factors. Dr. Andrea A. Baccarelli, the senior author and a professor of environmental science at the Columbia Mailman School of Public Health, said that these short-term effects may be reversible. “When air pollution goes down,” he said, “the brain reboots and goes back to normal.
However, if repeated, these episodes produce long-term damage to the brain.” © 2021 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 14: Attention and Higher Cognition
Link ID: 27823 - Posted: 05.19.2021

Jordana Cepelewicz During every waking moment, we humans and other animals have to balance on the edge of our awareness of past and present. We must absorb new sensory information about the world around us while holding on to short-term memories of earlier observations or events. Our ability to make sense of our surroundings, to learn, to act and to think all depend on constant, nimble interactions between perception and memory. But to accomplish this, the brain has to keep the two distinct; otherwise, incoming data streams could interfere with representations of previous stimuli and cause us to overwrite or misinterpret important contextual information. Compounding that challenge, a body of research hints that the brain does not neatly partition short-term memory function exclusively into higher cognitive areas like the prefrontal cortex. Instead, the sensory regions and other lower cortical centers that detect and represent experiences may also encode and store memories of them. And yet those memories can’t be allowed to intrude on our perception of the present, or to be randomly rewritten by new experiences. A paper published recently in Nature Neuroscience may finally explain how the brain’s protective buffer works. A pair of researchers showed that, to represent current and past stimuli simultaneously without mutual interference, the brain essentially “rotates” sensory information to encode it as a memory. The two orthogonal representations can then draw from overlapping neural activity without intruding on each other. The details of this mechanism may help to resolve several long-standing debates about memory processing. To figure out how the brain prevents new information and short-term memories from blurring together, Timothy Buschman, a neuroscientist at Princeton University, and Alexandra Libby, a graduate student in his lab, decided to focus on auditory perception in mice. 
They had the animals passively listen to sequences of four chords over and over again, in what Buschman dubbed “the worst concert ever.” All Rights Reserved © 2021

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27778 - Posted: 04.17.2021
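The “rotation” Buschman and Libby describe can be pictured with simple linear algebra: if the remembered stimulus is stored along a population direction orthogonal to the one used for incoming sounds, the same neurons can carry both signals at once without cross-talk. Below is a minimal numpy sketch of that idea; the dimensions, axis names and readouts are illustrative, not the paper’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Sensory" axis: the population pattern evoked while a stimulus is present.
sensory_axis = rng.normal(size=50)
sensory_axis /= np.linalg.norm(sensory_axis)

# "Memory" axis: rotate the trace into a direction orthogonal to the
# sensory axis, so the stored copy cannot interfere with ongoing perception.
raw = rng.normal(size=50)
memory_axis = raw - (raw @ sensory_axis) * sensory_axis  # Gram-Schmidt step
memory_axis /= np.linalg.norm(memory_axis)

# The same 50 neurons carry both signals simultaneously.
population = 0.8 * sensory_axis + 1.3 * memory_axis

# Reading out along each axis recovers each signal independently.
print(round(population @ sensory_axis, 2))  # 0.8, the current stimulus
print(round(population @ memory_axis, 2))   # 1.3, the stored memory
```

Because the two axes are orthogonal, a readout tuned to one direction is blind to activity along the other, which is exactly the buffering property the study proposes.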

Natalie Grover Few species in the animal kingdom can change the size of their brain. Fewer still can change it back to its original size. Now researchers have found the first insect species with that ability: Indian jumping ants. They are like catnip to researchers in the field. In contrast to their cousins, Indian jumping ant colonies do not perish once their queen dies. Instead, “chosen” workers take her place – with expanded ovaries and shrunken brains – to produce offspring. But, if a worker’s “pseudo-queen” status is somehow revoked, their bodies can bounce back, the research suggests. Typically, whether an ant will be a worker or a queen is decided at the larval stage. If fed generously and given the right hormones, the ant has the chance to become a big queen. If not, then it is stuck with a career as a sterile worker deprived of the opportunity to switch – unless it’s part of a species such as the Indian jumping ant. “They have this ability to completely transform themselves at the adult stage, and that makes them interesting to try to understand,” said lead author Dr Clint Penick from US-based Kennesaw State University. Social insects such as ants typically inhabit a caste-based society – the queen reigns as the sole reproducer by secreting pheromones that thwart female worker ants from laying eggs. The other ants work hard: foraging and hunting for food, cleaning, caring for the young and defending the nest. But unlike typical colonies that wither away on the death of their queen, Indian jumping ant colonies are functionally immortal. © 2021 Guardian News & Media Limited

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27772 - Posted: 04.14.2021

Sarah DeGenova Ackerman The human brain is made up of billions of neurons that form complex connections with one another. Flexibility at these connections is a major driver of learning and memory, but things can go wrong if it isn’t tightly regulated. For example, in people, too much plasticity at the wrong time is linked to brain disorders such as epilepsy and Alzheimer’s disease. Additionally, reduced levels of the two neuroplasticity-controlling proteins we identified are linked to increased susceptibility to autism and schizophrenia. Similarly, in our fruit flies, removing the cellular brakes on plasticity permanently impaired their crawling behavior. While fruit flies are of course different from humans, their brains work in very similar ways to the human brain and can offer valuable insight. One obvious benefit of discovering the effect of these proteins is the potential to treat some neurological diseases. But since a neuron’s flexibility is closely tied to learning and memory, in theory, researchers might be able to boost plasticity in a controlled way to enhance cognition in adults. This could, for example, allow people to more easily learn a new language or musical instrument. In this image showing a developing fruit fly brain on the right and the attached nerve cord on the left, the astrocytes are labeled in different colors showing their wide distribution among neurons. Sarah DeGenova Ackerman, CC BY-ND © 2010–2021, The Conversation US, Inc.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27770 - Posted: 04.14.2021

By Rachel Aviv Elizabeth Loftus was in Argentina, giving talks about the malleability of memory, in October, 2018, when she learned that Harvey Weinstein, who had recently been indicted for rape and sexual assault, wanted to speak with her. She couldn’t figure out how to receive international calls in her hotel room, so she asked if they could talk in three days, once she was home, in California. In response, she got a series of frantic e-mails saying that the conversation couldn’t wait. But, when Weinstein finally got through, she said, “basically he just wanted to ask, ‘How can something that seems so consensual be turned into something so wrong?’ ” Loftus, a professor at the University of California, Irvine, is the most influential female psychologist of the twentieth century, according to a list compiled by the Review of General Psychology. Her work helped usher in a paradigm shift, rendering obsolete the archival model of memory—the idea, dominant for much of the twentieth century, that our memories exist in some sort of mental library, as literal representations of past events. According to Loftus, who has published twenty-four books and more than six hundred papers, memories are reconstructed, not replayed. “Our representation of the past takes on a living, shifting reality,” she has written. “It is not fixed and immutable, not a place way back there that is preserved in stone, but a living thing that changes shape, expands, shrinks, and expands again, an amoeba-like creature.” George A.
Miller, one of the founders of cognitive psychology, once said in a speech to the American Psychological Association that the way to advance the field was “to give psychology away.” Loftus, who is seventy-six, adopts a similar view, seizing any opportunity to elaborate on what she calls the “flimsy curtain that separates our imagination and our memory.” In the past forty-five years, she has testified or consulted in more than three hundred cases, on behalf of people wrongly accused of robbery and murder, as well as for high-profile defendants like Bill Cosby, Jerry Sandusky, and the Duke lacrosse players accused of rape, in 2006. “If the MeToo movement had an office, Beth’s picture would be on the ten-most-wanted list,” her brother Robert told me. © 2021 Condé Nast.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27754 - Posted: 03.31.2021

The Physics arXiv Blog One of the best-studied networks in neuroscience is the brain of a fruit fly, in particular, a part called the mushroom body. This analyzes sensory inputs such as odors, temperature, humidity and visual data so that the fly can learn to distinguish friendly stimuli from dangerous ones. Neuroscientists have long known how this section of the brain is wired. It consists of a set of cells called projection neurons that transmit the sensory information to a population of 2,000 neurons called Kenyon cells. The Kenyon cells are wired together to form a neural network capable of learning. This is how fruit flies learn to avoid potentially hazardous sensory inputs — such as dangerous smells and temperatures — while learning to approach foodstuffs, potential mates, and so on. But the power and flexibility of this relatively small network has long raised a curious question for neuroscientists: could it be re-programmed to tackle other tasks? Now they get an answer thanks to the work of Yuchan Liang at the Rensselaer Polytechnic Institute, the MIT-IBM Watson AI Lab, and colleagues. This team has hacked the fruit fly brain network to perform other tasks, such as natural language processing. It's the first time a naturally occurring network has been commandeered in this way. And this biological brain network is no slouch. Liang and the team say it matches the performance of artificial learning networks while using far fewer computational resources. © 2021 Kalmbach Media Co.

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27671 - Posted: 01.30.2021
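Computationally, the projection-neuron-to-Kenyon-cell wiring described above is a sparse random expansion followed by winner-take-all inhibition, a scheme often called “fly hashing.” Here is a rough Python sketch of that generic architecture; the sizes, sparsity and test inputs are illustrative, and this is not Liang’s actual language-processing network.

```python
import numpy as np

rng = np.random.default_rng(1)

N_INPUT = 50      # projection neurons carrying sensory features
N_KENYON = 2000   # Kenyon cells (the expansion layer)
TOP_K = 100       # ~5% of Kenyon cells stay active after inhibition

# Each Kenyon cell samples a small random subset of projection neurons
# (sparse binary connectivity).
weights = (rng.random((N_KENYON, N_INPUT)) < 0.1).astype(float)

def fly_hash(x):
    """Expand the input, then keep only the top-k most active Kenyon cells."""
    activity = weights @ (x - x.mean())  # mean-center the input
    tag = np.zeros(N_KENYON, dtype=bool)
    tag[np.argsort(activity)[-TOP_K:]] = True
    return tag

a = rng.random(N_INPUT)
b = a + 0.001 * rng.random(N_INPUT)  # a nearly identical "odor"
c = rng.random(N_INPUT)              # an unrelated one

shared_similar = (fly_hash(a) & fly_hash(b)).sum()
shared_unrelated = (fly_hash(a) & fly_hash(c)).sum()
print(shared_similar > shared_unrelated)  # similar inputs share more active cells
```

Similar inputs activate largely overlapping sets of Kenyon cells while dissimilar inputs do not, which is what makes the circuit reusable as a similarity-preserving hash for tasks such as word embedding.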

By Clay Risen In 1978, James R. Flynn, a political philosopher at the University of Otago, in New Zealand, was writing a book about what constituted a “humane” society. He considered “inhumane” societies as well — dictatorships, apartheid states — and, in his reading, came across the work of Arthur R. Jensen, a psychologist at the University of California, Berkeley. Dr. Jensen was best known for an article he published in 1969 claiming that the differences between Black and white Americans on I.Q. tests resulted from genetic differences between the races — and that programs that tried to improve Black educational outcomes, like Head Start, were bound to fail. Dr. Flynn, a committed leftist who had once been a civil rights organizer in Kentucky, felt instinctively that Dr. Jensen was wrong, and he set out to prove it. In 1980 he published a thorough, devastating critique of Dr. Jensen’s work — showing, for example, that many groups of whites scored as low on I.Q. tests as Black Americans. But he didn’t stop there. Like most researchers in his field, Dr. Jensen had assumed that intelligence was constant across generations, pointing to the relative stability of I.Q. tests over time as evidence. But Dr. Flynn noticed something that no one else had: Those tests were recalibrated every decade or so. When he looked at the raw, uncalibrated data over nearly 100 years, he found that I.Q. scores had gone up, dramatically. “If you scored people 100 years ago against our norms, they would score a 70,” or borderline mentally disabled, he said later. “If you scored us against their norms, we would score 130” — borderline gifted. Just as groundbreaking was his explanation for why. The rise was too fast to be genetic, nor could it be that our recent ancestors were less intelligent than we are. 
Rather, he argued, the last century has seen a revolution in abstract thinking, what he called “scientific spectacles,” brought on by the demands of a technologically robust industrial society. This new order, he maintained, required greater educational attainment and an ability to think in terms of symbols, analogies and complex logic — exactly what many I.Q. tests measure. © 2021 The New York Times Company

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 13: Memory and Learning
Link ID: 27664 - Posted: 01.27.2021

By Stephani Sutherland Patrick Thornton, a 40-year-old math teacher in Houston, Tex., relies on his voice to clearly communicate with his high school students. So when he began to feel he was recovering from COVID, he was relieved to get his voice back a month after losing it. Thornton got sick in mid-August and had symptoms typical of a moderate case: a sore throat, headaches, trouble breathing. By the end of September, “I was more or less counting myself as on the mend and healing,” Thornton says. “But on September 25, I took a nap, and then my mom called.” As the two spoke, Thornton’s mother remarked that it was great that his voice was returning. Something was wrong, however. “I realized that some of the words didn’t feel right in my mouth, you know?” he says. They felt jumbled, stuck inside. Thornton had suddenly developed a severe stutter for the first time in his life. “I got my voice back, but it broke my mouth,” he says. After relaying the story over several minutes, Thornton sighs heavily with exhaustion. The thought of going back to teaching with his stutter, “that was terrifying,” he says. In November Thornton still struggled with low energy, chest pain and headaches. And “sometimes my heart rate [would] just decide that we’re being chased by a tiger out of nowhere,” he adds. His stutter had only worsened by that time, Thornton says, and he worried that it reflected some more insidious condition in his brain, despite doctors’ insistence that the speech disruption was simply a product of stress. © 2021 Scientific American.

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 16: Psychopathology: Biological Basis of Behavior Disorders
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 12: Psychopathology: The Biology of Behavioral Disorders
Link ID: 27661 - Posted: 01.23.2021

Elena Renken More than a century ago, the zoologist Richard Semon coined the term “engram” to designate the physical trace a memory must leave in the brain, like a footprint. Since then, neuroscientists have made progress in their hunt for exactly how our brains form memories. They have learned that specific brain cells activate as we form a memory and reactivate as we remember it, strengthening the connections among the neurons involved. That change ingrains the memory and lets us keep memories we recall more often, while others fade. But the precise physical alterations within our neurons that bring about these changes have been hard to pin down — until now. In a study published last month, researchers at the Massachusetts Institute of Technology tracked an important part of the memory-making process at the molecular scale in engram cells’ chromosomes. Neuroscientists already knew that memory formation is not instantaneous, and that the act of remembering is crucial to locking a memory into the brain. These researchers have now discovered some of the physical embodiment of that mechanism. The MIT group worked with mice that had a fluorescent marker spliced into their genome to make their cells glow whenever they expressed the gene Arc, which is associated with memory formation. The scientists placed these mice in a novel location and trained them to fear a specific noise, then returned them to this location several days later to reactivate the memory. In the brain area called the hippocampus, the engram cells that formed and recalled this memory lit up with color, which made it easy to sort them out from other brain cells under the microscope during a postmortem examination. All Rights Reserved © 2020

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 15: Emotions, Aggression, and Stress
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 11: Emotions, Aggression, and Stress
Link ID: 27567 - Posted: 11.04.2020

Anil Ananthaswamy In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position and other properties — something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains. “I remember very distinctly the time when we found a neural network that actually solved the task,” he said. It was 2 a.m., a tad too early to wake up his adviser, James DiCarlo, or other colleagues, so an excited Yamins took a walk in the cold Cambridge air. “I was really pumped,” he said. It would have counted as a noteworthy accomplishment in artificial intelligence alone, one of many that would make neural networks the darlings of AI technology over the next few years. But that wasn’t the main goal for Yamins and his colleagues. To them and other neuroscientists, this was a pivotal moment in the development of computational models for brain functions. DiCarlo and Yamins, who now runs his own lab at Stanford University, are part of a coterie of neuroscientists using deep neural networks to make sense of the brain’s architecture. In particular, scientists have struggled to understand the reasons behind the specializations within the brain for various tasks. They have wondered not just why different parts of the brain do different things, but also why the differences can be so specific: Why, for example, does the brain have an area for recognizing objects in general but also for faces in particular? Deep neural networks are showing that such specializations may be the most efficient way to solve problems. All Rights Reserved © 2020

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM: Chapter 13: Memory and Learning
Link ID: 27562 - Posted: 10.31.2020

Jon Hamilton If you fall off a bike, you'll probably end up with a cinematic memory of the experience: the wind in your hair, the pebble on the road, then the pain. That's known as an episodic memory. And now researchers have identified cells in the human brain that make this sort of memory possible, a team reports in the journal Proceedings of the National Academy of Sciences. The cells are called time cells, and they place a sort of time stamp on memories as they are being formed. That allows us to recall sequences of events or experiences in the right order. "By having time cells create this indexing across time, you can put everything together in a way that makes sense," says Dr. Bradley Lega, the study's senior author and a neurosurgeon at the University of Texas Southwestern Medical Center in Dallas. Time cells were discovered in rodents decades ago. But the new study is critical because "the final arbitrator is always the human brain," says Dr. György Buzsáki, Biggs Professor of Neuroscience at New York University. Buzsáki is not an author of the study but did edit the manuscript. Lega and his team found the time cells by studying the brains of 27 people who were awaiting surgery for severe epilepsy. As part of their pre-surgical preparation, these patients had electrodes placed in the hippocampus and another area of the brain involved in navigation, memory and time perception. In the experiment, the patients studied sequences of 12 or 15 words that appeared on a laptop screen during a period of about 30 seconds. Then, after a break, they were asked to recall the words they had seen. © 2020 npr

Related chapters from BN: Chapter 17: Learning and Memory
Related chapters from MM:Chapter 13: Memory and Learning
Link ID: 27561 - Posted: 10.31.2020

By Stephani Sutherland Many of the symptoms experienced by people infected with SARS-CoV-2 involve the nervous system. Patients complain of headaches, muscle and joint pain, fatigue and “brain fog,” or loss of taste and smell—all of which can last from weeks to months after infection. In severe cases, COVID-19 can also lead to encephalitis or stroke. The virus has undeniable neurological effects. But the way it actually affects nerve cells still remains a bit of a mystery. Can immune system activation alone produce symptoms? Or does the novel coronavirus directly attack the nervous system? Some studies—including a recent preprint paper examining mouse and human brain tissue—show evidence that SARS-CoV-2 can get into nerve cells and the brain. The question remains as to whether it does so routinely or only in the most severe cases. Once the immune system kicks into overdrive, the effects can be far-ranging, even leading immune cells to invade the brain, where they can wreak havoc. Some neurological symptoms are far less serious yet seem, if anything, more perplexing. One symptom—or set of symptoms—that illustrates this puzzle and has gained increasing attention is an imprecise diagnosis called “brain fog.” Even after their main symptoms have abated, it is not uncommon for COVID-19 patients to experience memory loss, confusion and other mental fuzziness. What underlies these experiences is still unclear, although they may also stem from the body-wide inflammation that can go along with COVID-19. Many people, however, develop fatigue and brain fog that lasts for months even after a mild case that does not spur the immune system to rage out of control. Another widespread symptom called anosmia, or loss of smell, might also originate from changes that happen without nerves themselves getting infected. Olfactory neurons, the cells that transmit odors to the brain, lack the primary docking site, or receptor, for SARS-CoV-2, and they do not seem to get infected. 
Researchers are still investigating how loss of smell might result from an interaction between the virus and another receptor on the olfactory neurons or from its contact with nonnerve cells that line the nose. © 2020 Scientific American,

Related chapters from BN: Chapter 9: Hearing, Balance, Taste, and Smell; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 6: Hearing, Balance, Taste, and Smell; Chapter 13: Memory and Learning
Link ID: 27547 - Posted: 10.24.2020

Keith A. Trujillo, Alfredo Quiñones-Hinojosa, Kenira J. Thompson Joe Louis Martinez Jr. died on 29 August at the age of 76. In addition to making extraordinary contributions to the fields of neurobiology and Chicano psychology, Joe was a tireless advocate of diversity, equity, and inclusion in the sciences. He established professional development programs for individuals from underrepresented groups and provided lifelong mentoring as they pursued careers in science and academia. Joe was passionately devoted to expanding opportunities in the sciences well before diversity became a visible goal for scientific organizations and academic institutions. Born in Albuquerque, New Mexico, on 1 August 1944, Joe received his bachelor's degree in psychology from the University of San Diego in 1966; his master's in experimental psychology from New Mexico Highlands University in 1968; and his Ph.D. in physiological psychology from the University of Delaware in 1971. His faculty career began in 1972 at California State University, San Bernardino (CSUSB), shortly after the campus was established. He later completed postdocs in the laboratory of neurobiologist James McGaugh at the University of California, Irvine, and with neurobiologist Floyd Bloom at the Salk Institute for Biological Studies in San Diego, California. The University of California, Berkeley, recruited Joe in 1982, and he served as a professor as well as the area head of biopsychology and faculty assistant to the vice chancellor for affirmative action. As the highest-ranking Hispanic faculty member in the University of California system, Joe used his voice to help others from underrepresented groups. However, he felt that he could have a greater impact on diversity in the sciences by helping to build a university with a high concentration of Hispanic students, so in 1995 he moved to the University of Texas, San Antonio (UTSA). 
He began as a professor of biology and went on to assume a range of leadership roles, including director of the Cajal Neuroscience Institute. At UTSA, he worked with colleagues to obtain nearly $18 million in funding for neuroscience research and education. In 2012, he moved to the University of Illinois at Chicago where he served as professor and psychology department head until his retirement in 2016. At each institution, he embraced the opportunity to provide guidance and mentoring to innumerable students, faculty, and staff. © 2020 American Association for the Advancement of Science.

Related chapters from BN: Chapter 1: Introduction: Scope and Outlook; Chapter 17: Learning and Memory
Related chapters from MM:Chapter 1: Cells and Structures: The Anatomy of the Nervous System; Chapter 13: Memory and Learning
Link ID: 27523 - Posted: 10.16.2020

By Bret Stetka The human brain is hardwired to map our surroundings. This trait is called spatial memory—our ability to remember certain locations and where objects are in relation to one another. New findings published today in Scientific Reports suggest that one major feature of our spatial recall is efficiently locating high-calorie, energy-rich food. The study’s authors believe human spatial memory ensured that our hunter-gatherer ancestors could prioritize the location of reliable nutrition, giving them an evolutionary leg up. In the study, researchers at Wageningen University & Research in the Netherlands observed 512 participants follow a fixed path through a room where either eight food samples or eight food-scented cotton pads were placed in different locations. When they arrived at a sample, the participants would taste the food or smell the cotton and rate how much they liked it. Four of the food samples were high-calorie, including brownies and potato chips, and the other four, including cherry tomatoes and apples, were low in calories—diet foods, you might call them. After the taste test, the participants were asked to identify the location of each sample on a map of the room. They were nearly 30 percent more accurate at mapping the high-calorie samples versus the low-calorie ones, regardless of how much they liked those foods or odors. They were also 243 percent more accurate when presented with actual foods, as opposed to the food scents. “Our main takeaway message is that human minds seem to be designed for efficiently locating high-calorie foods in our environment,” says Rachelle de Vries, a Ph.D. candidate in human nutrition and health at Wageningen University and lead author of the new paper. De Vries feels her team’s findings support the idea that locating valuable caloric resources was an important and regularly occurring problem for early humans weathering the climate shifts of the Pleistocene epoch. 
“Those with a better memory for where and when high-calorie food resources would be available were likely to have a survival—or fitness—advantage,” she explains. © 2020 Scientific American

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 13: Homeostasis: Active Regulation of the Internal Environment
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 9: Homeostasis: Active Regulation of the Internal Environment
Link ID: 27518 - Posted: 10.10.2020

R. Stanley Williams For the first time, my colleagues and I have built a single electronic device that is capable of copying the functions of neuron cells in a brain. We then connected 20 of them together to perform a complicated calculation. This work shows that it is scientifically possible to make an advanced computer that does not rely on transistors to calculate and that uses much less electrical power than today’s data centers. Our research, which I began in 2004, was motivated by two questions. Can we build a single electronic element – the equivalent of a transistor or switch – that performs most of the known functions of neurons in a brain? If so, can we use it as a building block to build useful computers? Neurons are very finely tuned, and so are electronic elements that emulate them. I co-authored a research paper in 2013 that laid out in principle what needed to be done. It took my colleague Suhas Kumar and others five years of careful exploration to get exactly the right material composition and structure to produce the necessary property predicted from theory. Kumar then went a major step further and built a circuit with 20 of these elements connected to one another through a network of devices that can be programmed to have particular capacitances, or abilities to store electric charge. He then mapped a mathematical problem to the capacitances in the network, which allowed him to use the device to find the solution to a small version of a problem that is important in a wide range of modern analytics. © 2010–2020, The Conversation US, Inc.
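The article does not describe the mapping itself, but programming the couplings of a network so that it physically relaxes into the answer is the classic Hopfield-network idea, which such neuron-like analog hardware is often used to demonstrate. A minimal software sketch in Python (the stored pattern, the weight rule, and the update loop are illustrative assumptions, not the actual device or problem from the paper):

```python
import numpy as np

# Couplings play the role of the programmable capacitances in the article.
# Here they are set by the Hebbian outer-product rule to "store" one pattern.
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no neuron couples to itself

def settle(state, sweeps=10):
    """Asynchronously update each neuron toward lower network energy,
    mimicking how the analog circuit relaxes to a solution."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([1, 1, 1, -1])  # corrupted version of the stored pattern
print(settle(noisy))              # network relaxes back to the stored pattern
```

In the hardware version, this relaxation happens in continuous time as currents and charges equilibrate, which is where the claimed power savings over transistor-based computation come from.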

Related chapters from BN: Chapter 17: Learning and Memory; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM:Chapter 13: Memory and Learning; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 27512 - Posted: 10.07.2020