Chapter 17. Learning and Memory



by Adam Kirsch Giraffes will eat courgettes if they have to, but they really prefer carrots. A team of researchers from Spain and Germany recently took advantage of this preference to investigate whether the animals are capable of statistical reasoning. In the experiment, a giraffe was shown two transparent containers holding a mixture of carrot and courgette slices. One container held mostly carrots, the other mostly courgettes. A researcher then took one slice from each container and offered them to the giraffe with closed hands, so it couldn’t see which vegetable had been selected. In repeated trials, the four test giraffes reliably chose the hand that had reached into the container with more carrots, showing they understood that the more carrots were in the container, the more likely it was that a carrot had been picked. Monkeys have passed similar tests, and human babies can do it at 12 months old. But giraffes’ brains are much smaller than primates’ relative to body size, so it was notable to see how well they grasped the concept. Such discoveries are becoming less surprising every year, however, as a flood of new research overturns longstanding assumptions about what animal minds are and aren’t capable of. A recent wave of popular books on animal cognition argues that skills long assumed to be humanity’s prerogative, from planning for the future to a sense of fairness, actually exist throughout the animal kingdom – and not just in primates or other mammals, but in birds, octopuses and beyond. In 2018, for instance, a team at the University of Buenos Aires found evidence that zebra finches, whose brains weigh half a gram, have dreams. Monitors attached to the birds’ throats found that when they were asleep, their muscles sometimes moved in exactly the same pattern as when they were singing out loud; in other words, they seemed to be dreaming about singing. © 2023 Guardian News & Media Limited
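The inference the giraffes appear to make can be restated as a simple sampling problem. The sketch below (plain Python; the 80/20 mix is an illustrative assumption, not the proportions used in the study) estimates how often the hand that reached into the carrot-rich container actually holds a carrot:

```python
import random

def draw_slice(n_carrot, n_courgette, rng):
    """Draw one slice at random from a container with the given mix."""
    total = n_carrot + n_courgette
    return "carrot" if rng.random() < n_carrot / total else "courgette"

def trial(rng):
    """One trial: does the hand that sampled the carrot-rich (80/20)
    container come up holding a carrot?"""
    return draw_slice(80, 20, rng) == "carrot"

rng = random.Random(0)
trials = 10_000
carrot_wins = sum(trial(rng) for _ in range(trials))
print(f"P(carrot | carrot-rich container) ≈ {carrot_wins / trials:.2f}")
```

Preferring the hand from the carrot-rich container pays off in roughly the proportion of carrots in that container, which is the regularity the giraffes seem to track.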

Keyword: Evolution; Learning & Memory
Link ID: 28808 - Posted: 05.31.2023

Emily Waltz After years of debate over whether non-invasively zapping the brain with electrical current can improve a person’s mental functioning, a massive analysis of past studies offers an answer: probably. But some question that conclusion, saying that the analysis spans experiments that are too disparate to offer a solid answer. In the past six years, the number of studies testing the therapeutic effects of a class of techniques called transcranial electrical stimulation has skyrocketed. These therapies deliver a painless, weak electrical current to the brain through electrodes placed externally on the scalp. The goal is to excite, disrupt or synchronize signals in the brain to improve function. Researchers have tested transcranial alternating current stimulation (tACS) and its sister technology, tDCS (transcranial direct current stimulation), on both healthy volunteers and those with neuropsychiatric conditions, such as depression, Parkinson’s disease or addiction. But study results have been conflicting or couldn’t be replicated, leading researchers to question the efficacy of the tools. The authors of the new analysis, led by Robert Reinhart, director of the cognitive and clinical neuroscience laboratory at Boston University in Massachusetts, say they compiled the report to quantify whether tACS shows promise, by comparing more than 100 studies of the technique, which applies an oscillating current to the brain. “We have to address whether or not this technique is actually working, because in the literature, you have a lot of conflicting findings,” says Shrey Grover, a cognitive neuroscientist at Boston University and an author on the paper. © 2023 Springer Nature Limited

Keyword: Learning & Memory
Link ID: 28807 - Posted: 05.31.2023

By Yasemin Saplakoglu Memories are shadows of the past but also flashlights for the future. Our recollections guide us through the world, tune our attention and shape what we learn later in life. Human and animal studies have shown that memories can alter our perceptions of future events and the attention we give them. “We know that past experience changes stuff,” said Loren Frank, a neuroscientist at the University of California, San Francisco. “How exactly that happens isn’t always clear.” A new study published in the journal Science Advances now offers part of the answer. Working with snails, researchers examined how established memories made the animals more likely to form new long-term memories of related future events that they might otherwise have ignored. The simple mechanism that they discovered did this by altering a snail’s perception of those events. The researchers took the phenomenon of how past learning influences future learning “down to a single cell,” said David Glanzman, a cell biologist at the University of California, Los Angeles who was not involved in the study. He called it an attractive example “of using a simple organism to try to get understanding of behavioral phenomena that are fairly complex.” Although snails are fairly simple creatures, the new insight brings scientists a step closer to understanding the neural basis of long-term memory in higher-order animals like humans. Though we often aren’t aware of the challenge, long-term memory formation is “an incredibly energetic process,” said Michael Crossley, a senior research fellow at the University of Sussex and the lead author of the new study. Such memories depend on our forging more durable synaptic connections between neurons, and brain cells need to recruit a lot of molecules to do that. To conserve resources, a brain must therefore be able to distinguish when it’s worth the cost to form a memory and when it’s not. 
That’s true whether it’s the brain of a human or the brain of a “little snail on a tight energetic budget,” he said.

Keyword: Learning & Memory; Attention
Link ID: 28787 - Posted: 05.18.2023

By Cordula Hölig, Brigitte Röder, Ramesh Kekunnaya Growing up in poverty or experiencing any adversity, such as abuse or neglect, during early childhood can put a person at risk for poor health, including mental disorders, later in life. Although the underlying mechanisms are poorly understood, some studies have shown that adverse early childhood experience leaves persisting (and possibly irreversible) traces in brain structure. As neuroscientists who are investigating sensitive periods of human brain development, we agree: safe and nurturing environments are a prerequisite for healthy brain development and lifelong well-being. Thus, preventing early childhood adversity undoubtedly leads to healthier lives. Poverty and adversity can cause changes in brain development. Harms can come from exposure to violence or toxins or a lack of nutrition, caregiving, perceptual and cognitive stimulation or language interaction. Neuroscientists have demonstrated that these factors crucially influence human brain development. We don’t know whether these changes are reversed by more favorable circumstances later in life, however. Investigating this question in humans is extremely difficult. For one, multiple biological and psychological factors through which poverty and adversity affect brain development are hard to disentangle. That’s because they often occur together: a neglected child often experiences a lack of caregiving simultaneously with malnutrition and exposure to physical violence. Secondly, a clear beginning and end of an adverse experience is hard to define. Finally, it is almost impossible to fully reverse harsh environments in natural settings because most of the time it is impossible to move children out of their families or communities. © 2023 Scientific American

Keyword: Development of the Brain; Learning & Memory
Link ID: 28783 - Posted: 05.13.2023

Heidi Ledford When Naomi Rance first started studying menopause and the brain, she pretty much had the field to herself. And what she was discovering surprised her. In studies of post-mortem brains, she had found neurons in a region called the hypothalamus that roughly doubled in size in women after menopause1. “This was changing so much in postmenopausal women,” says Rance, a neuropathologist at the University of Arizona in Tucson. “It had to be important.” This was the 1990s, and few other researchers were interested. Rance forged ahead on her own, painstakingly unravelling what the neurons were doing and finessing a way to study menopause symptoms in rats by tracking tiny temperature changes in their tails as a measure of hot flushes, a common symptom of menopause that is thought to be triggered in the hypothalamus. Thirty years later, a drug called fezolinetant, based on Rance’s discoveries, is being evaluated by the US Food and Drug Administration, with an approval decision expected in the first half of this year. If approved, fezolinetant could be a landmark: the first non-hormonal therapy to treat the source of hot flushes, a symptom that has become nearly synonymous with menopause and one that is experienced by about 80% of women going through the transition. (This article uses ‘women’ to describe people who experience menopause, while recognizing that not all people who identify as women go through menopause, and not all people who go through menopause identify as women.) To Rance and others in the field, fezolinetant’s progress to this point is a sign that research into the causes and effects of menopausal symptoms is finally being taken seriously. In the next few years, the global number of postmenopausal women is expected to surpass one billion. But many women still struggle to access care related to menopause, and research into how best to manage such symptoms has lagged behind. That is slowly changing.
Armed with improved animal models and a growing literature on the effects of existing treatments, more researchers are coming into the field to fill that gap. © 2023 Springer Nature Limited

Keyword: Hormones & Behavior; Learning & Memory
Link ID: 28778 - Posted: 05.10.2023

John Katsaras, Charles Patrick Collier and Dima Bolmatov Your brain is responsible for controlling most of your body’s activities. Its information processing capabilities are what allow you to learn, and it is the central repository of your memories. But how is memory formed, and where is it located in the brain? Although neuroscientists have identified different regions of the brain where memories are stored, such as the hippocampus in the middle of the brain, the neocortex in the top layer of the brain and the cerebellum at the base of the skull, they have yet to identify the specific molecular structures within those areas involved in memory and learning. Research from our team of biophysicists, physical chemists and materials scientists suggests that memory might be located in the membranes of neurons. Neurons are the fundamental working units of the brain. They are designed to transmit information to other cells, enabling the body to function. The junction between two neurons, called a synapse, and the chemistry that takes place between synapses, in the space called the synaptic cleft, are responsible for learning and memory. At a more fundamental level, the synapse is made of two membranes: one associated with the presynaptic neuron that transmits information, and one associated with the postsynaptic neuron that receives information. Each membrane is made up of a lipid bilayer containing proteins and other biomolecules. The changes taking place between these two membranes, commonly known as synaptic plasticity, are the primary mechanism for learning and memory. These include changes to the amounts of different proteins in the membranes, as well as the structure of the membranes themselves.

Keyword: Learning & Memory
Link ID: 28777 - Posted: 05.10.2023

By Kate Golembiewski On the one hand, this headgear looks like something a cyberfish would wear. On the other, it’s not far from a fashion statement someone at the Kentucky Derby might make. But scientists didn’t just affix this device for laughs: They are curious about the underlying brain mechanisms that allow fish to navigate their world, and how such mechanisms relate to the evolutionary roots of navigation for all creatures with brain circuitry. “Navigation is an extremely important aspect of behavior because we navigate to find food, to find shelter, to escape predators,” said Ronen Segev, a neuroscientist at Ben-Gurion University of the Negev in Israel who was part of a team that fitted 15 fish with cybernetic headgear for a study published on Tuesday in the journal PLOS Biology. Putting a computer on a goldfish to study how the neurons fire in its brain while navigating wasn’t easy. It takes a careful hand because a goldfish’s brain, which looks a bit like a small cluster of lentils, is only half an inch long. “Under a microscope, we exposed the brain and put the electrodes inside,” said Lear Cohen, a neuroscientist and doctoral candidate at Ben-Gurion who performed the surgeries to attach the devices. Each of those electrodes was the diameter of a strand of human hair. It was also tricky to find a way to perform the procedure on dry land without harming the test subject. “The fish needs water and you need him not to move,” he said. He and his colleagues solved both problems by pumping water and anesthetics into the fish’s mouth. Once the electrodes were in the brain, they were connected to a small recording device, which could monitor neuronal activity and which was sealed in a waterproof case, mounted on the fish’s forehead. To keep the computer from weighing the fish down and impeding its ability to swim, the researchers attached buoyant plastic foam to the device. © 2023 The New York Times Company

Keyword: Learning & Memory
Link ID: 28756 - Posted: 04.26.2023

By Oliver Whang What is the relationship between mind and body? Maybe the mind is like a video game controller, moving the body around the world, taking it on joy rides. Or maybe the body manipulates the mind with hunger, sleepiness and anxiety, something like a river steering a canoe. Is the mind like electromagnetic waves, flickering in and out of our light-bulb bodies? Or is the mind a car on the road? A ghost in the machine? Maybe no metaphor will ever quite fit because there is no distinction between mind and body: There is just experience, or some kind of physical process, a gestalt. These questions, agonized over by philosophers for centuries, are gaining new urgency as sophisticated machines with artificial intelligence begin to infiltrate society. Chatbots like OpenAI’s GPT-4 and Google’s Bard have minds, in some sense: Trained on vast troves of human language, they have learned how to generate novel combinations of text, images and even videos. When primed in the right way, they can express desires, beliefs, hopes, intentions, love. They can speak of introspection and doubt, self-confidence and regret. But some A.I. researchers say that the technology won’t reach true intelligence, or true understanding of the world, until it is paired with a body that can perceive, react to and feel around its environment. For them, talk of disembodied intelligent minds is misguided — even dangerous. A.I. that is unable to explore the world and learn its limits, in the ways that children figure out what they can and can’t do, could make life-threatening mistakes and pursue its goals at the risk of human welfare. “The body, in a very simple way, is the foundation for intelligent and cautious action,” said Joshua Bongard, a roboticist at the University of Vermont. 
“As far as I can see, this is the only path to safe A.I.” At a lab in Pasadena, Calif., a small team of engineers has spent the past few years developing one of the first pairings of a large language model with a body: a turquoise robot named Moxie. About the size of a toddler, Moxie has a teardrop-shaped head, soft hands and alacritous green eyes. Inside its hard plastic body is a computer processor that runs the same kind of software as ChatGPT and GPT-4. Moxie’s makers, part of a start-up called Embodied, describe the device as “the world’s first A.I. robot friend.” © 2023 The New York Times Company

Keyword: Intelligence; Robotics
Link ID: 28735 - Posted: 04.12.2023

Nicola Davis Science correspondent From squabbling over who booked a disaster holiday to differing recollections of a glorious wedding, events from deep in the past can end up being misremembered. But now researchers say even recent memories may contain errors. Scientists exploring our ability to recall shapes say people can make mistakes after just a few seconds – a phenomenon the team have called short-term memory illusions. “Even at the shortest term, our memory might not be fully reliable,” said Dr Marte Otten, the first author of the research from the University of Amsterdam. “Particularly when we have strong expectations about how the world should be, when our memory starts fading a little bit – even after one and a half seconds, two seconds, three seconds – then we start filling in based on our expectations.” Writing in the journal Plos One, Otten and colleagues note previous research has shown that when people are presented with a rotated or mirror-image letter, they often report seeing the letter in its correct orientation. While this had previously been put down to participants mis-seeing the shape, Otten and colleagues had doubts. “We thought that they are more likely to be a memory effect. So you saw it correctly, but as soon as you commit it to memory stuff starts going wrong,” said Otten. To investigate further, the researchers carried out four experiments. In the first, participants were screened to ensure they were able to complete basic visual memory tasks before being presented with a circle of six or eight letters, one or two of which were mirror-image forms. After a matter of seconds, participants were shown a second circle of letters which they were instructed to ignore – this acted as a distraction. They were then asked to select, from a series of options, a target shape that had been at a particular location in the first circle, and rate their confidence in this choice. © 2023 Guardian News & Media Limited

Keyword: Learning & Memory
Link ID: 28730 - Posted: 04.09.2023

By Elizabeth Preston Several years ago, Christian Rutz started to wonder whether he was giving his crows enough credit. Rutz, a biologist at the University of St. Andrews in Scotland, and his team were capturing wild New Caledonian crows and challenging them with puzzles made from natural materials before releasing them again. In one test, birds faced a log drilled with holes that contained hidden food, and could get the food out by bending a plant stem into a hook. If a bird didn’t try within 90 minutes, the researchers removed it from the dataset. But, Rutz says, he soon began to realize he was not, in fact, studying the skills of New Caledonian crows. He was studying the skills of only a subset of New Caledonian crows that quickly approached a weird log they’d never seen before—maybe because they were especially brave, or reckless. The team changed their protocol. They began giving the more hesitant birds an extra day or two to get used to their surroundings, then trying the puzzle again. “It turns out that many of these retested birds suddenly start engaging,” Rutz says. “They just needed a little bit of extra time.” Scientists are increasingly realizing that animals, like people, are individuals. They have distinct tendencies, habits, and life experiences that may affect how they perform in an experiment. That means, some researchers argue, that much published research on animal behavior may be biased. Studies claiming to show something about a species as a whole—that green sea turtles migrate a certain distance, say, or how chaffinches respond to the song of a rival—may say more about individual animals that were captured or housed in a certain way, or that share certain genetic features. That’s a problem for researchers who seek to understand how animals sense their environments, gain new knowledge, and live their lives. © 2023 NautilusNext Inc.,
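The selection bias Rutz describes can be illustrated with a toy simulation (the trait, time scales and thresholds below are invented for illustration, not taken from his study): if a latent trait influences both how quickly an animal engages with a puzzle and the behavior being measured, then dropping slow engagers skews the estimate for the species as a whole.

```python
import random

rng = random.Random(42)

def bird():
    """One simulated bird: a latent trait (say, neophilia) that shifts
    both its approach time to a novel object and the measured behavior."""
    trait = rng.gauss(0.0, 1.0)                       # population mean is 0
    approach_time = 60 - 30 * trait + rng.gauss(0.0, 20.0)  # minutes; bolder = faster
    return trait, approach_time

population = [bird() for _ in range(10_000)]

# Protocol 1: exclude any bird that does not engage within 90 minutes.
tested = [t for t, at in population if at <= 90]
# Protocol 2 (the revised approach): give every bird time to engage.
everyone = [t for t, _ in population]

print(f"mean trait, engaged-only sample: {sum(tested) / len(tested):+.2f}")
print(f"mean trait, whole population:    {sum(everyone) / len(everyone):+.2f}")
```

The engaged-only sample over-represents bold birds, so its mean trait sits well above the true population mean of zero, which is exactly the kind of distortion the retesting protocol was meant to remove.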

Keyword: Evolution; Intelligence
Link ID: 28724 - Posted: 04.01.2023

By Katherine Harmon Courage We all might wish for minds as retentive as a hard drive. Memory file created. Saved. Ready for access at any time. But don’t yet go wishing for the memory performance of AI. Artificial neural networks are prone to a troublesome glitch known, evocatively, as catastrophic forgetting. These seemingly tireless networks can keep learning tasks day and night. But sometimes, once a new task is learned, any recollection of an old task vanishes. It’s as if you learned to play tennis decently well, but after being taught to play water polo, you suddenly had no recollection of how to swing a racket. This apparent network overload put an idea in the head of Maxim Bazhenov, a professor who studies computational neuroscience and sleep at the University of California San Diego School of Medicine. Perhaps the spiking neural networks he was working with simply needed a rest. In natural sleep, he had seen that the same basic brain processes occur in humans and in honeybees, working over information accumulated during waking moments. “That machinery presumably was doing something useful” in order to be conserved across evolutionary paths, he says. So, he thought, why not try a similar state for the machines. The idea was to simply provide the artificial neural networks with a break from external stimuli, to instruct them to go into a sort of rest state. Like the dozing human brain, the networks were still active, but instead of taking in new information, they were mulling the old stuff, consolidating, surfacing patterns.
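Catastrophic forgetting is easy to reproduce in miniature. The sketch below is a deliberately tiny illustration, not the spiking networks Bazhenov's group works with: a one-parameter linear model is trained by stochastic gradient descent on task A, then on a conflicting task B, and its error on task A is measured before and after.

```python
# Minimal sketch of catastrophic forgetting: a single-weight model
# learns task A (y = 2x), then task B (y = -2x) overwrites that weight.
def sgd_train(w, task_slope, steps=200, lr=0.1):
    """Fit y = w*x to targets y = task_slope*x on inputs in [-1, 1]."""
    xs = [i / 10 - 1 for i in range(21)]
    for _ in range(steps):
        for x in xs:
            err = w * x - task_slope * x   # prediction error on this sample
            w -= lr * err * x              # gradient step on squared error
    return w

def task_loss(w, task_slope):
    """Mean squared error of the model on one task."""
    xs = [i / 10 - 1 for i in range(21)]
    return sum((w * x - task_slope * x) ** 2 for x in xs) / len(xs)

w = 0.0
w = sgd_train(w, task_slope=2.0)      # learn task A
loss_a_before = task_loss(w, 2.0)     # near zero: task A mastered
w = sgd_train(w, task_slope=-2.0)     # now learn task B...
loss_a_after = task_loss(w, 2.0)      # ...and task A is gone
print(loss_a_before, loss_a_after)
```

The remedy the article goes on to describe is analogous to interleaving replay of old material during an offline "rest" phase, rather than training on each task in isolation.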

Keyword: Sleep; Learning & Memory
Link ID: 28709 - Posted: 03.18.2023

By Bruce Bower Monkeys in southern Thailand use rocks to pound open oil palm nuts, inadvertently shattering stone pieces off their makeshift nutcrackers. These flakes resemble some sharp-edged stone tools presumed to have been created on purpose by ancient hominids, researchers say. Thailand’s long-tailed macaques (Macaca fascicularis) produce shards that could easily be mistaken for stone flakes previously found at 17 East African hominid sites dating from about 3.3 million to 1.56 million years ago, say archaeologist Tomos Proffitt and colleagues. The finding suggests that ancient hominids may sometimes have created the stone flakes by accident while using rocks to smash nuts, bones or other objects, the scientists report March 10 in Science Advances. Previous research has already shown that rock-wielding capuchin monkeys in Brazil unwittingly produce hominid-like stone flakes (SN: 10/19/16). Observations of rock bashing by these two monkey species undermine a long-standing assumption that hominids must have intentionally made certain ancient stone flakes, including some of the earliest known examples of tools, Proffitt says (SN: 6/3/19). It’s time to reevaluate how such determinations are made, he contends. Proffitt’s group identified 219 complete and fragmented stone flakes at 40 macaque nut-cracking sites on the island where the monkeys live. The team also found rocks showing damage consistent with having been used either as pounding implements or pounding platforms. Some differences do exist between macaque and hominid stone flakes, says Proffitt, of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. For instance, many macaque flakes display battering damage on only one side, versus frequent two-sided damage on hominid artifacts. © Society for Science & the Public 2000–2023.

Keyword: Evolution; Intelligence
Link ID: 28699 - Posted: 03.11.2023

By Jacob Beck, Sam Clarke Imagine hosting a party. You arrange snacks, curate a playlist and place a variety of beers in the refrigerator. Your first guest shows up, adding a six-pack before taking one bottle for himself. You watch your next guest arrive and contribute a few more beers, minus one for herself. Ready for a drink, you open the fridge and are surprised to find only eight beers remaining. You haven't been consciously counting the beers, but you know there should be more, so you start poking around. Sure enough, in the crisper drawer, behind a rotting head of romaine, are several bottles. How did you know to look for the missing beer? It's not like you were standing guard at the refrigerator, tallying how many bottles went in and out. Rather you were using what cognitive scientists call your number sense, a part of the mind that unconsciously solves simple math problems. While you were immersed in conversation with guests, your number sense was keeping tabs on how many beers were in the fridge. For a long time scientists, mathematicians and philosophers have debated whether this number sense comes preinstalled or is learned over time. Plato was among the first in the Western tradition to propose that humans have innate mathematical abilities. In Plato's dialogue Meno, Socrates coaxes the Pythagorean theorem out of an uneducated boy by asking him a series of simple questions. Socrates's takeaway is that the boy had innate knowledge of the Pythagorean theorem all along; the questioning just helped him express it. In the 17th century John Locke rejected this idea, insisting that the human mind begins as a tabula rasa, or blank slate, with almost all knowledge acquired through experience. 
This view, known as empiricism, in contrast to Plato's nativism, was later further developed by John Stuart Mill, who argued that we learn two plus three is five by seeing many examples where it holds true: two apples and three apples make five apples, two beers and three beers make five beers, and so on.

Keyword: Development of the Brain; Learning & Memory
Link ID: 28693 - Posted: 03.08.2023

By Stephani Sutherland Tara Ghormley has always been an overachiever. She finished at the top of her class in high school, graduated summa cum laude from college and earned top honors in veterinary school. She went on to complete a rigorous training program and build a successful career as a veterinary internal medicine specialist. But in March 2020 she got infected with the SARS-CoV-2 virus—just the 24th case in the small, coastal central California town she lived in at the time, near the site of an early outbreak in the COVID pandemic. “I could have done without being first at this,” she says. Almost three years after apparently clearing the virus from her body, Ghormley is still suffering. She gets exhausted quickly, her heartbeat suddenly races, and she goes through periods where she can't concentrate or think clearly. Ghormley and her husband, who have relocated to a Los Angeles suburb, once spent their free time visiting their “happiest place on Earth”—Disneyland—but her health prevented that for more than a year. She still spends most of her days off resting in the dark or going to her many doctors' appointments. Her early infection and ongoing symptoms make her one of the first people in the country with “long COVID,” a condition where symptoms persist for at least three months after the infection and can last for years. The syndrome is known by medical professionals as postacute sequelae of COVID-19, or PASC. People with long COVID have symptoms such as pain, extreme fatigue and “brain fog,” or difficulty concentrating or remembering things. As of February 2022, the syndrome was estimated to affect about 16 million adults in the U.S. and had forced between two million and four million Americans out of the workforce, many of whom have yet to return. Long COVID often arises in otherwise healthy young people, and it can follow even a mild initial infection. 
The risk appears at least slightly higher in people who were hospitalized for COVID and in older adults (who end up in the hospital more often). Women and those at socioeconomic disadvantage also face higher risk, as do people who smoke, are obese, or have any of an array of health conditions, particularly autoimmune disease. Vaccination appears to reduce the danger but does not entirely prevent long COVID.

Keyword: Attention; Learning & Memory
Link ID: 28667 - Posted: 02.15.2023

By Erin Garcia de Jesús Forget screwdrivers or drills. A stick and a straw make for a great cockatoo tool kit. Some Goffin’s cockatoos (Cacatua goffiniana) know whether they need to have more than one tool in claw to topple an out-of-reach cashew, researchers report February 10 in Current Biology. By recognizing that two items are necessary to access the snack, the birds join chimpanzees as the only nonhuman animals known to use tools as a set. The study is a fascinating example of what cockatoos are capable of, says Anne Clark, a behavioral ecologist at Binghamton University in New York, who was not involved in the study. A mental awareness that people often attribute to our close primate relatives can also pop up elsewhere in the animal kingdom. A variety of animals including crows and otters use tools but don’t deploy multiple objects together as a kit (SN: 9/14/16; SN: 3/21/17). Chimpanzees from the Republic of Congo’s Noubalé-Ndoki National Park, on the other hand, recognize the need for both a sharp stick to break into termite mounds and a fishing stick to scoop up an insect feast (SN: 10/19/04). Researchers knew wild cockatoos could use three different sticks to break open fruit in their native range of Indonesia. But it was unclear whether the birds might recognize the sticks as a set or instead as a chain of single tools that became necessary as new problems arose, says evolutionary biologist Antonio Osuna Mascaró of the University of Veterinary Medicine Vienna. © Society for Science & the Public 2000–2023.

Keyword: Learning & Memory; Evolution
Link ID: 28663 - Posted: 02.11.2023

By John M. Beggs Over the last few decades, an idea called the critical brain hypothesis has been helping neuroscientists understand how the human brain operates as an information-processing powerhouse. It posits that the brain is always teetering between two phases, or modes, of activity: a random phase, where it is mostly inactive, and an ordered phase, where it is overactive and on the verge of a seizure. The hypothesis predicts that between these phases, at a sweet spot known as the critical point, the brain has a perfect balance of variety and structure and can produce the most complex and information-rich activity patterns. This state allows the brain to optimize multiple information processing tasks, from carrying out computations to transmitting and storing information, all at the same time. To illustrate how phases of activity in the brain — or, more precisely, activity in a neural network such as the brain — might affect information transmission through it, we can play a simple guessing game. Imagine that we have a network with 10 layers and 40 neurons in each layer. Neurons in the first layer will only activate neurons in the second layer, and those in the second layer will only activate those in the third layer, and so on. Now, I will activate some number of neurons in the first layer, but you will only be able to observe the number of neurons active in the last layer. Let’s see how well you can guess the number of neurons I activated under three different strengths of network connections. First, let’s consider weak connections. In this case, neurons typically activate independently of each other, and the pattern of network activity is random. No matter how many neurons I activate in the first layer, the number of neurons activated in the last layer will tend toward zero because the weak connections dampen the spread of activity. This makes our guessing game incredibly difficult. 
The amount of information about the first layer that you can learn from the last layer is practically nothing. All Rights Reserved © 2023
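Beggs's guessing game can be sketched as a toy simulation. The model below is illustrative and not from the article: it assumes each active neuron independently activates each downstream neuron with probability p, so the branching ratio is roughly 40 × p, and it compares a weak, a near-critical, and a strong connection strength.

```python
import random

def propagate(n_active, n_layers=10, n_per_layer=40, p=0.025):
    """Push activity through a layered feedforward network.

    Each downstream neuron fires if at least one of the currently
    active upstream neurons activates it (independently, with prob p).
    The branching ratio is roughly n_per_layer * p: below 1 activity
    dies out (weak), above 1 it saturates (strong), near 1 it is critical.
    """
    active = n_active
    for _ in range(n_layers - 1):
        fire_prob = 1 - (1 - p) ** active
        active = sum(random.random() < fire_prob for _ in range(n_per_layer))
        if active == 0:
            break  # activity has died out entirely
    return active

def avg_out(n_active, p, trials=300):
    """Average last-layer activity over repeated trials."""
    return sum(propagate(n_active, p=p) for _ in range(trials)) / trials

random.seed(0)
for label, p in [("weak", 0.010), ("critical", 0.025), ("strong", 0.060)]:
    print(label, [round(avg_out(k, p), 1) for k in (5, 20, 35)])
```

In the weak regime the last layer is nearly silent whatever the input, and in the strong regime it saturates at a high level regardless of input; only near the critical point does the output still vary with the input, which is what makes the guessing game winnable there.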

Keyword: Attention; Learning & Memory
Link ID: 28652 - Posted: 02.01.2023

By Ellen Barry The effect of social media use on children is a fraught area of research, as parents and policymakers try to ascertain the results of a vast experiment already in full swing. Successive studies have added pieces to the puzzle, fleshing out the implications of a nearly constant stream of virtual interactions beginning in childhood. A new study by neuroscientists at the University of North Carolina tries something new, conducting successive brain scans of middle schoolers between the ages of 12 and 15, a period of especially rapid brain development. The researchers found that children who habitually checked their social media feeds at around age 12 showed a distinct trajectory, with their sensitivity to social rewards from peers heightening over time. Teenagers with less engagement in social media followed the opposite path, with a declining interest in social rewards. The study, published on Tuesday in JAMA Pediatrics, is among the first attempts to capture changes to brain function correlated with social media use over a period of years. The study has important limitations, the authors acknowledge. Because adolescence is a period of expanding social relationships, the brain differences could reflect a natural pivot toward peers, which could be driving more frequent social media use. “We can’t make causal claims that social media is changing the brain,” said Eva H. Telzer, an associate professor of psychology and neuroscience at the University of North Carolina, Chapel Hill, and one of the authors of the study. But, she added, “teens who are habitually checking their social media are showing these pretty dramatic changes in the way their brains are responding, which could potentially have long-term consequences well into adulthood, sort of setting the stage for brain development over time.” © 2023 The New York Times Company

Keyword: Development of the Brain; Stress
Link ID: 28619 - Posted: 01.04.2023

By Shayla Love On Valentine’s Day in 2016, Anne Lantoine received not flowers, but divorce papers. In the months preceding, she had been preparing for her family’s move from France to Canada—or so she thought. She arrived in Quebec early with one of her three children, who was preparing to start college there, while the other two remained in Europe for school. Her husband stayed behind to manage the sale of their house in Marseille. Then the realtors began to complain to Lantoine in a barrage of calls and emails. Her husband was not acting like a man who wanted his house sold. He wasn’t answering phone calls and was never available for showings. In January 2016, Lantoine called him after yet another complaint from a realtor. The next morning, he sent her an email with a notice for a court hearing, and she discovered her husband had actually filed for divorce, without telling her, months earlier. That February, she finally got the paperwork, not from her husband, but from her real estate agent. “It was not my last shock,” Lantoine, now 59, recalls. “I also discovered that my husband’s mistress was living in my home.” Practically, these revelations were a huge blow: they disrupted the immigration paperwork, and Lantoine and her daughter lost their visa applications. But the searing pain was in the betrayal and deceit. “I became very anxious and had constant nightmares,” she says. “I was tired all the time and had panic attacks each time I opened my mail or my emails, or when I had an unidentified phone call.” Though the details of each case vary, romantic betrayal through infidelity, abandonment, or emotional manipulation can upend one’s life in an instant. For Lantoine, her future plans, and the person they were attached to, were suddenly gone, and her functioning along with them. © 2022 NautilusThink Inc, All rights reserved.

Keyword: Stress; Learning & Memory
Link ID: 28612 - Posted: 12.28.2022

By Deborah Blum Back in the year 2000, sitting in his small home office in California’s Mill Valley, surrounded by stacks of spreadsheets, Jay Rosner hit one of those dizzying moments of dismay. An attorney and the executive director of The Princeton Review Foundation, the philanthropic arm of the private test-preparation and tutoring company, The Princeton Review, Rosner was scheduled to give testimony in a highly charged affirmative action lawsuit against the University of Michigan. He knew the case, Grutter v. Bollinger, was eventually headed to the U.S. Supreme Court, but as he reviewed the paperwork, he discovered a daunting gap in his argument. Rosner had been asked to explore potential racial and cultural biases baked into standardized testing. He believed such biases, which critics had been surfacing for years, were real, but in that moment, he felt himself coming up short. “I suddenly realized that I would be deposed on this issue,” he recalled, “and I had no data to support my hypothesis, only deductive reasoning.” The punch of that realization still resonates. Rosner is the kind of guy who really likes data to stand behind his points, and he recalls an anxiety-infused hunt for some solid facts. Rosner was testifying about an entrance exam for law school, the LSAT, for which he could find no particulars. But he knew that a colleague had data on how students of different racial backgrounds answered specific questions on another powerful standardized test administered in New York state: the SAT, long used to help decide undergraduate admission to colleges. He decided he could use that information to make a case by analogy. The two scholars agreed to crunch some numbers. Based on the history of test results, he knew that White students would overall have higher scores than Black students. Still, Rosner expected Black students to perform better on some questions. To his shock, he found no trace of such balance. 
The results were “incredibly uniform,” he said, skewing almost entirely in favor of White students. “Every single question except one in the New York state data on four SATs favored Whites over Blacks,” Rosner recalled.

Keyword: Intelligence; Genes & Behavior
Link ID: 28611 - Posted: 12.24.2022

By Jon Hamilton Time is woven into our personal memories. Recall a childhood fall from a bike and the brain replays the entire episode in excruciating detail: the glimpse of wet leaves on the road ahead, the moment of weightless dread, and then the painful impact. This exact sequence has been embedded in the memory, thanks to some special neurons known as time cells. When the brain detects a notable event, time cells begin a highly orchestrated performance, says Marc Howard, who directs the Brain, Behavior, and Cognition program at Boston University. "What we find is that the cells fire in a sequence," he says. "So cell one might fire immediately, but cell two waits a little bit, followed by cell three, cell four, and so on." As each cell fires, it places a sort of time stamp on an unfolding experience. And the same cells fire in the same order when we retrieve a memory of the experience, even something mundane. "If I remember being in my kitchen and making a cup of coffee," Howard says, "the time cells that were active at that moment are re-activated." They recreate the grinder's growl, the scent of Arabica, the curl of steam rising from a fresh mug – and your neurons replay these moments in sequence every time you summon the memory. This system appears to explain how we are able to virtually travel back in time, and play mental movies of our life experiences. There are also hints that time cells play a critical role in imagining future events. Without time cells, our memories would lack order. In an experiment at the University of California, San Diego, scientists gave several groups of people a tour of the campus. The tour included 11 planned events, including finding change in a vending machine and drinking from a water fountain. © 2022 npr
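The ordering role Howard describes can be caricatured in a toy model. This sketch is purely illustrative (the class and its latency scheme are invented for the example, not drawn from the research): each "time cell" fires at a fixed latency after an event begins, stamping order onto whatever is observed, and retrieval re-activates the same cells in the same sequence.

```python
class TimeCellSequence:
    """Toy model: each time cell fires at a fixed latency after an
    event starts, time-stamping whatever the brain observes then."""

    def __init__(self, n_cells, latency_step=0.1):
        # cell i fires i * latency_step seconds into the episode
        self.latencies = [i * latency_step for i in range(n_cells)]
        self.episode = []

    def encode(self, observations):
        # pair each observation with the latency of the cell that
        # was firing when it occurred (one observation per cell)
        self.episode = list(zip(self.latencies, observations))

    def replay(self):
        # retrieval re-activates the same cells in the same sequence,
        # so the memory comes back in its original order
        return [obs for _, obs in sorted(self.episode)]

fall = TimeCellSequence(3)
fall.encode(["wet leaves", "weightless dread", "painful impact"])
print(fall.replay())
```

However the observations are stored, sorting by the cells' firing latencies recovers the original order, which is the essence of the claim that without time cells our memories would lack order.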

Keyword: Attention; Learning & Memory
Link ID: 28608 - Posted: 12.21.2022