Chapter 13. Memory and Learning


Jon Hamilton A team of researchers has developed a new way to study how genes may cause autism and other neurodevelopmental disorders: by growing tiny brain-like structures in the lab and tweaking their DNA. These "assembloids," described in the journal Nature, could one day help researchers develop targeted treatments for autism spectrum disorder, intellectual disability, schizophrenia, and epilepsy. "This really accelerates our effort to try to understand the biology of psychiatric disorders," says Dr. Sergiu Pașca, a professor of psychiatry and behavioral sciences at Stanford University and an author of the study. The research suggests that someday "we'll be able to predict which pathways we can target to intervene" and prevent these disorders, adds Kristen Brennand, a professor of psychiatry at Yale who was not involved in the work. The study comes after decades of work identifying hundreds of genes that are associated with autism and other neurodevelopmental disorders. But scientists still don't know how problems with these genes alter the brain. "The challenge now is to figure out what they're actually doing, how disruptions in these genes are actually causing disease," Pașca says. "And that has been really difficult." For ethical reasons, scientists can't just edit a person's genes to see what happens. They can experiment on animal brains, but lab animals like rodents don't really develop anything that looks like autism or schizophrenia. So Pașca and a team of scientists tried a different approach, which they detailed in their new paper. The team did a series of experiments using tiny clumps of human brain cells called brain organoids. These clumps will grow for a year or more in the lab, gradually organizing their cells much the way a developing brain would. And by exposing an organoid to certain growth factors, scientists can coax it into resembling tissue found in brain areas including the cortex and hippocampus. © 2023 npr

Keyword: Epilepsy; Autism
Link ID: 28940 - Posted: 10.03.2023

By Stephanie Pappas If you’ve ever awoken from a vivid dream only to find that you can’t remember the details by the end of breakfast, you’re not alone. People forget most of the dreams they have—though it is possible to train yourself to remember more of them. Dreaming happens mostly (though not exclusively) during rapid eye movement (REM) sleep. During this sleep stage, brain activity looks similar to that in a waking brain, with some very important differences. Key among them: during REM sleep, the areas of the brain that transfer memories into long-term storage—as well as the long-term storage areas themselves—are relatively deactivated, says Deirdre Barrett, a dream researcher at Harvard Medical School and author of the book The Committee of Sleep (Oneiroi Press, 2001). This may be a side effect of REM’s role in memory consolidation, according to a 2019 study on mice in the journal Science. Short-term memory areas are active during REM sleep, but those only hang on to memories for about 30 seconds. “You have to wake up from REM sleep, generally, to recall a dream,” Barrett says. If, instead, you pass into the next stage of sleep without rousing, that dream will never enter long-term memory. REM sleep occurs about every 90 minutes, and it lengthens as the night drags on. The first REM cycle of the night is typically just a few minutes long, but by the end of an eight-hour night of sleep, a person has typically been in the REM stage for a good 20 minutes, Barrett says. That’s why the strongest correlation between any life circumstance and your memory of dreams is the number of hours you’ve slept. If you sleep only six hours, you’re getting less than half of the dream time of an eight-hour night, she says. Those final hours of sleep are the most important for dreaming. And people tend to remember the last dream of the night—the one just before waking. © 2023 Scientific American
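
Barrett's point about sleep duration is arithmetic: REM episodes recur roughly every 90 minutes and lengthen across the night, so the last cycles contribute most of the dream time. A minimal sketch of that arithmetic; the per-cycle episode lengths are assumptions for illustration, not figures from Barrett or any study.

```python
# Toy illustration of why short sleep costs disproportionate dream time:
# REM episodes recur about every 90 minutes and lengthen across the night.
# The episode lengths below are assumed, not measured values.

REM_CYCLE_MIN = 90  # approximate interval between successive REM episodes

def rem_minutes(total_sleep_min):
    """Total REM minutes, assuming later episodes run longer."""
    episode_lengths = [5, 10, 15, 20, 25, 30]  # assumed minutes per cycle
    total = 0
    for i, length in enumerate(episode_lengths):
        episode_start = (i + 1) * REM_CYCLE_MIN
        if episode_start >= total_sleep_min:  # sleep ends before this episode
            break
        total += length
    return total

for hours in (6, 8):
    print(f"{hours} h sleep -> ~{rem_minutes(hours * 60)} min of REM")
# 6 h sleep -> ~30 min of REM
# 8 h sleep -> ~75 min of REM   (six hours yields less than half the REM)
```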

Keyword: Sleep; Learning & Memory
Link ID: 28939 - Posted: 10.03.2023

Sara Reardon Scientists have identified two types of brain cell linked to a reduced risk of dementia in older people — even those who have brain abnormalities that are hallmarks of Alzheimer’s disease. The finding could eventually lead to new ways to protect these cells before they die. The results were published in Cell on 28 September. The most widely held theory about Alzheimer’s attributes the disease to a build-up of sticky amyloid proteins in the brain. This leads to clump-like ‘plaques’ of amyloid that slowly kill neurons and eventually destroy memory and cognitive ability. But not everyone who develops cognitive impairment late in life has amyloid clumps in their brain, and not everyone with amyloid accumulation develops Alzheimer’s. Neurobiologist Hansruedi Mathys at the University of Pittsburgh School of Medicine in Pennsylvania and neuroscientist Li-Huei Tsai and computer scientist Manolis Kellis at the Massachusetts Institute of Technology in Cambridge and their colleagues decided to investigate this disconnect. To do so, they used data from a massive study that tracks cognitive and motor skills in thousands of people throughout old age. The researchers examined tissue samples from 427 brains from participants who had died. Some of those participants had dementia typical of advanced Alzheimer’s disease, some had mild cognitive impairment and the remainder had no sign of impairment. The researchers isolated cells from each participant’s prefrontal cortex, the region involved in higher brain function. To classify the cells, they sequenced all the active genes in each one. This allowed them to create an atlas of the brain showing where the different cell types occur. The scientists identified two key cell types that had a specific genetic marker. One had active genes coding for reelin, a protein associated with brain disorders such as schizophrenia, and the other had active genes that code for somatostatin, a hormone that regulates processes throughout the body. © 2023 Springer Nature Limited
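
The classification step described here (calling a cell reelin-expressing or somatostatin-expressing from its active genes) is, conceptually, marker-based labeling of expression profiles. A toy sketch of that idea follows, with invented expression values and an invented threshold; the study's actual single-cell sequencing pipeline is far more involved.

```python
# Minimal sketch of marker-based cell classification: label each cell by
# which marker genes are "active" in its expression profile. Profiles,
# threshold, and labels are invented for illustration.

cells = [
    {"RELN": 8.2, "SST": 0.1, "GAD1": 3.0},   # reelin-high profile
    {"RELN": 0.2, "SST": 7.5, "GAD1": 4.1},   # somatostatin-high profile
    {"RELN": 0.1, "SST": 0.3, "GAD1": 0.2},   # neither marker active
]

MARKERS = {"reelin neuron": "RELN", "somatostatin neuron": "SST"}
ACTIVE = 1.0  # assumed expression level above which a gene counts as active

def classify(cell):
    hits = [label for label, gene in MARKERS.items() if cell.get(gene, 0) > ACTIVE]
    return hits or ["unclassified"]

for i, cell in enumerate(cells):
    print(f"cell {i}: {', '.join(classify(cell))}")
```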

Keyword: Alzheimers; Genes & Behavior
Link ID: 28938 - Posted: 09.29.2023

By Clay Risen Endel Tulving, whose insights into the structure of human memory and the way we recall the past revolutionized the field of cognitive psychology, died on Sept. 11 in Mississauga, Ontario. He was 96. His daughters, Linda Tulving and Elo Tulving-Blais, said his death, at an assisted living home, was caused by complications of a stroke. Until Dr. Tulving began his pathbreaking work in the 1960s, most cognitive psychologists were more interested in understanding how people learn things than in how they retain and recall them. When they did think about memory, they often depicted it as one giant cerebral warehouse, packed higgledy-piggledy, with only a vague conception of how we retrieved those items. This, they asserted, was the realm of “the mind,” an untestable, almost philosophical construct. Dr. Tulving, who spent most of his career at the University of Toronto, first made his name with a series of clever experiments and papers, demonstrating how the mind organizes memories and how it uses contextual cues to retrieve them. Forgetting, he posited, was less about information loss than it was about the lack of cues to retrieve it. He established his legacy with a chapter in the 1972 book “Organization of Memory,” which he edited with Wayne Donaldson. In that chapter, he argued for a taxonomy of memory types. He started with two: procedural memory, which is largely unconscious and involves things like how to walk or ride a bicycle, and declarative memory, which is conscious and discrete. © 2023 The New York Times Company

Keyword: Learning & Memory
Link ID: 28934 - Posted: 09.29.2023

By Veronique Greenwood In the dappled sunlit waters of Caribbean mangrove forests, tiny box jellyfish bob in and out of the shade. Box jellies are distinguished from true jellyfish in part by their complex visual system — the grape-size predators have 24 eyes. But like other jellyfish, they are brainless, controlling their cube-shaped bodies with a distributed network of neurons. That network, it turns out, is more sophisticated than you might assume. On Friday, researchers published a report in the journal Current Biology indicating that the box jellyfish species Tripedalia cystophora have the ability to learn. Because box jellyfish diverged from our part of the animal kingdom long ago, understanding their cognitive abilities could help scientists trace the evolution of learning. The tricky part about studying learning in box jellies was finding an everyday behavior that scientists could train the creatures to perform in the lab. Anders Garm, a biologist at the University of Copenhagen and an author of the new paper, said his team decided to focus on a swift about-face that box jellies execute when they are about to hit a mangrove root. These roots rise through the water like black towers, while the water around them appears pale by comparison. But the contrast between the two can change from day to day, as silt clouds the water and makes it more difficult to tell how far away a root is. How do box jellies tell when they are getting too close? “The hypothesis was, they need to learn this,” Dr. Garm said. “When they come back to these habitats, they have to learn, how is today’s water quality? How is the contrast changing today?” In the lab, researchers produced images of alternating dark and light stripes, representing the mangrove roots and water, and used them to line the insides of buckets about six inches wide. When the stripes were a stark black and white, representing optimum water clarity, box jellies never got close to the bucket walls. With less contrast between the stripes, however, box jellies immediately began to run into them. This was the scientists’ chance to see if they would learn. © 2023 The New York Times Company
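
The learning being probed here is associative: the jelly pairs low visual contrast with collision feedback and begins turning earlier on later approaches. Below is a toy simulation of that loop, with entirely invented parameters; the paper reports behavioral data, not this model.

```python
# Toy associative-learning loop inspired by the bucket experiment: the
# simulated jelly raises its visual gain after each collision, so faint
# (low-contrast) walls eventually trigger the evasive turn. All numbers
# are invented.

gain = 1.0          # learned sensitivity to stripe contrast
TURN_AT = 0.8       # perceived proximity that triggers the about-face
GAIN_STEP = 0.3     # how much one collision boosts the gain

def trial(contrast):
    global gain
    bumped = contrast * gain < TURN_AT  # wall looked too faint: no turn
    if bumped:
        gain += GAIN_STEP               # collision feedback recalibrates
    return bumped

for t in range(6):
    outcome = "bump" if trial(0.4) else "avoided"
    print(f"trial {t}: {outcome} (gain={gain:.1f})")
# early trials bump; once the gain compensates for the murk, the jelly avoids
```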

Keyword: Learning & Memory; Evolution
Link ID: 28925 - Posted: 09.23.2023

COMIC: When, why and how did neurons first evolve? Scientists are piecing together the ancient story. By Tim Vernimmen; illustrated by Maki Naro. 09.14.2023. © 2023 Annual Reviews

Keyword: Evolution; Development of the Brain
Link ID: 28920 - Posted: 09.21.2023

By Janet Lee Doing puzzles, playing memory-boosting games, taking classes and reading are activities that we often turn to for help keeping our brains sharp. But research is showing that what you eat, how often you exercise and the type of exercise you do can help lower your risk of dementia to a greater extent than previously thought. Although more studies are needed, “there’s a lot of data that suggests exercise and diet are good for the brain and can prevent or help slow down” cognitive changes, says Jeffrey Burns, co-director of the University of Kansas Alzheimer’s Disease Research Center in Fairway. And living a healthy lifestyle can produce brain benefits no matter what your age. The big diet picture: If you’re already eating in a way that protects your heart — plenty of whole grains, vegetables, and fruit, and little saturated fat, sodium and ultra-processed “junk” foods — there’s good news: You’re also protecting your brain. A healthy cardiovascular system keeps blood vessels open, allowing good blood flow to the brain and reducing the risk of high blood pressure, stroke and dementia. Research suggests that two specific dietary approaches — the Mediterranean diet and the MIND diet (the Mediterranean-DASH Intervention for Neurodegenerative Delay, essentially a combo of two heart-healthy eating plans) — may help stave off cognitive decline. Both diets rely on eating mostly plant foods (fruits, vegetables, whole grains, beans, nuts), olive oil, fish and poultry. The main difference between the two is that the MIND diet emphasizes specific fruits and vegetables, such as berries and leafy greens. Studies show that people who most closely follow either diet have a reduced risk of dementia compared with those who don’t. For example, people eating the Mediterranean way had a 23 percent lower risk of dementia in a nine-year study of more than 60,000 men and women published this year in BMC Medicine.

Keyword: Alzheimers
Link ID: 28915 - Posted: 09.21.2023

By Jim Davies Think of what you want to eat for dinner this weekend. What popped into mind? Pizza? Sushi? Clam chowder? Why did those foods (or whatever foods you imagined) appear in your consciousness and not something else? Psychologists have long held that when we are making a decision about a particular category of thing, we tend to bring to mind items that are typical or common in our culture or everyday lives, or ones we value the most. On this view, whatever foods you conjured up are likely ones that you eat often, or love to eat. Sounds intuitive. But a recent paper published in Cognition suggests it’s more complicated than that. Tracey Mills, a research assistant working at MIT, led the study along with Jonathan Phillips, a cognitive scientist and philosopher at Dartmouth College. They put over 2,000 subjects, recruited online, through a series of seven experiments that allowed them to test a novel approach for understanding which ideas within a category will pop into our consciousness—and which won’t. In this case, they had subjects think about zoo animals, holidays, jobs, kitchen appliances, chain restaurants, sports, and vegetables. What they found is that what makes a particular thing come to mind—such as a lion when one is considering zoo animals—is determined not by how valuable or familiar it is, but by where it lies in a multidimensional idea grid that could be said to resemble a kind of word cloud. “Under the hypothesis we argue for,” Mills and Phillips write, “the process of calling members of a category to mind might be modeled as a search through feature space, weighted toward certain features that are relevant for that category.” Historical “value” just happens to be one dimension that is particularly relevant when one is talking about dinner, but is less relevant for categories such as zoo animals or, say, crimes, they write. © 2023 NautilusNext Inc., All rights reserved.
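
The paper's framing (retrieval as a search through feature space, weighted toward category-relevant features) can be sketched as weighted scoring plus softmax sampling. Everything below, from the feature coordinates to the weights, is invented for illustration; the authors' actual model differs in its details.

```python
# Sketch of "what comes to mind" as a weighted search through feature
# space, loosely after Mills and Phillips's framing. Members, features,
# and weights are invented.

import math
import random

# Category members with (typicality, personal value, size) coordinates.
zoo_animals = {
    "lion":    (0.9, 0.8, 0.7),
    "zebra":   (0.8, 0.5, 0.6),
    "axolotl": (0.2, 0.4, 0.1),
}

# Category-relevant weights: for zoo animals, typicality and size are
# assumed to matter more than personal value.
weights = (2.0, 0.5, 1.5)

def score(features):
    return sum(w * f for w, f in zip(weights, features))

def sample_to_mind(members, rng):
    # Softmax over weighted scores: high scorers pop into mind more often.
    exps = {name: math.exp(score(f)) for name, f in members.items()}
    r = rng.random() * sum(exps.values())
    for name, e in exps.items():
        r -= e
        if r <= 0:
            return name

rng = random.Random(0)
print([sample_to_mind(zoo_animals, rng) for _ in range(5)])
```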

Keyword: Attention; Learning & Memory
Link ID: 28910 - Posted: 09.16.2023

By Joanna Thompson Like many people, Mary Ann Raghanti enjoys potatoes loaded with butter. Unlike most people, however, she actually asked the question of why we love stuffing ourselves with fatty carbohydrates. Raghanti, a biological anthropologist at Kent State University, has researched the neurochemical mechanism behind that savory craving. As it turns out, a specific brain chemical may be one of the things that not only developed our tendency to overindulge in food, alcohol and drugs but also helped the human brain evolve to be unique from the brains of closely related species. A new study, led by Raghanti and published on September 11 in the Proceedings of the National Academy of Sciences USA, examined the activity of a particular neurotransmitter in a region of the brain that is associated with reward and motivation across several species of primates. The researchers found higher levels of that brain chemical—neuropeptide Y (NPY)—in humans, compared with our closest living relatives. That boost in the reward peptide could explain our love of high-fat foods, from pizza to poutine. The impulse to stuff ourselves with fats and sugars may have given our ancestors an evolutionary edge, allowing them to develop a larger and more complex brain. “I think this is a first bit of neurobiological insight into one of the most interesting things about us as a species,” says Robert Sapolsky, a neuroendocrinology researcher at Stanford University, who was not directly involved in the research but helped review the new paper. Neuropeptide Y is associated with “hedonic eating”—consuming food strictly to experience pleasure rather than to satisfy hunger. It drives individuals to seek out high-calorie foods, especially those rich in fat. Historically, though, NPY has been overlooked in favor of flashier “feel good” chemicals such as dopamine and serotonin. © 2023 Scientific American

Keyword: Obesity; Intelligence
Link ID: 28905 - Posted: 09.13.2023

By Jacqueline Howard and Deidre McPhillips Most families of children with autism may face long wait times to diagnose their child with the disorder, and once a diagnosis is made, it sometimes may not be definitive. But now, two studies released Tuesday suggest that a recently developed eye-tracking tool could help clinicians diagnose children as young as 16 months with autism – and with more certainty. “This is not a tool to replace expert clinicians,” said Warren Jones, director of research at the Marcus Autism Center at Children’s Healthcare of Atlanta and Nien Distinguished Chair in Autism at Emory University School of Medicine, who was an author on both studies. Rather, he said, the hope with this eye-tracking technology is that “by providing objective measurements that objectively measure the same thing in each child,” it can help inform the diagnostic process. The tool, called EarliPoint Evaluation, is cleared by the US Food and Drug Administration to help clinicians diagnose and assess autism, according to the researchers. Traditionally, children are diagnosed with autism based on a clinician’s assessment of their developmental history, behaviors and parents’ reports. Evaluations can take hours, and some subtle behaviors associated with autism may be missed, especially among younger children. “Typically, the way we diagnose autism is by rating our impressions,” said Whitney Guthrie, a clinical psychologist and scientist at the Children’s Hospital of Philadelphia’s Center for Autism Research. She was not involved in the new studies, but her research focuses on early diagnosis of autism.
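
The articles do not describe EarliPoint's internals, but the general kind of objective gaze measure such tools compute can be sketched: the fraction of gaze samples landing on social regions of a stimulus. The snippet below is a hypothetical illustration only; the regions, samples, and scoring are invented and are not the product's algorithm.

```python
# Hypothetical sketch of an objective eye-tracking measure: the share of
# gaze samples falling inside "social" regions (e.g., a face) on screen.
# All coordinates and the region box are invented.

def social_gaze_fraction(gaze_points, social_regions):
    """gaze_points: (x, y) samples; social_regions: (x0, y0, x1, y1) boxes."""
    def in_any_region(x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in social_regions)
    hits = sum(in_any_region(x, y) for x, y in gaze_points)
    return hits / len(gaze_points)

# Toy data: a face occupies the screen's upper-middle region.
face_box = [(0.3, 0.0, 0.7, 0.5)]
samples = [(0.5, 0.2), (0.6, 0.3), (0.1, 0.9), (0.5, 0.4), (0.9, 0.8)]
print(f"social gaze fraction: {social_gaze_fraction(samples, face_box):.2f}")
```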

Keyword: Autism; Schizophrenia
Link ID: 28904 - Posted: 09.13.2023

By Saugat Bolakhe Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way. Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations. The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid 1990s. All Rights Reserved © 2023
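
The sorting rule at the heart of the theory can be caricatured in a few lines: events that a simple predictor already handles well are consolidated to a "neocortex" store, while surprising one-offs stay "hippocampal." A minimal sketch with a toy predictor and an arbitrary surprise threshold, not the authors' neural-network model:

```python
# Toy version of sorting memories by predictability: predictable events
# go to the "neocortex" store (good for generalization), surprising ones
# to the "hippocampus" (episodic). Predictor and threshold are stand-ins.

from statistics import mean

class MemorySorter:
    def __init__(self, surprise_threshold=2.0):
        self.neocortex, self.hippocampus = [], []
        self.threshold = surprise_threshold
        self.history = []

    def store(self, label, value):
        # Surprise = deviation of the event from past experience.
        surprise = (abs(value - mean(self.history))
                    if self.history else float("inf"))
        self.history.append(value)
        if surprise < self.threshold:
            self.neocortex.append(label)      # predictable -> generalize
        else:
            self.hippocampus.append(label)    # unusual -> keep episodic

sorter = MemorySorter()
for label, v in [("breakfast Mon", 5.0), ("breakfast Tue", 5.2),
                 ("breakfast Wed", 4.9), ("weird party drink", 42.0)]:
    sorter.store(label, v)
print("neocortex:  ", sorter.neocortex)
print("hippocampus:", sorter.hippocampus)
```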

Keyword: Learning & Memory; Attention
Link ID: 28900 - Posted: 09.07.2023

By Astrid Landon In June 2015, Jeffrey Thelen’s parents noticed their son was experiencing problems with his memory. In the subsequent years, he would get lost driving to his childhood home, forget his cat had died, and fail to recognize his brother and sister. His parents wondered: Was electroconvulsive therapy to blame? Thelen had been regularly receiving the treatment to help with symptoms of severe depression, which he’d struggled with since high school. At 34 years old, he had tried medications, but hadn’t had a therapy plan. His primary care physician referred him to get an evaluation for ECT, which was then prescribed by a psychiatrist. Electroconvulsive therapy has been used to treat various mental illnesses since the late 1930s. The technique, which involves passing electrical currents through the brain to trigger a short seizure, has always had a somewhat torturous reputation. Yet it’s still in use, in a modified form of its original version. According to one commonly cited statistic, 100,000 Americans receive ECT annually — most often to ease symptoms of severe depression or bipolar disorder — although exact demographic data is scarce. For Thelen, the treatment appeared to relieve his depression symptoms somewhat, but he reported new headaches and concentration issues, in addition to the memory loss. Those claims are central to a lawsuit Thelen filed in 2020 against Somatics, LLC and Elektrika, Inc., manufacturers and suppliers of ECT devices, alleging that the companies failed to disclose — and even intentionally hid — risks associated with ECT, including “brain damage and permanent neurocognitive injuries.” Thelen’s legal team told Undark that they have since reached a resolution with Elektrika on confidential terms. With regard to Somatics, in June a jury found that the company failed to warn about risks associated with ECT, but could not conclude that there was a legal causation between that and Thelen’s memory loss. The following month, his lawyers filed a motion for a new trial. (In response to a request for comment, Conrad Swartz, one of Somatics’ co-founders, directed Undark to the company’s attorney, Sue Cole. Cole did not respond to multiple emails. Lawyers for Elektrika declined to comment.)

Keyword: Depression; Learning & Memory
Link ID: 28899 - Posted: 09.07.2023

By Claudia López Lloreda Cells hidden in the skull may point to a way to detect, diagnose and treat inflamed brains. A detailed look at the skull reveals that bone marrow cells there change and are recruited to the brain after injury, possibly traveling through tiny channels connecting the skull and the outer protective layer of the brain. Paired with the discovery that inflammation in the skull is disease-specific, these new findings collectively suggest the skull’s marrow could serve as a target to track and potentially treat neurological disorders involving brain inflammation, researchers report August 9 in Cell. Immune cells that infiltrate the central nervous system during many diseases and neuronal injury can wreak havoc by flooding the brain with damaging molecules. This influx of immune cells causes inflammation in the brain and spinal cord and can contribute to diseases like multiple sclerosis (SN: 11/26/19). Detecting and dampening this reaction has been an extensive field of research. With this new work, the skull, “something that has been considered as just protective, suddenly becomes a very active site of interaction with the brain, not only responding to brain diseases, but also changing itself in response to brain diseases,” says Gerd Meyer zu Hörste, a neurologist at University of Münster in Germany who was not involved in the study. Ali Ertürk of the Helmholtz Center in Munich and colleagues discovered this potential role for the skull while probing the idea that the cells in skull marrow might behave differently from those in other bones. Ertürk’s team compared the genetic activity of cells in mice skull marrow, and the proteins those cells made, with those in the rodent’s humerus, femur and four other bones, along with the meninges, the protective membranes between the skull and the brain. © Society for Science & the Public 2000–2023.

Keyword: Alzheimers; Multiple Sclerosis
Link ID: 28898 - Posted: 09.07.2023

Diana Kwon Santiago Ramón y Cajal revolutionized neurobiology in the late nineteenth century with his exquisitely detailed illustrations of neural tissues. Created through years of meticulous microscopy work, the Spanish physician-scientist’s drawings revealed the unique cellular morphology of the brain. “With Cajal’s work, we saw that the cells of the brain don’t look like the cells of every other part of the body — they have incredible morphologies that you just don’t see elsewhere,” says Evan Macosko, a neuroscientist at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. Ramón y Cajal’s drawings provided one of the first clues that the keys to understanding how the brain governs its many functions, from regulating blood pressure and sleep to controlling cognition and mood, might lie at the cellular level. Still, when it comes to the brain, crucial information remained — and indeed, remains — missing. “In order to have a fundamental understanding of the brain, we really need to know how many different types of cells there are, how are they organized, and how they interact with each other,” says Xiaowei Zhuang, a biophysicist at Harvard University in Cambridge. What neuroscientists require, Zhuang explains, is a way to systematically identify and map the many categories of brain cells. Now researchers are closing in on such a resource, at least in mice. By combining high-throughput single-cell RNA sequencing with spatial transcriptomics — methods for determining which genes are expressed in individual cells, and where those cells are located — they are creating some of the most comprehensive atlases of the mouse brain so far. The crucial next steps will be working out what these molecularly defined cell types do, and bringing the various brain maps together to create a unified resource that the broader neuroscience community can use. © 2023 Springer Nature Limited
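
Structurally, the resource being built pairs two data types per cell: an expression-derived type and a tissue coordinate, so one can ask which types occur where. A toy sketch of that pairing, with invented types and coordinates; real spatial-transcriptomics pipelines are vastly more complex.

```python
# Sketch of a spatial atlas's core structure: each cell carries a type
# (from its expression profile) and a position, letting us tally which
# cell types occupy which tissue regions. All data here are invented.

from collections import Counter, defaultdict

cells = [
    ((0.10, 0.80), "excitatory neuron"),
    ((0.20, 0.70), "excitatory neuron"),
    ((0.15, 0.75), "astrocyte"),
    ((0.60, 0.20), "inhibitory neuron"),
    ((0.65, 0.25), "inhibitory neuron"),
]

def region(pos, grid=2):
    """Bin an (x, y) coordinate into a coarse grid cell."""
    x, y = pos
    return (int(x * grid), int(y * grid))

atlas = defaultdict(Counter)
for pos, cell_type in cells:
    atlas[region(pos)][cell_type] += 1

for reg, counts in sorted(atlas.items()):
    print(reg, dict(counts))
```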

Keyword: Brain imaging; Development of the Brain
Link ID: 28880 - Posted: 08.24.2023

By Lauren Leffer When a nematode wriggles around a petri dish, what’s going on inside a tiny roundworm’s even tinier brain? Neuroscientists now have a more detailed answer to that question than ever before. As with any experimental animal, from a mouse to a monkey, the answers may hold clues about the contents of more complex creatures’ noggin, including what resides in the neural circuitry of our own head. A new brain “atlas” and computer model, published in Cell on Monday, lays out the connections between the actions of the nematode species Caenorhabditis elegans and this model organism’s individual brain cells. With the findings, researchers can now observe a C. elegans worm feeding or moving in a particular way and infer activity patterns for many of the animal’s behaviors in its specific neurons. Through establishing those brain-behavior links in a humble roundworm, neuroscientists are one step closer to understanding how all sorts of animal brains, even potentially human ones, encode action. “I think this is really nice work,” says Andrew Leifer, a neuroscientist and physicist who studies nematode brains at Princeton University and was not involved in the new research. “One of the most exciting reasons to study how a worm brain works is because it holds the promise of being able to understand how any brain generates behavior,” he says. “What we find in the worm forms hypotheses to look for in other organisms.” Biologists have been drawn to the elegant simplicity of nematode biology for many decades. South African biologist Sydney Brenner received a Nobel Prize in Physiology or Medicine in 2002 for pioneering work that enabled C. elegans to become an experimental animal for the study of cell maturation and organ development. C. elegans was the first multicellular organism to have its entire genome and nervous system mapped. The first neural map, or “connectome,” of a C. elegans brain was published in 1986. In that research, scientists hand drew connections using colored pencils and charted each of the 302 neurons and approximately 5,000 synapses inside the one-millimeter-long animal’s transparent body. Since then a subdiscipline of neuroscience has emerged—one dedicated to plotting out the brains of increasingly complex organisms. Scientists have compiled many more nematode connectomes, as well as brain maps of a marine annelid worm, a tadpole, a maggot and an adult fruit fly. Yet these maps simply serve as a snapshot in time of a single animal. They can tell us a lot about brain structure but little about how behaviors relate to that structure. © 2023 Scientific American
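
In data terms, a connectome like the 1986 C. elegans map is a directed, weighted graph: 302 neurons as nodes and roughly 5,000 synapses as edges. A minimal sketch of such a structure follows; the neuron names are real C. elegans cells, but the specific connections and synapse counts are illustrative assumptions.

```python
# Minimal connectome-as-graph sketch: neurons are nodes, synaptic
# connections are weighted directed edges. Edge weights here are invented.

from collections import defaultdict

class Connectome:
    def __init__(self):
        self.synapses = defaultdict(dict)  # pre -> {post: n_synapses}

    def add(self, pre, post, count=1):
        self.synapses[pre][post] = self.synapses[pre].get(post, 0) + count

    def out_degree(self, neuron):
        return sum(self.synapses[neuron].values())

worm = Connectome()
worm.add("AVAL", "VA08", 3)   # command interneuron -> motor neuron
worm.add("AVAL", "DA05", 2)
worm.add("ASHL", "AVAL", 4)   # sensory neuron -> command interneuron
print("AVAL total outgoing synapses:", worm.out_degree("AVAL"))
```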

Keyword: Brain imaging; Development of the Brain
Link ID: 28879 - Posted: 08.24.2023

by Calli McMurray One of the co-directors of a now-shuttered Maryland psychology clinic implicated in 18 paper retractions has retired, Spectrum has learned. Prior to her retirement, Clara Hill was professor of psychology at the University of Maryland in College Park. Starting on 1 June, the American Psychological Association (APA) retracted 11 papers by Hill and her university colleagues Dennis Kivlighan, Jr. and Charles Gelso over issues with obtaining participant consent. The publisher plans to retract six more papers by the end of the year, according to an APA representative. On 13 August, Taylor & Francis retracted an additional paper led solely by Hill. The research was conducted at the Maryland Psychotherapy Clinic and Research Lab, where Hill, Kivlighan and Gelso were co-directors. The clinic had shut down as of 1 June. When asked about the circumstances surrounding Hill’s retirement, a university spokesperson told Spectrum in an email, “Dr. Clara Hill retired from UMD effective July 1, 2023.” After Spectrum asked again about the circumstances, a spokesperson replied, “This is all we’ll have for you on the faculty member’s retirement — thanks!” Hill worked at the university for 49 years. As of 1 August, Hill’s faculty page did not mention her retirement. By 14 August, her position had been amended to “Professor (Retired),” and a notice of her retirement had been added to the beginning of her biography. Spectrum left two voicemails on Hill’s university office phone and emailed her university address with requests for comment but did not hear back. The 11 papers retracted by the APA appeared in the Journal of Counseling Psychology, Dreaming and Psychotherapy. The additional retractions will come from the same titles, according to an APA representative. Hill conducted all 11 studies, whereas Kivlighan and Gelso conducted 10 and 6, respectively. © 2023 Simons Foundation

Keyword: Autism
Link ID: 28877 - Posted: 08.24.2023

Saima May Sidik A protein involved in wound healing can improve learning and memory in ageing mice. Platelet factor 4 (PF4) has long been known for its role in promoting blood clotting and sealing broken blood vessels. Now, researchers are wondering whether this signalling molecule could be used to treat age-related cognitive disorders such as Alzheimer’s disease. “The therapeutic possibilities are very exciting,” says geneticist and anti-ageing scientist David Sinclair at Harvard University in Boston, Massachusetts, who was not involved in the research. The study was published on 16 August in Nature. Young blood, old brains: About a decade ago, scientists discovered that blood from young mice could restore youthful properties, including learning abilities, in older mice. The idea captivated Saul Villeda, a neuroscientist at the University of California, San Francisco, and a co-author of the new study. He and his colleagues have since been trying to identify the components of blood that cause this rejuvenation. Several lines of evidence suggested that PF4 might be one of these components, including the fact that young mice have higher levels of this molecule in their blood than do older mice. Villeda and his colleagues tried injecting PF4 into aged mice without including other blood components. The researchers found that the ratios of various types of immune cell shifted to become more similar to what is typically seen in younger mice. Some immune cells also reverted to a more youthful pattern of gene expression. Although PF4 was not able to cross the blood–brain barrier, its effects on the immune system also led to changes in the brain, probably through indirect mechanisms. Old mice that received doses of PF4 showed decreases in damaging inflammation in the hippocampus — a part of the brain that’s particularly vulnerable to the effects of ageing. They also showed increases in the levels of molecules that promote synaptic plasticity (the capacity to alter the strength of connections between nerve cells). © 2023 Springer Nature Limited

Keyword: Development of the Brain
Link ID: 28874 - Posted: 08.19.2023

By Alla Katsnelson Our understanding of animal minds is undergoing a remarkable transformation. Just three decades ago, the idea that a broad array of creatures have individual personalities was highly suspect in the eyes of serious animal scientists — as were such seemingly fanciful notions as fish feeling pain, bees appreciating playtime and cockatoos having culture. Today, though, scientists are rethinking the very definition of what it means to be sentient and seeing capacity for complex cognition and subjective experience in a great variety of creatures — even if their inner worlds differ greatly from our own. Such discoveries are thrilling, but they probably wouldn’t have surprised Charles Henry Turner, who died a century ago, in 1923. An American zoologist and comparative psychologist, he was one of the first scientists to systematically probe complex cognition in animals considered least likely to possess it. Turner primarily studied arthropods such as spiders and bees, closely observing them and setting up trailblazing experiments that hinted at cognitive abilities more complex than most scientists at the time suspected. Turner also explored differences in how individuals within a species behaved — a precursor of research today on what some scientists refer to as personality. Most of Turner’s contemporaries believed that “lowly” critters such as insects and spiders were tiny automatons, preprogrammed to perform well-defined functions. “Turner was one of the first, and you might say should be given the lion’s share of credit, for changing that perception,” says Charles Abramson, a comparative psychologist at Oklahoma State University in Stillwater who has done extensive biographical research on Turner and has been petitioning the US Postal Service for years to issue a stamp commemorating him. Turner also challenged the views that animals lacked the capacity for intelligent problem-solving and that they behaved based on instinct or, at best, learned associations, and that individual differences were just noisy data. But just as the scientific establishment of the time lacked the imagination to believe that animals other than human beings can have complex intelligence and subjectivity of experience, it also lacked the collective imagination to envision Turner, a Black scientist, as an equal among them. The hundredth anniversary of Turner’s death offers an opportunity to consider what we may have missed out on by their oversight. © 2023 Annual Reviews

Keyword: Learning & Memory; Evolution
Link ID: 28869 - Posted: 08.09.2023

By Yasemin Saplakoglu On warm summer nights, green lacewings flutter around bright lanterns in backyards and at campsites. The insects, with their veil-like wings, are easily distracted from their natural preoccupation with sipping on flower nectar, avoiding predatory bats and reproducing. Small clutches of the eggs they lay hang from long stalks on the underside of leaves and sway like fairy lights in the wind. The dangling ensembles of eggs are beautiful but also practical: They keep the hatching larvae from immediately eating their unhatched siblings. With sickle-like jaws that pierce their prey and suck them dry, lacewing larvae are “vicious,” said James Truman, a professor emeritus of development, cell and molecular biology at the University of Washington. “It’s like ‘Beauty and the Beast’ in one animal.” This Jekyll-and-Hyde dichotomy is made possible by metamorphosis, the phenomenon best known for transforming caterpillars into butterflies. In its most extreme version, complete metamorphosis, the juvenile and adult forms look and act like totally different species. Metamorphosis is not an exception in the animal kingdom; it’s almost a rule. More than 80% of the known animal species today, mainly insects, amphibians and marine invertebrates, undergo some form of metamorphosis or have complex, multistage life cycles. The process of metamorphosis presents many mysteries, but some of the most deeply puzzling ones center on the nervous system. At the center of this phenomenon is the brain, which must code for not one but multiple different identities. After all, the life of a flying, mate-seeking insect is very different from the life of a hungry caterpillar. For the past half-century, researchers have probed the question of how a network of neurons that encodes one identity — that of a hungry caterpillar or a murderous lacewing larva — shifts to encode an adult identity that encompasses a completely different set of behaviors and needs. Truman and his team have now learned how much metamorphosis reshuffles parts of the brain. In a recent study published in the journal eLife, they traced dozens of neurons in the brains of fruit flies going through metamorphosis. They found that, unlike the tormented protagonist of Franz Kafka’s short story “The Metamorphosis,” who awakes one day as a monstrous insect, adult insects likely can’t remember much of their larval life. Although many of the larval neurons in the study endured, the part of the insect brain that Truman’s group examined was dramatically rewired. That overhaul of neural connections mirrored a similarly dramatic shift in the behavior of the insects as they changed from crawling, hungry larvae to flying, mate-seeking adults. All Rights Reserved © 2023

Keyword: Learning & Memory
Link ID: 28860 - Posted: 07.27.2023

by Giorgia Guglielmi Mice with a mutation that boosts the activity of the autism-linked protein UBE3A show an array of behaviors reminiscent of the condition, a new study finds. The behaviors differ depending on whether the animals inherit the mutation from their mother or their father, the work also reveals. The results add to mounting evidence that hyperactive UBE3A leads to autism. Duplications of the chromosomal region that includes UBE3A have been associated with autism, whereas deletions and mutations that destroy the gene’s function are known to cause Angelman syndrome, which is characterized by developmental delay, seizures, lack of speech, a cheerful demeanor and, often, autism. “UBE3A is on a lot of clinicians’ radar because it is well known to be causative for Angelman syndrome when mutated or deleted,” says lead investigator Mark Zylka, professor of cell biology and physiology at the University of North Carolina at Chapel Hill. “What our study shows is that just because you have a mutation in UBE3A, it doesn’t mean that it’s going to be Angelman syndrome.” In the cell, UBE3A is involved in the degradation of proteins, and “gain-of-function” mutations — which send the UBE3A protein into overdrive — result in enhanced degradation of its targets, including UBE3A itself. Studying the effects of these mutations could provide insight into how they affect brain development and suggest targets for therapies, says study investigator Jason Yi, assistant professor of neuroscience at Washington University in St. Louis, Missouri. Gain-of-function mutations in UBE3A can disrupt early brain development and may contribute to neurodevelopmental conditions that are distinct from Angelman syndrome, Yi and Zylka have shown in previous studies. One of the mutations they analyzed had been found in an autistic child, so the team used CRISPR to create mice with this mutation. © 2023 Simons Foundation

Keyword: Autism
Link ID: 28857 - Posted: 07.27.2023