Most Recent Links
Lynne Peeples: Why thinking hard makes us feel tired. Near the end of his first series of chess matches against IBM’s Deep Blue computer in 1996, the Russian grandmaster Garry Kasparov lamented what he saw as an unfair disadvantage: “I’m really tired. These games took a lot of energy. But if I play a normal human match, my opponent would also be exhausted.” Whereas machine intelligence can keep running as long as it has a power supply, a human brain will become fatigued — and you don’t have to be a chess grandmaster to understand the feeling. Anyone can end up drained after a long day at work, at school or juggling the countless decisions of daily life. This mental exhaustion can sap motivation, dull focus and erode judgement. It can raise the odds of careless mistakes. Especially when combined with sleep loss or circadian disruption, cognitive fatigue can also contribute to deadly medical errors and road traffic accidents. It was partly Kasparov’s weary comments that inspired Mathias Pessiglione, a cognitive neuroscientist and research director at the Paris Brain Institute, to study the tired brain. He wanted to know: “Why is this cognitive system prone to fatigue?” Researchers and clinicians have long struggled to define, measure and treat cognitive fatigue — relying mostly on self-reports of how tired someone says they feel. Now, however, scientists from across disciplines are enlisting innovative experimental approaches and biological markers to probe the metabolic roots and consequences of cognitive fatigue. The efforts are getting a boost in attention and funding in large part because of long COVID, which afflicts roughly 6 in every 100 people after infection with the coronavirus SARS-CoV-2, says Vikram Chib, a biomedical engineer at Johns Hopkins University in Baltimore, Maryland. “The primary symptom of long COVID is fatigue,” says Chib. “I think that has opened a lot of people’s eyes.” © 2025 Springer Nature Limited
Keyword: Neuroimmunology; Attention
Link ID: 30049 - Posted: 12.13.2025
By Sara Talpos It’s been more than a decade since scientists first started publishing papers on neural organoids, the small clusters of cells grown in labs and designed to mimic various parts of the human brain. Since then, organoids have been used to study everything from bipolar disorder and Alzheimer’s disease, to tumors and parasitic infections. Because these new tools have the potential to reduce the use of animals in research — a goal of the current Trump administration — the field’s future may be more financially secure than other areas of scientific research. In September, for example, the federal government announced an $87 million investment into organoid research broadly. Matthew Owen brings a unique perspective to this emerging field. As a philosopher of mind, he focuses on trying to understand both what the mind is and how it relates to the body and the brain. He draws on the work of historical philosophers and applies some of their ideas to modern-day science. In 2020, as a visiting scholar in a neuroscience lab at McGill University, he was introduced to researchers working with organoids. Owen, who also does research in bioethics, wanted to help them address a perhaps unsettling question: Could these miniature cell clusters ever develop consciousness? Some experts believe that organoid consciousness is not likely to happen anytime in the near future, if at all. Still, certain experiments are prompting the question. In 2022, for example, researchers, including Brett Kagan of the Australian start-up Cortical Labs, published a paper explaining how they had taught their lab-grown brain cells to play a ping-pong-like video game. (Because the cells were placed in a single layer, the structures were not technically organoids, though they are expected to have similar capabilities.) In the process, the authors wrote, the tiny cell clusters displayed “sentience.” Undark recently spoke with Owen about this particular experiment and about his own writing on organoids.
Keyword: Consciousness; Development of the Brain
Link ID: 30048 - Posted: 12.13.2025
By Christina Caron When Marjorie Isaacson first started taking medication for depression in her late 20s, she considered it lifesaving. At the time, she had been dealing with a rocky marriage and struggling to eat. The drug, she found, helped her gain equilibrium. “I was really grateful just to be able to function,” she said. But recently, Ms. Isaacson, 69, has been considering whether she wants to stay on antidepressants for the rest of her life. Specifically, Ms. Isaacson wonders about the long-term effects of her medication, a serotonin-norepinephrine reuptake inhibitor that is known to raise blood pressure. And she feels unsettled by the emerging backlash against psychiatric drugs that has condemned their side effects and difficult withdrawal symptoms. “As the years have passed, things have changed from ‘Take it and see how it goes, no need now to be concerned’ to ‘Well, it’s turning out things might be kinda complicated,’” she said. “That is worrisome.” Antidepressants are among the most prescribed and easily accessible drugs in the United States, and many people take them for years. But even though modern-day antidepressants have been around for decades — the Food and Drug Administration approved Prozac for depression treatment in 1987 — there is very little information about long-term use. The F.D.A. approved the drugs based on trials that lasted, at most, a few months, and randomized controlled trials of antidepressants have typically spanned only two years or less. Current clinical guidelines do not specify the optimal amount of time they should be taken for. The lack of data can make it hard for people to know when — or whether — to quit. So we asked psychiatrists: How long should someone stay on antidepressants? © 2025 The New York Times Company
Keyword: Depression
Link ID: 30047 - Posted: 12.13.2025
By Kelly Servick In the past 20 years, mice with glowing cables sprouting from their heads have become a staple of neuroscience. They reflect the rise of optogenetics, in which neurons are engineered to contain light-sensitive proteins called opsins, allowing pulses of light to turn them on or off. The method has powered thousands of basic experiments into the brain circuits that drive behavior and underlie disease. As this research tool matured, hopes arose for using it as a treatment, too. Compared with the electrical or magnetic brain stimulation approaches already in use, optogenetics offers a way to more precisely target and manipulate the exact cell types underlying brain disorders. So far only one optogenetic application—addressing certain kinds of vision loss by introducing opsins into cells in the eye—has made it into human trials. But its promising early results, along with the discovery of more sensitive and sophisticated opsins, are inspiring researchers to look beyond the eye, developing treatments that would act on peripheral nerves or deep in the brain. Initial tests of these strategies in animal models of epilepsy, amyotrophic lateral sclerosis (ALS), and other neurological disorders have been encouraging, researchers reported last month at the annual meeting of the Society for Neuroscience (SfN) in San Diego. One company is hoping to launch a human trial for an optogenetic pain treatment by 2027. “We definitely don’t want to oversell the idea of using optogenetics [on human brains] any time soon, but we also are firmly convinced that this is now the right moment to be thinking about this seriously,” University of Geneva neurologist and neuroscientist Christian Lüscher told an SfN session he chaired, in which participants presented a newly published road map for bringing optogenetics to the clinic. Still, the presenters acknowledged major remaining challenges, including possible risks of inserting genes for opsins—many of which are derived from algae or other microbes—into a person’s nerves or brain cells. © 2025 American Association for the Advancement of Science.
Keyword: Pain & Touch; Epilepsy
Link ID: 30046 - Posted: 12.13.2025
By Jan Hoffman To treat their pain, anxiety and sleep problems, millions of Americans turn to cannabis, which is now legal in 40 states for medical use. But a new review of 15 years of research concludes that the evidence of its benefits is often weak or inconclusive, and that nearly 30 percent of medical cannabis patients meet criteria for cannabis use disorder. “The evidence does not support the use of cannabis or cannabinoids at this point for most of the indications that folks are using it for,” said Dr. Michael Hsu, an addiction psychiatrist and clinical instructor at the University of California, Los Angeles, and the lead author of the review, which was published last month in the medical journal JAMA. (Cannabis refers to the entire plant; cannabinoids are its many compounds.) The analysis arrives amid a surging acceptance and normalization of cannabis products, a $32 billion industry. For the review, addiction experts at academic medical centers across the country studied more than 2,500 clinical trials, guidelines and surveys conducted mostly in the United States and Canada. They found a wide gulf between the health purposes for which the public seeks out cannabis and what gold-standard science shows about its effectiveness. The researchers distinguished between medical cannabis, sold at dispensaries, and pharmaceutical-grade cannabinoids — the handful of medicines approved by the Food and Drug Administration with formulations containing either low-grade THC, a psychoactive compound, or CBD, a nonintoxicating compound. Those medicines, including Marinol, Syndros and Cesamet, are available by prescription at conventional pharmacies and have had good results in easing chemotherapy-related nausea, stimulating the appetite of patients with debilitating illnesses like H.I.V./AIDS, and easing some pediatric seizure disorders. © 2025 The New York Times Company
Keyword: Drug Abuse; Pain & Touch
Link ID: 30045 - Posted: 12.13.2025
By Claudia López Lloreda A new commentary calls into question a 2024 paper that described a universal pattern of cortical brain oscillations. But that team has provided a more expansive analysis in response and stands by its original conclusions. Both articles were published today in “Matters Arising” in Nature Neuroscience. Ultimately, the back-and-forth suggests that a frequency “motif” may exist, but it may not be as general as the original study proposed, says Aitor Morales-Gregorio, a postdoctoral researcher at Charles University, who was not involved with any of the work. “The [2024] conclusions are way too optimistic about how general and how universal this principle might be.” The 2024 study identified a brain-wave motif in 14 cortical areas in macaques: Alpha and beta rhythms predominated in the deeper layers, whereas gamma bands appeared in the more superficial layers. Because this motif also showed up in marmosets and humans, the researchers speculated that it may be a universal mechanism for cortical computation in primates. “Results typically come with a level of variability, of noise, of uncertainty,” says 2024 study investigator Diego Mendoza-Halliday, assistant professor of neuroscience at the University of Pittsburgh. But this pattern “was just there the whole time, at all times, in many, many of the recordings.” The team leveraged the findings to create an algorithm that detects Layer 4 of the cortex. But the pattern is “by no means universal,” according to the new commentary, which found the motif in about 60 percent of the recordings in an independent monkey dataset. Further, the algorithm trained to identify Layer 4 of the cortex is unreliable, the commentary shows. © 2025 Simons Foundation
Keyword: Attention
Link ID: 30044 - Posted: 12.13.2025
By John Pavlus Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine. That’s because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process. But what if our neurobiological reality includes a system that behaves something like an LLM? Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko began studying how language works in the adult human brain. The specialized system she has described, which she calls “the language network,” maps the correspondences between words and their meanings. Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains. “You can think of the language network as a set of pointers,” Fedorenko said. “It’s like a map, and it tells you where in the brain you can find different kinds of meaning. It’s basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries.” Fedorenko has been gathering biological evidence of this language network for the past 15 years in her lab at the Massachusetts Institute of Technology. Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess). Nor is the human language network particularly large: If all of its tissue were clumped together, it would be about the size of a strawberry. But when it is damaged, the effect is profound. An injured language network can result in forms of aphasia in which sophisticated cognition remains intact but trapped within a brain unable to express it or distinguish incoming words from others. © 2025 Simons Foundation
Keyword: Language
Link ID: 30043 - Posted: 12.06.2025
By Siddhant Pusdekar A single dose of psilocybin leads to widespread network-specific changes to cortical circuitry in mice, according to a new study published today in Cell. The results help explain how psilocybin can bring about lasting changes in behavior, and they pinpoint “the neurons that are most affected,” says Andrea Gomez, assistant professor of molecular and cellular biology at the University of California, Berkeley, who was not involved in the study. Specifically, the psychedelic strengthens cortical inputs from sensory brain areas and weakens inputs into cortico-cortical recurrent loops. Overall, these network changes suggest that psychedelics reroute information in a way that enhances responses to the outside world and reduces rumination, says study investigator Alex Kwan, professor of biomedical engineering at Cornell University. “This study provides some more mechanistic insight for why the drug may be a good antidepressant.” And the rewiring itself is not static, Kwan adds: “It can be influenced by manipulating neural activity” during psychedelic treatment. With this locus of psychedelic-induced changes identified, researchers can unpack how these neuronal ensembles coordinate “to create particular percepts or particular cognitions,” Gomez says. Kwan’s team focused on the mouse dorsal medial prefrontal cortex (dmPFC), which includes the anterior cingulate cortex—an important hub for the serotonin receptors that psilocybin targets. One dose of psilocybin increases dendritic spine growth in the medial prefrontal cortex of mice, an effect that lasts for at least a month, according to a 2021 study by Kwan’s team. And the treatment reduces the animals’ learned stress-related behaviors, but only if pyramidal tract neurons—one of the major types of excitatory neurons in the dmPFC—are active, Kwan’s group reported in April. © 2025 Simons Foundation
Keyword: Drug Abuse; Depression
Link ID: 30042 - Posted: 12.06.2025
By Emily Anthes In just a few short years, new diabetes and weight loss drugs like Ozempic, Wegovy and Mounjaro have taken the world by storm. In the United States, one in eight adults say they’ve tried one of these medications, which are known as GLP-1 drugs, and that number seems sure to rise as prices fall and new oral formulations hit the market. Fluffy and Fido could be next. On Tuesday, Okava Pharmaceuticals, a biopharmaceutical company based in San Francisco, is set to announce that it has officially begun a pilot study of a GLP-1 drug for cats with obesity. The company is testing a novel approach: Instead of receiving weekly injections of the drugs, as has been common in human patients, the cats will get small, injectable implants, slightly larger than a microchip, that will slowly release the drug for as long as six months. “You insert that capsule under the skin, and then you come back six months later, and the cat has lost the weight,” said Dr. Chen Gilor, a veterinarian at the University of Florida, who is leading the study. “It’s like magic.” Results are expected next summer. If they are promising, they could represent the next frontier for a class of drugs that has upended human medicine, and a potentially transformative treatment option for millions of pets. Some veterinarians have already begun administering human GLP-1 drugs, off label, to diabetic cats, and Okava is not the only company developing a product specifically for companion animals. “I think this is going to be the next big thing,” said Dr. Ernie Ward, a veterinarian and the founder of the Association for Pet Obesity Prevention. Veterinarians, he added, are “on the precipice of a complete new era in obesity medicine.” © 2025 The New York Times Company
Keyword: Obesity
Link ID: 30041 - Posted: 12.06.2025
By Emily Cataneo Imagine having a dream that you are trapped in a room with five rabid tigers. No matter how hard you try, you can’t escape. The tigers are screeching and thrashing and you’re terrified. Now imagine repurposing this dream. Imagine it from the perspective of one of the tigers. Now, you realize that the animals are panicking only because they want to escape. You open the door, inviting them to freedom, and they lie down, docile. Suddenly, the dream has become peaceful and calm, not terrifying and chaotic. BOOK REVIEW — “Nightmare Obscura: A Dream Engineer’s Guide Through the Sleeping Mind,” by Michelle Carr (Henry Holt and Co., 272 pages). Freud might have had a field day with this dream, but thanks in part to psychoanalysis’ fall from grace over the last century, medical professionals no longer put much stock in our minds’ nighttime wanderings as markers of either physical or mental health. That’s what dream scientist Michelle Carr aims to change. Carr, who serves as director of the Dream Engineering Laboratory in the Center for Advanced Research in Sleep Medicine in Montreal, has spent two decades gathering data on people like the tiger dreamer: She’s spent countless nights in labs watching people sleep, probing why we dream, why we have bad dreams, and how studying and even manipulating dreams can improve mental and physical health. In “Nightmare Obscura: A Dream Engineer’s Guide Through the Sleeping Mind,” Carr makes a passionate case for why the answers to these questions matter, deeply, especially for sufferers of trauma and suicidal ideation. What emerges is a passionate case for why dreams and nightmares are not just “random electrophysiological noise produced by the brain during sleep,” as scientists believed for many years, but rather a nightly exercise in “revising the shape of our autobiography.” In other words, Carr argues, our dreamscapes are essential pillars of who we are.
Keyword: Sleep
Link ID: 30040 - Posted: 12.06.2025
Sara Protasi I love napping. I love napping in the summer, when rhythms are more relaxed and the guilt of taking a break less intense (if only slightly). But I also love napping in the winter, when it’s cold outside, and burying myself under a warm blanket makes me feel like I’m hibernating. No matter the season, when lying in bed, I luxuriate in the feeling of my body relaxing, waiting for the moment when odd images start forming somewhere in that space between my closed lids and my corneas – or, most likely, somewhere in my mind. I love drifting into unconsciousness without worrying about the next item on my to-do list. I’m not a sound sleeper or someone who falls asleep easily at night, but napping comes easily and sweetly. I treasure the days in which I can nap. And I treasure even more the nights in which I sleep long and well. Yet our culture prizes efficiency and productivity, often seeing sleep as a waste of time. ‘Tech bros’ boast about regularly working more than 70 hours a week, and aim to reduce their sleep time as much as possible. Elon Musk suggested even more intense work schedules for government workers during his time at the US Department of Government Efficiency (DOGE). His approach resonated with many adherents of the Silicon Valley grind culture, which has sought to ‘hack’ sleep for a long time. As one CEO of a cost-cutting firm told the news site Business Insider this year: ‘While a 120-hour workweek isn’t a practical or sustainable solution for most, the principle behind it resonates. Companies that prioritise efficiency, automation and proactive cost management will always outperform those weighed down by bureaucracy.’ This approach is mirrored in a seemingly contradictory trend in the tech industry: a number of years ago, tech companies such as Apple and Google started introducing nap time for their workers. However, this approach was less a gesture of care than a response to exhaustion and sleep deprivation induced by their grind mentality, providing ‘recharging time’ to boost creativity and sustain the long hours required for work. Workers in less high-paying careers, who need to work multiple jobs, rarely have time to nap, and often have to resort to drugs such as modafinil, a stimulant prescribed for narcolepsy and used, often illegally, by students cramming for exams. This substance has gained the attention of the military. The US defence research agency DARPA has funded pharmaceutical companies and researchers to reduce sleep deprivation, with the long-term ambitious goal of operating without any need for sleep in the field. And the US isn’t alone: militaries worldwide are exploring how to keep their soldiers awake and functioning when sleep is in short supply. © Aeon Media Group Ltd. 2012-2025.
Keyword: Sleep
Link ID: 30039 - Posted: 12.06.2025
Alison Abbott For decades, neuroscientists focused almost exclusively on only half of the cells in the brain. Neurons were the main players, they thought, and everything else was made up of uninteresting support systems. By the 2010s, memory researcher Inbal Goshen was beginning to question that assumption. She was inspired by innovative molecular tools that would allow her to investigate the contributions of another, more mysterious group of cells called astrocytes. What she discovered about their role in learning and memory excited her even more. At the beginning, she felt like an outsider, especially at conferences. She imagined colleagues thinking, “Oh, that’s the weird one who works on astrocytes,” says Goshen, whose laboratory is at the Hebrew University of Jerusalem. A lot of people were sceptical, she says. But not any more. A rush of studies from labs in many subfields are revealing just how important these cells are in shaping our behaviour, mood and memory. Long thought of as support cells, astrocytes are emerging as key players in health and disease. “Neurons and neural circuits are the main computing units of the brain, but it’s now clear just how much astrocytes shape that computation,” says neurobiologist Nicola Allen at the Salk Institute for Biological Studies in La Jolla, California, who has spent her career researching astrocytes and other non-neuronal cells, collectively called glial cells. “Glial meetings are now consistently oversubscribed.” As far back as the nineteenth century, scientists could see with their simple microscopes that mammalian brains included two major types of cell — neurons and glia — in roughly equal numbers. © 2025 Springer Nature Limited
Keyword: Glia
Link ID: 30038 - Posted: 12.03.2025
By Jennie Erin Smith More than a decade ago, when researchers discovered a ghostly network of microscopic channels that push fluid through the brain, they began to wonder whether the brain’s plumbing, as they sometimes refer to it, might be implicated in neurodegenerative diseases such as Alzheimer’s. Now, they are testing a host of ways to improve it. At the Society for Neuroscience (SfN) meeting last month in San Diego, several teams reported early promise for drugs and other measures that improve fluid flow, showing they can remove toxic proteins from animal or human brains and reverse symptoms in mouse models of neurological disease. Plastic surgeons in China, meanwhile, have gone further, conducting experimental surgeries that they say help flush out disease-related proteins in people with Alzheimer’s. The trials have generated excitement but also concern over their bold claims of success. A group of academic surgeons in the United States is planning what they say will be a more rigorous clinical trial, also in Alzheimer’s patients, that could begin recruiting as early as next year. The surgical approach “sounds unbelievable,” says neuroscientist Jeffrey Iliff of the University of Washington. “But I’m not going to say I know it can’t work. Remember, 13 years ago we didn’t know any of this existed.” In 2012, Iliff, with pioneering Danish neuroscientist Maiken Nedergaard and colleagues, described a previously unrecognized set of fluid channels in the brain that they dubbed the glymphatic system. Three years later, other groups revealed a second, related system of fluid transport: a matrix of tiny lymphatic vessels in the meninges, or membranes covering the brain. © 2025 American Association for the Advancement of Science.
Keyword: Alzheimers
Link ID: 30037 - Posted: 12.03.2025
Elie Dolgin Last April, neuroscientist Sue Grigson received an e-mail from a man detailing his years-long struggle to kick addiction — first to opioids, and then to the very medication meant to help him quit. The man had stumbled on research by Grigson suggesting that certain anti-obesity medications could help to reduce rats’ addiction to drugs such as heroin and fentanyl. He decided to try quitting again, this time while taking semaglutide, the blockbuster GLP-1 drug better known as Ozempic. “That’s when he wrote to me,” says Grigson, who works at Pennsylvania State University College of Medicine in Hershey. “He said that he was drug- and alcohol-free for the first time in his adult life.” Stories like this have been spreading fast in the past few years, through online forums, weight-loss clinics and news headlines. They describe people taking diabetes and weight-loss drugs such as semaglutide (also marketed as Wegovy) and tirzepatide (sold as Mounjaro or Zepbound) who find themselves suddenly able to shake long-standing addictions to cigarettes, alcohol and other drugs. And now, clinical data are starting to back them up. Earlier this year, a team led by Christian Hendershot, a psychologist now at the University of Southern California in Los Angeles, reported in a landmark randomized trial that weekly injections of semaglutide cut alcohol consumption — a key demonstration that GLP-1 drugs can alter addictive behaviour in people with a substance-use disorder. More than a dozen randomized clinical studies testing GLP-1 drugs for addiction are now under way worldwide, with some results expected in the next few months. © 2025 Springer Nature Limited
Keyword: Drug Abuse; Obesity
Link ID: 30036 - Posted: 12.03.2025
Jonathan Lambert For centuries, the nature of a fever — and whether it's good or bad — has been hotly contested. In ancient Greece, the physician Hippocrates thought that fever had useful qualities, and could cook an illness out of a patient. Later on, around the 18th century, many physicians regarded fever as a distinct illness, one that could actually cook the patient, and so should be treated. These days, researchers understand that fever is part of the immune system's response to a pathogen, one that's shared by many animal species. And while there's accumulating evidence that fevers can help kick an infection, precisely how they can help remains mysterious. "There's a cultural knowledge that there's this relationship between temperature and viruses, but at a molecular level, we're quite unsure how temperature might be impacting viruses," says Sam Wilson, a microbiologist at the University of Cambridge. There are two main ideas, he says. The heat of a fever itself could be harming the virus, akin to Hippocrates' hypotheses. Alternatively, the heat is a means to an end, either stoking our immune system to work better, or simply a regrettable, but unavoidable byproduct of fighting off an infection. "The fact that there weren't definitive answers to these questions piqued my interest," says Wilson. That interest led to a study, published Thursday in Science, that suggests — at least in mice — that elevated temperature alone is enough to fight off some viruses. © 2025 npr
Keyword: Neuroimmunology
Link ID: 30035 - Posted: 12.03.2025
Helen Pearson In some parts of the world, record numbers of people are being diagnosed with attention deficit hyperactivity disorder (ADHD). In the United States, for example, government researchers last year reported that more than 11% of children had received an ADHD diagnosis at some point in their lives — a sharp increase from 2003, when around 8% of children had (see ‘ADHD among US boys and girls’). But now, top US health officials argue that diagnoses have spiralled out of control. In May, the Make America Healthy Again Commission — led by US health secretary Robert F. Kennedy Jr — said ADHD was part of a “crisis of overdiagnosis and overtreatment” and suggested that ADHD medications did not help children in the long term. One thing that’s clear is that several factors — including improved detection and greater awareness of ADHD — are causing people with symptoms to receive a diagnosis and treatment, whereas they wouldn’t have years earlier. Clinicians say this is especially true for women and girls, whose pattern of symptoms was often missed in the past. Although some specialists are concerned about the risks of overdiagnosis, many are more worried that too many people go undiagnosed and untreated. At the same time, the rise in awareness and diagnoses of ADHD has fuelled a public debate about how it should be viewed and how best to provide support, including when medication is required. The emergence of the neurodiversity movement is challenging the view of ADHD as a disorder that should be ‘treated’, and instead proposes that it’s a difference that should be better understood and supported — with more focus on adapting schools and workplaces, for instance. “I do have a big problem with ‘disorder’,” says Jeff Karp, a biomedical engineer at Brigham and Women’s Hospital in Boston, Massachusetts, who has ADHD. “It’s the school system that’s disordered. It’s not the kids.” But many clinicians and people with ADHD argue that it is associated with difficulties — ranging from academic struggles to an increased chance of injuries and substance misuse — that justify its label as a medical condition, and say that medication is an important and effective part of therapy for many people. © 2025 Springer Nature Limited
Keyword: ADHD
Link ID: 30034 - Posted: 11.29.2025
By Pam Belluck A recently recognized form of dementia is changing the understanding of cognitive decline, improving the ability to diagnose patients and underscoring the need for a wider array of treatments. Patients are increasingly being diagnosed with the condition, known as LATE, and guidelines advising doctors how to identify it were published this year. LATE is now estimated to affect about a third of people 85 and older and 10 percent of those 65 and older, according to those guidelines. Some patients who have been told they have Alzheimer’s may actually have LATE, dementia experts say. “In about one out of every five people that come into our clinic, what previously was thought to maybe be Alzheimer’s disease actually appears to be LATE,” said Dr. Greg Jicha, a neurologist and an associate director of the University of Kentucky’s Sanders-Brown Center on Aging. “It can look like Alzheimer’s clinically — they have a memory problem,” Dr. Jicha said. “It looks like a duck, walks like a duck, but then it doesn’t quack, it snorts instead.” On its own, LATE, shorthand for Limbic-predominant age-related TDP-43 encephalopathy, is usually less severe than Alzheimer’s and unfolds more slowly, said Dr. Pete Nelson, an associate director of the Sanders-Brown Center, who helped galvanize efforts to identify the disorder. That can be reassuring to patients and their families. But there is no specific treatment for LATE. Also, many older people have more than one type of dementia pathology, and when LATE occurs in conjunction with Alzheimer’s, it exacerbates symptoms and speeds decline, he said. © 2025 The New York Times Company
Keyword: Alzheimers
Link ID: 30033 - Posted: 11.29.2025
By Catherine Offord Researchers have tested a proof-of-concept device that enabled people who had lost their normal sense of smell to detect the presence of certain odors. Rather than exploiting the smell pathway, in which nasal cells send signals along olfactory nerves to the brain, the technology makes use of a less known nerve highway in the nose that transmits other sensations, including the kick of wasabi and the coolness of mint. “It’s an interesting study,” says Zara Patel, a rhinologist at Stanford Medicine who was not involved in the work, published today in Science Advances. “This is not recovering a sense of smell, this is activating a different system.” But she and others caution it remains to be seen how beneficial this kind of technology could be for people with smell loss, or anosmia. Humans have about 400 different olfactory receptors that are thought to enable the nose to detect billions of odors. But people can lose some or all of their sense of smell for a variety of reasons, including head trauma and viral infections such as COVID-19. People with long-term anosmia describe a significantly reduced quality of life and are at higher risk of mental health disorders, notes Halina Stanley, a research scientist at CNRS, the French national research agency, and co-author on the new paper. “The idea that if you lose your sense of smell, this isn’t as bad as losing another sense, I think is actually quite wrong.” Research by another team in 2018 found that electrodes placed in the sinuses near the olfactory bulb, the brain region that processes odor signals, could stimulate perception of smell, with people reporting onion or fruity scents, for example. Scientists are now working to develop implants that could more directly and specifically stimulate the olfactory bulb—akin to cochlear implants, which replace lost hearing by detecting sounds and stimulating the auditory nerve. However, such technology would be complex and invasive, and, at present, is a long way from becoming a therapy. © 2025 American Association for the Advancement of Science.
Keyword: Chemical Senses (Smell & Taste); Robotics
Link ID: 30032 - Posted: 11.29.2025
By Trip Gabriel Paul Ekman, a psychologist who linked thousands of facial expressions to the emotions they often subconsciously conveyed, and who used his research to advise F.B.I. interrogators and screeners for the Transportation Security Administration as well as Hollywood animators, died on Nov. 17 at his home in San Francisco. He was 91. His daughter, Eve Ekman, confirmed the death. Dr. Ekman sought to add scientific exactitude to the human impulse to interpret how others feel through their facial expressions. He recorded 18 types of smiles, for example, distinguishing between a forced smile and a spontaneous one; a genuine smile, he discovered, crinkles the orbicularis oculi muscle — that is, it creates crow’s feet around the eyes. Sometimes described as the world’s most famous face reader, Dr. Ekman was ranked No. 15 in 2015 by the American Psychological Association in its list of 200 eminent psychologists of the modern era. He was influential in reshaping the way facial expressions were understood — as the product of evolution rather than environment — and his findings crossed over to popular culture. The Fox TV drama “Lie to Me,” which ran for three seasons starting in 2009, featured a psychologist modeled on Dr. Ekman (played by Tim Roth) who assists criminal investigations by decoding the hidden meanings of facial expressions and body language. The show was developed by the producer Brian Grazer, who was inspired by a lengthy profile of Dr. Ekman by Malcolm Gladwell in The New Yorker in 2002. “The idea that you could tell a liar by some scientific test and know what they’re feeling just by looking at them was staggering to me,” the show’s writer, Samuel Baum, told The New York Times in 2009. As a young research psychologist in the late 1960s, Dr. Ekman changed the scientific consensus on facial expressions. In the postwar era, the conventional wisdom of eminent anthropologists like Margaret Mead was that human facial expressions were learned and that they varied across cultures. © 2025 The New York Times Company
Keyword: Emotions; Evolution
Link ID: 30031 - Posted: 11.29.2025
By Carl Zimmer Last year, Ardem Patapoutian got a tattoo. An artist drew a tangled ribbon on his right arm, the diagram of a protein called Piezo. Dr. Patapoutian, a neuroscientist at Scripps Research in San Diego, discovered Piezo in 2010, and in 2021 he won a Nobel Prize for the work. Three years later, he decided to memorialize the protein in ink. Piezo, Dr. Patapoutian had found, allows nerve endings in the skin to sense pressure, helping to create the sense of touch. “It was surreal to feel the needle as it was etching the Piezo protein that I was using to feel it,” he recalled. Dr. Patapoutian is no longer studying how Piezo informs us about the outside world. Instead, he has turned inward, to examine the flow of signals that travel from within the body to the brain. His research is part of a major new effort to map this sixth, internal sense, which is known as interoception. Scientists are discovering that interoception supplies the brain with a remarkably rich picture of what is happening throughout the body — a picture that is mostly hidden from our consciousness. This inner sense shapes our emotions, our behavior, our decisions, and even the way we feel sick with a cold. And a growing amount of research suggests that many psychiatric conditions, ranging from anxiety disorders to depression, might be caused in part by errors in our perception of our internal environment. Someday it may become possible to treat those conditions by retuning a person’s internal sense. But first, Dr. Patapoutian said, scientists need a firm understanding of how interoception works. “We’ve taken our body for granted,” he said. Everyone has a basic awareness of interoception, whether it’s a feeling of your heart racing, your bladder filling or a flock of butterflies fluttering in your stomach. And neuroscientists have long recognized interoception as one function of the nervous system. Dr. Charles Sherrington, a Nobel Prize-winning neuroscientist, first proposed the existence of “intero-ceptors” in 1906. © 2025 The New York Times Company