Most Recent Links
David Adam

In a town on the shores of Lake Geneva sit clumps of living human brain cells for hire. These blobs, about the size of a grain of sand, can receive electrical signals and respond to them — much as computers do. Research teams from around the world can send the blobs tasks, in the hope that they will process the information and send a signal back.

Welcome to the world of wetware, or biocomputers. In a handful of academic laboratories and companies, researchers are growing human neurons and trying to turn them into functional systems equivalent to biological transistors. These networks of neurons, they argue, could one day offer the power of a supercomputer without the outsized power consumption.

The results so far are limited. But keen scientists are already buying or borrowing online access to these brain-cell processors — or even investing tens of thousands of dollars to secure their own models. Some want to use these biocomputers as straightforward replacements for ordinary computers, whereas others want to use them to study how brains work.

“Trying to understand biological intelligence is a very interesting scientific problem,” says Benjamin Ward-Cherrier, a robotics researcher at the University of Bristol, UK, who rents time on the Swiss brain blobs. “And looking at it from the bottom up — with simple small versions of our brain and building those up — I think is a better way of doing it than top down.”

Biocomputing advocates claim that these systems could one day rival the capability of artificial intelligence and the potential of quantum computers. Other researchers who work with human neurons are more sceptical of what’s possible. And they warn that hype — and the science-fictional allure of what are sometimes labelled brain-in-a-jar systems — could even be counterproductive. If the idea that these systems possess sentience and consciousness takes hold, there could be repercussions for the research community.

© 2025 Springer Nature Limited
Keyword: Learning & Memory; Robotics
Link ID: 30010 - Posted: 11.12.2025
By Roni Caryn Rabin

The most stressful part of the trip for Sunny Brous came when she had to part with her wheelchair so that the flight crew could put it in the luggage hold. You just never know what shape it will be in when you get it back, she said.

“I tell them, ‘Take the best care of it you can,’” she said. “Those wheels are my legs! Those wheels are my life.”

Ms. Brous, 38, who lives in Hico, Texas, was one of dozens of women who converged on the Sea Crest Beach Resort on Cape Cod toward the end of summer for the gathering of a club no one really wanted to be a member of: women diagnosed in their 20s and early 30s with amyotrophic lateral sclerosis, or A.L.S. The terminal neurodegenerative disorder robs them of the ability to talk, walk, use their hands or even breathe. It has long been seen as a disease of older men, who make up a majority of patients. There is no cure.

The women traveled with husbands, mothers, sisters and aides, and they did not travel light. Their packing lists included heavy BiPAP machines to help them breathe, formula for their feeding tubes, commodes, portable bidets, myriad chargers, leg braces and canes, pills and pill crushers and bottles of a medication with gold nanoparticles that was still being tested in clinical trials.

Half of Ms. Brous’s suitcase was filled with party gifts for the friends she texts with throughout the year on an endless WhatsApp chat, including bags of popcorn with Texan flavors like Locked and Loaded, a Cheddar, bacon, sour cream and chives combo that you can only get in Hico.

Desiree Galvez Kessler’s sister drove her, her mother and an aide up from Long Island in a van with a clunky Hoyer transfer lift in the back. Ms. Kessler — Desi to her friends — was diagnosed at 29, and has not been able to walk or speak for 10 years; the large computer tablet that she communicates with using eye-gaze technology is mounted on her wheelchair.

© 2025 The New York Times Company
Keyword: ALS-Lou Gehrig's Disease; Sexual Behavior
Link ID: 30009 - Posted: 11.12.2025
Steven Morris

Some people respond to the unwanted attentions of a gull eyeing up a bag of chips or a Cornish pasty by frantically flapping their hands at the hungry bird while others beat a rapid retreat into the nearest seaside shelter. But researchers have found that a no-nonsense yell – even a relatively quiet one – may be the best way to get rid of a pesky herring gull.

Animal behaviourists from the University of Exeter tried to establish the most effective method of countering a feathery threat by placing a portion of chips in a place where gulls were bound to find them. Once a gull approached, they played three recordings. First, a male voice shouting: “No, stay away, that’s my food, that’s my pasty!” Then, the same voice speaking the same words was played, followed by the “neutral” birdsong of a robin.

They tested 61 gulls across nine seaside towns in Cornwall and found nearly half of the birds exposed to the shouting voice flapped away within a minute. Only 15% of the gulls exposed to the speaking male voice flew off, though the rest walked away from the food, still apparently sensing danger. In contrast, 70% of gulls exposed to the robin song stayed put.

The volume of the “shouting” and “speaking” voices was the same, meaning the gulls seemed to be responding to the acoustic properties of the message rather than the loudness.

© 2025 Guardian News & Media Limited
Keyword: Aggression
Link ID: 30008 - Posted: 11.12.2025
By Kevin Berger

Steve Ramirez was feeling on top of the world in 2015. His father, Pedro Ramirez, had snuck into the United States in the 1980s to escape the civil war in El Salvador. Pedro Ramirez held jobs as a door-to-door salesman for tombstones, a janitor in a diner, and a technician in an animal lab. After years of ’round-the-clock work, Pedro Ramirez became a U.S. citizen. And here was his son, born in America, with a Ph.D. from the Massachusetts Institute of Technology, still in his 20s, being celebrated as one of the most exciting and promising neuroscientists in the country.

Steve Ramirez had published research papers with his MIT mentor Xu Liu that reported how they used lasers to erase fear memories, spur positive memories, and even fabricate new memories in the brain. The experiments were only in mice. But they were impressive. Memories are made of networks of brain cells called engrams. The lasers targeted specific cells in engrams. Zap those cells and the whole engram was muted. The pair of neuroscientists gave a popular TED Talk on memory manipulation and were featured in international press stories that invariably mentioned that the plotlines in the movies Eternal Sunshine of the Spotless Mind and Inception could be real. Bad memories could be deleted. New memories could be implanted.

One night in 2013 Ramirez and Liu were celebrating the publication of one of their papers in a jazz lounge at the top of the Prudential Building in Boston. The music was grooving, and the city below glittered like stars. Ramirez thought, I’ve never been so happy and so fully alive.

In early 2015, Liu, age 37, died suddenly. There had been no warning signs. Ramirez had never had a friend like Liu. Liu opened his mind to experiences in science he couldn’t have imagined. Their relationship felt organic from Ramirez’s first day in the lab. Liu joked they would always have chemistry doing science together. Grief is when the future your brain plans for is cut off. Ramirez’s thoughts of doing science without Liu became a trapdoor that landed him in a cellar of pain.

© 2025 NautilusNext Inc.
Keyword: Learning & Memory; Drug Abuse
Link ID: 30007 - Posted: 11.12.2025
By Daniel Bergner

Marie began taking fluoxetine, the generic form of Prozac, when she was 15. The drug — an S.S.R.I., a selective serotonin reuptake inhibitor — was part of her treatment in an outpatient program for an eating disorder. It took its toll on her sexuality.

“I was in touch with initial sparks of sexual energy relatively young,” she said, remembering crushes as far back as the age of 6 or 7. Shortly before starting on the drug, she was dazzled, from a distance, by a blue-eyed hockey player at school, tall and funny and charismatic. She recalled the fluster and fantasies he stirred. But on the medication, she felt the infatuation vanish swiftly.

“And then,” Marie said, “I realized, Oh, I’m not developing new crushes.” She had no clue that the drug might be the cause: “I wasn’t informed about sexual side effects.”

Even as the worst of the eating disorder abated, psychiatrists and family doctors told Marie and her parents that she should stay on an antidepressant. She complied, while trying and failing to escape the sexual side effects. She traded fluoxetine for other antidepressants, including Wellbutrin, a different class of antidepressant, which is sometimes prescribed to combat low libido. She’s 38 now and has been off psychiatric medication for six years. But sexual desire remains absent. “For me it’s just an empty dark space,” she said. “There’s nothing there.”

Marie told me she has PSSD, post-S.S.R.I. sexual dysfunction, a loss of sexuality that persists after the drug is no longer being taken. It’s a controversial designation, because while the sexual side effects of S.S.R.I.s are well established — depleted or deadened desire, erectile dysfunction for men, elusive arousal for women, delayed and dulled orgasms or the inability to reach orgasm at all — the general assumption is that they subside completely when the drug is no longer in your system. Some psychiatrists suspect that PSSD is actually a result not of repercussions from the drugs but of the problem that led the patient to be medicated in the first place. Depression itself can stymie sexuality. So can anxiety, the other leading reason patients are prescribed S.S.R.I.s.

© 2025 The New York Times Company
Keyword: Depression; Sexual Behavior
Link ID: 30006 - Posted: 11.12.2025
Katie Kavanagh

Speaking multiple languages could slow down brain ageing and help to prevent cognitive decline, a study of more than 80,000 people has found. The work, published in Nature Aging on 10 November, suggests that people who are multilingual are half as likely to show signs of accelerated biological ageing as are those who speak just one language.

“We wanted to address one of the most persistent gaps in ageing research, which is if multilingualism can actually delay ageing,” says study co-author Agustín Ibáñez, a neuroscientist at the Adolfo Ibáñez University in Santiago, Chile.

Previous research in this area has suggested that speaking multiple languages can improve cognitive functions such as memory and attention, which boosts brain health as we get older. But many of these studies rely on small sample sizes and use unreliable methods of measuring ageing, which leads to results that are inconsistent and not generalizable.

“The effects of multilingualism on ageing have always been controversial, but I don’t think there has been a study of this scale before, which seems to demonstrate them quite decisively,” says Christos Pliatsikas, a cognitive neuroscientist at the University of Reading, UK. The paper’s results could “bring a step change to the field”, he adds. They might also “encourage people to go out and try to learn a second language, or keep that second language active”, says Susan Teubner-Rhodes, a cognitive psychologist at Auburn University in Alabama.

© 2025 Springer Nature Limited
Keyword: Language; Alzheimers
Link ID: 30005 - Posted: 11.12.2025
By Nora Bradford

Here are three words: pine, crab, sauce. There’s a fourth word that combines with each of the others to create another common word. What is it?

When the answer finally comes to you, it’ll likely feel instantaneous. You might even say “Aha!” This kind of sudden realization is known as insight, and a research team recently uncovered how the brain produces it, which suggests why insightful ideas tend to stick in our memory.

Maxi Becker, a cognitive neuroscientist at Duke University, first got interested in insight after reading the landmark 1962 book The Structure of Scientific Revolutions by the historian and philosopher of science Thomas Kuhn. “He describes how some ideas are so powerful that they can completely shift the way an entire field thinks,” she said. “That got me wondering: How does the brain come up with those kinds of ideas? How can a single thought change how we see the world?”

Such moments of insight are written across history. According to the Roman architect and engineer Vitruvius, in the third century BCE the Greek mathematician Archimedes suddenly exclaimed “Eureka!” after he slid into a bathtub and saw the water level rise by an amount equal to his submerged volume (although this tale may be apocryphal). In the 17th century, according to lore, Sir Isaac Newton had a breakthrough in understanding gravity after an apple fell on his head. In the early 1900s, Einstein came to a sudden realization that “if a man falls freely, he would not feel his weight,” which led him to his theory of relativity, as he later described in a lecture.

Insights are not limited to geniuses: We have these cognitive experiences all the time when solving riddles or dealing with social or intellectual problems. They are distinct from analytical problem-solving, such as the process of doing formulaic algebra, in which you arrive at a solution slowly and gradually as if you’re getting warmer. Instead, insights often follow periods of confusion. You never feel as if you’re getting warmer; rather, you go from cold to hot, seemingly in an instant. Or, as the neuropsychologist Donald Hebb, known for his work building neurobiological models of learning, wrote in the 1940s, sometimes “learning occurs as a single jump, an all-or-none affair.”

© 2025 Simons Foundation
Keyword: Attention; Learning & Memory
Link ID: 30004 - Posted: 11.08.2025
By Carl Zimmer

In Paola Arlotta’s lab at Harvard is a long, windowless hallway that is visited every day by one of her scientists. They go there to inspect racks of scientific muffin pans. In every cavity of every pan is a pool of pink liquid, at the bottom of which are dozens of translucent nuggets no bigger than peppercorns. The nuggets are clusters of neurons and other cells, as many as two million, normally found in the human brain. On their daily rounds, the scientists check that the nuggets are healthy and well-fed.

“No first-year students walk in that corridor,” Dr. Arlotta said. “You have to be experienced enough to go there, because the risk is very high that you’re going to mess up the work that took years to build.”

The oldest nuggets are now seven years old. Back in 2018, Dr. Arlotta and her colleagues created them from skin cells originally donated by volunteers. A chemical cocktail transformed them into the progenitor cells normally found in the fetal human brain. The cells multiplied into neurons and other types of brain cells. They wrapped their branches around each other and pulsed with electrical activity, much like the pulses that race around inside our heads. One such nugget can contain more neurons than the entire brain of a honeybee.

But Dr. Arlotta is quick to stress that they are not brains. She and her colleagues call them brain organoids. “It’s so important to call them organoids and not brains, because they’re no such thing,” she said. “They are reductionist replicas that can show us some things that are the same, and many others that are not.”

And yet the similarities are often remarkable, as Dr. Arlotta and her colleagues recently demonstrated in a new report on their long-lived organoids. After the organoids started growing in 2018, their neurons began behaving like those in a fetal human brain, down to the way their genes switched on and off. And as the months passed, the neurons matured to resemble those in a baby after birth.

© 2025 The New York Times Company
Keyword: Development of the Brain
Link ID: 30003 - Posted: 11.08.2025
Miryam Naddaf

Scientists have created the most detailed maps yet of how our brains differentiate from stem cells during embryonic development and early life. In a Nature collection including five papers published yesterday, researchers tracked hundreds of thousands of early brain cells in the cortices of humans and mice, and captured with unprecedented precision the molecular events that give rise to a mixture of neurons and supporting cells.

“It’s really the initial first draft of any ‘cell atlases’ for the developing brain,” says Hongkui Zeng, executive vice-president and director of the Allen Institute for Brain Science in Seattle, Washington, and a co-author of two papers in the collection.

These atlases could offer new ways to study neurological conditions such as autism and schizophrenia. Researchers can now “mine the data, find genes that may be critical for a particular event in a particular cell type and at a particular time point”, says Zeng. “We have a very exciting time coming,” adds Zoltán Molnár, a developmental neuroscientist at the University of Oxford, UK, who was not involved with any of the studies.

The work is part of the BRAIN Initiative Cell Atlas Network (BICAN) — a project launched in 2022 by the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative at the US National Institutes of Health with US$500 million in funding to build reference maps of mammalian brains.

Patterns of development

Two of the papers map parts of the mouse cerebral cortex — the area of the brain involved in cognitive functions and perception. Zeng and her colleagues focused on how the visual cortex develops from 11.5-day-old embryos to 56-day-old mice. They created an atlas of 568,654 individual cells and identified 148 cell clusters and 714 subtypes. “It’s the first complete high-resolution atlas of the cortical development, including both prenatal and postnatal” phases, says Zeng.

© 2025 Springer Nature Limited
Keyword: Development of the Brain; Neurogenesis
Link ID: 30002 - Posted: 11.08.2025
By Holly Barker

Multiple mouse and human brain atlases track the emergence of distinct cell types during development and uncover some of the pathways that decide a cell’s fate. The findings were published today in a collection of Nature papers. The papers highlight the timing and location of cell diversification and offer fresh insights into the evolution of those cells.

Neuronal subtypes emerge at starkly different times in distinct brain regions, according to multiple mouse studies. And the work upends ideas about cell migration, including the notion that a portion of cortical neurons are made on site, developmental maps of the human brain suggest. “This is a dramatic revision of the fundamental principles that we thought were true in the cerebral cortex,” says Tomasz Nowakowski, associate professor of neurological surgery, anatomy and psychiatry, and of behavioral sciences, at the University of California, San Francisco and an investigator on one of the new studies.

The special issue comprises 12 papers—including 6 newly published ones—from groups working as part of the BRAIN Initiative Cell Atlas Network. The work builds on the network’s complete cell census, published in 2023, that cataloged 34 classes and 5,322 unique cell types in the adult mouse brain. “Those cell types don’t appear out of a vacuum at the same time,” says Nowakowski, who co-authored a commentary on the new collection. Pinpointing when those cells emerge and where they originate from was the “obvious next question,” he says.

At birth, the mouse brain contains all the initial cell classes that diversify into the multitude of neurons and glia found in older rodents. But precisely when that diversification occurs varies among brain regions: In the visual cortex, new cell types emerge weeks after birth and peak twice—once when the animal first opens its eyes and then again at the onset of the critical period, according to one study.

© 2025 Simons Foundation
Keyword: Development of the Brain; Neurogenesis
Link ID: 30001 - Posted: 11.08.2025
By Paula Span

For years, the two patients had come to the Penn Memory Center at the University of Pennsylvania, where doctors and researchers follow people with cognitive impairment as they age, as well as a group with normal cognition. Both patients, a man and a woman, had agreed to donate their brains after they died for further research.

“An amazing gift,” said Dr. Edward Lee, the neuropathologist who directs the brain bank at the university’s Perelman School of Medicine. “They were both very dedicated to helping us understand Alzheimer’s disease.”

The man, who died at 83 with dementia, had lived in the Center City neighborhood of Philadelphia with hired caregivers. The autopsy showed large amounts of amyloid plaques and tau tangles, the proteins associated with Alzheimer’s disease, spreading through his brain. Researchers also found infarcts, small spots of damaged tissue, indicating that he had suffered several strokes.

By contrast, the woman, who was 84 when she died of brain cancer, “had barely any Alzheimer’s pathology,” Dr. Lee said. “We had tested her year after year, and she had no cognitive issues at all.”

The man had lived a few blocks from Interstate 676, which slices through downtown Philadelphia. The woman had lived a few miles away in the suburb of Gladwyne, Pa., surrounded by woods and a country club. The amount of air pollution she was exposed to — specifically, the level of fine particulate matter called PM2.5 — was less than half that of his exposure.

Was it a coincidence that he had developed severe Alzheimer’s while she had remained cognitively normal? With increasing evidence that chronic exposure to PM2.5, a neurotoxin, not only damages lungs and hearts but is also associated with dementia, probably not.

© 2025 The New York Times Company
Keyword: Alzheimers; Neurotoxins
Link ID: 30000 - Posted: 11.05.2025
By Ramin Skibba

In August, two parents in California filed a lawsuit against OpenAI, claiming that the company was responsible for their teenage son’s suicide. The previous fall, according to Maria and Matthew Raine, their 16-year-old, Adam, had started using the company’s popular AI chatbot ChatGPT as a homework helper. Over the course of several months, the Raines alleged, it shifted to a digital companion and then to a “suicide coach,” advising the teen how to quietly steal vodka from his parents’ liquor cabinet, urging him to keep his suicidal ideations a secret, and then guiding him about the feasibility and load-bearing capacity of a noose. By the time of Adam’s death in April, according to the Raines’ complaint, the chatbot had used the word “suicide” 1,275 times, six times more often than Adam himself.

The case of Adam Raine was not an isolated incident, though publicly available data remains limited. And experts worry that more mental health crises, including suicides — the second leading cause of death among people between ages 10 and 24 years — could arise as users increasingly turn to generative AI chatbots for emotional support. Although it is difficult to pinpoint just how many people are relying on chatbots in this way, according to a recent Harvard Business Review survey based primarily on data collected from Reddit forum posts, the practice is common for therapy, companionship, and finding purpose.

Researchers have scrambled to understand the trend, including both the potential risks and benefits of the chatbots, most of which were not designed to be used for mental health support. Some users claim that the bots help them, citing their perception that the tools won’t judge or stigmatize them, while others are seeking a substitute for therapy when they can’t access or afford it, experts say. Some users also don’t think of the chatbots as a form of therapy, but rather a kind of mindful journaling as they work through their emotions and problems. According to one example in the Harvard Business Review report, a Reddit user said, “I found a thread where people talked about using AI to analyze their moods, essentially having low-barrier ‘therapy’ sessions.”
Keyword: Depression
Link ID: 29999 - Posted: 11.05.2025
Ian Sample, Science editor

Even modest amounts of daily exercise may slow the progression of Alzheimer’s disease in older people who are at risk of developing the condition, researchers have said.

People are often encouraged to clock up 10,000 steps a day as part of a healthy routine, but scientists found 3,000 steps or more appeared to delay the brain changes and cognitive decline that Alzheimer’s patients experience. Results from the 14-year-long study showed cognitive decline was delayed by an average of three years in people who walked 3,000 to 5,000 steps a day, and by seven years in those who managed 5,000 to 7,000 steps daily.

“We’re encouraging older people who are at risk of Alzheimer’s to consider making small changes to their activity levels, to build sustained habits that protect or benefit their brain and cognitive health,” said Dr Wai-Ying Yau, the first author on the study at Mass General Brigham hospital in Boston.

Dementia affects an estimated 50 million people worldwide, with Alzheimer’s disease the most common cause. In the UK, more than 500,000 people have Alzheimer’s. The condition is linked to the buildup of two toxic forms of proteins in the brain, namely amyloid-beta plaques and tau tangles.

Yau and her colleagues analysed data from 296 people aged 50 to 90 who were cognitively unimpaired at the beginning of the study. The data included annual cognitive assessments, step counts measured by pedometers, and PET imaging to detect levels of amyloid and tau in the volunteers’ brains.

People with little brain amyloid at the start showed very little cognitive decline or buildup of tau protein over the course of the study. The risk of Alzheimer’s was greater for those with elevated amyloid at baseline, and among them, higher step counts were linked to slower rates of cognitive decline and a delayed buildup of tau proteins. In sedentary individuals, the buildup of tau and cognitive decline was substantially faster, the researchers report in the journal Nature Medicine.

© 2025 Guardian News & Media Limited
Keyword: Alzheimers
Link ID: 29998 - Posted: 11.05.2025
By Denise Grady

Dr. Marthe Gautier, a physician and researcher who had a major role in identifying the cause of Down syndrome but whose achievement was undermined when a male colleague took credit for her work, died on April 30, 2022. She was 96.

Her death, in a retirement home in Meaux, France, though not widely reported at the time, was confirmed by her great-niece Tatiana Giraud. The New York Times, which had prepared an obituary about Dr. Gautier in advance, in 2018, learned of her death only recently.

The disputed research in which Dr. Gautier was involved produced a historic breakthrough: It revealed that people with Down syndrome have an extra chromosome, one of the microscopic strands of DNA and protein that carry a person’s genetic blueprint. Most humans have 46 chromosomes. Down syndrome is also called trisomy 21, meaning that three copies of the 21st chromosome are present instead of two, for a total of 47 chromosomes.

The discovery, at the Armand-Trousseau Hospital in Paris in 1958, was the first to link an abnormal number of chromosomes to a disorder that causes intellectual disability. More connections between such conditions and aberrant chromosomes were soon found. Those advances led to the development of tests to diagnose the disorders before birth, making it possible to terminate affected pregnancies in many cases.

Dr. Gautier’s story “starts like a fairy tale and ends like villainy,” said Dr. Jean Kachaner, a former student of hers who is a pediatric cardiologist at the Necker Hospital for children in Paris.

© 2025 The New York Times Company
Keyword: Development of the Brain; Genes & Behavior
Link ID: 29997 - Posted: 11.05.2025
By Kaia Glickman

Anyone with a computer has been asked to “select every image containing a traffic light” or “type the letters shown below” to prove that they are human. While these log-in hurdles — called reCAPTCHA tests — may prompt some head-scratching (does the corner of that red light count?), they reflect that vision is considered a clear metric for differentiating computers from humans.

But computers are catching up. The quest to create computers that can “see” has made huge progress in recent years. Fifteen years ago, computers could correctly identify what an image contains about 60 percent of the time. Now, it’s common to see success rates near 90 percent. But many computer systems still fail some of the simplest vision tests — thus reCAPTCHA’s continued usefulness.

Newer approaches aim to more closely resemble the human visual system by training computers to see images as they are — made up of actual objects — rather than as just a collection of pixels. These efforts are already yielding success, for example in helping develop robots that can “see” and grab objects.

Computer vision models employ what are called visual neural networks. These networks use interconnected units called artificial neurons that, much as in the brain, forge connections with each other as the system learns. Typically, these networks are trained on a set of images with descriptions, and eventually they can correctly guess what is in a new image they haven’t encountered before.
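For readers curious what that training process looks like in practice, here is a minimal sketch in PyTorch. It is illustrative only: the tiny model, the class count, and the random tensors standing in for labelled photos are hypothetical stand-ins, not the systems described in the article.

import torch
from torch import nn

# A tiny network of "artificial neurons": one convolutional layer of
# feature detectors, then a linear read-out that scores each label.
class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = torch.relu(self.conv(x))  # unit activations for this image
        x = self.pool(x).flatten(1)   # average each feature map to one number
        return self.head(x)           # one score per candidate label

model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical stand-ins for a labelled dataset: 32 random 64x64 RGB
# "images" with random class labels. A real system loads annotated photos.
images = torch.randn(32, 3, 64, 64)
labels = torch.randint(0, 10, (32,))

for step in range(5):
    loss = loss_fn(model(images), labels)  # how wrong the current guesses are
    optimizer.zero_grad()
    loss.backward()   # compute how to adjust each connection weight
    optimizer.step()  # strengthen or weaken connections to reduce the error

Repeating this loop over many real labelled images is what lets a trained network guess the contents of a photo it has never seen.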
Keyword: Vision; Robotics
Link ID: 29996 - Posted: 11.01.2025
By Ellen Barry

One of the most popular mental health innovations of the past decade is therapy via text message, which allows you to dip in and out of treatment in the course of a day.

Say you wake up anxious before a presentation: You might text your therapist first thing in the morning to say that you can’t stop visualizing a humiliating failure. Three hours later, her response pops up on your phone. She suggests that you label the thought — “I’m feeling nervous about my presentation” — and then try to reframe it. She tells you to take a deep breath before deciding what is true in the moment.

You read her answer between meetings. “I’m pretty sure my boss thinks I’m an idiot,” you type. The therapist responds the next morning. “What evidence do you have that she thinks that?” she asks. She tells you to write a list of the available evidence, pros and cons.

Text-based therapy has expanded swiftly over the past decade through digital mental health platforms like BetterHelp and Talkspace, which pair users with licensed therapists and offer both live chat and as-needed texting sessions. A new study published on Thursday in the journal JAMA Network Open provides early evidence that the practice is effective in treating mild to moderate depression, finding outcomes similar to those of video-based therapy.

In a clinical trial, 850 adults with mild to moderate depression were randomly assigned to two groups: One group received psychotherapy via a weekly video session; the other received unlimited, as-needed messaging or emailing with a therapist. After 12 weeks, participants in both groups reported similar improvement in depression symptoms.

© 2025 The New York Times Company
Keyword: Depression
Link ID: 29995 - Posted: 11.01.2025
By Sarah DeWeerdt

A temporary increase in neuronal activity in the cortex of newborn mice leads to social deficits in adulthood, according to a new preprint. Those adult rodents also show changes in brain electrical activity, gene expression and connectivity that are reminiscent of autism.

The analysis lends support to a prominent hypothesis of autism’s origins, which holds that the condition can arise from an excess of excitatory signaling or insufficient inhibitory signaling in the brain, the study investigators write in their paper.

Over the years, support for this signaling imbalance hypothesis has come from other studies in mice and observations that some people with autism have seizures or display excess neuronal activity in electroencephalography (EEG) recordings relative to people without the condition. Postmortem analysis suggests autistic people have more excitatory synapses in the prefrontal cortex than non-autistic people. But determining causality and the role of inhibitory signaling has been difficult.

In contrast with most earlier work, the new study “really underscore[s] a different way of looking at excitation-inhibition imbalance, which is looking at it during development as a cause of subsequent changes in brain function that could be associated with autism,” says Vikaas Sohal, professor of psychiatry and behavioral science at the University of California, San Francisco, who was not involved in the work. The study was posted on bioRxiv last month.

© 2025 Simons Foundation
Keyword: Development of the Brain; Autism
Link ID: 29994 - Posted: 11.01.2025
Ian Sample, Science editor

It’s never a great look. The morning meeting is in full swing but thanks to a late night out your brain switches off at the precise moment a question comes your way.

Such momentary lapses in attention are a common problem for the sleep deprived, but what happens in the brain in these spells of mental shutdown has proved hard to pin down. Now scientists have shed light on the process and found there is more to zoning out than meets the eye. The brief loss of focus coincides with a wave of fluid flowing out of the brain, which returns once attention recovers.

“The moment somebody’s attention fails is the moment this wave of fluid starts to pulse,” said Dr Laura Lewis, a senior author on the study at MIT in Boston. “It’s not just that your neurons aren’t paying attention to the world, there’s this big change in fluid in the brain at the same time.”

Lewis and her colleague Dr Zinong Yang investigated the sleep-deprived brain to understand the kinds of attention failures that lead drowsy drivers to crash and tired animals to become a predator’s lunch. In the study, 26 volunteers took turns to wear an EEG cap while lying in an fMRI scanner. This enabled the scientists to monitor the brain’s electrical activity and physiological changes during tests in which people had to respond as quickly as possible to hearing a tone or seeing crosshairs on a screen turn into a square.

Each volunteer was scanned after a restful night’s sleep at home and after a night of total sleep deprivation supervised by scientists at the laboratory. Unsurprisingly, people performed far worse when sleep deprived, responding more slowly or not at all.

© 2025 Guardian News & Media Limited
Keyword: Sleep; Attention
Link ID: 29993 - Posted: 11.01.2025
Imma Perfetto

Anyone who has ever struggled through the day following a poor night’s sleep has had to wrench their attention back to the task at hand after their mind drifted off unexpectedly. Now, researchers have pinpointed exactly what causes these momentary failures of attention.

The new study in Nature Neuroscience found that the brains of sleep-deprived people initiate waves of cerebrospinal fluid (CSF), the liquid which cushions the brain, which dramatically impair attention.

This process usually happens during sleep. The rhythmic flow of CSF into and out of the brain carries away protein waste which has built up over the course of the day. When this maintenance is interrupted due to lack of sleep, it seems the brain attempts to play catch-up during its waking hours.

“If you don’t sleep, the CSF waves start to intrude into wakefulness where normally you wouldn’t see them,” says study senior author Laura Lewis of Massachusetts Institute of Technology’s (MIT) Institute for Medical Engineering and Science. “However, they come with an attentional trade-off, where attention fails during the moments that you have this wave of fluid flow.

“The results are suggesting that at the moment that attention fails, this fluid is actually being expelled outward away from the brain. And when attention recovers, it’s drawn back in.”

© Copyright CSIRO
Keyword: Sleep; Attention
Link ID: 29992 - Posted: 11.01.2025
By Nima Sadrian

In the popular narrative, cannabidiol, or CBD, is portrayed as a natural, non-intoxicating cure for a host of ailments — and sometimes that extends to the anxieties of modern adolescence. CBD is everywhere, infused in products such as gummy candies, vapes, skincare serums, and even fizzy seltzers. Usually derived from the hemp plant, CBD is pitched as a calming remedy with none of the stigma of marijuana. Even a 2018 World Health Organization report noted that CBD shows no signs of abuse or dependence potential.

But as a physician and neuroscientist who studies how CBD affects the developing brain, I have to offer a different, more troubling answer: We simply don’t know if it’s safe for teens. And early evidence suggests potential for real, lasting harm. The comforting story our culture tells itself about CBD — that it offers harmless, botanical relief for stress and sleep problems — is dangerously out of step with the science. While we have been sold a simple wellness narrative, my own work and that of other scientists reveal a far more complex and cautionary tale — one that challenges the very foundation of the multibillion-dollar CBD industry.

How did a compound that the Food and Drug Administration has only approved as a potent prescription drug for severe childhood epilepsy become a common additive? The answer lies in a catastrophic regulatory failure. The 2018 farm bill legalized hemp, but the legislation and its extensions created no framework to ensure that the products made from it were safe, effective, or accurately labeled, nor did the bill set an age limit for it. The result is a market that operates like the Wild West, a gold rush where consumer safety is an afterthought.

The FDA-approved CBD medicine, Epidiolex, comes with a long list of documented risks, including liver damage and suicidal ideation, and requires careful medical supervision. Yet numerous consumer products containing CBD are sold without such warnings, mandatory testing, or oversight.
Keyword: Drug Abuse; Development of the Brain
Link ID: 29991 - Posted: 11.01.2025



