Most Recent Links
By Dennis Overbye If you could change the laws of nature, what would you change? Maybe it’s that pesky speed-of-light limit on cosmic travel — not to mention war, pestilence and the eventual asteroid that has Earth’s name on it. Maybe you would like the ability to go back in time — to tell your teenage self how to deal with your parents, or to buy Google stock. Couldn’t the universe use a few improvements? That was the question that David Anderson, a computer scientist, enthusiast of the Search for Extraterrestrial Intelligence (SETI), musician and mathematician at the University of California, Berkeley, recently asked his colleagues and friends. In recent years the idea that our universe, including ourselves and all of our innermost thoughts, is a computer simulation, running on a thinking machine of cosmic capacity, has permeated culture high and low. In an influential essay in 2003, Nick Bostrom, a philosopher at the University of Oxford and director of the Future of Humanity Institute, proposed the idea, adding that it was probably an easy accomplishment for “technologically mature” civilizations wanting to explore their histories or entertain their offspring. Elon Musk, who, for all we know, is the star of this simulation, seemed to echo this idea when he once declared that there was only a one-in-a-billion chance that we lived in “base reality.” It’s hard to prove, and not everyone agrees that such a drastic extrapolation of our computing power is possible or inevitable, or that civilization will last long enough to see it through. But we can’t disprove the idea either, so thinkers like Dr. Bostrom contend that we must take the possibility seriously. In some respects, the notion of a Great Simulator is redolent of a recent theory among cosmologists that the universe is a hologram, its margins lined with quantum codes that determine what is going on inside. A couple of years ago, pinned down by the coronavirus pandemic, Dr. Anderson began discussing the implications of this idea with his teenage son. If indeed everything was a simulation, then making improvements would simply be a matter of altering whatever software program was running everything. “Being a programmer, I thought about exactly what these changes might involve,” he said in an email. © 2023 The New York Times Company
Keyword: Consciousness
Link ID: 28634 - Posted: 01.18.2023
By Daryl Austin My three young daughters like to watch pets doing silly things. Almost daily, they ask to see animal video clips on my phone and are quickly entertained. But once my 7-year-old lets out a belly laugh, the laughter floodgates are opened and her two sisters double over as well. This is just what science would predict. “Laughter is a social phenomenon,” says Sophie Scott, a neuroscientist at University College London who has studied laughter and other human reactions for more than two decades. Scott co-wrote a study showing how the brain responds to the sound of laughter by preparing one’s facial muscles to join in, laying the foundation for laughs to spread from person to person. “Contagious laughter demonstrates affection and affiliation,” Scott says. “Even being in the presence of people you expect to be funny will prime laughter within you.”

It’s like yawning

Scientists have yet to definitively find a funny bone, but they are revealing nuances about the laugh impulse. Laughter’s positive psychological and physiological responses include lessening depression and anxiety symptoms, increasing feelings of relaxation, improving cardiovascular health, releasing endorphins that boost mood and even increasing tolerance for pain. Laughing has also been shown to lower stress levels. “Cortisol is a stress hormone that laughter lowers,” says Scott, adding that anticipation of laughter also “drops your adrenaline” and the body’s heightened fight-or-flight response. “All of these things contribute to you feeling better when you’ve been laughing,” she says. Because humans are wired to mirror one another, laughs spread around a room just like yawns, says Lauri Nummenmaa, a brain researcher and professor at Aalto University School of Science in Finland whose work appears in a recent special issue on laughter in a Royal Society journal.
Keyword: Emotions
Link ID: 28633 - Posted: 01.18.2023
Miryam Naddaf Researchers have made transgenic ants whose antennae glow green under a microscope, revealing how the insects’ brains process alarming smells. The findings identify three unique brain regions that respond to alarm signals. In these areas, called glomeruli, the ants’ nerve endings intersect. The work was posted on the bioRxiv preprint server on 29 December 2022 and has not yet been peer reviewed. “Ants are like little walking chemical factories,” says study co-author Daniel Kronauer, a biologist at the Rockefeller University in New York City. Previous research has focused on identifying the chemicals that ants release or analysing the insects’ behavioural responses to these odours, but “how ants can actually smell the pheromones is really only now becoming a little bit clearer”, says Kronauer. “This is the first time that, in a social insect, a particular glomerulus has been associated very strongly with a particular behaviour,” he adds.

Smelly signals

Ants are social animals that communicate with each other by releasing scented chemicals called pheromones. The clonal raider ants (Ooceraea biroi) that the researchers studied are blind. “They basically live in a world of smells,” says Kronauer. “So the vast amount of their social behaviour is regulated by these chemical compounds.” When an ant perceives danger, it releases alarm pheromones from a gland in its head to warn its nestmates. Other ants respond to this signal by picking up their larvae and evacuating the nest. “Instead of having dedicated brain areas for face recognition or language processing, ants have a massively expanded olfactory system,” says Kronauer. The researchers created transgenic clonal raider ants by injecting the insects’ eggs with a vector carrying a gene for a green fluorescent protein combined with one that expresses a molecule that indicates calcium activity in the brain. © 2023 Springer Nature Limited
Keyword: Chemical Senses (Smell & Taste)
Link ID: 28632 - Posted: 01.18.2023
Kaitlyn Radde Socially isolated older adults have a 27% higher chance of developing dementia than older adults who aren't, a new study by Johns Hopkins researchers found. "Social connections matter for our cognitive health, and it is potentially easily modifiable for older adults without the use of medication," Dr. Thomas Cudjoe, an assistant professor of medicine at Johns Hopkins and a senior author of the study, said in a news release. Published in the Journal of the American Geriatrics Society, the study tracked 5,022 dementia-free U.S. adults who were 65 or older – with an average age of 76 – and not living in a residential care facility. About 23% of participants were socially isolated. Social isolation is defined as having few relationships and few people to interact with regularly. The study measured this based on whether or not participants lived alone, talked about "important matters" with two or more people in the past year, attended religious services or participated in social events. Participants were assigned one point for each item, and those who scored a zero or one were classified as socially isolated. Over the course of nine years, researchers periodically administered cognitive tests. Overall, about 21% of the study participants developed dementia. But among those who were socially isolated, about 26% developed dementia – compared to slightly less than 20% for those who were not socially isolated. The study did not find significant differences by race or ethnicity. However, more than 70% of the participants in the study were white – with particularly small sample sizes of Hispanic, Asian and Native participants – and the authors call for further research on the topic. © 2023 npr
Keyword: Alzheimers
Link ID: 28631 - Posted: 01.18.2023
By Jan Hoffman PHILADELPHIA — Over a matter of weeks, Tracey McCann watched in horror as the bruises she was accustomed to getting from injecting fentanyl began hardening into an armor of crusty, blackened tissue. Something must have gotten into the supply. Switching corner dealers didn’t help. People were saying that everyone’s dope was being cut with something that was causing gruesome, painful wounds. “I’d wake up in the morning crying because my arms were dying,” Ms. McCann, 39, said. In her shattered Philadelphia neighborhood, and increasingly in drug hot zones around the country, an animal tranquilizer called xylazine — known by street names like “tranq,” “tranq dope” and “zombie drug” — is being used to bulk up illicit fentanyl, making its impact even more devastating. Xylazine causes wounds that erupt with a scaly dead tissue called eschar; untreated, they can lead to amputation. It induces a blackout stupor for hours, rendering users vulnerable to rape and robbery. When people come to, the high from the fentanyl has long since faded and they immediately crave more. Because xylazine is a sedative and not an opioid, it resists standard opioid overdose reversal treatments. More than 90 percent of Philadelphia’s lab-tested dope samples were positive for xylazine, according to the most recent data. “It’s too late for Philly,” said Shawn Westfahl, an outreach worker with Prevention Point Philadelphia, a 30-year-old health services center in Kensington, the neighborhood at the epicenter of the city’s drug trade. “Philly’s supply is saturated. If other places around the country have a choice to avoid it, they need to hear our story.” A study published in June detected xylazine in the drug supply in 36 states and the District of Columbia. In New York City, xylazine has been found in 25 percent of drug samples, though health officials say the actual saturation is certainly greater. In November, the Food and Drug Administration issued a nationwide four-page xylazine alert to clinicians. © 2023 The New York Times Company
Keyword: Drug Abuse
Link ID: 28630 - Posted: 01.14.2023
Holly Else An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science. “I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds. The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use. Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint and an editorial written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them. The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts. © 2023 Springer Nature Limited
Keyword: Language; Intelligence
Link ID: 28629 - Posted: 01.14.2023
By Carolyn Wilke Mammals in the ocean swim through a world of sound. But in recent decades, humans have been cranking up the volume, blasting waters with noise from shipping, oil and gas exploration and military operations. New research suggests that such anthropogenic noise may make it harder for dolphins to communicate and work together. When dolphins cooperated on a task in a noisy environment, the animals were not so different from city dwellers on land trying to be heard over a din of jackhammers and ambulance sirens. They yelled, calling louder and longer, researchers reported Thursday in the journal Current Biology. “Even then, there’s a dramatic increase in how often they fail to coordinate,” said Shane Gero, a whale biologist at Carleton University in Ottawa who wasn’t part of the work. The effect of increasing noise was “remarkably clear.” Scientists worked with a dolphin duo, males named Delta and Reese, at an experimental lagoon at the Dolphin Research Center in the Florida Keys. The pair were trained to swim to different spots in their enclosure and push a button within one second of each other. “They’ve always been the most motivated animals. They were really excited about doing the task,” said Pernille Sørensen, a biologist and Ph.D. candidate at the University of Bristol in England. The dolphins talked to each other using whistles and often whistled right before pressing the button, she said. Ms. Sørensen’s team piped in sounds using underwater speakers. Tags, stuck behind the animals’ blowholes, captured what the dolphins heard and called to each other as well as their movements. Through 200 trials with five different sound environments, the team observed how the dolphins changed their behavior to compensate for loud noise. The cetaceans turned their bodies toward each other and paid greater attention to each other’s location. At times, they nearly doubled the length of their calls and amplified their whistles, in a sense shouting, to be heard above cacophonies of white noise or a recording of a pressure washer. © 2023 The New York Times Company
Keyword: Animal Communication; Hearing
Link ID: 28628 - Posted: 01.14.2023
By Rodrigo Pérez Ortega Was Tyrannosaurus rex as smart as a baboon? Scientists don’t like to compare intelligence between species (everyone has their own talents, after all), but a controversial new study suggests some dino brains were as densely packed with neurons as those of modern primates. If so, that would mean they were very smart—more than researchers previously thought—and could have achieved feats only humans and other very intelligent animals have, such as using tools. The findings, reported last week in the Journal of Comparative Neurology, are making waves among paleontologists on social media and beyond. Some are applauding the paper as a good first step toward better understanding dinosaur smarts, whereas others argue the neuron estimates are flawed, undercutting the study’s conclusions. Measuring dinosaur intelligence has never been easy. Historically, researchers have used something called the encephalization quotient (EQ), which measures an animal’s relative brain size, related to its body size. A T. rex, for example, had an EQ of about 2.4, compared with 3.1 for a German shepherd dog and 7.8 for a human—leading some to assume it was at least somewhat smart. EQ is hardly foolproof, however. In many animals, body size evolves independently from brain size, says Ashley Morhardt, a paleoneurologist at Washington University School of Medicine in St. Louis who wasn’t involved in the study. “EQ is a fraught metric, especially when studying extinct species.” Looking for a more trustworthy alternative, Suzana Herculano-Houzel, a neuroanatomist at Vanderbilt University, turned to a different measure: the density of neurons in the cortex, the wrinkly outer brain area critical to most intelligence-related tasks. She had previously estimated the number of neurons in many animal species, including humans, by making “brain soup”—dissolving brains in a detergent solution—and counting the neurons in different parts of the brain. © 2023 American Association for the Advancement of Science.
Keyword: Evolution
Link ID: 28627 - Posted: 01.12.2023
Xiaofan Lei What comes to mind when you think of someone who stutters? Is that person male or female? Are they weak and nervous, or powerful and heroic? If you have a choice, would you like to marry them, introduce them to your friends or recommend them for a job? Your attitudes toward people who stutter may depend partly on what you think causes stuttering. If you think that stuttering is due to psychological causes, such as being nervous, research suggests that you are more likely to distance yourself from those who stutter and view them more negatively. I am a person who stutters and a doctoral candidate in speech, language and hearing sciences. Growing up, I tried my best to hide my stuttering and to pass as fluent. I avoided sounds and words that I might stutter on. I avoided ordering the dishes I wanted to eat at the school cafeteria to avoid stuttering. I asked my teacher to not call on me in class because I didn’t want to deal with the laughter from my classmates when they heard my stutter. Those experiences motivated me to investigate stuttering so that I can help people who stutter, including myself, to better cope with the condition. In writing about what the scientific field has to say about stuttering and its biological causes, I hope I can reduce the stigma and misunderstanding surrounding the disorder. The most recognizable characteristics of developmental stuttering are the repetitions, prolongations and blocks in people’s speech. People who stutter may also experience muscle tension during speech and exhibit secondary behaviors, such as tics and grimaces. © 2010–2023, The Conversation US, Inc.
Keyword: Language
Link ID: 28626 - Posted: 01.12.2023
By Oliver Whang Hod Lipson, a mechanical engineer who directs the Creative Machines Lab at Columbia University, has shaped most of his career around what some people in his industry have called the c-word. On a sunny morning this past October, the Israeli-born roboticist sat behind a table in his lab and explained himself. “This topic was taboo,” he said, a grin exposing a slight gap between his front teeth. “We were almost forbidden from talking about it — ‘Don’t talk about the c-word; you won’t get tenure’ — so in the beginning I had to disguise it, like it was something else.” That was back in the early 2000s, when Dr. Lipson was an assistant professor at Cornell University. He was working to create machines that could note when something was wrong with their own hardware — a broken part, or faulty wiring — and then change their behavior to compensate for that impairment without the guiding hand of a programmer. Just as when a dog loses a leg in an accident, it can teach itself to walk again in a different way. This sort of built-in adaptability, Dr. Lipson argued, would become more important as we became more reliant on machines. Robots were being used for surgical procedures, food manufacturing and transportation; the applications for machines seemed pretty much endless, and any error in their functioning, as they became more integrated with our lives, could spell disaster. “We’re literally going to surrender our life to a robot,” he said. “You want these machines to be resilient.” One way to do this was to take inspiration from nature. Animals, and particularly humans, are good at adapting to changes. This ability might be a result of millions of years of evolution, as resilience in response to injury and changing environments typically increases the chances that an animal will survive and reproduce. Dr. Lipson wondered whether he could replicate this kind of natural selection in his code, creating a generalizable form of intelligence that could learn about its body and function no matter what that body looked like, and no matter what that function was. © 2023 The New York Times Company
Keyword: Consciousness; Robotics
Link ID: 28625 - Posted: 01.07.2023
By Elizabeth Pennisi Biologists have long known that new protein-coding genes can arise through the duplication and modification of existing ones. But some protein genes can also arise from stretches of the genome that once encoded aimless strands of RNA instead. How new protein genes surface this way has been a mystery, however. Now, a study identifies mutations that transform seemingly useless DNA sequences into potential genes by endowing their encoded RNA with the skill to escape the cell nucleus—a critical step toward becoming translated into a protein. The study’s authors highlight 74 human protein genes that appear to have arisen in this de novo way—more than half of which emerged after the human lineage branched off from chimpanzees. Some of these newcomer genes may have played a role in the evolution of our relatively large and complex brains. When added to mice, one made the rodent brains grow bigger and more humanlike, the authors report this week in Nature Ecology & Evolution. “This work is a big advance,” says Anne-Ruxandra Carvunis, an evolutionary biologist at the University of Pittsburgh, who was not involved with the research. It “suggests that de novo gene birth may have played a role in human brain evolution.” Although some genes encode RNAs that have structural or regulatory purposes themselves, those that encode proteins instead create an intermediary RNA. Made in the nucleus like other RNAs, these messenger RNAs (mRNAs) exit into the cytoplasm and travel to organelles called ribosomes to tell them how to build the gene’s proteins. A decade ago, Chuan-Yun Li, an evolutionary biologist at Peking University, and colleagues discovered that some human protein genes bore a striking resemblance to DNA sequences in rhesus monkeys that got transcribed into long noncoding RNAs (lncRNAs), which didn’t make proteins or have any other apparent purpose. Li couldn’t figure out what it had taken for those stretches of monkey DNA to become true protein-coding genes in humans. © 2023 American Association for the Advancement of Science.
Keyword: Development of the Brain; Genes & Behavior
Link ID: 28624 - Posted: 01.07.2023
by Giorgia Guglielmi About five years ago, Catarina Seabra made a discovery that led her into uncharted scientific territory. Seabra, then a graduate student in Michael Talkowski’s lab at Harvard University, found that disrupting the autism-linked gene MBD5 affects the expression of other genes in the brains of mice and in human neurons. Among those genes, several are involved in the formation and function of primary cilia — hair-like protrusions on the cell’s surface that sense its external environment. “This got me intrigued, because up to that point, I had never heard of primary cilia in neurons,” Seabra says. She wondered if other researchers had linked cilia defects to autism-related conditions, but the scientific literature offered only sparse evidence, mostly in mice. Seabra, now a postdoctoral researcher in the lab of João Peça at the Center for Neuroscience and Cell Biology at the University of Coimbra in Portugal, is spearheading an effort to look for a connection in people: The Peça lab established a biobank of dental stem cells obtained from baby teeth of 50 children with autism or other neurodevelopmental conditions. And the team plans to look at neurons and brain organoids made from those cells to see if their cilia show any defects in structure or function. Other neuroscientists, too, are working to understand the role of cilia during neurodevelopment. Last September, for example, researchers working with tissue samples from mice discovered that cilia on the surface of neurons can form junctions, or synapses, with other neurons — which means cilia defects could, at least in theory, hinder the development of neural circuitry and activity. Other teams have connected several additional autism-related genes, beyond MBD5, to the tiny cell antennae. © 2023 Simons Foundation
Keyword: Autism
Link ID: 28623 - Posted: 01.07.2023
By Laurie McGinley The Food and Drug Administration on Friday approved an Alzheimer’s drug that slowed cognitive decline in a major study, offering patients desperately needed hope — even as doctors sharply debated the safety of the drug and whether it provides a significant benefit. The FDA said the drug, called lecanemab, is for patients with mild cognitive impairment or early dementia because of Alzheimer’s. The accelerated approval was based on a mid-stage trial that showed the treatment effectively removed a sticky protein called amyloid beta — considered a hallmark of the illness — from the brain. A larger trial, conducted more recently, found the drug, which will be sold under the brand name Leqembi, slowed the progression of Alzheimer’s disease by 27 percent. “This treatment option is the latest therapy to target and affect the underlying disease process of Alzheimer’s, instead of only treating the symptoms of the disease,” Billy Dunn, director of the FDA’s Office of Neuroscience, said in a statement. The approval followed a barrage of criticism endured by the FDA for its 2021 approval of Aduhelm, another amyloid-targeting drug that had been panned by the agency’s outside experts. Lecanemab is getting a warmer reception but disagreements remain. Many neurologists and advocates hailed lecanemab, given intravenously twice a month, as an important advance — one that follows years of failure involving Alzheimer’s drugs. They said the treatment will allow patients to stay longer in the milder stages of the fatal, neurodegenerative disorder, which afflicts more than 6 million people in the United States.
Keyword: Alzheimers
Link ID: 28622 - Posted: 01.07.2023
McKenzie Prillaman The hotel ballroom was packed to near capacity with scientists when Susan Yanovski arrived. Despite being 10 minutes early, she had to manoeuvre her way to one of the few empty seats near the back. The audience at the ObesityWeek conference in San Diego, California, in November 2022, was waiting to hear the results of a hotly anticipated drug trial. The presenters — researchers affiliated with pharmaceutical company Novo Nordisk, based in Bagsværd, Denmark — did not disappoint. They described the details of an investigation of a promising anti-obesity medication in teenagers, a group that is notoriously resistant to such treatment. The results astonished researchers: a weekly injection for almost 16 months, along with some lifestyle changes, reduced body weight by at least 20% in more than one-third of the participants. Previous studies had shown that the drug, semaglutide, was just as impressive in adults. The presentation concluded like no other at the conference, says Yanovski, co-director of the Office of Obesity Research at the US National Institute of Diabetes and Digestive and Kidney Diseases in Bethesda, Maryland. Sustained applause echoed through the room “like you were at a Broadway show”, she says. This energy has pervaded the field of obesity medicine for the past few years. After decades of work, researchers are finally seeing signs of success: a new generation of anti-obesity medications that drastically diminish weight without the serious side effects that have plagued previous efforts. These drugs are arriving in an era in which obesity is growing exponentially. Worldwide obesity has tripled since 1975; in 2016, about 40% of adults were considered overweight and 13% had obesity, according to the World Health Organization (WHO). With extra weight often comes heightened risk of health conditions such as type 2 diabetes, heart disease and certain cancers. The WHO recommends healthier diets and physical activity to reduce obesity, but medication might help when lifestyle changes aren’t enough. The new drugs mimic hormones known as incretins, which lower blood sugar and curb appetite. Some have already been approved for treating type 2 diabetes, and they are starting to win approval for inducing weight loss. © 2023 Springer Nature Limited
Keyword: Obesity
Link ID: 28621 - Posted: 01.04.2023
By Freda Kreier Living through the COVID-19 pandemic may have matured teens’ brains beyond their years. From online schooling and social isolation to economic hardship and a mounting death count, the last few years have been rough on young people. For teens, the pandemic and its many side effects came during a crucial window in brain development. Now, a small study comparing brain scans of young people from before and after 2020 reveals that the brains of teens who lived through the pandemic look about three years older than expected, scientists say. This research, published December 1 in Biological Psychiatry: Global Open Science, is the first to look at the impact of the pandemic on brain aging. The finding reveals that “the pandemic hasn’t been bad just in terms of mental health for adolescents,” says Ian Gotlib, a clinical neuroscientist at Stanford University. “It seems to have altered their brains as well.” The study can’t link those brain changes to poor mental health during the pandemic. But “we know there is a relationship between adversity and the brain as it tries to adapt to what it’s been given,” says Beatriz Luna, a developmental cognitive neuroscientist at the University of Pittsburgh, who wasn’t involved in the research. “I think this is a very important study that sets the ball rolling for us to look at this.” The roots of this study date back to nearly a decade ago, when Gotlib and his colleagues launched a project in California’s Bay Area to study depression in adolescents. The researchers were collecting information on the mental health of the kids in the study, and did MRI scans of their brains. © Society for Science & the Public 2000–2023.
Keyword: Development of the Brain; Stress
Link ID: 28620 - Posted: 01.04.2023
By Ellen Barry The effect of social media use on children is a fraught area of research, as parents and policymakers try to ascertain the results of a vast experiment already in full swing. Successive studies have added pieces to the puzzle, fleshing out the implications of a nearly constant stream of virtual interactions beginning in childhood. A new study by neuroscientists at the University of North Carolina tries something new, conducting successive brain scans of middle schoolers between the ages of 12 and 15, a period of especially rapid brain development. The researchers found that children who habitually checked their social media feeds at around age 12 showed a distinct trajectory, with their sensitivity to social rewards from peers heightening over time. Teenagers with less engagement in social media followed the opposite path, with a declining interest in social rewards. The study, published on Tuesday in JAMA Pediatrics, is among the first attempts to capture changes to brain function correlated with social media use over a period of years. The study has important limitations, the authors acknowledge. Because adolescence is a period of expanding social relationships, the brain differences could reflect a natural pivot toward peers, which could be driving more frequent social media use. “We can’t make causal claims that social media is changing the brain,” said Eva H. Telzer, an associate professor of psychology and neuroscience at the University of North Carolina, Chapel Hill, and one of the authors of the study. But, she added, “teens who are habitually checking their social media are showing these pretty dramatic changes in the way their brains are responding, which could potentially have long-term consequences well into adulthood, sort of setting the stage for brain development over time.” © 2023 The New York Times Company
Keyword: Development of the Brain; Stress
Link ID: 28619 - Posted: 01.04.2023
By Erin Blakemore Can the human body betray a lie? In the 1920s, inventors designed a device they said could detect deception by monitoring a subject’s breathing and blood pressure. “The Lie Detector,” an American Experience documentary that premieres Tuesday on PBS, delves into the history of the infamous device. In the century after its invention, the lie detector’s popularity skyrocketed. And despite a checkered legacy, polygraph tests are still regularly used by law enforcement and some employers. The documentary tells a story of honest intentions and sinister consequences. John Larson, one of its inventors, was a medical student and law enforcement officer in search of more humane methods of policing and interrogation. He piggybacked off new scientific and psychological concepts to create the device in 1921. The technologies Larson and his co-inventors used were still in their infancy, and the idea that people produce measurable, consistent physical symptoms when they lie was unproved. It still is. Polygraph protocols have evolved, but the devices’ detractors say they measure only anxiety, not truthfulness. And even as major organizations have raised questions about the scientific validity of the tests and federal laws have prohibited most private employers from requiring them, the idea that dishonesty can be measured through physical testing remains widespread. The documentary suggests that the polygraph tests’ popularity was tied more to publicity than accuracy — and over time, Larson’s vision was turned on its head as polygraphs were used to intimidate, incarcerate and interrogate people. With the help of expert interviews and a kaleidoscope of historical footage and imagery, director Rob Rapley tracks the tale of an invention its own creator compared to Frankenstein’s monster.
Keyword: Stress
Link ID: 28618 - Posted: 01.04.2023
By Andrew Jacobs PORTLAND, Ore. — The curriculum was set, the students were enrolled and Oregon officials had signed off on nearly every detail of training for the first class of “magic” mushroom facilitators seeking state certification. But as the four-day session got underway inside a hotel conference room in early December, an important pedagogical tool was missing: the mushrooms themselves. That’s because state officials, two years after Oregon voters narrowly approved the adult use of psilocybin, were still hammering out the regulatory framework for the production and sale of the tawny hallucinogenic fungi. Instead, the students, most of them seasoned mental health professionals, would have to role play with one another using meditation or intensive breathing practices that could lead to altered states of consciousness — the next best thing to the kind of psychedelic trip they would encounter as licensed guides. Not that anyone was complaining. Like many of the two dozen students who paid nearly $10,000 for the course, Jason Wright, 48, a hospital psychiatric nurse in Portland, said he was thrilled to be part of a bold experiment with national implications. “It’s incredible to be on the front lines of something that has the potential to change our relationship with drugs that should never have been criminalized in the first place,” he said. On Jan. 1, Oregon became the first state in the nation to legalize the adult use of psilocybin, a naturally occurring psychedelic that has shown significant promise for treating severe depression, post-traumatic stress disorder and end-of-life anxiety among the terminally ill, among other mental health conditions. © 2023 The New York Times Company
Keyword: Drug Abuse; Depression
Link ID: 28617 - Posted: 01.04.2023
Linda Geddes Scientists have developed a blood test to diagnose Alzheimer’s disease without the need for expensive brain imaging or a painful lumbar puncture, where a sample of cerebrospinal fluid (CSF) is drawn from the lower back. If validated, the test could enable faster diagnosis of the disease, meaning therapies could be initiated earlier. Alzheimer’s is the most common form of dementia, but diagnosis remains challenging – particularly during the earlier stages of the disease. Current guidelines recommend detection of three distinct markers: abnormal accumulations of amyloid and tau proteins, as well as neurodegeneration – the slow and progressive loss of neuronal cells in specified regions of the brain. This can be done through a combination of brain imaging and CSF analysis. However, a lumbar puncture can be painful and people may experience headaches or back pain after the procedure, while brain imaging is expensive and takes a long time to schedule. Prof Thomas Karikari at the University of Pittsburgh, in Pennsylvania, US, who was involved in the study, said: “A lot of patients, even in the US, don’t have access to MRI and PET scanners. Accessibility is a major issue.” The development of a reliable blood test would be an important step forwards. “A blood test is cheaper, safer and easier to administer, and it can improve clinical confidence in diagnosing Alzheimer’s and selecting participants for clinical trial and disease monitoring,” Karikari said. Although current blood tests can accurately detect abnormalities in amyloid and tau proteins, detecting markers of nerve cell damage that are specific to the brain has been harder. Karikari and his colleagues around the world focused on developing an antibody-based blood test that would detect a particular form of tau protein called brain-derived tau, which is specific to Alzheimer’s disease. © 2022 Guardian News & Media Limited
Keyword: Alzheimers
Link ID: 28616 - Posted: 12.28.2022
Miryam Naddaf Stimulating neurons that are linked to alertness helps rats with cochlear implants learn to quickly recognize tunes, researchers have found. The results suggest that activity in a brain region called the locus coeruleus (LC) improves hearing perception in deaf rodents. Researchers say the insights are important for understanding how the brain processes sound, but caution that the approach is a long way from helping people. “It’s like we gave them a cup of coffee,” says Robert Froemke, an otolaryngologist at New York University School of Medicine and a co-author of the study, published in Nature on 21 December. Cochlear implants use electrodes in the inner-ear region called the cochlea, which is damaged in people who have severe or total hearing loss. The device converts acoustic sounds into electrical signals that stimulate the auditory nerve, and the brain learns to process these signals to make sense of the auditory world. Some people with cochlear implants learn to recognize speech within hours of the device being implanted, whereas others can take months or years. “This problem has been around since the dawn of cochlear implants, and it shows no signs of being resolved,” says Gerald Loeb at the University of Southern California in Los Angeles, who helped to develop one of the first cochlear implants. Researchers say that a person’s age, the duration of their hearing loss and the type of processor and electrodes in the implant don’t account for this variation, but suggest that the brain could be the source of the differences. “It’s sort of the black box,” says Daniel Polley, an auditory neuroscientist at Harvard Medical School in Boston, Massachusetts. Most previous research has focused on improving the cochlear device and the implantation procedure. Attempts to improve the brain’s ability to use the device open up a way to improve communication between the ear and the brain, says Polley. © 2022 Springer Nature Limited
Keyword: Hearing
Link ID: 28615 - Posted: 12.28.2022

