Chapter 16.




By Harriet Brown The minute I saw the headline — “Should Patients Be Allowed to Die From Anorexia?” — in The New York Times Magazine last month, my heart sank. Over the last two years, more and more psychiatrists have floated the idea that it’s OK to stop trying to cure some people with anorexia nervosa. People with anorexia develop a deep fear of food and eating, often losing so much weight that they die of starvation or complications. About 20 percent die by suicide. Anorexia is a terrible disease, one that inflicts maximum pain on the person diagnosed and their families and friends. The suffering is continuous and intense, and it gets even worse during recovery. Our family experienced this for eight years. Having to watch my daughter suffer made me realize that anorexia is not a choice or a question of vanity but a tsunami of fear and anxiety that makes one of the most basic human acts, the act of eating, as terrifying as jumping out of a plane without a parachute. It usually takes years of steady, consistent, calorie-dense eating to fully heal the body and brain of a person with anorexia. Without the right kind of support and treatment, it’s nearly impossible. So when psychiatrists suggest that maybe some people can’t recover and should be allowed to stop trying, they’re sidestepping their own responsibility. What they should be saying instead is that current views on and treatments of anorexia are abysmal, and medicine needs to do better. If you’ve never experienced anorexia firsthand, consider yourself blessed. Anorexia has one of the highest mortality rates of any psychiatric illness. People with anorexia are 18 times as likely to die from suicide as their peers. Fewer than half of those with anorexia make a full recovery.

Keyword: Anorexia & Bulimia
Link ID: 29139 - Posted: 02.08.2024

Ian Sample Science editor After a decades-long and largely fruitless hunt for drugs to combat Alzheimer’s disease, an unlikely candidate has raised its head: the erectile dysfunction pill Viagra. Researchers found that men who were prescribed Viagra and similar medications were 18% less likely to develop the most common form of dementia years later than those who went without the drugs. The effect was strongest in men with the most prescriptions, with scientists finding a 44% lower risk of Alzheimer’s in those who received 21 to 50 prescriptions of the erectile dysfunction pills over the course of their study. While the findings are striking, the observational study cannot determine whether Viagra and similar pills protect against Alzheimer’s or whether men who are already less prone to the condition are simply more likely to use the tablets. “We can’t say that the drugs are responsible, but this does give us food for thought on how we move into the future,” said the lead author Dr Ruth Brauer at University College London. “We now need a proper clinical trial to look at the effects of these drugs on Alzheimer’s in women as well as men.” Brauer and her colleagues analysed medical records for more than 260,000 men who were diagnosed with erectile dysfunction but had no evidence of memory or thinking problems. Just over half were taking PDE5 inhibitor drugs, including sildenafil (sold as Viagra), avanafil, vardenafil and tadalafil. The men were followed for an average of five years to record any new cases of Alzheimer’s. © 2024 Guardian News & Media Limited

Keyword: Alzheimers
Link ID: 29138 - Posted: 02.08.2024

By Lisa Sanders, M.D. “We were thinking about going bowling with the kids tomorrow,” the woman told her 43-year-old brother as they settled into their accustomed spots in the living room of their mother’s home in Chicago. It was late — nearly midnight — and he had arrived from Michigan to spend the days between Christmas and New Year’s with this part of his family. She and her husband and her brother grew up together and spent many late nights laughing and talking. She knew her brother was passionate about bowling. He had spent almost every day in his local alley two summers ago. So she was taken by surprise when he answered, “I can’t do that anymore.” Certainly, her brother had had a tough year. It seemed to start with his terrible heartburn. For most of his life, he had what he described as run-of-the-mill heartburn, usually triggered by eating late at night, and he would have to take a couple of antacid tablets. But that year his heartburn went ballistic. His mouth always tasted like metal. And the reflux of food back up the esophagus would get so bad that it would make him vomit. Nothing seemed to help. He quit drinking coffee. Quit drinking alcohol. Stopped eating spicy foods. He told his doctor, who started him on a medication known as a proton pump inhibitor (P.P.I.) to reduce the acid or excess protons his stomach made. That pill provided relief from the burning pain. But he still had the metallic taste in his mouth, still felt sick after eating. He still vomited several times a week. When he discovered that he wouldn’t throw up when he drank smoothies, he almost completely gave up solid foods. When he was still feeling awful after weeks on the P.P.I., his gastroenterologist used a tiny camera to take a look at his esophagus. His stomach looked fine, but the region where the esophagus entered the stomach was a mess. Normally the swallowing tube ends with a tight sphincter that stays closed to protect delicate tissue from the harsh acid of the stomach. 
It opens when swallowing, to let the food pass. But his swallowing tube was wide open and the tissue around the sphincter was red and swollen. © 2024 The New York Times Company

Keyword: Hearing
Link ID: 29137 - Posted: 02.08.2024

Nicholas J. Kelley In the middle of 2023, a study conducted by the HuthLab at the University of Texas sent shockwaves through the realms of neuroscience and technology. For the first time, the thoughts and impressions of people unable to communicate with the outside world were translated into continuous natural language, using a combination of artificial intelligence (AI) and brain imaging technology. This is the closest science has yet come to reading someone’s mind. While advances in neuroimaging over the past two decades have enabled non-responsive and minimally conscious patients to control a computer cursor with their brain, HuthLab’s research is a significant step closer towards accessing people’s actual thoughts. As Alexander Huth, the neuroscientist who co-led the research, explained to the New York Times, the team combined AI and brain-scanning technology to create a non-invasive brain decoder capable of reconstructing continuous natural language among people otherwise unable to communicate with the outside world. The development of such technology – and the parallel development of brain-controlled motor prosthetics that enable paralysed patients to achieve some renewed mobility – holds tremendous prospects for people suffering from neurological diseases including locked-in syndrome and quadriplegia. In the longer term, this could lead to wider public applications such as Fitbit-style health monitors for the brain and brain-controlled smartphones. On January 29, Elon Musk announced that his Neuralink tech startup had implanted a chip in a human brain for the first time. He had previously told followers that Neuralink’s first product, Telepathy, would one day allow people to control their phones or computers “just by thinking”. © 2010–2024, The Conversation US, Inc.

Keyword: Brain imaging
Link ID: 29136 - Posted: 02.08.2024

By Nora Bradford Whenever you’re actively performing a task — say, lifting weights at the gym or taking a hard exam — the parts of your brain required to carry it out become “active” when neurons step up their electrical activity. But is your brain active even when you’re zoning out on the couch? The answer, researchers have found, is yes. Over the past two decades they’ve defined what’s known as the default mode network, a collection of seemingly unrelated areas of the brain that activate when you’re not doing much at all. Its discovery has offered insights into how the brain functions outside of well-defined tasks and has also prompted research into the role of brain networks — not just brain regions — in managing our internal experience. In the late 20th century, neuroscientists began using new techniques to take images of people’s brains as they performed tasks in scanning machines. As expected, activity in certain brain areas increased during tasks — and to the researchers’ surprise, activity in other brain areas declined simultaneously. The neuroscientists were intrigued that during a wide variety of tasks, the very same brain areas consistently dialed back their activity. It was as if these areas had been active when the person wasn’t doing anything, and then turned off when the mind had to concentrate on something external. Researchers called these areas “task negative.” When they were first identified, Marcus Raichle, a neurologist at the Washington University School of Medicine in St. Louis, suspected that these task-negative areas play an important role in the resting mind. “This raised the question of ‘What’s baseline brain activity?’” Raichle recalled. In an experiment, he asked people in scanners to close their eyes and simply let their minds wander while he measured their brain activity. All Rights Reserved © 2024

Keyword: Attention; Consciousness
Link ID: 29135 - Posted: 02.06.2024

By David Marchese Our memories form the bedrock of who we are. Those recollections, in turn, are built on one very simple assumption: This happened. But things are not quite so simple. “We update our memories through the act of remembering,” says Charan Ranganath, a professor of psychology and neuroscience at the University of California, Davis, and the author of the illuminating new book “Why We Remember.” “So it creates all these weird biases and infiltrates our decision making. It affects our sense of who we are.” Rather than being photo-accurate repositories of past experience, Ranganath argues, our memories function more like active interpreters, working to help us navigate the present and future. The implication is that who we are, and the memories we draw on to determine that, are far less fixed than you might think. “Our identities,” Ranganath says, “are built on shifting sand.” What is the most common misconception about memory? People believe that memory should be effortless, but their expectations for how much they should remember are totally out of whack with how much they’re capable of remembering. Another misconception is that memory is supposed to be an archive of the past. We expect that we should be able to replay the past like a movie in our heads. The problem with that assumption is that we don’t replay the past as it happened; we do it through a lens of interpretation and imagination. How much are we capable of remembering, from a scientific standpoint? It’s exceptionally hard to answer the question of how much we can remember. What I’ll say is that we can remember an extraordinary amount of detail that would make you feel at times as if you have a photographic memory. We’re capable of these extraordinary feats. I would argue that we’re all everyday-memory experts, because we have this exceptional semantic memory (the term for the memory of facts and knowledge about the world), which is the scaffold for episodic memory. 
I know it sounds squirmy to say, “Well, I can’t answer the question of how much we remember,” but I don’t want readers to walk away thinking memory is all made up. © 2024 The New York Times Company

Keyword: Learning & Memory
Link ID: 29134 - Posted: 02.06.2024

By Shruti Ravindran When preparing to become a butterfly, the Eastern Black Swallowtail caterpillar wraps its bright striped body within a leaf. This leaf is its sanctuary, where it will weave its chrysalis. So when the leaf is disturbed by a would-be predator—a bird or insect—the caterpillar stirs into motion, briefly darting out a pair of fleshy, smelly horns. To humans, these horns might appear yellow—a color known to attract birds and many insects—but from a predator’s-eye-view, they appear a livid, almost neon violet, a color of warning and poison for some birds and insects. “It’s like a jump scare,” says Daniel Hanley, an assistant professor of biology at George Mason University. “Startle them enough, and all you need is a second to get away.” Hanley is part of a team that has developed a new technique to depict on video how the natural world looks to non-human species. The method is meant to capture how animals use color in unique—and often fleeting—behaviors like the caterpillar’s anti-predator display. Most animals, birds, and insects possess their own ways of seeing, shaped by the light receptors in their eyes. Human retinas, for example, are sensitive to three wavelengths of light—blue, green, and red—which enables us to see approximately 1 million different hues in our environment. By contrast, many mammals, including dogs, cats, and cows, sense only two wavelengths. But birds, fish, amphibians, and some insects and reptiles typically can sense four—including ultraviolet light. Their worlds are drenched in a kaleidoscope of color—they can often see 100 times as many shades as humans do. Hanley’s team, which includes not just biologists but multiple mathematicians, a physicist, an engineer, and a filmmaker, claims that their method can translate the colors and gradations of light perceived by hundreds of animals to a range of frequencies that human eyes can comprehend with an accuracy of roughly 90 percent. 
That is, they can simulate the way a scene in a natural environment might look to a particular species of animal, what shifting shapes and objects might stand out most. The team uses commercially available cameras to record video in four color channels—blue, green, red, and ultraviolet—and then applies open source software to translate the picture according to the mix of light receptor sensitivities a given animal may have. © 2024 NautilusNext Inc.
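The translation step described above can be thought of as a weighted remapping: each recorded camera channel contributes to each of the animal's receptor responses according to that receptor's sensitivity, and the estimated responses are then mapped to a false-color image humans can see. A minimal sketch of that idea in Python follows; the sensitivity weights here are invented purely for illustration and are not the team's calibrated values.

```python
import numpy as np

# One pixel recorded in four channels: ultraviolet, blue, green, red.
pixel_ubgr = np.array([0.8, 0.1, 0.3, 0.2])

# Hypothetical receptor-sensitivity matrix for a tetrachromatic viewer.
# Each row weights the four recorded channels into one receptor response.
# Real pipelines use measured spectral sensitivities; these numbers are
# made up to show only the shape of the computation.
sensitivity = np.array([
    [0.9, 0.1, 0.0, 0.0],   # UV-sensitive receptor
    [0.1, 0.8, 0.1, 0.0],   # short-wavelength receptor
    [0.0, 0.1, 0.8, 0.1],   # medium-wavelength receptor
    [0.0, 0.0, 0.2, 0.8],   # long-wavelength receptor
])

# Estimated receptor responses of the animal for this pixel.
receptor_response = sensitivity @ pixel_ubgr

# Collapse the four responses to three display channels (false color),
# so UV contrast becomes visible to trichromatic human eyes.
to_display = np.array([
    [0.7, 0.3, 0.0, 0.0],   # display blue carries UV + short wavelengths
    [0.0, 0.0, 1.0, 0.0],   # display green carries medium wavelengths
    [0.0, 0.0, 0.0, 1.0],   # display red carries long wavelengths
])
display_rgb = to_display @ receptor_response
print(display_rgb)
```

Applied to every pixel of every frame, a transform of this kind is enough to turn four-channel footage into a species-specific false-color video, which is consistent with the article's description of open source software combining channel data with receptor sensitivities.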

Keyword: Vision; Evolution
Link ID: 29133 - Posted: 02.06.2024

By Ashley Juavinett In the 2010 award-winning film “Inception,” Leonardo DiCaprio’s character and others run around multiple layers of someone’s consciousness, trying to implant an idea in the person’s mind. If you can plant something deep enough, the film suggests, you can make them believe it is their own idea. The film was billed as science fiction, but three years later, in 2013, researchers actually did this — in a mouse, at least. The work focused on the hippocampus, along with its closely interconnected structures, long recognized by scientists to hold our dearest memories. If you damage significant portions of just one region of your hippocampus, the dentate gyrus, you’ll lose the ability to form new memories. How these memories are stored, however, is still up for debate. One early but persistent idea posits that enduring changes in our neural circuitry, or “engrams,” may represent the physical traces of specific memories. An engram is sometimes thought of as a group of cells, along with their synaptic weights and connections throughout the brain. In sum, the engram is what DiCaprio’s character would have had to discreetly manipulate in his target. In 2012, a team in Susumu Tonegawa’s lab at the Massachusetts Institute of Technology (MIT) showed that you could mark the cells of a real memory engram and reactivate them later. Taking that work one step further, Steve Ramirez, Xu Liu and others in Tonegawa’s lab demonstrated the following year that you can implant a memory of something that never even happened. In doing so, they turned science fiction into reality, one tiny foot shock at a time. Published in Science, Ramirez and Liu’s study is a breath of fresh air, scientifically speaking. 
The abstract starts with one of the shortest sentences you’ll ever find in a scientific manuscript: “Memories can be unreliable.” The entire paper is extremely readable, and there is no shortage of related papers and review articles that you could give your students to read for additional context. © 2024 Simons Foundation

Keyword: Learning & Memory
Link ID: 29131 - Posted: 02.06.2024

April Smith Did you know that anorexia is the most lethal mental health condition? One person dies from an eating disorder every hour in the U.S. Many of these deaths are not from health consequences related to starvation, but from suicide. Up to 1 in 5 women and 1 in 7 men in the U.S. will develop an eating disorder by age 40, and 1 in 2 people with an eating disorder will think about ending their life. About 1 in 4 people with anorexia nervosa or bulimia nervosa will attempt to kill themselves, and those with anorexia have a risk of death by suicide 31 times higher than peers without the disorder. In fact, nonsuicidal self-injury, suicidal ideation, suicide attempts and suicide deaths are all more prevalent among those with any type of eating disorder compared to those without an eating disorder. Why might that be? I am a clinical psychologist who studies eating disorders and self-harm, and I have spent the past 15 years researching this question. We still don’t have the answer. But new work on perception of the internal state of the body points to some promising possibilities for treatment. And what we’re learning could help anyone improve their relationship with their body. To understand why people with eating disorders are at risk of dying by suicide, I first want to ask you to do a little thought exercise. I’d like you to really think about your body: Think about your hair, face, arms, stomach, chest and legs. What words and feelings come to mind? Are there any things you wish you could change? Feel free to close your eyes and try this out. © 2010–2024, The Conversation US, Inc.

Keyword: Anorexia & Bulimia
Link ID: 29127 - Posted: 02.03.2024

By Laura Sanders Under extremely rare circumstances, it appears that Alzheimer’s disease can be transmitted between people. Five people who received contaminated injections of a growth hormone as children went on to develop Alzheimer’s unusually early, researchers report January 29 in Nature Medicine. The findings represent “the first time iatrogenic Alzheimer’s disease has been described,” neurologist John Collinge said January 25 in a news briefing, referring to a disease caused by a medical procedure. That sounds alarming, but researchers are quick to emphasize that Alzheimer’s disease is not contagious in everyday life, including caretaking and most medical settings. “We are not suggesting for a moment that you can catch Alzheimer’s disease,” said Collinge, of the University College London’s Institute of Prion Diseases. “This is not transmissible in the sense of a viral or bacterial infection.” The reassurance is echoed by Carlo Condello, a neurobiologist at the University of California, San Francisco who wasn’t involved in the study. “In no way do we believe sporadic Alzheimer’s disease is a communicable disease,” he says. “Only under incredibly artificial, now out-of-date, medical practices is this appearing. It’s no longer an issue.” © Society for Science & the Public 2000–202

Keyword: Alzheimers
Link ID: 29126 - Posted: 01.31.2024

By Laurie McGinley ABINGTON, Pa. — Wrapped in a purple blanket, Robert Williford settles into a quiet corner of a bustling neurology clinic, an IV line delivering a colorless liquid into his left arm. The 67-year-old, who has early Alzheimer’s disease, is getting his initial dose of Leqembi. The drug is the first to clearly slow the fatal neurodegenerative ailment that afflicts 6.7 million older Americans, though the benefits may be modest. The retired social worker, one of the first African Americans to receive the treatment, hopes it will ease his forgetfulness so “I drive my wife less crazy.” But as Williford and his doctors embark on this treatment, they are doing so with scant scientific data about how the medication might work in people of color. In the pivotal clinical trial for the drug, Black patients globally accounted for only 47 of the 1,795 participants — about 2.6 percent. For U.S. trial sites, the percentage was 4.5 percent. The proportion of Black enrollees was similarly low for Eli Lilly’s Alzheimer’s drug, called donanemab, expected to be cleared by the Food and Drug Administration in coming months. Black people make up more than 13 percent of the U.S. population. The paltry data for the new class of groundbreaking drugs, which strip a sticky substance called amyloid beta from the brain, has ignited an intense debate among researchers and clinicians. Will the medications — the first glimmer of hope after years of failure — be as beneficial for African Americans as for White patients? “Are these drugs going to work in non-Whites? And particularly in Blacks? We just don’t have enough data, I don’t think,” said Suzanne E. Schindler, a clinical neurologist and dementia specialist at Washington University in St. Louis.

Keyword: Alzheimers
Link ID: 29122 - Posted: 01.31.2024

Ashley Montgomery In December 1963, a military family named the Gardners had just moved to San Diego, Calif. The oldest son, 17-year-old Randy Gardner, was a self-proclaimed "science nerd." His family had moved every two years, and in every town they lived in, Gardner made sure to enter the science fair. He was determined to make a splash in the 10th Annual Greater San Diego Science Fair. When researching potential topics, Gardner heard about a radio deejay in Honolulu, Hawaii, who avoided sleep for 260 hours. So Gardner and his two friends, Bruce McAllister and Joe Marciano, set out to beat this record. Randy Gardner spoke to NPR's Hidden Brain host Shankar Vedantam in 2017. When asked about his interest in breaking a sleep deprivation record, Gardner said, "I'm a very determined person, and when I get things under my craw, I can't let it go until there's some kind of a solution." Of his scientific trio, Randy lost the coin toss: He would be the test subject who would deprive himself of sleep. His two friends would take turns monitoring his mental and physical reaction times as well as making sure Gardner didn't fall asleep. The experiment began during their school's winter break on Dec. 28, 1963. Three days into sleeplessness, Gardner said, he experienced nausea and had trouble remembering things. Speaking to NPR in 2017, Gardner said: "I was really nauseous. And this went on for just about the entire rest of the experiment. And it just kept going downhill. I mean, it was crazy where you couldn't remember things. It was almost like an early Alzheimer's thing brought on by lack of sleep." But Gardner stayed awake. The experiment gained the attention of local reporters, which, in Gardner's opinion, was good for the experiment "because that kept me awake," he said. "You know, you're dealing with these people and their cameras and their questions." The news made its way to Stanford, Calif., where a young Stanford sleep researcher named William C. 
Dement was so intrigued that he drove to San Diego to meet Gardner. © 2024 npr

Keyword: Sleep
Link ID: 29120 - Posted: 01.31.2024

By Gina Kolata Aissam Dam, an 11-year-old boy, grew up in a world of profound silence. He was born deaf and had never heard anything. While living in a poor community in Morocco, he expressed himself with a sign language he invented and had no schooling. Last year, after moving to Spain, his family took him to a hearing specialist, who made a surprising suggestion: Aissam might be eligible for a clinical trial using gene therapy. On Oct. 4, Aissam was treated at the Children’s Hospital of Philadelphia, becoming the first person to get gene therapy in the United States for congenital deafness. The goal was to provide him with hearing, but the researchers had no idea if the treatment would work or, if it did, how much he would hear. The treatment was a success, introducing a child who had known nothing of sound to a new world. “There’s no sound I don’t like,” Aissam said, with the help of interpreters during an interview last week. “They’re all good.” While hundreds of millions of people in the world live with hearing loss that is defined as disabling, Aissam is among those whose deafness is congenital. His is an extremely rare form, caused by a mutation in a single gene, otoferlin. Otoferlin deafness affects about 200,000 people worldwide. The goal of the gene therapy is to replace the mutated otoferlin gene in patients’ ears with a functional gene. Although it will take years for doctors to sign up many more patients — and younger ones — to further test the therapy, researchers said that success for patients like Aissam could lead to gene therapies that target other forms of congenital deafness. © 2024 The New York Times Company

Keyword: Hearing
Link ID: 29119 - Posted: 01.27.2024

By Erin Garcia de Jesús Bruce the kea is missing his upper beak, giving the olive green parrot a look of perpetual surprise. But scientists are the astonished ones. The typical kea (Nestor notabilis) sports a long, sharp beak, perfect for digging insects out of rotten logs or ripping roots from the ground in New Zealand’s alpine forests. Bruce has been missing the upper part of his beak since at least 2012, when he was rescued as a fledgling and sent to live at the Willowbank Wildlife Reserve in Christchurch. The defect prevents Bruce from foraging on his own. Keeping his feathers clean should also be an impossible task. In 2021, when comparative psychologist Amalia Bastos arrived at the reserve with colleagues to study keas, the zookeepers reported something odd: Bruce had seemingly figured out how to use small stones to preen. “We were like, ‘Well that’s weird,’ ” says Bastos, of Johns Hopkins University. Over nine days, the team kept a close eye on Bruce, quickly taking videos if he started cleaning his feathers. Bruce, it turned out, had indeed invented his own work-around to preen, the researchers reported in 2021 in Scientific Reports. First, Bruce selects the proper tool, rolling pebbles around in his mouth with his tongue and spitting out candidates until he finds one that he likes, usually something pointy. Next, he holds the pebble between his tongue and lower beak. Then, he picks through his feathers. “It’s crazy because the behavior was not there from the wild,” Bastos says. When Bruce arrived at Willowbank, he was too young to have learned how to preen. And no other bird in the aviary uses pebbles in this way. “It seems like he just innovated this tool use for himself,” she says. © Society for Science & the Public 2000–2024.

Keyword: Intelligence; Evolution
Link ID: 29117 - Posted: 01.27.2024

By Shaena Montanari Around 2012, Jennifer Groh and her colleagues began a series of experiments investigating the effect of eye movements on auditory signals in the brain. It wasn’t until years later that they noticed something curious in their data: In both an animal model and in people, eye movements coincide with ripples across the eardrum. The finding, published in 2018, seemed “weird,” says Groh, professor of psychology and neuroscience at Duke University — and ripe for further investigation. “You can go your whole career never studying something that is anywhere near as beautifully regular and reproducible,” she says. “Signals that are really robust are unlikely to be just random.” A new experiment from Groh’s lab has now taken her observation a step further and suggests the faint sounds — dubbed “eye movement-related eardrum oscillations,” or EMREOs for short — serve to link two sensory systems. The eardrum oscillations contain “clean and precise” information about the direction of eye movements and, according to Groh’s working hypothesis, help animals connect sound with a visual scene. “The basic problem is that the way we localize visual information and the way we localize sounds leads to two different reference frames,” Groh says. EMREOs, she adds, play a part in relating those frames. The brain, and not the eyes, must generate the oscillations, Groh and her colleagues say, because they happen at the same time as eye movements, or sometimes even before. To learn more about the oscillations, the team placed small microphones in the ears of 10 volunteers, who then performed visual tasks while the researchers tracked their eye movements. The group published their results in Proceedings of the National Academy of Sciences in November. © 2024 Simons Foundation

Keyword: Hearing
Link ID: 29115 - Posted: 01.27.2024

By Kenna Hughes-Castleberry Crows, ravens and other birds in the Corvidae family have a head for numbers. Not only can they make quantity estimations (as can many other animal species), but they can learn to associate number values with abstract symbols, such as “3.” The biological basis of this latter talent stems from specific number-associated neurons in a brain region called the nidopallium caudolaterale (NCL), a new study shows. The region also supports long-term memory, goal-oriented thinking and number processing. Discovery of the specialized neurons in the NCL “helps us understand the origins of our counting and math capabilities,” says study investigator Andreas Nieder, professor of animal physiology at the University of Tübingen. Until now, number-associated neurons — cells that fire especially frequently in response to an animal seeing a specific number — had been found only in the prefrontal cortex of primates, which shared a common ancestor with corvids some 300 million years ago. The new findings imply that the ability to form number-sign associations evolved independently and convergently in the two lineages. “Studying whether animals have similar concepts or represent numerosity in ways that are similar to what humans do helps us establish when in our evolutionary history these abilities may have emerged and whether these abilities emerge only in species with particular ecologies or social structures,” says Jennifer Vonk, professor of psychology at Oakland University, who was not involved in the new study. Corvids are considered especially intelligent birds, with previous studies showing that they can create and use tools, and may even experience self-recognition. Nieder has studied corvids’ and other animals’ “number sense,” or the ability to understand numerical values, for more than a decade. His previous work revealed specialized neurons in the NCL that recognize and respond to different quantities of items — including the number zero. 
But he tested the neurons only with simple pictures and signs that have inherent meaning for the crows, such as size. © 2023 Simons Foundation.

Keyword: Intelligence; Evolution
Link ID: 29111 - Posted: 01.23.2024

By Ewen Callaway Researchers have used the protein-structure-prediction tool AlphaFold to identify1 hundreds of thousands of potential new psychedelic molecules — which could help to develop new kinds of antidepressant. The research shows, for the first time, that AlphaFold predictions — available at the touch of a button — can be just as useful for drug discovery as experimentally derived protein structures, which can take months, or even years, to determine. The development is a boost for AlphaFold, the artificial-intelligence (AI) tool developed by DeepMind in London that has been a game changer in biology. The public AlphaFold database holds structure predictions for nearly every known protein. Protein structures of molecules implicated in disease are used in the pharmaceutical industry to identify and improve promising medicines. But some scientists had been starting to doubt whether AlphaFold’s predictions could stand in for gold standard experimental models in the hunt for new drugs. “AlphaFold is an absolute revolution. If we have a good structure, we should be able to use it for drug design,” says Jens Carlsson, a computational chemist at the University of Uppsala in Sweden. Efforts to apply AlphaFold to finding new drugs have been met with considerable scepticism, says Brian Shoichet, a pharmaceutical chemist at the University of California, San Francisco. “There is a lot of hype. Whenever anybody says ‘such and such is going to revolutionize drug discovery’, it warrants some scepticism.” Shoichet counts more than ten studies that have found AlphaFold’s predictions to be less useful than protein structures obtained with experimental methods, such as X-ray crystallography, when used to identify potential drugs in a modelling method called protein–ligand docking. © 2024 Springer Nature Limited

Keyword: Drug Abuse
Link ID: 29110 - Posted: 01.23.2024

By Evelyn Lake Functional MRI (fMRI), though expensive, has many properties of an ideal clinical tool. It’s safe and noninvasive. It is widely available in some countries, and increasingly so on a global scale. Its “blood oxygen level dependent,” or BOLD, signal is altered in people with almost any neurological condition and is rich enough to contain information specific to each person, offering the potential for a personalized approach to medical care across a wide spectrum of neurological conditions. But despite enormous interest and investment in fMRI — and its wide use in basic neuroscience research — it still lacks broad clinical utility; it is mainly employed for surgical planning. For fMRI to inform a wider range of clinical decision-making, we need better ways of deciphering what underlying changes in the brain drive changes to the BOLD signal. If someone with Alzheimer’s disease has an increase in functional connectivity (a measure of synchrony between brain regions), for example, does this indicate that synapses are being lost? Or does it suggest that the brain is forming compensatory pathways to help the person avoid further cognitive decline? Or something else entirely? Depending on the answer, one can imagine different courses of treatment. Put simply, we cannot extract sufficient information from fMRI and patient outcomes alone to determine which scenarios are playing out and therefore what we should do when we observe changes in our fMRI readouts. To better understand what fMRI actually shows, we need to use complementary methodologies, such as the emerging optical imaging tool of wide-field fluorescence calcium imaging. Combining modalities presents significant technical challenges but offers the potential for deeper insights: observing the BOLD signal alongside other signals that report more directly on what is occurring in brain tissue. 
Using these more direct measurements instead of fMRI in clinical practice is not an option: they require physical or optical access to the brain, making them too invasive and unethical to use in people. © 2023 Simons Foundation.
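The functional connectivity mentioned above is commonly quantified as the Pearson correlation between the BOLD time series of pairs of brain regions. A minimal sketch of that computation follows; the simulated signal, region count, and scan length are illustrative assumptions, not details from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 4, 200  # hypothetical parcellation size and scan length
bold = rng.standard_normal((n_regions, n_timepoints))  # stand-in for BOLD time series
bold[1] += 0.8 * bold[0]          # make regions 0 and 1 partially synchronous

# Functional connectivity matrix: pairwise correlation of regional time series.
fc = np.corrcoef(bold)
print(fc.shape)                   # (4, 4)
# The synchronous pair (0, 1) shows a stronger correlation than unrelated pairs,
# which is the kind of change an fMRI connectivity analysis would detect.
```

In real analyses the time series come from averaging voxel signals within atlas-defined regions, and the resulting matrix is what changes in conditions such as Alzheimer’s disease, per the entry above.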

Keyword: Brain imaging
Link ID: 29109 - Posted: 01.23.2024

By Mariana Lenharo Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything. The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration. Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between. The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects. The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room. Their struggle to get the collaboration off the ground is mirrored in wider fractures in the field. 
© 2024 Springer Nature Limited

Keyword: Consciousness
Link ID: 29106 - Posted: 01.18.2024

Nicola Davis Science correspondent Breaking up is hard to do, but it seems the brain may have a mechanism to help get over an ex. Researchers studying prairie voles say the rodents, which form monogamous relationships, experience a burst of the pleasure hormone dopamine in their brain when seeking and reuniting with their partner. However, after being separated for a lengthy period, they no longer experience such a surge. “We tend to think of it as ‘getting over a breakup’ because these voles can actually form a new bond after this change in dopamine dynamics – something they can’t do while the bond is still intact,” said Dr Zoe Donaldson, a behavioural neuroscientist at CU Boulder and senior author of the work. Writing in the journal Current Biology, the team describe how they carried out a series of experiments in which voles had to press levers to access either their mate or an unknown vole located on the other side of a see-through door. The team found the voles had a greater release of dopamine in their brain when pressing levers and opening doors to meet their mate than when meeting the novel vole. They also huddled more with their mate on meeting, and experienced a greater rise in dopamine while doing so. Donaldson said: “We think the difference is tied to knowing you are about to reunite with a partner and reflects that it is more rewarding to reunite with a partner than go hang out with a vole they don’t know.” However, these differences in dopamine levels were no longer present after they separated pairs of voles for four weeks – a considerable period in the lifetime of the rodents. Differences in huddling behaviour also decreased. The researchers say the findings suggest a devaluation of the bond between pairs of voles, rather than that they have forgotten each other. © 2024 Guardian News & Media Limited

Keyword: Sexual Behavior; Evolution
Link ID: 29104 - Posted: 01.18.2024