Most Recent Links
By Gina Kolata Eileen Isotalo was always able to lose weight, but always gained it back. Now 66, she went on her first diet, with Weight Watchers, at age 14. She went on to try one diet after another and bought so many books on weight loss that she thinks she has more than the public library. In desperation, she finally went to a weight management clinic at the University of Michigan. She had sleep apnea and aching knees, but could not curb her appetite. “It’s just this drive to eat,” said Ms. Isotalo, a retired interior design coordinator. “It’s almost like this panic feeling when you start craving food.” “My mental shame was profound,” she said. Now, though, since she started taking Wegovy, one of a new class of drugs for obesity that was prescribed by her doctor at the clinic, those cravings are gone. She has lost 50 pounds and jettisoned the dark clothes she wore to hide her body. Her obesity-related medical problems have vanished along with much of the stigma that caused her to retreat from family and friends. But like others at the clinic, she still struggles with the fear others will judge her for receiving injections to treat her obesity rather than finding the willpower to lose weight and keep it off. Yet the drug, she said, “changed my life.” Wegovy and drugs like it make this “a very exciting time in the field,” said Dr. Susan Yanovski, co-director of the office of obesity research at the National Institute of Diabetes and Digestive and Kidney Diseases. About 100 million Americans, or 42 percent of the adult population, have obesity, according to the Centers for Disease Control and Prevention. For the first time, people with obesity, who faced a lifetime of medical jeopardy, can escape the ruthless trap of fruitless dieting and see their obesity-related health problems mitigated, along with the weight loss. But there is still the taint. © 2023 The New York Times Company
Keyword: Obesity
Link ID: 28821 - Posted: 06.14.2023
By Jordan Kinard Long the fixation of religions, philosophy and literature the world over, the conscious experience of dying has recently received increasingly significant attention from science. This comes as medical advances extend the ability to keep the body alive, steadily prying open a window into the ultimate locked room: the last living moments of a human mind. “Around 1959 humans discovered a method to restart the heart in people who would have died, and we called this CPR,” says Sam Parnia, a critical care physician at NYU Langone Health. Parnia has studied people’s recollections after being revived from cardiac arrest—phenomena that he refers to as “recalled experiences surrounding death.” Before CPR techniques were developed, cardiac arrest was basically synonymous with death. But now doctors can revive some people up to 20 minutes or more after their heart has stopped beating. Furthermore, Parnia says, many brain cells remain somewhat intact for hours to days postmortem—challenging our notions of a rigid boundary between life and death. Advancements in medical technology and neuroscience, as well as shifts in researchers’ perspectives, are revolutionizing our understanding of the dying process. Research over the past decade has demonstrated a surge in brain activity in human and animal subjects undergoing cardiac arrest. Meanwhile large surveys are documenting the seemingly inexplicable periods of lucidity that hospice workers and grieving families often report witnessing in people with dementia who are dying. Poet Dylan Thomas famously admonished his readers, “Do not go gentle into that good night. Rage, rage against the dying of the light.” But as more resources are devoted to the study of death, it is becoming increasingly clear that dying is not the simple dimming of one’s internal light of awareness but rather an incredibly active process in the brain. © 2023 Scientific American,
Keyword: Attention; Development of the Brain
Link ID: 28820 - Posted: 06.14.2023
Ian Sample Science editor The sight of their dead comrades is enough to drive fruit flies to an early grave, according to researchers, who suspect the creatures keel over after developing the fly equivalent of depression. For a species that spends much of its life feasting on decayed matter, the insects appear to be particularly sensitive to their own dead. Witnessing an abundance of fruit fly carcasses speeds up the insects’ ageing process, scientists found, cutting their lives short by nearly 30%. While the researchers are cautious about extrapolating from 3mm-long flies to rather larger humans that can live 400 times longer, they speculate that the insights might prove useful for people who are routinely surrounded by death, such as combat troops and healthcare workers. “Could motivational therapy or pharmacologic intervention in reward systems, much like what is done for addiction, slow ageing?” the authors ask in Plos Biology. The possibility could be tested in humans today, they added, using drugs that are already approved. Researchers led by Christi Gendron and Scott Pletcher at the University of Michigan raised fruit flies in small containers filled with food. While some of the containers held only living flies and tasty nutrients, others were dotted with freshly dead fruit flies as well, to see what impact they had on the feeding insects. When fruit flies were raised among dead ones, they tended to die several weeks earlier than those raised without being surrounded by carcasses. Those exposed to death appeared to age faster, losing stored fat and becoming less resilient to starvation. © 2023 Guardian News & Media Limited
Keyword: Stress
Link ID: 28819 - Posted: 06.14.2023
By Claudia Lopez Lloreda Of all of COVID-19’s symptoms, one of the most troubling is “brain fog.” Victims report headaches, trouble concentrating, and forgetfulness. Now, researchers have shown that SARS-CoV-2 can cause brain cells to fuse together, disrupting their communication. Although the study was only done in cells in a lab dish, some scientists say it could help explain one of the pandemic’s most confounding symptoms. “This is a first important step,” says Stefan Lichtenthaler, a biochemist at the German Center for Neurodegenerative Diseases who was not involved with the work. Researchers already knew that SARS-CoV-2 could cause certain cells to fuse together. The lungs of patients who die from severe COVID-19 are often riddled with large, multicellular structures called syncytia, which scientists believe may contribute to the respiratory symptoms of the disease. Like other viruses, SARS-CoV-2 may incite cells to fuse to help it spread across an organ without having to infect new cells. To see whether such cell fusion might happen in brain cells, Massimo Hilliard, a neuroscientist at the University of Queensland, and his colleagues first genetically engineered two populations of mouse neurons: One expressed a red fluorescent molecule, and the other a green fluorescent molecule. If the two fused in a lab dish, they would show up as bright yellow under the microscope. That’s just what the researchers saw when they added SARS-CoV-2 to a dish containing both types of cells, they report today in Science Advances. The same fusion happened in human brain organoids, so-called minibrains that are created from stem cells. The key appears to be angiotensin-converting enzyme 2 (ACE2), the protein expressed on the surface of mammalian cells that SARS-CoV-2 is known to target. The virus uses a surface protein called spike to bind to ACE2, triggering the virus to fuse to a cell and release its genetic material inside. 
The spike protein displayed on the surface of infected cells also appears to engage ACE2 on neighboring cells, triggering fusion between them. When the team engineered neurons to express the spike protein, only cells that also expressed ACE2 were able to fuse with each other. The findings parallel previous work in lung cells: the ACE2 receptor seems to be critical in mediating their fusion during SARS-CoV-2 infection.
Keyword: Neuroimmunology; Attention
Link ID: 28818 - Posted: 06.14.2023
By Marlowe Starling When a bird sings, you may think you’re hearing music. But are the melodies it’s making really music? Or is what we’re hearing merely a string of lilting calls that appeals to the human ear? Birdsong has inspired musicians from Bob Marley to Mozart and perhaps as far back as the first hunter-gatherers who banged out a beat. And a growing body of research is showing that the affinity human musicians feel toward birdsong has a strong scientific basis. Scientists are understanding more about avian species’ ability to learn, interpret and produce songs much like our own. Just like humans, birds learn songs from each other and practice to perfect them. And just as human speech is distinct from human music, bird calls, which serve as warnings and other forms of direct communication, differ from birdsong. While researchers are still debating the functions of birdsong, studies show that it is structurally similar to our own tunes. So, are birds making music? That depends on what you mean. “I’m not sure we can or want to define music,” said Ofer Tchernichovski, a zoologist and psychologist at the City University of New York who studies birdsong. Where you draw the line between music and mere noise is arbitrary, said Emily Doolittle, a zoomusicologist and composer at the Royal Conservatoire of Scotland. The difference between a human baby’s babbling versus a toddler’s humming might seem more distinct than that of a hatchling’s cry for food and a maturing bird’s practicing of a melody, she added. Wherever we draw the line, birdsong and human song share striking similarities. How birds build songs: Existing research points to one main conclusion: Birdsong is structured like human music. Songbirds change their tempo (speed), pitch (how high or low they sing) and timbre (tone) to sing tunes that resemble our own melodies. © 2023 The New York Times Company
Keyword: Animal Communication; Language
Link ID: 28817 - Posted: 06.07.2023
Kari Paul and Maanvi Singh Elon Musk’s brain-implant company Neuralink last week received regulatory approval to conduct the first clinical trial of its experimental device in humans. But the billionaire executive’s bombastic promotion of the technology, his leadership record at other companies and animal welfare concerns relating to Neuralink experiments have raised alarm. “I was surprised,” said Laura Cabrera, a neuroethicist at Penn State’s Rock Ethics Institute, about the decision by the US Food and Drug Administration to let the company go ahead with clinical trials. Musk’s erratic leadership at Twitter and his “move fast” techie ethos raise questions about Neuralink’s ability to responsibly oversee the development of an invasive medical device capable of reading brain signals, Cabrera argued. “Is he going to see a brain implant device as something that requires not just extra regulation, but also ethical consideration?” she said. “Or will he just treat this like another gadget?” Neuralink is far from the first or only company working on brain interface devices. For decades, research teams around the world have been exploring the use of implants and devices to treat conditions such as paralysis and depression. Already, thousands use neuroprosthetics like cochlear implants for hearing. But the broad scope of capabilities Musk is promising from the Neuralink device has garnered skepticism from experts. Neuralink entered the industry in 2016 and has designed a brain-computer interface (BCI) called the Link – an electrode-laden computer chip that can be sewn into the surface of the brain and connects it to external electronics – as well as a robotic device that implants the chip. © 2023 Guardian News & Media Limited
Keyword: Robotics; Learning & Memory
Link ID: 28816 - Posted: 06.07.2023
By Kate Laskowski In the age-old debate about nature versus nurture — whether our characteristics are forged by our genes or our upbringing — I have an answer for you. It is both. And it is neither. I’m a behavioral ecologist who seeks to answer this question by studying a particular kind of fish. The Amazon molly (Poecilia formosa) is an experimental goldmine for these types of questions. She naturally clones herself by giving birth to offspring with identical genomes to her own and to each other’s. A second quirk of this little fish is that her offspring are born live and are completely independent from birth. This means I can control their experiences from the earliest possible age. Essentially, this fish gives me and my colleagues the opportunity to perform “twin studies” to understand how and why individuality develops. And what we’ve found may surprise you. As humans, we know the critical importance of our personalities. These persistent differences among us shape how we navigate our worlds and respond to major life events; whether we are bold or shy; whether we ask someone on a second date or not. Given the obvious importance of personality, it’s perhaps a bit surprising that scientists generally overlooked these kinds of differences in other species for a long time. Up until about 30 years ago, these differences (what I prefer to call “individuality,” as it avoids the human connotation of “personality”) were typically viewed as cute anecdotes with little evolutionary importance. Instead, researchers focused on the typical behavior of a given population. With guppies, for example — a classic workhorse of behavioral ecology research — researchers found that fish will, on average, swim more tightly together if they live among lots of predatory fish, whereas fish from areas with fewer predators spend less time schooling and more time fighting one another, as they don’t have to worry so much about being eaten. © 2023 Annual Reviews
Keyword: Development of the Brain; Genes & Behavior
Link ID: 28815 - Posted: 06.07.2023
Sara Reardon Vaccination against shingles might also prevent dementia, such as that caused by Alzheimer’s disease, according to a study of health records from around 300,000 people in Wales. The analysis found that getting the vaccine lowers the risk of dementia by 20%. But some puzzling aspects of the analysis have stirred debate about the work’s robustness. The study was published on the medRxiv preprint server on 25 May and has not yet been peer reviewed. “If it is true, it’s huge,” says Alberto Ascherio, an epidemiologist at Harvard University in Cambridge, Massachusetts, who was not involved in the study. “Even a modest reduction in risk is a tremendous impact.” Dementia–infection link: The idea that viral infection can play a part in at least some dementia cases dates back to the 1990s, when biophysicist Ruth Itzhaki at the University of Manchester, UK, and her colleagues found herpesviruses in the brains of deceased people with dementia [2]. The theory has been controversial among Alzheimer’s researchers. But recent work has suggested that people infected with viruses that affect the brain have higher rates of neurodegenerative diseases [3]. Research has also suggested that those vaccinated against certain viral diseases are less likely to develop dementia [4]. But all these epidemiological studies have shared a key problem: people who get any type of vaccination tend to have healthier lifestyles than those who don’t [5], meaning that other factors could account for their lowered risk of diseases such as Alzheimer’s. With that in mind, epidemiologist Pascal Geldsetzer at Stanford University in California and his colleagues turned to a natural experiment: a shingles vaccination programme in Wales, which began on 1 September 2013. Shingles is caused by the reawakening of inactive varicella zoster virus (VZV), the herpesvirus that causes chickenpox and which is present in most people. Shingles is most common in older adults and can cause severe pain and rashes.
© 2023 Springer Nature Limited
Keyword: Alzheimers; Neuroimmunology
Link ID: 28814 - Posted: 06.07.2023
By Brandon Keim In 1970, a graduate philosophy student named Peter Singer happened to meet a fellow student who didn’t eat meat. Even today this is uncommon, but at the time it was radical, and it made Singer pause. “Here I’d been eating meat for 24 years. I was studying ethics. Yet I’d never thought that eating meat might be an ethical problem,” he recalls. “I thought, what does entitle us to treat animals like this? Why is the boundary of our species so important?” Out of the intellectual journey that followed came Animal Liberation, published in 1975 and considered one of the most influential books in modern history. Encyclopedia Britannica called Singer “one of the world’s most widely recognized public intellectuals,” and he and his seminal work are credited with shaping the modern animal rights movement. Now a professor of bioethics at Princeton University, Singer is quick to clarify that his arguments are not fundamentally about rights. Rather, they’re about equality: The interests of similar beings deserve similar moral consideration, regardless of the species they belong to, and avoiding pain is a transcendent interest. “If a being suffers, there can be no moral justification for refusing to take that suffering into consideration,” he writes. “Beings who are similar in all relevant respects have a similar right to life; and mere membership in our own species is not a morally relevant distinction.” © 2023 NautilusNext Inc.,
Keyword: Animal Rights
Link ID: 28813 - Posted: 06.07.2023
By Steven Strogatz Neuroscience has made progress in deciphering how our brains think and perceive our surroundings, but a central feature of cognition is still deeply mysterious: namely, that many of our perceptions and thoughts are accompanied by the subjective experience of having them. Consciousness, the name we give to that experience, can’t yet be explained — but science is at least beginning to understand it. In this episode, the consciousness researcher Anil Seth and host Steven Strogatz discuss why our perceptions can be described as a “controlled hallucination,” how consciousness played into the internet sensation known as “the dress,” and how people at home can help researchers catalog the full range of ways that we experience the world. Steven Strogatz (00:03): I’m Steve Strogatz, and this is The Joy of Why, a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In this episode, we’re going to be discussing the mystery of consciousness. The mystery being that when your brain cells fire in certain patterns, it actually feels like something. It might feel like jealousy, or a toothache, or the memory of your mother’s face, or the scent of her favorite perfume. But other patterns of brain activity don’t really feel like anything at all. Right now, for instance, I’m probably forming some memories somewhere deep in my brain. But the process of that memory formation is imperceptible to me. I can’t feel it. It doesn’t give rise to any sort of internal subjective experience at all. In other words, I’m not conscious of it. (00:54) So how does consciousness happen? How is it related to physics and biology? Are animals conscious? What about plants? Or computers, could they ever be conscious? And what is consciousness exactly? My guest today, Dr. 
Anil Seth, studies consciousness in his role as the co-director of the Sussex Center for Consciousness Science at the University of Sussex, near Brighton, England. The Center brings together all sorts of disciplinary specialists, from neuroscientists to mathematicians to experts in virtual reality, to study the conscious experience. Dr. Seth is also the author of the book Being You: A New Science of Consciousness. He joins us from studios in Brighton, England. Anil, thanks for being here. All Rights Reserved © 2023
Keyword: Consciousness
Link ID: 28812 - Posted: 06.03.2023
Davide Castelvecchi The wrinkles that give the human brain its familiar walnut-like appearance have a large effect on brain activity, in much the same way that the shape of a bell determines the quality of its sound, a study suggests [1]. The findings run counter to a commonly held theory about which aspect of brain anatomy drives function. The study’s authors compared the influence of two components of the brain’s physical structure: the outer folds of the cerebral cortex — the area where most higher-level brain activity occurs — and the connectome, the web of nerves that links distinct regions of the cerebral cortex. The team found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity. “We use concepts from physics and engineering to study how anatomy determines function,” says study co-author James Pang, a physicist at Monash University in Melbourne, Australia. The results were published in Nature on 31 May [1]. ‘Exciting’ a neuron makes it fire, which sends a message zipping to other neurons. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long filament called an axon that connects it to a faraway region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The authors wanted to understand how brain activity is affected by each of the ways in which neuronal excitation can spread: across the brain’s surface or through distant interconnections. To do so, the researchers — who have backgrounds in physics and neuroscience — tapped into the mathematical theory of waves.
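The bell analogy can be made concrete with a toy sketch. This is an illustration only, not the study's actual methods or data: the resonant "modes" of a shape are the eigenvectors of its Laplacian, and here a small ring graph stands in for a cortical surface mesh.

```python
import numpy as np

def ring_laplacian_eigenmodes(n_nodes: int):
    """Eigenmodes of the graph Laplacian of a ring of n_nodes points."""
    # Adjacency of a ring: each node connects to its two neighbours.
    A = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        A[i, (i + 1) % n_nodes] = 1
        A[i, (i - 1) % n_nodes] = 1
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    # eigh returns eigenvalues in ascending order: the lowest "frequencies"
    # (smoothest spatial patterns) come first, just as a bell's fundamental
    # tone precedes its overtones.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvals, eigvecs

vals, modes = ring_laplacian_eigenmodes(16)
# vals[0] is ~0 (the constant mode); higher modes oscillate with
# increasing spatial frequency around the ring.
```

Changing the geometry (the adjacency structure) changes the eigenmodes, which is the sense in which shape constrains the wave patterns activity can take.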
Keyword: Brain imaging; Development of the Brain
Link ID: 28811 - Posted: 06.03.2023
By Daniel Bergner If severe mental illness, untreated, underlies the feeling of encroaching anarchy and menace around the homeless encampments of San Francisco or in the subways of New York City, then the remedy appears obvious. Let’s rescue those who, as New York’s mayor, Eric Adams, says, “slip through the cracks” of our mental health care systems; let’s give people “the treatment and care they need.” It sounds so straightforward. It sounds like a clear way to lower the odds of tragic incidents occurring, like the chokehold killing of Jordan Neely, a homeless, psychiatrically troubled man, or the death of Michelle Alyssa Go, who was pushed off a Times Square subway platform to her death by a homeless man with schizophrenia. Improving order and safety in public spaces and offering compassionate care seem to be convergent missions. But unless we confront some rarely spoken truths, that convergence will prove illusory. The problems with the common-sense approach, as it’s currently envisioned, run beyond the proposed solutions we usually read about: funding more beds on hospital psychiatric wards, establishing community-based programs to oversee treatment when people are released from the hospital and providing housing for those whose mental health is made increasingly fragile by the constant struggle for shelter. The most difficult problems aren’t budgetary or logistical. They are fundamental. They involve the involuntary nature of the care being called for and the flawed antipsychotic medications that are the mainstay of treatment for people dealing with the symptoms of psychosis, like hallucinatory voices or paranoid delusions, which can come with a range of severe psychiatric conditions. © 2023 The New York Times Company
Keyword: Schizophrenia
Link ID: 28810 - Posted: 06.03.2023
John Michael Streicher Opioid drugs such as morphine and fentanyl are like the two-faced Roman god Janus: The kindly face delivers pain relief to millions of sufferers, while the grim face drives an opioid abuse and overdose crisis that claimed nearly 70,000 lives in the U.S. in 2020 alone. Scientists like me who study pain and opioids have been seeking a way to separate these two seemingly inseparable faces of opioids. Researchers are trying to design drugs that deliver effective pain relief without the risk of side effects, including addiction and overdose. One possible path to achieving that goal lies in understanding the molecular pathways opioids use to carry out their effects in your body. How do opioids work? The opioid system in your body is a set of neurotransmitters your brain naturally produces that enable communication between neurons and activate protein receptors. These neurotransmitters include small proteinlike molecules like enkephalins and endorphins. These molecules regulate a tremendous number of functions in your body, including pain, pleasure, memory, the movements of your digestive system and more. Opioid neurotransmitters activate receptors that are located in a lot of places in your body, including pain centers in your spinal cord and brain, reward and pleasure centers in your brain, and throughout the neurons in your gut. Normally, opioid neurotransmitters are released in only small quantities in these exact locations, so your body can use this system in a balanced way to regulate itself. The opioids your body produces and opioid drugs bind to the same receptors. The problem comes when you take an opioid drug like morphine or fentanyl, especially at high doses for a long time. These drugs travel through the bloodstream and can activate every opioid receptor in your body. You’ll get pain relief through the pain centers in your spinal cord and brain. 
But you’ll also get a euphoric high when those drugs hit your brain’s reward and pleasure centers, and that could lead to addiction with repeated use. When the drug hits your gut, you may develop constipation, along with other common opioid side effects. Targeting opioid signal transduction: How can scientists design opioid drugs that won’t cause side effects? One approach my research team and I take is to understand how cells respond when they receive the message from an opioid neurotransmitter. Neuroscientists call this process opioid receptor signal transduction. Just as neurotransmitters are a communication network within your brain, each neuron also has a communication network that connects receptors to proteins within the neuron. When these connections are made, they trigger specific effects like pain relief. So, after a natural opioid neurotransmitter or a synthetic opioid drug activates an opioid receptor, it activates proteins within the cell that carry out the effects of the neurotransmitter or the drug. © 2010–2023, The Conversation US, Inc.
Keyword: Drug Abuse; Pain & Touch
Link ID: 28809 - Posted: 06.03.2023
by Adam Kirsch Giraffes will eat courgettes if they have to, but they really prefer carrots. A team of researchers from Spain and Germany recently took advantage of this preference to investigate whether the animals are capable of statistical reasoning. In the experiment, a giraffe was shown two transparent containers holding a mixture of carrot and courgette slices. One container held mostly carrots, the other mostly courgettes. A researcher then took one slice from each container and offered them to the giraffe with closed hands, so it couldn’t see which vegetable had been selected. In repeated trials, the four test giraffes reliably chose the hand that had reached into the container with more carrots, showing they understood that the more carrots were in the container, the more likely it was that a carrot had been picked. Monkeys have passed similar tests, and human babies can do it at 12 months old. But giraffes’ brains are much smaller than primates’ relative to body size, so it was notable to see how well they grasped the concept. Such discoveries are becoming less surprising every year, however, as a flood of new research overturns longstanding assumptions about what animal minds are and aren’t capable of. A recent wave of popular books on animal cognition argue that skills long assumed to be humanity’s prerogative, from planning for the future to a sense of fairness, actually exist throughout the animal kingdom – and not just in primates or other mammals, but in birds, octopuses and beyond. In 2018, for instance, a team at the University of Buenos Aires found evidence that zebra finches, whose brains weigh half a gram, have dreams. Monitors attached to the birds’ throats found that when they were asleep, their muscles sometimes moved in exactly the same pattern as when they were singing out loud; in other words, they seemed to be dreaming about singing. © 2023 Guardian News & Media Limited
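The statistical inference credited to the giraffes in the experiment above can be sketched in a few lines. The vegetable counts here are made up for illustration; the study's exact ratios are not given in this excerpt.

```python
from fractions import Fraction

def carrot_probability(carrots: int, courgettes: int) -> Fraction:
    """Chance that one slice drawn at random from a container is a carrot."""
    return Fraction(carrots, carrots + courgettes)

# Hypothetical counts: container A is mostly carrots, container B mostly
# courgettes (mirroring the experiment's setup, numbers assumed).
p_a = carrot_probability(carrots=63, courgettes=20)
p_b = carrot_probability(carrots=20, courgettes=63)

# The statistically sound choice is the hand that drew from container A,
# which is what the test giraffes reliably picked.
best = "A" if p_a > p_b else "B"
```

Passing the test requires reasoning about relative proportions rather than absolute counts, which is what makes the result notable for a non-primate brain.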
Keyword: Evolution; Learning & Memory
Link ID: 28808 - Posted: 05.31.2023
Emily Waltz After years of debate over whether non-invasively zapping the brain with electrical current can improve a person’s mental functioning, a massive analysis of past studies offers an answer: probably. But some question that conclusion, saying that the analysis spans experiments that are too disparate to offer a solid answer. In the past six years, the number of studies testing the therapeutic effects of a class of techniques called transcranial electrical stimulation has skyrocketed. These therapies deliver a painless, weak electrical current to the brain through electrodes placed externally on the scalp. The goal is to excite, disrupt or synchronize signals in the brain to improve function. Researchers have tested transcranial alternating current stimulation (tACS) and its sister technology, tDCS (transcranial direct current stimulation), on both healthy volunteers and those with neuropsychiatric conditions, such as depression, Parkinson’s disease or addiction. But study results have been conflicting or couldn’t be replicated, leading researchers to question the efficacy of the tools. The authors of the new analysis, led by Robert Reinhart, director of the cognitive and clinical neuroscience laboratory at Boston University in Massachusetts, say they compiled the report to quantify whether tACS shows promise, by comparing more than 100 studies of the technique, which applies an oscillating current to the brain. “We have to address whether or not this technique is actually working, because in the literature, you have a lot of conflicting findings,” says Shrey Grover, a cognitive neuroscientist at Boston University and an author on the paper. © 2023 Springer Nature Limited
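Combining more than 100 studies into one estimate is the job of meta-analysis. The sketch below shows the core of one standard approach, fixed-effect inverse-variance pooling; it is a generic illustration with made-up numbers, not the Reinhart team's actual code, model, or data.

```python
def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size and its variance.

    Each study contributes in proportion to its precision: studies with
    smaller variance (tighter estimates) get larger weights.
    """
    weights = [1.0 / v for v in variances]
    total_w = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total_w
    return mean, 1.0 / total_w

# Three invented studies: (effect size, variance of that estimate).
mean, var = pooled_effect([0.4, 0.1, 0.3], [0.04, 0.09, 0.02])
# The pooled estimate sits between the study effects, pulled toward the
# most precise studies, and its variance is smaller than any single study's.
```

Real meta-analyses of stimulation studies also have to handle heterogeneity across protocols, which is exactly the objection critics raise about pooling such disparate experiments.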
Keyword: Learning & Memory
Link ID: 28807 - Posted: 05.31.2023
By Christina Caron A new study suggests that, for some patients, the anesthetic ketamine is a promising alternative to electroconvulsive therapy, or ECT, currently one of the quickest and most effective therapies for patients with difficult-to-treat depression. The study is the largest head-to-head comparison of the two treatments. Patients who don’t respond to at least two antidepressants — about one-third of clinically depressed patients — have a condition that clinicians refer to as “treatment-resistant.” Their options for relief are limited. Doctors typically recommend up to 12 sessions of ECT, which has a long-established efficacy, but is tainted by the stigma of historical misuse and frightening Hollywood images of people strapped to tables, writhing in agony. Today’s ECT is much safer and done under general anesthesia, but the procedure remains underutilized. The study, published on Wednesday in The New England Journal of Medicine, found that ketamine, when administered intravenously, was at least as effective as ECT in patients with treatment-resistant depression who do not have psychosis. (For people with psychosis, ketamine, even in very low doses, can worsen psychosis-like symptoms.) “The results were very surprising to us,” said Dr. Amit Anand, lead author of the study and a professor of psychiatry at Harvard Medical School who studies mood disorders at Mass General Brigham. His team had initially hypothesized that ketamine would be nearly as effective as ECT. Instead, Dr. Anand said, they found that ketamine performed even better than that. This is significant in part because some patients are uncomfortable with ECT’s potential side effects, such as temporary memory loss, muscle pain or weakness. (In rare cases it can result in permanent gaps in memory.) © 2023 The New York Times Company
Keyword: Depression; Drug Abuse
Link ID: 28806 - Posted: 05.31.2023
By Scientific American Custom Media
Megan Hall: How does the stomach tell the brain it’s full? How do cells in our body grow and divide? James Rothman realized that the fundamental biology behind these processes is basically the same. In 2010, he shared The Kavli Prize in Neuroscience with Richard Scheller and Thomas Südhof for their work detailing how nerve cells communicate with each other on a microscopic level. Three years later, he received the Nobel Prize.
Hall: James Rothman was pleasantly surprised when he received The Kavli Prize in Neuroscience.
James Rothman: I'd always thought of myself as a biochemist first and a cell biologist second. And I never really thought of myself as a neuroscientist.
Hall: He did apply to a neuroscience program in grad school…
Rothman: It all just made a whole lot of sense, except for the fact that I wasn't admitted.
Hall: But James is not the kind of person to worry about labels. In fact, he’s explored a range of scientific disciplines. As an undergrad at Yale, he studied physics, maybe in part because he grew up in the 50s.
Rothman: Scientists and doctors were really the most admired in the 1950s. And it was the physicists in particular. Einstein, Oppenheimer, people like that.
Hall: But his father worried about his career options, so he convinced James to try a biology course.
Rothman: And I just fell in love.
Hall: So, he ditched physics and decided to go to Harvard Medical School to learn more about biology.
Rothman: In the end I never finished medical school.
Hall: But, while he was there, he stumbled upon his life’s work.
Rothman: I was a first-year medical student and I was listening to a lecture in our course on histology and cell biology.
Hall: The professor was showing images that had been captured by scientists only a few decades before. They showed, for the first time, how complex the cell is.
Rothman: The cell is not just, like a dumb little liquid inside. It's a highly organized place. It's more like a city than anything else. © 2023 Scientific American
Keyword: Biomechanics
Link ID: 28805 - Posted: 05.31.2023
By Linda Searing Getting regular exercise may reduce a woman’s chances of developing Parkinson’s disease by as much as 25 percent, according to research published in the journal Neurology. It involved 95,354 women, who were an average age of 49 and did not have Parkinson’s when the study began. The researchers compared the women’s physical exercise levels over nearly three decades, including such activities as walking, cycling, gardening, stair climbing, house cleaning and sports participation. In that time, 1,074 women developed Parkinson’s. The study found that as a woman’s exercise level increased, her risk for Parkinson’s decreased. Those who got the most exercise — based on timing and intensity — developed the disease at a 25 percent lower rate than those who exercised the least. The researchers wrote that the study’s findings “suggest that physical activity may help prevent or delay [Parkinson’s disease] onset.” Parkinson’s disease is a neurodegenerative disorder, meaning it is a progressive disease that affects the nervous system and parts of the body controlled by nerves. It is sometimes referred to as a movement disorder because of the uncontrollable tremors, muscle stiffness, and gait and balance problems it can cause, but people with Parkinson’s also may experience sleep problems, depression, memory issues, fatigue and more. The symptoms generally stem from the brain’s lack of production of dopamine, a chemical that helps control muscle movement. No cure exists for Parkinson’s, but treatments to relieve symptoms include medication, lifestyle adjustments and surgical procedures, such as deep brain stimulation.
Keyword: Parkinsons
Link ID: 28804 - Posted: 05.31.2023
By Yasemin Saplakoglu Is this the real life? Is this just fantasy? Those aren’t just lyrics from the Queen song “Bohemian Rhapsody.” They’re also the questions that the brain must constantly answer while processing streams of visual signals from the eyes and purely mental pictures bubbling out of the imagination. Brain scan studies have repeatedly found that seeing something and imagining it evoke highly similar patterns of neural activity. Yet for most of us, the subjective experiences they produce are very different. “I can look outside my window right now, and if I want to, I can imagine a unicorn walking down the street,” said Thomas Naselaris, an associate professor at the University of Minnesota. The street would seem real and the unicorn would not. “It’s very clear to me,” he said. The knowledge that unicorns are mythical barely plays into that: A simple imaginary white horse would seem just as unreal. So “why are we not constantly hallucinating?” asked Nadine Dijkstra, a postdoctoral fellow at University College London. A study she led, recently published in Nature Communications, provides an intriguing answer: The brain evaluates the images it is processing against a “reality threshold.” If the signal passes the threshold, the brain thinks it’s real; if it doesn’t, the brain thinks it’s imagined. Such a system works well most of the time because imagined signals are typically weak. But if an imagined signal is strong enough to cross the threshold, the brain takes it for reality. All Rights Reserved © 2023
Keyword: Attention
Link ID: 28803 - Posted: 05.27.2023
By Matteo Wong If you are willing to lie very still in a giant metal tube for 16 hours and let magnets blast your brain as you listen, rapt, to hit podcasts, a computer just might be able to read your mind. Or at least its crude contours. Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them—gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind. The program analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. Then, it used that brain-imaging data to reconstruct the content of those sentences. For example, when one subject heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scans and returned “She has not even started to learn to drive yet”—not a word-for-word re-creation, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data of people watching short films and write approximate summaries of the clips, suggesting the AI was capturing not individual words from the brain scans, but underlying meanings. The findings, published in Nature Neuroscience earlier this month, add to a new field of research that flips the conventional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and recent voice-cloning programs are built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send outputs to one another to achieve a desired result. Yet even as human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains has remained a mystery. 
Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. It’s “unquestionably leading to advances that we just couldn’t imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT. Copyright (c) 2023 by The Atlantic Monthly Group.
Keyword: Brain imaging; Language
Link ID: 28802 - Posted: 05.27.2023