Most Recent Links
By Ann Gibbons

Go to the Democratic Republic of the Congo, and you’re unlikely to encounter chimps so plump they have trouble climbing trees or vervet monkeys so chubby they huff and puff as they swing from branch to branch. Humans are a different story. Walk down a typical U.S. street and almost half of the people you encounter are likely to have obesity. Scientists have long blamed our status as the “fattest primate” on genes that help us store fat more efficiently or diets overloaded with sugars or fat. But a new study of 40 species of nonhuman primates, ranging from tiny mouse lemurs to hulking gorillas, finds many pack on the pounds just as easily as we do, regardless of diet, habitat, or genetic differences. All they need is extra food.

“Lots of primates put on too much weight, the same as humans,” says Herman Pontzer, a biological anthropologist at Duke University and author of the new study, published this week in the Philosophical Transactions of the Royal Society B. “Humans are not special.”

Some researchers have suggested our species is prone to obesity because our ancestors evolved to be incredibly efficient at storing calories. The adaptation would have helped our ancient relatives, who often faced famine after the transition to agriculture, get through lean times. This selection pressure for so-called thrifty genes set us apart from other primates, the thinking goes.

But other primates can get fat. Kanzi, the first ape to show he understands spoken English, reached three times the average weight for his species, the bonobo, after years of being rewarded with bananas, peanuts, and other treats during research; scientists eventually put him on a diet. And then there was Uncle Fatty, an obese macaque who lived on the streets of Bangkok, where tourists fed him milkshakes, noodles, and other junk food. He weighed an astonishing 15 kilograms—three times more than the average macaque—before he went to the monkey equivalent of a fat farm.

© 2023 American Association for the Advancement of Science.
Keyword: Obesity; Evolution
Link ID: 28902 - Posted: 09.10.2023
Neurotransmitters are the words our brain cells use to communicate with one another. For years, researchers relied on tools that provided limited temporal and spatial resolution to track changes in the fast chemical chat between neurons. But that started to change about ten years ago for glutamate—the most abundant excitatory neurotransmitter in vertebrates, which plays an essential role in learning, memory, and information processing—when scientists engineered the first glutamate fluorescent reporter, iGluSnFR, which provided a readout of neurons’ fast glutamate release.

In 2013, researchers at the Howard Hughes Medical Institute collaborated with scientists from other institutions to develop the first generation of iGluSnFR [1]. To create the biosensor, the team combined a bacteria-derived glutamate-binding protein, GltI; a green fluorescent protein (GFP) wedged into it; and a membrane-targeting protein that anchors the reporter to the surface of the cell. Upon glutamate binding, the GltI protein changes its conformation, increasing the fluorescence intensity of GFP.

In their first study, the team showcased the utility of the biosensor for monitoring glutamate levels by demonstrating selective activation by glutamate in cell cultures. By conducting experiments with brain cells from the C. elegans worm, zebrafish, and mice, they confirmed that the reporter also tracked glutamate in vivo, a finding that set iGluSnFR apart from existing glutamate sensors.

The first iGluSnFR generation allowed researchers to study glutamate dynamics in different biological systems, but the indicator could not detect small amounts of the neurotransmitter or keep up with brain cells’ fast glutamate release bouts.

© 1986–2023 The Scientist.
Keyword: Brain imaging
Link ID: 28901 - Posted: 09.10.2023
By Saugat Bolakhe

Memory doesn’t represent a single scientific mystery; it’s many of them. Neuroscientists and psychologists have come to recognize varied types of memory that coexist in our brain: episodic memories of past experiences, semantic memories of facts, short- and long-term memories, and more. These often have different characteristics and even seem to be located in different parts of the brain. But it’s never been clear what feature of a memory determines how or why it should be sorted in this way.

Now, a new theory backed by experiments using artificial neural networks proposes that the brain may be sorting memories by evaluating how likely they are to be useful as guides in the future. In particular, it suggests that many memories of predictable things, ranging from facts to useful recurring experiences — like what you regularly eat for breakfast or your walk to work — are saved in the brain’s neocortex, where they can contribute to generalizations about the world. Memories less likely to be useful — like the taste of that unique drink you had at that one party — are kept in the seahorse-shaped memory bank called the hippocampus. Actively segregating memories this way on the basis of their usefulness and generalizability may optimize the reliability of memories for helping us navigate novel situations.

The authors of the new theory — the neuroscientists Weinan Sun and James Fitzgerald of the Janelia Research Campus of the Howard Hughes Medical Institute, Andrew Saxe of University College London, and their colleagues — described it in a recent paper in Nature Neuroscience. It updates and expands on the well-established idea that the brain has two linked, complementary learning systems: the hippocampus, which rapidly encodes new information, and the neocortex, which gradually integrates it for long-term storage. James McClelland, a cognitive neuroscientist at Stanford University who pioneered the idea of complementary learning systems in memory but was not part of the new study, remarked that it “addresses aspects of generalization” that his own group had not thought about when they proposed the theory in the mid-1990s.

All Rights Reserved © 2023
Keyword: Learning & Memory; Attention
Link ID: 28900 - Posted: 09.07.2023
By Astrid Landon

In June 2015, Jeffrey Thelen’s parents noticed their son was experiencing problems with his memory. In the subsequent years, he would get lost driving to his childhood home, forget his cat had died, and fail to recognize his brother and sister. His parents wondered: Was electroconvulsive therapy to blame?

Thelen had been regularly receiving the treatment to help with symptoms of severe depression, which he’d struggled with since high school. At 34 years old, he had tried medications, but hadn’t had a therapy plan. His primary care physician referred him to get an evaluation for ECT, which was then prescribed by a psychiatrist.

Electroconvulsive therapy has been used to treat various mental illnesses since the late 1930s. The technique, which involves passing electrical currents through the brain to trigger a short seizure, has always had a somewhat torturous reputation. Yet it’s still in use, in a modified form of its original version. According to one commonly cited statistic, 100,000 Americans receive ECT annually — most often to ease symptoms of severe depression or bipolar disorder — although exact demographic data is scarce.

For Thelen, the treatment appeared to relieve his depression symptoms somewhat, but he reported new headaches and concentration issues, in addition to the memory loss. Those claims are central to a lawsuit Thelen filed in 2020 against Somatics, LLC and Elektrika, Inc., manufacturers and suppliers of ECT devices, alleging that the companies failed to disclose — and even intentionally hid — risks associated with ECT, including “brain damage and permanent neurocognitive injuries.” Thelen’s legal team told Undark that they have since reached a resolution with Elektrika on confidential terms. With regard to Somatics, in June a jury found that the company failed to warn about risks associated with ECT, but could not conclude that there was legal causation between that failure and Thelen’s memory loss. The following month, his lawyers filed a motion for a new trial. (In response to a request for comment, Conrad Swartz, one of Somatics’ co-founders, directed Undark to the company’s attorney, Sue Cole. Cole did not respond to multiple emails. Lawyers for Elektrika declined to comment.)
Keyword: Depression; Learning & Memory
Link ID: 28899 - Posted: 09.07.2023
By Claudia López Lloreda

Cells hidden in the skull may point to a way to detect, diagnose and treat inflamed brains. A detailed look at the skull reveals that bone marrow cells there change and are recruited to the brain after injury, possibly traveling through tiny channels connecting the skull and the outer protective layer of the brain. Paired with the discovery that inflammation in the skull is disease-specific, these new findings collectively suggest the skull’s marrow could serve as a target to track and potentially treat neurological disorders involving brain inflammation, researchers report August 9 in Cell.

Immune cells that infiltrate the central nervous system during many diseases and neuronal injury can wreak havoc by flooding the brain with damaging molecules. This influx of immune cells causes inflammation in the brain and spinal cord and can contribute to diseases like multiple sclerosis (SN: 11/26/19). Detecting and dampening this reaction has been an extensive field of research.

With this new work, the skull, “something that has been considered as just protective, suddenly becomes a very active site of interaction with the brain, not only responding to brain diseases, but also changing itself in response to brain diseases,” says Gerd Meyer zu Hörste, a neurologist at the University of Münster in Germany who was not involved in the study.

Ali Ertürk of the Helmholtz Center in Munich and colleagues discovered this potential role for the skull while probing the idea that the cells in skull marrow might behave differently from those in other bones. Ertürk’s team compared the genetic activity of cells in mouse skull marrow, and the proteins those cells made, with those in the rodents’ humerus, femur and four other bones, along with the meninges, the protective membranes between the skull and the brain.

© Society for Science & the Public 2000–2023.
Keyword: Alzheimers; Multiple Sclerosis
Link ID: 28898 - Posted: 09.07.2023
By Jocelyn Kaiser

Parkinson’s disease, a brain disorder that gradually leads to difficulty moving, tremors, and usually dementia by the end, is often difficult to diagnose early in its yearslong progression. That makes testing experimental treatments challenging and delays patients’ access to existing drugs, which can’t stop the ongoing death of brain cells but can temporarily improve many of the resulting symptoms. Now, a study using rodents and tissue from diagnosed Parkinson’s patients suggests DNA damage spotted in blood samples offers a simple way to diagnose the disease early.

Although the potential test needs to be validated in clinical studies, the detected DNA damage joins a “flurry” of other biomarkers recently identified for Parkinson’s and “adds to our ability to state confidently that an individual has Parkinson’s disease or not,” says neurodegeneration researcher Mark Cookson of the National Institute on Aging, whose grantmaking arm helped fund the new work, published today in Science Translational Medicine.

A blood test based on the findings could also help patients go on existing treatments earlier and boost clinical trials evaluating new therapies, the study’s authors say. “It’s really exciting because it’s something [physicians] could use to detect [Parkinson’s] before the clinical symptoms emerge,” says neuroscientist Malú Tansey of the University of Florida, who also was not involved with the research.

Parkinson’s occurs when the death of certain neurons in the brain causes levels of the neurotransmitter dopamine to drop, leading to muscle stiffness, balance problems, speech and cognitive problems, and other symptoms over time. The disorder, tied to both environmental and genetic factors, afflicts up to 1 million people in the United States.
Keyword: Parkinsons
Link ID: 28897 - Posted: 09.07.2023
By Carolyn Wilke

Young jumping spiders dangle by a thread through the night, in a box, in a lab. Every so often, their legs curl and their spinnerets twitch — and the retinas of their eyes, visible through their translucent exoskeletons, shift back and forth. “What these spiders are doing seems to be resembling — very closely — REM sleep,” says Daniela Rößler, a behavioral ecologist at the University of Konstanz in Germany.

During REM (which stands for rapid eye movement), a sleeping animal’s eyes dart about unpredictably, among other features. In people, REM is when most dreaming happens, particularly the most vivid dreams. Which leads to an intriguing question: If spiders have REM sleep, might dreams also unfold in their poppy-seed-size brains?

Rößler and her colleagues reported on the retina-swiveling spiders in 2022. Training cameras on 34 spiders, they found that the creatures had brief REM-like spells about every 17 minutes. The eye-darting behavior was specific to these bouts: It didn’t happen at times in the night when the jumping spiders stirred, stretched, readjusted their silk lines or cleaned themselves with a brush of a leg.

Though the spiders are motionless in the run-up to these REM-like bouts, the team hasn’t yet proved that they are sleeping. But if it turns out that they are — and if what looks like REM really is REM — dreaming is a distinct possibility, Rößler says. She finds it easy to imagine that jumping spiders, as highly visual animals, might benefit from dreams as a way to process information they took in during the day.

[Video caption: Young jumping spiders have translucent skin. Behind their eyes, tube-shaped retinas move as the spiderlings look about. As shown in this sped-up video, researchers have also observed such retinal tube-shifting behavior in resting — possibly sleeping — spiders. In these intermittent, active bouts, the animals’ legs curl and their spinnerets twitch — suggesting that spiders may experience something like REM sleep.]

© 2023 Annual Reviews
Keyword: Sleep; Evolution
Link ID: 28896 - Posted: 09.07.2023
By Jori Lewis

The squat abandoned concrete structure may have been a water tower when this tract of land in the grasslands of Mozambique was a cotton factory. Now it served an entirely different purpose: housing a bat colony. To climb through the building’s low opening, bat researcher Césaria Huó and I had to battle a swarm of biting tsetse flies and clear away a layer of leaves and vines. My eyes quickly adjusted to the low light, but my nose, even behind a mask, couldn’t adjust to the smell of hundreds of bats and layers of bat guano—a fetid reek of urea with fishy, spicy overtones.

But Huó had a different reaction. “I don’t mind the smell now,” she said. After several months of monitoring bat colonies in the Gorongosa National Park area as a master’s student in the park’s conservation biology program, Huó said she almost likes it. “Now, when I smell it, I know there are bats here.”

Since we arrived at the tower during the daylight hours, I had expected the nocturnal mammals to be asleep. Instead, they were shaking their wings, flying from one wall or spot on the ceiling to another, swooping sometimes a bit too close to me for my comfort. But the bats didn’t care about me; they were cruising for mates. It was mating season, and we had lucked out to see their mating performances. Huó pointed out that some females were inspecting the males, checking out their wing-flapping prowess.

But Huó and her adviser, the polymath entomologist Piotr Naskrecki, did not bring me to this colony to view the bats’ seductive dances and their feats of flight, since those behaviors are already known to scientists. We were here to decipher what the bats were saying while doing them. Huó and Naskrecki had set up cameras and audio recorders the night before to learn more about these bats and try to understand the nature of the calls they use, listening for signs of meaning.

© 2023 NautilusNext Inc., All rights reserved.
Keyword: Animal Communication; Evolution
Link ID: 28895 - Posted: 09.07.2023
By R. Douglas Fields

One day, while threading a needle to sew a button, I noticed that my tongue was sticking out. The same thing happened later, as I carefully cut out a photograph. Then another day, as I perched precariously on a ladder painting the window frame of my house, there it was again! What’s going on here? I’m not deliberately protruding my tongue when I do these things, so why does it keep making appearances? After all, it’s not as if that versatile lingual muscle has anything to do with controlling my hands. Right?

Yet as I would learn, our tongue and hand movements are intimately interrelated at an unconscious level. This peculiar interaction’s deep evolutionary roots even help explain how our brain can function without conscious effort.

A common explanation for why we stick out our tongue when we perform precision hand movements is something called motor overflow. In theory, it can take so much cognitive effort to thread a needle (or perform other demanding fine motor skills) that our brain circuits get swamped and impinge on adjacent circuits, activating them inappropriately. It’s certainly true that motor overflow can happen after neural injury or in early childhood when we are learning to control our bodies. But I have too much respect for our brains to buy that “limited brain bandwidth” explanation. How, then, does this peculiar hand-mouth cross-talk really occur?

Tracing the neural anatomy of tongue and hand control to pinpoint where a short circuit might happen, we find first of all that the two are controlled by completely different nerves. This makes sense: A person who suffers a spinal cord injury that paralyzes their hands does not lose their ability to speak. That’s because the tongue is controlled by a cranial nerve, but the hands are controlled by spinal nerves.

Simons Foundation
Keyword: Language; Emotions
Link ID: 28894 - Posted: 08.30.2023
Mariana Lenharo

Science fiction has long entertained the idea of artificial intelligence becoming conscious — think of HAL 9000, the supercomputer-turned-villain in the 1968 film 2001: A Space Odyssey. With the rapid progress of artificial intelligence (AI), that possibility is becoming less and less fantastical, and has even been acknowledged by leaders in AI. Last year, for instance, Ilya Sutskever, chief scientist at OpenAI, the company behind the chatbot ChatGPT, tweeted that some of the most cutting-edge AI networks might be “slightly conscious”.

Many researchers say that AI systems aren’t yet at the point of consciousness, but that the pace of AI evolution has got them pondering: how would we know if they were? To answer this, a group of 19 neuroscientists, philosophers and computer scientists have come up with a checklist of criteria that, if met, would indicate that a system has a high chance of being conscious. They published their provisional guide earlier this week in the arXiv preprint repository [1], ahead of peer review.

The authors undertook the effort because “it seemed like there was a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness,” says co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit organization in San Francisco, California.

The team says that a failure to identify whether an AI system has become conscious has important moral implications. If something has been labelled ‘conscious’, according to co-author Megan Peters, a neuroscientist at the University of California, Irvine, “that changes a lot about how we as human beings feel that entity should be treated”. Long adds that, as far as he can tell, not enough effort is being made by the companies building advanced AI systems to evaluate the models for consciousness and make plans for what to do if that happens. “And that’s in spite of the fact that, if you listen to remarks from the heads of leading labs, they do say that AI consciousness or AI sentience is something they wonder about,” he adds.

© 2023 Springer Nature Limited
Keyword: Consciousness
Link ID: 28893 - Posted: 08.30.2023
By Amanda Holpuch

Doctors in Australia had screened, scanned and tested a woman to find out why she was sick after being hospitalized with abdominal pains and diarrhea. They were not prepared for what they found. A three-inch red worm was living in the woman’s brain.

The worm was removed last year after doctors spent more than a year trying to find the cause of the woman’s distress. The hunt for the answer, and the alarming discovery, was described this month in Emerging Infectious Diseases, a monthly journal published by the Centers for Disease Control and Prevention.

The woman, whom the article identifies as a 64-year-old resident of southeastern New South Wales, Australia, was admitted to a hospital in January 2021 after complaining of diarrhea and abdominal pain for three weeks. She had a dry cough and night sweats. Scientists and doctors from Canberra, Sydney and Melbourne said in the journal article that the woman was initially told she had a rare lung infection, but the cause was unknown. Her symptoms improved with treatment, but weeks later, she was hospitalized again, this time with a fever and cough.

Doctors then treated her for a group of blood disorders known as hypereosinophilic syndrome, and the medicine they used suppressed her immune system. Over a three-month period in 2022, she experienced forgetfulness and worsening depression. An MRI showed that she had a brain lesion and, in June 2022, doctors performed a biopsy. Inside the lesion, doctors found a “stringlike structure” and removed it. The structure was a red, live parasitic worm, about 3.15 inches long and 0.04 inches in diameter.

© 2023 The New York Times Company
Keyword: Depression
Link ID: 28892 - Posted: 08.30.2023
By Maria Temming

When Christopher Mazurek realizes he’s dreaming, it’s always the small stuff that tips him off. The first time it happened, Mazurek was a freshman at Northwestern University in Evanston, Ill. In the dream, he found himself in a campus dining hall. It was winter, but Mazurek wasn’t wearing his favorite coat. “I realized that, OK, if I don’t have the coat, I must be dreaming,” Mazurek says.

That epiphany rocked the dream like an earthquake. “Gravity shifted, and I was flung down a hallway that seemed to go on for miles,” he says. “My left arm disappeared, and then I woke up.”

Most people rarely if ever realize that they’re dreaming while it’s happening, what’s known as lucid dreaming. But some enthusiasts have cultivated techniques to become self-aware in their sleep and even wrest some control over their dream selves and settings. Mazurek, 24, says that he’s gotten better at molding his lucid dreams since that first whirlwind experience, sometimes taking them as opportunities to try flying or say hi to deceased family members. Other lucid dreamers have used their personal virtual realities to plumb their subconscious minds for insights or feast on junk food without real-world consequences.

But now, scientists have a new job for lucid dreamers: to explore their dreamscapes and report out in real time. Dream research has traditionally relied on reports collected after someone wakes up. But people often wake with only spotty, distorted memories of what they dreamed. The dreamers can’t say exactly when events occurred, and they certainly can’t tailor their dreams to specific scientific studies.

© Society for Science & the Public 2000–2023.
Keyword: Sleep; Consciousness
Link ID: 28891 - Posted: 08.30.2023
By Christina Caron

About one in four adults in the United States develops symptoms of insomnia each year. In most cases, these are short-lived, caused by things like stress or illness. But one in 10 adults is estimated to have chronic insomnia, which means difficulty falling or staying asleep at least three times a week for three months or longer.

Sleep deprivation doesn’t just create physical health problems; it can also harm our minds. A recent poll from the National Sleep Foundation, for example, found a link between poor sleep health and depressive symptoms. In addition, studies have shown that a lack of sleep can lead otherwise healthy people to experience anxiety and distress.

Fortunately, there is a well-studied and proven treatment for insomnia that generally works in eight sessions or less: cognitive behavioral therapy for insomnia, or C.B.T.-I. If you cannot find a provider, C.B.T.-I. instruction is easy to access online. Yet it is rarely the first thing people try, said Aric Prather, a sleep researcher at the University of California, San Francisco, who treats patients with insomnia. Instead, they often turn to medication. According to a 2020 survey from the Centers for Disease Control and Prevention, more than 8 percent of adults reported taking sleep medication every day or most days to help them fall or stay asleep.

Studies have found that C.B.T.-I. is as effective as using sleep medications in the short term and more effective in the long term. Clinical trial data suggests that as many as 80 percent of the people who try C.B.T.-I. see improvements in their sleep, and most patients find relief in four to eight sessions, even if they have had insomnia for decades, said Philip Gehrman, the director of the Sleep, Neurobiology and Psychopathology lab at the University of Pennsylvania.

© 2023 The New York Times Company
Keyword: Sleep
Link ID: 28890 - Posted: 08.30.2023
In a study of 152 deceased athletes younger than 30 years old who were exposed to repeated head injury through contact sports, brain examination demonstrated that 63 (41%) had chronic traumatic encephalopathy (CTE), a degenerative brain disorder associated with exposure to head trauma. Neuropsychological symptoms were severe both in those with and in those without evidence of CTE. Suicide was the most common cause of death in both groups, followed by unintentional overdose.

Among the brain donors found to have CTE, 71% had played contact sports at a non-professional level (youth, high school, or college competition). Common sports included American football, ice hockey, soccer, rugby, and wrestling. The study, published in JAMA Neurology, confirms that CTE can occur even in young athletes exposed to repetitive head impacts. The research was supported in part by the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health.

Because CTE cannot be definitively diagnosed in individuals while living, it is unknown how commonly CTE occurs in such athletes. As in all brain bank studies, donors differ from the general population, and no estimates of prevalence can be concluded from this research. Most of the study donors were white, male football players with cognitive, behavioral, and/or mood symptoms. Their families desired neuropathologic examination after their loved one’s early death and donated to the Understanding Neurologic Injury and Traumatic Encephalopathy (UNITE) Brain Bank. There were no differences in cause of death or clinical symptoms between those with CTE and those without.
Keyword: Brain Injury/Concussion
Link ID: 28889 - Posted: 08.30.2023
By Matt Richtel

More than one-fifth of people who use cannabis struggle with dependency or problematic use, according to a study published on Tuesday in JAMA Network Open. The research found that 21 percent of people in the study had some degree of cannabis use disorder, which clinicians characterize broadly as problematic use of cannabis that leads to a variety of symptoms, such as recurrent social and occupational problems, indicating impairment and distress. In the study, 6.5 percent of users suffered moderate to severe disorder.

Cannabis users who experience more severe dependency tended to be recreational users, whereas less severe but still problematic use was associated roughly equally with medical and recreational use. The most common symptoms among both groups were increased tolerance, craving, and uncontrolled escalation of cannabis use.

Cannabis use is rising nationwide as more states have legalized it. The new findings align with prior research, which has found that around 20 percent of cannabis users develop cannabis use disorder. The condition can be treated with detoxification and abstinence, therapies and other treatments that work with addictive behaviors.

The new study drew its data from nearly 1,500 primary care patients in Washington State, where recreational use is legal, in an effort to explore the prevalence of cannabis use disorder among both medical and nonmedical users. The research found that 42 percent of cannabis users identified themselves solely as medical users; 25 percent identified as nonmedical users, and 32 percent identified as both recreational and medical users.

© 2023 The New York Times Company
Keyword: Drug Abuse
Link ID: 28888 - Posted: 08.30.2023
By Elizabeth Finkel

Science routinely puts forward theories, then batters them with data till only one is left standing. In the fledgling science of consciousness, a dominant theory has yet to emerge. More than 20 are still taken seriously. It’s not for want of data. Ever since Francis Crick, the co-discoverer of DNA’s double helix, legitimized consciousness as a topic for study more than three decades ago, researchers have used a variety of advanced technologies to probe the brains of test subjects, tracing the signatures of neural activity that could reflect consciousness. The resulting avalanche of data should have flattened at least the flimsier theories by now.

Five years ago, the Templeton World Charity Foundation initiated a series of “adversarial collaborations” to coax the overdue winnowing to begin. This past June saw the results from the first of these collaborations, which pitted two high-profile theories against each other: global neuronal workspace theory (GNWT) and integrated information theory (IIT). Neither emerged as the outright winner.

The results, announced like the outcome of a sporting event at the 26th meeting of the Association for the Scientific Study of Consciousness (ASSC) in New York City, were also used to settle a 25-year bet between Crick’s longtime collaborator, the neuroscientist Christof Koch of the Allen Institute for Brain Science, and the philosopher David Chalmers of New York University, who coined the term “the hard problem” to challenge the presumption that we can explain the subjective feeling of consciousness by analyzing the circuitry of the brain.

Nevertheless, Koch proclaimed, “It’s a victory for science.” But was it?

All Rights Reserved © 2023
Keyword: Consciousness; Attention
Link ID: 28887 - Posted: 08.26.2023
Linda Geddes

I’ve made a cup of coffee, written my to-do list and now I’m wiring up my ear to a device that will send an electrical message to my brainstem. If the testimonials are to be believed, incorporating this stimulating habit into my daily routine could help to reduce stress and anxiety, curb inflammation and digestive issues, and perhaps improve my sleep and concentration by tapping into the “electrical superhighway” that is the vagus nerve.

From plunging your face into icy water, to piercing the small flap of cartilage in front of your ear, the internet is awash with tips for hacking this system that carries signals between the brain and chest and abdominal organs. Manufacturers and retailers are also increasingly cashing in on this trend, with Amazon alone offering hundreds of vagus nerve products, ranging from books and vibrating pendants to electrical stimulators similar to the one I’ve been testing. Meanwhile, scientific interest in vagus nerve stimulation is exploding, with studies investigating it as a potential treatment for everything from obesity to depression, arthritis and Covid-related fatigue.

So, what exactly is the vagus nerve, and is all this hype warranted? The vagus nerve is, in fact, a pair of nerves that serve as a two-way communication channel between the brain and the heart, lungs and abdominal organs, plus structures such as the oesophagus and voice box, helping to control involuntary processes, including breathing, heart rate, digestion and immune responses. They are also an important part of the parasympathetic nervous system, which governs the “rest and digest” processes, and relaxes the body after periods of stress or danger that activate our sympathetic “fight or flight” responses.

In the late 19th century, scientists observed that compressing the main artery in the neck – alongside which the vagus nerves run – could help to prevent or treat epilepsy. This idea was resurrected in the 1980s, when the first electrical stimulators were implanted into the necks of epilepsy patients, helping to calm down the irregular electrical brain activity that triggers seizures.

© 2023 Guardian News & Media Limited
Keyword: Depression; Obesity
Link ID: 28886 - Posted: 08.26.2023
By David Grimm

Apart from Garfield’s legendary love of lasagna, perhaps no food is more associated with cats than tuna. The dish is a staple of everything from The New Yorker cartoons to Meow Mix jingles—and more than 6% of all wild-caught fish goes into cat food. Yet tuna (or any seafood for that matter) is an odd favorite for an animal that evolved in the desert. Now, researchers say they have found a biological explanation for this curious craving.

In a study published this month in Chemical Senses, scientists report that cat taste buds contain the receptors needed to detect umami—the savory, deep flavor of various meats, and one of the five basic tastes in addition to sweet, sour, salty, and bitter. Indeed, umami appears to be the primary flavor cats seek out. That’s no surprise for an obligate carnivore. But the team also found these cat receptors are uniquely tuned to molecules found at high concentrations in tuna, revealing why our feline friends seem to prefer this delicacy over all others.

“This is an important study that will help us better understand the preferences of our familiar pets,” says Yasuka Toda, a molecular biologist at Meiji University and a leader in studying the evolution of umami taste in mammals and birds. The work could help pet food companies develop healthier diets and more palatable medications for cats, says Toda, who was not involved with the industry-funded study.

Cats have a unique palate. They can’t taste sugar because they lack a key protein for sensing it. That’s probably because there’s no sugar in meat, says Scott McGrane, a flavor scientist and research manager for the sensory science team at the Waltham Petcare Science Institute, which is owned by pet food–maker Mars Petcare UK. There’s a saying in evolution, he says: “If you don’t use it, you lose it.” Cats also have fewer bitter taste receptors than humans do—a common trait in uber-carnivores. But cats must taste something, McGrane reasoned, and that something is likely the savory flavor of meat.

In humans and many other animals, two genes—Tas1r1 and Tas1r3—encode proteins that join together in taste buds to form a receptor that detects umami. Previous work had shown that cats express the Tas1r3 gene in their taste buds, but it was unclear whether they had the other critical puzzle piece.
Keyword: Chemical Senses (Smell & Taste); Evolution
Link ID: 28885 - Posted: 08.26.2023
By Miryam Naddaf

It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer.

During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions. “When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels.

Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac.

HBP directors hope to bring this understanding a step closer with a virtual platform — called EBRAINS — that was created as part of the project. EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments. “Today, we have all the tools in hand to build a real digital brain twin,” says Viktor Jirsa, a neuroscientist at Aix-Marseille University in France and an HBP board member. But the funding for this offshoot is still uncertain. And at a time when huge, expensive brain projects are in high gear elsewhere, scientists in Europe are frustrated that their version is winding down. “We were probably one of the first ones to initiate this wave of interest in the brain,” says Jorge Mejias, a computational neuroscientist at the University of Amsterdam, who joined the HBP in 2019. Now, he says, “everybody’s rushing, we don’t have time to just take a nap”.
Keyword: Brain imaging; Robotics
Link ID: 28884 - Posted: 08.26.2023
Jon Hamilton

Scientists have genetically engineered a squid that is almost as transparent as the water it’s in. The squid will allow researchers to watch brain activity and biological processes in a living animal.

ARI SHAPIRO, HOST: For most of us, it would take magic to become invisible, but for some lucky, tiny squid, all it took was a little genetic tweaking. As part of our Weekly Dose of Wonder series, NPR’s Jon Hamilton explains how scientists created a see-through squid.

JON HAMILTON, BYLINE: The squid come from the Marine Biological Laboratory in Woods Hole, Mass. Josh Rosenthal is a senior scientist there. He says even the animal’s caretakers can’t keep track of them.

JOSH ROSENTHAL: They’re really hard to spot. We know we put it in this aquarium, but they might look for a half-hour before they can actually see it. They’re that transparent.

HAMILTON: Almost invisible. Carrie Albertin, a fellow at the lab, says studying these creatures has been transformative.

CARRIE ALBERTIN: They are so strikingly see-through. It changes the way you interpret what’s going on in this animal, being able to see completely through the body.

HAMILTON: Scientists can watch the squid’s three hearts beating in synchrony or see its brain cells at work. And it’s all thanks to a gene-editing technology called CRISPR. A few years ago, Rosenthal and Albertin decided they could use CRISPR to create a special octopus or squid for research.

ROSENTHAL: Carrie and I are highly biased. We both love cephalopods - right? - and we have for our entire careers.

HAMILTON: So they focused on the hummingbird bobtail squid. It’s smaller than a thumb and shaped like a dumpling. Like other cephalopods, it has a relatively large and sophisticated brain. Rosenthal takes me to an aquarium to show me what the squid looks like before its genes are altered.

ROSENTHAL: Here is our hummingbird bobtail squid. You can see him right there in the bottom, just kind of sitting there hunkered down in the sand. At night, it’ll come out and hunt and be much more mobile.

© 2023 npr
Keyword: Brain imaging; Evolution
Link ID: 28883 - Posted: 08.26.2023