Chapter 2. Neurophysiology: The Generation, Transmission, and Integration of Neural Signals



By Gina Kolata Tucker Marr’s life changed forever last October. He was on his way to a wedding reception when he fell down a steep flight of metal stairs, banging the right side of his head so hard he went into a coma. He’d fractured his skull, and a large blood clot formed on the left side of his head. Surgeons had to remove a large chunk of his skull to relieve pressure on his brain and to remove the clot. “Getting a piece of my skull taken out was crazy to me,” Mr. Marr said. “I almost felt like I’d lost a piece of me.” But what seemed even crazier to him was the way that piece was restored. Mr. Marr, a 27-year-old analyst at Deloitte, became part of a new development in neurosurgery. Instead of remaining without a piece of skull or getting the old bone put back, a procedure that is expensive and has a high rate of infection, he got a prosthetic piece of skull made with a 3-D printer. But it is not the typical prosthesis used in such cases. His prosthesis, which is covered by his skin, is embedded with an acrylic window that would let doctors peer into his brain with ultrasound. A few medical centers are offering such acrylic windows to patients who had to have a piece of skull removed to treat conditions like a brain injury, a tumor, a brain bleed or hydrocephalus. “It’s very cool,” Dr. Michael Lev, director of emergency radiology at Massachusetts General Hospital, said. But, “it is still early days,” he added. Advocates of the technique say that if a patient with such a window has a headache or a seizure or needs a scan to see if a tumor is growing, a doctor can slide an ultrasound probe on the patient’s head and look at the brain in the office. © 2023 The New York Times Company

Keyword: Brain imaging; Brain Injury/Concussion
Link ID: 28914 - Posted: 09.16.2023

By Miryam Naddaf It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer. During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions. “When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels. Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac. HBP directors hope to bring this understanding a step closer with a virtual platform — called EBRAINS — that was created as part of the project. EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments.
“Today, we have all the tools in hand to build a real digital brain twin,” says Viktor Jirsa, a neuroscientist at Aix-Marseille University in France and an HBP board member. But the funding for this offshoot is still uncertain. And at a time when huge, expensive brain projects are in high gear elsewhere, scientists in Europe are frustrated that their version is winding down. “We were probably one of the first ones to initiate this wave of interest in the brain,” says Jorge Mejias, a computational neuroscientist at the University of Amsterdam, who joined the HBP in 2019. Now, he says, “everybody’s rushing, we don’t have time to just take a nap”.

Keyword: Brain imaging; Robotics
Link ID: 28884 - Posted: 08.26.2023

Jon Hamilton Scientists have genetically engineered a squid that is almost as transparent as the water it's in. The squid will allow researchers to watch brain activity and biological processes in a living animal. ARI SHAPIRO, HOST: For most of us, it would take magic to become invisible, but for some lucky, tiny squid, all it took was a little genetic tweaking. As part of our Weekly Dose of Wonder series, NPR's Jon Hamilton explains how scientists created a see-through squid. JON HAMILTON, BYLINE: The squid come from the Marine Biological Laboratory in Woods Hole, Mass. Josh Rosenthal is a senior scientist there. He says even the animal's caretakers can't keep track of them. JOSH ROSENTHAL: They're really hard to spot. We know we put it in this aquarium, but they might look for a half-hour before they can actually see it. They're that transparent. HAMILTON: Almost invisible. Carrie Albertin, a fellow at the lab, says studying these creatures has been transformative. CARRIE ALBERTIN: They are so strikingly see-through. It changes the way you interpret what's going on in this animal, being able to see completely through the body. HAMILTON: Scientists can watch the squid's three hearts beating in synchrony or see its brain cells at work. And it's all thanks to a gene-editing technology called CRISPR. A few years ago, Rosenthal and Albertin decided they could use CRISPR to create a special octopus or squid for research. ROSENTHAL: Carrie and I are highly biased. We both love cephalopods - right? - and we have for our entire careers. HAMILTON: So they focused on the hummingbird bobtail squid. It's smaller than a thumb and shaped like a dumpling. Like other cephalopods, it has a relatively large and sophisticated brain. Rosenthal takes me to an aquarium to show me what the squid looks like before its genes are altered. ROSENTHAL: Here is our hummingbird bobtail squid.
You can see him right there in the bottom, just kind of sitting there hunkered down in the sand. At night, it'll come out and hunt and be much more mobile. © 2023 npr

Keyword: Brain imaging; Evolution
Link ID: 28883 - Posted: 08.26.2023

By Pam Belluck At Ann Johnson’s wedding reception 20 years ago, her gift for speech was vividly evident. In an ebullient 15-minute toast, she joked that she had run down the aisle, wondered if the ceremony program should have said “flutist” or “flautist” and acknowledged that she was “hogging the mic.” Just two years later, Mrs. Johnson — then a 30-year-old teacher, volleyball coach and mother of an infant — had a cataclysmic stroke that paralyzed her and left her unable to talk. On Wednesday, scientists reported a remarkable advance toward helping her, and other patients, speak again. In a milestone of neuroscience and artificial intelligence, implanted electrodes decoded Mrs. Johnson’s brain signals as she silently tried to say sentences. Technology converted her brain signals into written and vocalized language, and enabled an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions. The research, published in the journal Nature, demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals, experts say. Mrs. Johnson chose the avatar, a face resembling hers, and researchers used her wedding toast to develop the avatar’s voice. “We’re just trying to restore who people are,” said the team’s leader, Dr. Edward Chang, the chairman of neurological surgery at the University of California, San Francisco. “It let me feel like I was a whole person again,” Mrs. Johnson, now 48, wrote to me. The goal is to help people who cannot speak because of strokes or conditions like cerebral palsy and amyotrophic lateral sclerosis. To work, Mrs. Johnson’s implant must be connected by cable from her head to a computer, but her team and others are developing wireless versions. Eventually, researchers hope, people who have lost speech may converse in real time through computerized pictures of themselves that convey tone, inflection and emotions like joy and anger. 
“What’s quite exciting is that just from the surface of the brain, the investigators were able to get out pretty good information about these different features of communication,” said Dr. Parag Patil, a neurosurgeon and biomedical engineer at the University of Michigan, who was asked by Nature to review the study before publication. © 2023 The New York Times Company

Keyword: Stroke; Robotics
Link ID: 28882 - Posted: 08.24.2023

Diana Kwon Santiago Ramón y Cajal revolutionized neurobiology in the late nineteenth century with his exquisitely detailed illustrations of neural tissues. Created through years of meticulous microscopy work, the Spanish physician-scientist’s drawings revealed the unique cellular morphology of the brain. “With Cajal’s work, we saw that the cells of the brain don’t look like the cells of every other part of the body — they have incredible morphologies that you just don’t see elsewhere,” says Evan Macosko, a neuroscientist at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. Ramón y Cajal’s drawings provided one of the first clues that the keys to understanding how the brain governs its many functions, from regulating blood pressure and sleep to controlling cognition and mood, might lie at the cellular level. Still, when it comes to the brain, crucial information remained — and indeed, remains — missing. “In order to have a fundamental understanding of the brain, we really need to know how many different types of cells there are, how are they organized, and how they interact with each other,” says Xiaowei Zhuang, a biophysicist at Harvard University in Cambridge. What neuroscientists require, Zhuang explains, is a way to systematically identify and map the many categories of brain cells. Now researchers are closing in on such a resource, at least in mice. By combining high-throughput single-cell RNA sequencing with spatial transcriptomics — methods for determining which genes are expressed in individual cells, and where those cells are located — they are creating some of the most comprehensive atlases of the mouse brain so far. The crucial next steps will be working out what these molecularly defined cell types do, and bringing the various brain maps together to create a unified resource that the broader neuroscience community can use. © 2023 Springer Nature Limited
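The core computation behind such cell-type atlases is clustering: grouping cells whose gene-expression profiles resemble one another into putative types. The toy k-means sketch below illustrates the idea only; real pipelines use thousands of genes and far more sophisticated methods, and the gene names and expression values here are invented for illustration.

```python
# Toy sketch: cluster cells by expression profile into putative types.
# Each cell is a tuple of expression levels for two (made-up) marker
# genes; k-means groups similar profiles together.
def kmeans(points, centers, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            # Assign each cell to the nearest cluster center.
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        # Move each center to the mean of its assigned cells.
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else c
            for g, c in zip(groups, centers)
        ]
    return centers, groups

# Each cell = (expression of "Gad1", expression of "Slc17a7"), toy values:
cells = [(9, 1), (8, 2), (10, 0),    # putative inhibitory-like cells
         (1, 9), (0, 8), (2, 10)]    # putative excitatory-like cells
centers, groups = kmeans(cells, centers=[(9, 1), (1, 9)])
print(len(groups[0]), len(groups[1]))  # 3 3
```

In a real atlas the resulting clusters are then annotated by marker genes and placed in space using the spatial-transcriptomics coordinates.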

Keyword: Brain imaging; Development of the Brain
Link ID: 28880 - Posted: 08.24.2023

By Lauren Leffer When a nematode wriggles around a petri dish, what’s going on inside a tiny roundworm’s even tinier brain? Neuroscientists now have a more detailed answer to that question than ever before. As with any experimental animal, from a mouse to a monkey, the answers may hold clues about the contents of more complex creatures’ noggins, including what resides in the neural circuitry of our own heads. A new brain “atlas” and computer model, published in Cell on Monday, lays out the connections between the actions of the nematode species Caenorhabditis elegans and this model organism’s individual brain cells. With the findings, researchers can now observe a C. elegans worm feeding or moving in a particular way and infer activity patterns for many of the animal’s behaviors in its specific neurons. Through establishing those brain-behavior links in a humble roundworm, neuroscientists are one step closer to understanding how all sorts of animal brains, even potentially human ones, encode action. “I think this is really nice work,” says Andrew Leifer, a neuroscientist and physicist who studies nematode brains at Princeton University and was not involved in the new research. “One of the most exciting reasons to study how a worm brain works is because it holds the promise of being able to understand how any brain generates behavior,” he says. “What we find in the worm forms hypotheses to look for in other organisms.” Biologists have been drawn to the elegant simplicity of nematode biology for many decades. South African biologist Sydney Brenner received a Nobel Prize in Physiology or Medicine in 2002 for pioneering work that enabled C. elegans to become an experimental animal for the study of cell maturation and organ development. C. elegans was the first multicellular organism to have its entire genome and nervous system mapped. The first neural map, or “connectome,” of a C. elegans brain was published in 1986.
In that research, scientists hand drew connections using colored pencils and charted each of the 302 neurons and approximately 5,000 synapses inside the one-millimeter-long animal’s transparent body. Since then a subdiscipline of neuroscience has emerged—one dedicated to plotting out the brains of increasingly complex organisms. Scientists have compiled many more nematode connectomes, as well as brain maps of a marine annelid worm, a tadpole, a maggot and an adult fruit fly. Yet these maps simply serve as a snapshot in time of a single animal. They can tell us a lot about brain structure but little about how behaviors relate to that structure. © 2023 Scientific American
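A connectome like the 1986 C. elegans map is, at heart, a weighted directed graph: nodes are neurons and edge weights are synapse counts. The sketch below shows one minimal way to represent that in code; the neuron names are real C. elegans identifiers, but the synapse counts are invented for illustration.

```python
# Minimal sketch of a connectome as a weighted, directed graph.
from collections import defaultdict

class Connectome:
    def __init__(self):
        # pre-synaptic neuron -> {post-synaptic neuron: synapse count}
        self.edges = defaultdict(dict)

    def add_synapses(self, pre, post, count):
        self.edges[pre][post] = self.edges[pre].get(post, 0) + count

    def out_degree(self, neuron):
        """Number of distinct downstream partners of this neuron."""
        return len(self.edges[neuron])

    def total_synapses(self):
        return sum(c for post in self.edges.values() for c in post.values())

worm = Connectome()
worm.add_synapses("AVAL", "VA08", 7)   # command interneuron -> motor neuron
worm.add_synapses("AVAL", "DA02", 4)
worm.add_synapses("ASEL", "AIYL", 12)  # sensory neuron -> interneuron

print(worm.out_degree("AVAL"))    # 2
print(worm.total_synapses())      # 23
```

The full worm map has 302 such nodes and roughly 5,000 such edges; connectomes of larger brains differ mainly in scale, not in this basic structure.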

Keyword: Brain imaging; Development of the Brain
Link ID: 28879 - Posted: 08.24.2023

By Claudia López Lloreda In what seems like something out of a sci-fi movie, scientists have plucked the famous Pink Floyd song “Another Brick in the Wall” from individuals’ brains. Using electrodes, computer models and brain scans, researchers previously have been able to decode and reconstruct individual words and entire thoughts from people’s brain activity (SN: 11/15/22; SN: 5/1/23). The new study, published August 15 in PLOS Biology, adds music into the mix, showing that songs can also be decoded from brain activity and revealing how different brain areas pick up an array of acoustical elements. The finding could eventually help improve devices that allow communication from people with paralysis or other conditions that limit one’s ability to speak. To decode the song, neuroscientist Ludovic Bellier of the University of California, Berkeley and colleagues analyzed the brain activity recorded by electrodes implanted in the brains of 29 individuals with epilepsy. While in the hospital undergoing monitoring for the disorder, the individuals listened to the 1979 rock song. People’s nerve cells, particularly those in auditory areas, responded to hearing the song, and the electrodes detected not only neural signals associated with words but also rhythm, harmony and other musical aspects, the team found. With that information, the researchers developed a computer model to reconstruct sounds from the brain activity data, and found that they could produce sounds that resemble the song. © Society for Science & the Public 2000–2023.
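The reconstruction idea can be sketched as a regression problem: learn a mapping from multi-electrode activity to an audio feature (one spectrogram band, say), then apply it to new brain data. The toy below uses entirely synthetic data and a plain gradient-descent fit; the actual study used high-density recordings and far richer models.

```python
# Toy sketch: learn a linear map from simulated "electrode" activity to a
# single audio feature, then reconstruct that feature from new activity.
# All data and weights here are fabricated for illustration.
import random

random.seed(0)
TRUE_W = [0.8, -0.3, 0.5]           # hidden electrode weights (made up)

def audio_band(neural):             # simulated ground-truth relation
    return sum(w * x for w, x in zip(TRUE_W, neural))

# Simulated recordings: 200 time points, 3 electrodes.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
y = [audio_band(x) for x in X]

# Fit decoder weights by stochastic gradient descent on squared error.
w = [0.0, 0.0, 0.0]
lr = 0.05
for _ in range(500):
    for xi, yi in zip(X, y):
        pred = sum(wj * xj for wj, xj in zip(w, xi))
        err = pred - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]

# Reconstruct the audio feature at a new time point.
probe = [1.0, 0.5, -0.2]
print(round(sum(wj * xj for wj, xj in zip(w, probe)), 2))  # 0.55
```

Stacking many such decoders, one per spectrogram band, and inverting the spectrogram is one way to get audible sound back out, which is conceptually what song-reconstruction pipelines do.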

Keyword: Hearing; Brain imaging
Link ID: 28876 - Posted: 08.19.2023

Liam Drew Scientific advances are rapidly making science-fiction concepts such as mind-reading a reality — and raising thorny questions for ethicists, who are considering how to regulate brain-reading techniques to protect human rights such as privacy. On 13 July, neuroscientists, ethicists and government ministers discussed the topic at a Paris meeting organized by UNESCO, the United Nations scientific and cultural agency. Delegates plotted the next steps in governing such ‘neurotechnologies’ — techniques and devices that directly interact with the brain to monitor or change its activity. The technologies often use electrical or imaging techniques, and run the gamut from medically approved devices, such as brain implants for treating Parkinson’s disease, to commercial products such as wearables used in virtual reality (VR) to gather brain data or to allow users to control software. How to regulate neurotechnology “is not a technological discussion — it’s a societal one, it’s a legal one”, Gabriela Ramos, UNESCO’s assistant director-general for social and human sciences, told the meeting. Advances in neurotechnology include a neuroimaging technique that can decode the contents of people’s thoughts, and implanted brain–computer interfaces (BCIs) that can convert people’s thoughts of handwriting into text1. The field is growing fast — UNESCO’s latest report on neurotechnology, released at the meeting, showed that, worldwide, the number of neurotechnology-related patents filed annually doubled between 2015 and 2020. Investment rose 22-fold between 2010 and 2020, the report says, and neurotechnology is now a US$33-billion industry. One area in need of regulation is the potential for neurotechnologies to be used for profiling individuals and the Orwellian idea of manipulating people’s thoughts and behaviour. 
Mass-market brain-monitoring devices would be a powerful addition to a digital world in which corporate and political actors already use personal data for political or commercial gain, says Nita Farahany, an ethicist at Duke University in Durham, North Carolina, who attended the meeting. © 2023 Springer Nature Limited

Keyword: Brain imaging
Link ID: 28859 - Posted: 07.27.2023

by Holly Barker By bloating brain samples and imaging them with a powerful microscope, researchers can reconstruct neurons across the entire mouse brain, according to a new preprint. The technique could help scientists uncover the neural circuits responsible for complex behaviors, as well as the pathways that are altered in neurological conditions. Tracking axons can help scientists understand how individual neurons and brain areas communicate over long distances. But tracing their path through the brain is tricky, says study investigator Adam Glaser, senior scientist at the Allen Institute for Neural Dynamics in Seattle, Washington. Axons, which are capable of spanning the entire brain, can be less than a micrometer in diameter, so mapping their route requires detailed imaging, he says. One existing approach involves a microscope that slices off an ultra-thin section of the brain and then scans it, repeating the process about 20,000 times to capture the entire mouse brain. Scientists then blend the images together to form a 3D reconstruction of neuronal pathways. But the process takes several days and is therefore more prone to complications — bubbles forming on the lens, say — than faster techniques, Glaser says. And slicing can distort the edges of the image, making it “challenging or impossible” to stitch them back together, says Paul Tillberg, principal scientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, who was not involved in the study. “This is particularly an issue when reconstructing brain-wide axonal projections, where a single point of confusion can misalign an entire axonal arbor to the wrong neuron,” he says. © 2023 Simons Foundation

Keyword: Brain imaging
Link ID: 28850 - Posted: 07.19.2023

By Yasemin Saplakoglu Enough pints of beer can have you falling off your bar stool or loudly reciting lyrics to early 2000s jams to total strangers, because alcohol can get past one of the strongest defenses in the body. If you’ve ever been drunk, high or drowsy from allergy medication, you’ve experienced what happens when some molecules defeat the defense system called the blood-brain barrier and make it into the brain. Embedded in the walls of the hundreds of miles of capillaries that wind through the brain, the barrier keeps most molecules in the blood from ever reaching sensitive neurons. Much as the skull protects the brain from external physical threats, the blood-brain barrier protects it from chemical and pathogenic ones. While it’s a fantastic feat of evolution, the barrier is very much a nuisance for drug developers, who have spent decades trying to selectively overcome it to deliver therapeutics to the brain. Biomedical researchers want to understand the barrier better because its failures seem to be the key to some diseases and because manipulating the barrier could help improve the treatment of certain conditions. “We’ve learned a lot over the last decade,” said Elizabeth Rhea, a research biologist at the University of Washington Medicine Memory and Brain Wellness Center. But “we’re definitely still facing challenges in getting substrates and therapeutics across.” Protection, but Not a Fortress Like the rest of the body, the brain needs circulating blood to deliver essential nutrients and oxygen and to carry away waste. But blood chemistry constantly fluctuates, and brain tissue is extremely sensitive to its chemical environment. Neurons rely on precise releases of ions to communicate — if ions could flow freely out of the blood, that precision would be lost.
Other types of biologically active molecules can also twang the delicate neurons, interfering with thoughts, memories and behaviors. All Rights Reserved © 2023

Keyword: Drug Abuse
Link ID: 28831 - Posted: 06.21.2023

Davide Castelvecchi The wrinkles that give the human brain its familiar walnut-like appearance have a large effect on brain activity, in much the same way that the shape of a bell determines the quality of its sound, a study suggests1. The findings run counter to a commonly held theory about which aspect of brain anatomy drives function. The study’s authors compared the influence of two components of the brain’s physical structure: the outer folds of the cerebral cortex — the area where most higher-level brain activity occurs — and the connectome, the web of nerves that links distinct regions of the cerebral cortex. The team found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity. “We use concepts from physics and engineering to study how anatomy determines function,” says study co-author James Pang, a physicist at Monash University in Melbourne, Australia. The results were published in Nature on 31 May1. ‘Exciting’ a neuron makes it fire, which sends a message zipping to other neurons. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long filament called an axon that connects it to a faraway region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The authors wanted to understand how brain activity is affected by each of the ways in which neuronal excitation can spread: across the brain’s surface or through distant interconnections. To do so, the researchers — who have backgrounds in physics and neuroscience — tapped into the mathematical theory of waves.
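The bell analogy can be made concrete on a toy "cortex": for a structure with fixed geometry, the natural patterns of activity are the eigenmodes of its Laplacian, just as a bell's shape fixes its harmonics. The sketch below uses a ring of N coupled patches, a drastic simplification of a cortical surface, for which cosine patterns are exact eigenmodes with eigenvalue 2 − 2·cos(2πk/N).

```python
# Toy sketch: verify that a cosine pattern is an eigenmode of the
# ring-graph Laplacian, the discrete analogue of a bell's harmonics.
import math

N = 8
def laplacian_apply(v):
    """Apply the ring-graph Laplacian: (L v)[i] = 2*v[i] - v[i-1] - v[i+1]."""
    return [2 * v[i] - v[i - 1] - v[(i + 1) % N] for i in range(N)]

k = 2                                   # mode number
mode = [math.cos(2 * math.pi * k * i / N) for i in range(N)]
lam = 2 - 2 * math.cos(2 * math.pi * k / N)

Lv = laplacian_apply(mode)
# Check L v == lambda * v componentwise (the definition of an eigenmode).
print(all(abs(Lv[i] - lam * mode[i]) < 1e-9 for i in range(N)))  # True
```

The study's approach is the same idea scaled up: compute the eigenmodes of the real cortical surface's geometry and ask how well they predict measured brain activity compared with modes derived from the connectome.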

Keyword: Brain imaging; Development of the Brain
Link ID: 28811 - Posted: 06.03.2023

By Matteo Wong If you are willing to lie very still in a giant metal tube for 16 hours and let magnets blast your brain as you listen, rapt, to hit podcasts, a computer just might be able to read your mind. Or at least its crude contours. Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them—gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind. The program analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. Then, it used that brain-imaging data to reconstruct the content of those sentences. For example, when one subject heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scans and returned “She has not even started to learn to drive yet”—not a word-for-word re-creation, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data of people watching short films and write approximate summaries of the clips, suggesting the AI was capturing not individual words from the brain scans, but underlying meanings. The findings, published in Nature Neuroscience earlier this month, add to a new field of research that flips the conventional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and recent voice-cloning programs are built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send outputs to one another to achieve a desired result. Yet even as human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains has remained a mystery. 
Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. It’s “unquestionably leading to advances that we just couldn’t imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT. Copyright (c) 2023 by The Atlantic Monthly Group.
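One way to understand the "gist, not words" behavior described above is that the decoder predicts a semantic vector from brain data and then selects the candidate sentence whose own vector lies closest, for example by cosine similarity. The sketch below illustrates that selection step only; the three-dimensional "embeddings" and candidate sentences are invented for illustration.

```python
# Toy sketch: pick the candidate sentence whose (made-up) semantic
# embedding best matches a vector "predicted" from brain data.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

candidates = {
    "She has not started to learn to drive yet": (0.9, 0.1, 0.2),
    "The weather was perfect for a picnic":      (0.1, 0.8, 0.3),
    "He finished the marathon in four hours":    (0.2, 0.3, 0.9),
}

# Pretend this vector was regressed from fMRI scans recorded while the
# subject heard "I don't have my driver's license yet".
predicted = (0.85, 0.15, 0.25)

best = max(candidates, key=lambda s: cosine(predicted, candidates[s]))
print(best)  # She has not started to learn to drive yet
```

Because the match happens in meaning space rather than word space, the decoded sentence can paraphrase the original, exactly the behavior the study reports.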

Keyword: Brain imaging; Language
Link ID: 28802 - Posted: 05.27.2023

By Oliver Whang Gert-Jan Oskam was living in China in 2011 when he was in a motorcycle accident that left him paralyzed from the hips down. Now, with a combination of devices, scientists have given him control over his lower body again. “For 12 years I’ve been trying to get back my feet,” Mr. Oskam said in a press briefing on Tuesday. “Now I have learned how to walk normal, natural.” In a study published on Wednesday in the journal Nature, researchers in Switzerland described implants that provided a “digital bridge” between Mr. Oskam’s brain and his spinal cord, bypassing injured sections. The discovery allowed Mr. Oskam, 40, to stand, walk and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he has retained these abilities and has actually shown signs of neurological recovery, walking with crutches even when the implant was switched off. “We’ve captured the thoughts of Gert-Jan, and translated these thoughts into a stimulation of the spinal cord to re-establish voluntary movement,” Grégoire Courtine, a spinal cord specialist at the Swiss Federal Institute of Technology, Lausanne, who helped lead the research, said at the press briefing. Jocelyne Bloch, a neuroscientist at the University of Lausanne who placed the implant in Mr. Oskam, added, “It was quite science fiction in the beginning for me, but it became true today.” © 2023 The New York Times Company

Keyword: Robotics; Brain imaging
Link ID: 28801 - Posted: 05.27.2023

By Laura Sanders Scientists can see chronic pain in the brain with new clarity. Over months, electrodes implanted in the brains of four people picked up specific signs of their persistent pain. This detailed view of chronic pain, described May 22 in Nature Neuroscience, suggests new ways to curtail the devastating condition. The approach “provides a way into the brain to track pain,” says Katherine Martucci, a neuroscientist who studies chronic pain at Duke University School of Medicine. Chronic pain is incredibly common. In the United States from 2019 to 2020, more adults were diagnosed with chronic pain than with diabetes, depression or high blood pressure, researchers reported May 16 in JAMA Network Open. Chronic pain is also incredibly complex, an amalgam influenced by the body, brain, context, emotions and expectations, Martucci says. That complexity makes chronic pain seemingly invisible to an outsider, and very difficult to treat. One treatment approach is to stimulate the brain with electricity. As part of a clinical trial, researchers at the University of California, San Francisco implanted four electrode wires into the brains of four volunteers with chronic pain. These electrodes can both monitor and stimulate nerve cells in two brain areas: the orbitofrontal cortex, or OFC, and the anterior cingulate cortex, or ACC. The OFC isn’t known to be a key pain influencer in the brain, but this region has lots of neural connections to pain-related areas, including the ACC, which is thought to be involved in how people experience pain. But before researchers stimulated the brain, they needed to know how chronic pain was affecting it. For about 3 to 6 months, the implanted electrodes monitored brain signals of these people as they went about their lives. During that time, the participants rated their pain on standard scales two to eight times a day. © Society for Science & the Public 2000–2023.
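A first step toward a pain biomarker of the kind described here is simply asking whether a recorded brain feature tracks the patient's own pain ratings across many sessions. The sketch below computes a Pearson correlation between a hypothetical orbitofrontal-cortex power feature and self-reported pain; all numbers are fabricated, and the study itself used months of recordings and per-patient machine-learning models rather than a single correlation.

```python
# Toy sketch: does a (made-up) OFC power feature track self-reported pain?
import math

ofc_power  = [0.2, 0.5, 0.9, 0.4, 0.7, 0.1, 0.8, 0.3]   # arbitrary units
pain_score = [2,   4,   9,   3,   7,   1,   8,   2]     # 0-10 self-report

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(ofc_power, pain_score)
print(r > 0.9)  # True for this fabricated, deliberately correlated data
```

A feature that correlates this reliably with ratings could, in principle, be monitored continuously, which is what makes implanted electrodes attractive for tracking a condition as subjective as chronic pain.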

Keyword: Pain & Touch; Brain imaging
Link ID: 28795 - Posted: 05.23.2023

By Priyanka Runwal Researchers have for the first time recorded the brain’s firing patterns while a person is feeling chronic pain, paving the way for implanted devices to one day predict pain signals or even short-circuit them. Using a pacemaker-like device surgically placed inside the brain, scientists recorded from four patients who had felt unremitting nerve pain for more than a year. The devices recorded several times a day for up to six months, offering clues for where chronic pain resides in the brain. The study, published on Monday in the journal Nature Neuroscience, reported that the pain was associated with electrical fluctuations in the orbitofrontal cortex, an area involved in emotion regulation, self-evaluation and decision making. The research suggests that such patterns of brain activity could serve as biomarkers to guide diagnosis and treatment for millions of people with shooting or burning chronic pain linked to a damaged nervous system. “The study really advances a whole generation of research that has shown that the functioning of the brain is really important to processing and perceiving pain,” said Dr. Ajay Wasan, a pain medicine specialist at the University of Pittsburgh School of Medicine, who wasn’t involved in the study. About one in five American adults experience chronic pain, which is persistent or recurrent pain that lasts longer than three months. To measure pain, doctors typically rely on patients to rate their pain, using either a numerical scale or a visual one based on emojis. But self-reported pain measures are subjective and can vary throughout the day. And some patients, like children or people with disabilities, may struggle to accurately communicate or score their pain. “There’s a big movement in the pain field to develop more objective markers of pain that can be used alongside self-reports,” said Kenneth Weber, a neuroscientist at Stanford University, who was not involved in the study. 
In addition to advancing our understanding of what neural mechanisms underlie the pain, Dr. Weber added, such markers can help validate the pain experienced by some patients that is not fully appreciated — or is even outright ignored — by their doctors. © 2023 The New York Times Company
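The biomarker idea described above — that a measurable feature of brain activity could track self-reported pain — can be illustrated with a toy sketch. Everything here is invented for illustration (the "OFC signal power" feature, the numbers, and the pain scores are not from the study, which used implanted recordings and its own analysis): a candidate biomarker is simply a recorded feature that correlates strongly with a patient's own pain ratings.

```python
# Toy sketch of the "pain biomarker" idea: relate a simple, invented
# feature of recorded brain activity (per-session orbitofrontal signal
# power) to self-reported pain scores. Illustrative data only.

sessions = [
    # (mean OFC signal power, self-reported pain on a 0-10 scale)
    (0.31, 2), (0.52, 4), (0.77, 7), (0.45, 3), (0.88, 9), (0.60, 6),
]

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

powers = [p for p, _ in sessions]
pains = [s for _, s in sessions]

# A strong correlation across many sessions is what would justify
# treating the feature as a candidate biomarker for this individual.
print(correlation(powers, pains))
```

The point of the sketch is only the logic of validation: an objective marker earns its keep by tracking the subjective reports it is meant to replace or supplement.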

Keyword: Pain & Touch; Brain imaging
Link ID: 28794 - Posted: 05.23.2023

By Marla Broadfoot In Alexandre Dumas’s classic novel The Count of Monte-Cristo, a character named Monsieur Noirtier de Villefort suffers a terrible stroke that leaves him paralyzed. Though he remains awake and aware, he is no longer able to move or speak, relying on his granddaughter Valentine to recite the alphabet and flip through a dictionary to find the letters and words he requires. With this rudimentary form of communication, the determined old man manages to save Valentine from being poisoned by her stepmother and thwart his son’s attempts to marry her off against her will. Dumas’s portrayal of this catastrophic condition — where, as he puts it, “the soul is trapped in a body that no longer obeys its commands” — is one of the earliest descriptions of locked-in syndrome. This form of profound paralysis occurs when the brain stem is damaged, usually because of a stroke but also as the result of tumors, traumatic brain injury, snakebite, substance abuse, infection or neurodegenerative diseases like amyotrophic lateral sclerosis (ALS). The condition is thought to be rare, though just how rare is hard to say. Many locked-in patients can communicate through purposeful eye movements and blinking, but others can become completely immobile, losing their ability even to move their eyeballs or eyelids, rendering the command “blink twice if you understand me” moot. As a result, patients can spend an average of 79 days imprisoned in a motionless body, conscious but unable to communicate, before they are properly diagnosed. The advent of brain-machine interfaces has fostered hopes of restoring communication to people in this locked-in state, enabling them to reconnect with the outside world. These technologies typically use an implanted device to record the brain waves associated with speech and then use computer algorithms to translate the intended messages. 
The most exciting advances require no blinking, eye tracking or attempted vocalizations, but instead capture and convey the letters or words a person says silently in their head. © 2023 Annual Reviews

Keyword: Brain imaging; Language
Link ID: 28791 - Posted: 05.21.2023

Tess McClure Every few months, Cohen “Coey” Irwin lies on his back and lets the walls close in. Lights move overhead, scanning over the tattoos covering his cheeks. He lies suspended, his head encased by a padded helmet, ears blocked, as his body is shunted into a tunnel. The noise begins: a rhythmic crashing, loud as a jackhammer. For the next hour, an enormous magnet will produce finely detailed images of Irwin’s brain. Irwin has spent much of his adult life addicted to smoking methamphetamine – or P, as the drug is known in New Zealand. He knows its effects intimately: the euphoria, the paranoia, the explosive violence, the energy, the tics that run through his neck and lips. Stepping outside the MRI machine, however, he can get a fresh view for the first time – looking in from the outside at what the drug has done to his internal organs. New Zealanders are some of the world’s biggest meth takers: wastewater testing has placed it in the top four consumers worldwide. The country’s physical isolation – 4,000km from the nearest major ports – makes importing hard drugs challenging and costly, but meth can be manufactured relatively cheaply and easily, and is derived from available pharmaceuticals. Almost a third of middle-aged New Zealanders have tried the drug, a University of Otago study found in 2020. In the backroom of Mātai research centre, Irwin thinks back to when it all started. He was a teenager when he tried P for the first time – trying to impress a girl on New Year’s Eve, in his home town of Porirua, Wellington. The girlfriend didn’t last, but the drug was love at first puff, he says, and would become one of the defining relationships of his life. “I remember it was the next day, the sun had risen, I was still awake with the people at the table I’d been smoking with. And I was instantly trying to find ways: how can we make money to get more?” Within a few years, he would be smoking every day. © 2023 Guardian News & Media Limited

Keyword: Drug Abuse; Brain imaging
Link ID: 28772 - Posted: 05.06.2023

By Laura Sanders Like Dumbledore’s wand, a scan can pull long strings of stories straight out of a person’s brain — but only if that person cooperates. This “mind-reading” feat, described May 1 in Nature Neuroscience, has a long way to go before it can be used outside of sophisticated laboratories. But the result could ultimately lead to seamless devices that help people who can’t talk or otherwise communicate easily. The research also raises privacy concerns about unwelcome neural eavesdropping (SN: 2/11/21). “I thought it was fascinating,” says Gopala Anumanchipalli, a neural engineer at the University of California, Berkeley who wasn’t involved in the study. “It’s like, ‘Wow, now we are here already,’” he says. “I was delighted to see this.” As opposed to implanted devices that have shown recent promise, the new system requires no surgery (SN: 11/15/22). And unlike other external approaches, it produces continuous streams of words instead of having a more constrained vocabulary. For the new study, three people lay inside a bulky MRI machine for at least 16 hours each. They listened to stories, mostly from The Moth podcast, while functional MRI scans detected changes in blood flow in the brain. These changes are proxies for brain activity, albeit slow and imperfect measures. With this neural data in hand, computational neuroscientists Alexander Huth and Jerry Tang of the University of Texas at Austin and colleagues were able to match patterns of brain activity to certain words and ideas. The approach relied on a language model that was built with GPT, one of the forerunners that enabled today’s AI chatbots (SN: 4/12/23). © Society for Science & the Public 2000–2023.

Keyword: Brain imaging; Consciousness
Link ID: 28769 - Posted: 05.03.2023

By Oliver Whang Think of the words whirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could listen in. On Monday, scientists from the University of Texas, Austin, made another step in that direction. In a study published in the journal Nature Neuroscience, the researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions in the brain. Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write while just thinking of writing. But the new language decoder is one of the first to not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening onscreen. “This isn’t just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at meaning, something about the idea of what’s happening. And the fact that that’s possible is very exciting.” The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard. © 2023 The New York Times Company

Keyword: Brain imaging; Consciousness
Link ID: 28768 - Posted: 05.03.2023

Sara Reardon The little voice inside your head can now be decoded by a brain scanner — at least some of the time. Researchers have developed the first non-invasive method of determining the gist of imagined speech, presenting a possible communication outlet for people who cannot talk. But how close is the technology — which is currently only moderately accurate — to achieving true mind-reading? And how can policymakers ensure that such developments are not misused? Most existing thought-to-speech technologies use brain implants that monitor activity in a person’s motor cortex and predict the words that the lips are trying to form. To understand the actual meaning behind the thought, computer scientists Alexander Huth and Jerry Tang at the University of Texas at Austin and their colleagues combined functional magnetic resonance imaging (fMRI), a non-invasive means of measuring brain activity, with artificial intelligence (AI) algorithms called large language models (LLMs), which underlie tools such as ChatGPT and are trained to predict the next word in a piece of text. In a study published in Nature Neuroscience on 1 May, the researchers had 3 volunteers lie in an fMRI scanner and recorded the individuals’ brain activity while they listened to 16 hours of podcasts each. By measuring the blood flow through the volunteers’ brains and integrating this information with details of the stories they were listening to and the LLM’s ability to understand how words relate to one another, the researchers developed an encoded map of how each individual’s brain responds to different words and phrases. Next, the researchers recorded the participants’ fMRI activity while they listened to a story, imagined telling a story or watched a film that contained no dialogue.
Using a combination of the patterns they had previously encoded for each individual and algorithms that determine how a sentence is likely to be constructed based on other words in it, the researchers attempted to decode this new brain activity. A video accompanying the article shows the sentences produced from brain recordings taken while a study participant watched a clip from the animated film Sintel about a girl caring for a baby dragon. © 2023 Springer Nature Limited
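The encode-then-decode scheme described above can be caricatured in a few lines of code. This is a deliberately tiny sketch, not the study's model: the vocabulary, response patterns, and word "prior" below are all invented, and the real work used fMRI voxel responses and a GPT-style language model. The structure is the same, though: an encoding model predicts a brain response for each candidate word, and decoding picks the candidate that best matches the observed activity while also being a likely word according to the language prior.

```python
import math

# Toy encode-then-decode illustration (invented data throughout).

# 1) "Encoding model": a learned mapping from each word to the brain
#    response it tends to evoke. Here, just a stored pattern per word.
encoding_model = {
    "girl":   [0.9, 0.1, 0.2],
    "dragon": [0.2, 0.8, 0.7],
    "baby":   [0.7, 0.3, 0.6],
}

def match_score(predicted, observed):
    # Negative squared distance: higher means a better fit to the data.
    return -sum((p - o) ** 2 for p, o in zip(predicted, observed))

def decode(observed, prior):
    # 2) Decoding: combine fit to the observed brain activity with a
    #    language-model-style prior over candidate words (here a
    #    hand-written probability table standing in for an LLM).
    return max(
        encoding_model,
        key=lambda w: match_score(encoding_model[w], observed)
                      + math.log(prior.get(w, 1e-6)),
    )

# A noisy observation most similar to the stored "dragon" pattern:
observed = [0.25, 0.75, 0.65]
prior = {"girl": 0.3, "dragon": 0.5, "baby": 0.2}
print(decode(observed, prior))  # "dragon"
```

The design point the sketch captures is why the LLM matters: fMRI is slow and noisy, so the fit-to-data term alone is ambiguous, and the prior over plausible word sequences is what lets the decoder settle on a coherent sentence rather than a string of near-miss guesses.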

Keyword: Brain imaging; Consciousness
Link ID: 28767 - Posted: 05.03.2023