Chapter 2. Functional Neuroanatomy: The Cells and Structure of the Nervous System



Davide Castelvecchi The wrinkles that give the human brain its familiar walnut-like appearance have a large effect on brain activity, in much the same way that the shape of a bell determines the quality of its sound, a study suggests. The findings run counter to a commonly held theory about which aspect of brain anatomy drives function. The study’s authors compared the influence of two components of the brain’s physical structure: the outer folds of the cerebral cortex — the area where most higher-level brain activity occurs — and the connectome, the web of nerves that links distinct regions of the cerebral cortex. The team found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity. “We use concepts from physics and engineering to study how anatomy determines function,” says study co-author James Pang, a physicist at Monash University in Melbourne, Australia. The results were published in Nature on 31 May. ‘Exciting’ a neuron makes it fire, which sends a message zipping to other neurons. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long filament called an axon that connects it to a faraway region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The authors wanted to understand how brain activity is affected by each of the ways in which neuronal excitation can spread: across the brain’s surface or through distant interconnections. To do so, the researchers — who have backgrounds in physics and neuroscience — tapped into the mathematical theory of waves.
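
Pang and colleagues’ wave-based approach can be pictured with a small computational toy: the natural wave patterns of a shape are the eigenmodes of its Laplacian, which is exactly the bell analogy above. The sketch below computes those modes for a ring graph standing in for a cortical surface mesh; it illustrates the general idea only, not the study’s actual geometry or code.

```python
import numpy as np

# Toy stand-in for a cortical surface: a ring of n connected nodes.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1  # link each node to its neighbour

# Graph Laplacian: degree matrix minus adjacency matrix.
L = np.diag(A.sum(axis=1)) - A

# Its eigenvectors are the shape's "resonant modes", the analogue of a
# bell's vibration patterns; small eigenvalues mean smooth, large-scale waves.
eigvals, eigvecs = np.linalg.eigh(L)
print(np.round(eigvals[:4], 3))  # the first mode is constant (eigenvalue 0)
```

On a real cortical mesh the same computation is done with the surface’s geometric Laplacian, and brain activity is then expressed as a weighted sum of these modes.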

Keyword: Brain imaging; Development of the Brain
Link ID: 28811 - Posted: 06.03.2023

By Matteo Wong If you are willing to lie very still in a giant metal tube for 16 hours and let magnets blast your brain as you listen, rapt, to hit podcasts, a computer just might be able to read your mind. Or at least its crude contours. Researchers from the University of Texas at Austin recently trained an AI model to decipher the gist of a limited range of sentences as individuals listened to them—gesturing toward a near future in which artificial intelligence might give us a deeper understanding of the human mind. The program analyzed fMRI scans of people listening to, or even just recalling, sentences from three shows: Modern Love, The Moth Radio Hour, and The Anthropocene Reviewed. Then, it used that brain-imaging data to reconstruct the content of those sentences. For example, when one subject heard “I don’t have my driver’s license yet,” the program deciphered the person’s brain scans and returned “She has not even started to learn to drive yet”—not a word-for-word re-creation, but a close approximation of the idea expressed in the original sentence. The program was also able to look at fMRI data of people watching short films and write approximate summaries of the clips, suggesting the AI was capturing not individual words from the brain scans, but underlying meanings. The findings, published in Nature Neuroscience earlier this month, add to a new field of research that flips the conventional understanding of AI on its head. For decades, researchers have applied concepts from the human brain to the development of intelligent machines. ChatGPT, hyperrealistic-image generators such as Midjourney, and recent voice-cloning programs are built on layers of synthetic “neurons”: a bunch of equations that, somewhat like nerve cells, send outputs to one another to achieve a desired result. Yet even as human cognition has long inspired the design of “intelligent” computer programs, much about the inner workings of our brains has remained a mystery. 
Now, in a reversal of that approach, scientists are hoping to learn more about the mind by using synthetic neural networks to study our biological ones. It’s “unquestionably leading to advances that we just couldn’t imagine a few years ago,” says Evelina Fedorenko, a cognitive scientist at MIT. Copyright (c) 2023 by The Atlantic Monthly Group.

Keyword: Brain imaging; Language
Link ID: 28802 - Posted: 05.27.2023

By Oliver Whang Gert-Jan Oskam was living in China in 2011 when he was in a motorcycle accident that left him paralyzed from the hips down. Now, with a combination of devices, scientists have given him control over his lower body again. “For 12 years I’ve been trying to get back my feet,” Mr. Oskam said in a press briefing on Tuesday. “Now I have learned how to walk normal, natural.” In a study published on Wednesday in the journal Nature, researchers in Switzerland described implants that provided a “digital bridge” between Mr. Oskam’s brain and his spinal cord, bypassing injured sections. The discovery allowed Mr. Oskam, 40, to stand, walk and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he has retained these abilities and has actually shown signs of neurological recovery, walking with crutches even when the implant was switched off. “We’ve captured the thoughts of Gert-Jan, and translated these thoughts into a stimulation of the spinal cord to re-establish voluntary movement,” Grégoire Courtine, a spinal cord specialist at the Swiss Federal Institute of Technology, Lausanne, who helped lead the research, said at the press briefing. Jocelyne Bloch, a neuroscientist at the University of Lausanne who placed the implant in Mr. Oskam, added, “It was quite science fiction in the beginning for me, but it became true today.” © 2023 The New York Times Company

Keyword: Robotics; Brain imaging
Link ID: 28801 - Posted: 05.27.2023

By Laura Sanders Scientists can see chronic pain in the brain with new clarity. Over months, electrodes implanted in the brains of four people picked up specific signs of their persistent pain. This detailed view of chronic pain, described May 22 in Nature Neuroscience, suggests new ways to curtail the devastating condition. The approach “provides a way into the brain to track pain,” says Katherine Martucci, a neuroscientist who studies chronic pain at Duke University School of Medicine. Chronic pain is incredibly common. In the United States from 2019 to 2020, more adults were diagnosed with chronic pain than with diabetes, depression or high blood pressure, researchers reported May 16 in JAMA Network Open. Chronic pain is also incredibly complex, an amalgam influenced by the body, brain, context, emotions and expectations, Martucci says. That complexity makes chronic pain seemingly invisible to an outsider, and very difficult to treat. One treatment approach is to stimulate the brain with electricity. As part of a clinical trial, researchers at the University of California, San Francisco implanted four electrode wires into the brains of four volunteers with chronic pain. These electrodes can both monitor and stimulate nerve cells in two brain areas: the orbitofrontal cortex, or OFC, and the anterior cingulate cortex, or ACC. The OFC isn’t known to be a key pain influencer in the brain, but this region has lots of neural connections to pain-related areas, including the ACC, which is thought to be involved in how people experience pain. But before researchers stimulated the brain, they needed to know how chronic pain was affecting it. For about 3 to 6 months, the implanted electrodes monitored brain signals of these people as they went about their lives. During that time, the participants rated their pain on standard scales two to eight times a day. © Society for Science & the Public 2000–2023.

Keyword: Pain & Touch; Brain imaging
Link ID: 28795 - Posted: 05.23.2023

By Priyanka Runwal Researchers have for the first time recorded the brain’s firing patterns while a person is feeling chronic pain, paving the way for implanted devices to one day predict pain signals or even short-circuit them. Using a pacemaker-like device surgically placed inside the brain, scientists recorded from four patients who had felt unremitting nerve pain for more than a year. The devices recorded several times a day for up to six months, offering clues for where chronic pain resides in the brain. The study, published on Monday in the journal Nature Neuroscience, reported that the pain was associated with electrical fluctuations in the orbitofrontal cortex, an area involved in emotion regulation, self-evaluation and decision making. The research suggests that such patterns of brain activity could serve as biomarkers to guide diagnosis and treatment for millions of people with shooting or burning chronic pain linked to a damaged nervous system. “The study really advances a whole generation of research that has shown that the functioning of the brain is really important to processing and perceiving pain,” said Dr. Ajay Wasan, a pain medicine specialist at the University of Pittsburgh School of Medicine, who wasn’t involved in the study. About one in five American adults experience chronic pain, which is persistent or recurrent pain that lasts longer than three months. To measure pain, doctors typically rely on patients to rate their pain, using either a numerical scale or a visual one based on emojis. But self-reported pain measures are subjective and can vary throughout the day. And some patients, like children or people with disabilities, may struggle to accurately communicate or score their pain. “There’s a big movement in the pain field to develop more objective markers of pain that can be used alongside self-reports,” said Kenneth Weber, a neuroscientist at Stanford University, who was not involved in the study. 
In addition to advancing our understanding of what neural mechanisms underlie the pain, Dr. Weber added, such markers can help validate the pain experienced by some patients that is not fully appreciated — or is even outright ignored — by their doctors. © 2023 The New York Times Company
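
The biomarker idea both of these reports describe boils down to finding a brain-signal feature whose fluctuations track a patient’s self-reported pain scores. Below is a heavily simplified sketch with simulated numbers; the feature, coefficients and session count are invented for illustration, and the real analysis fitted machine-learning models to orbitofrontal recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 recording sessions, each with one band-power
# feature from a hypothetical OFC electrode, plus self-reported pain.
n_sessions = 200
ofc_power = rng.normal(size=n_sessions)
pain_rating = 5.0 + 1.5 * ofc_power + rng.normal(scale=0.5, size=n_sessions)

# Fit a least-squares line: pain ~ a * power + b. A real biomarker
# analysis would use many features and cross-validation.
a, b = np.polyfit(ofc_power, pain_rating, 1)

# Check how well the fitted feature tracks the ratings.
predicted = a * ofc_power + b
r = np.corrcoef(predicted, pain_rating)[0, 1]
print(round(a, 2), round(b, 2), round(r, 2))
```

A feature that correlates this strongly with ratings could, in principle, stand in for self-report when a patient cannot communicate; in practice the correlations found in real recordings are far weaker and patient-specific.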

Keyword: Pain & Touch; Brain imaging
Link ID: 28794 - Posted: 05.23.2023

By Marla Broadfoot In Alexandre Dumas’s classic novel The Count of Monte-Cristo, a character named Monsieur Noirtier de Villefort suffers a terrible stroke that leaves him paralyzed. Though he remains awake and aware, he is no longer able to move or speak, relying on his granddaughter Valentine to recite the alphabet and flip through a dictionary to find the letters and words he requires. With this rudimentary form of communication, the determined old man manages to save Valentine from being poisoned by her stepmother and thwart his son’s attempts to marry her off against her will. Dumas’s portrayal of this catastrophic condition — where, as he puts it, “the soul is trapped in a body that no longer obeys its commands” — is one of the earliest descriptions of locked-in syndrome. This form of profound paralysis occurs when the brain stem is damaged, usually because of a stroke but also as the result of tumors, traumatic brain injury, snakebite, substance abuse, infection or neurodegenerative diseases like amyotrophic lateral sclerosis (ALS). The condition is thought to be rare, though just how rare is hard to say. Many locked-in patients can communicate through purposeful eye movements and blinking, but others can become completely immobile, losing their ability even to move their eyeballs or eyelids, rendering the command “blink twice if you understand me” moot. As a result, patients can spend an average of 79 days imprisoned in a motionless body, conscious but unable to communicate, before they are properly diagnosed. The advent of brain-machine interfaces has fostered hopes of restoring communication to people in this locked-in state, enabling them to reconnect with the outside world. These technologies typically use an implanted device to record the brain waves associated with speech and then use computer algorithms to translate the intended messages. 
The most exciting advances require no blinking, eye tracking or attempted vocalizations, but instead capture and convey the letters or words a person says silently in their head. © 2023 Annual Reviews

Keyword: Brain imaging; Language
Link ID: 28791 - Posted: 05.21.2023

Tess McClure Every few months, Cohen “Coey” Irwin lies on his back and lets the walls close in. Lights move overhead, scanning over the tattoos covering his cheeks. He lies suspended, his head encased by a padded helmet, ears blocked, as his body is shunted into a tunnel. The noise begins: a rhythmic crashing, loud as a jackhammer. For the next hour, an enormous magnet will produce finely detailed images of Irwin’s brain. Irwin has spent much of his adult life addicted to smoking methamphetamine – or P, as the drug is known in New Zealand. He knows its effects intimately: the euphoria, the paranoia, the explosive violence, the energy, the tics that run through his neck and lips. Stepping outside the MRI machine, however, he can get a fresh view for the first time – looking in from the outside at what the drug has done to his internal organs. New Zealanders are some of the world’s biggest meth takers: wastewater testing has placed it in the top four consumers worldwide. The country’s physical isolation – 4,000km from the nearest major ports – makes importing hard drugs challenging and costly, but meth can be manufactured relatively cheaply and easily, and is derived from available pharmaceuticals. Almost a third of middle-aged New Zealanders have tried the drug, a University of Otago study found in 2020. In the backroom of Mātai research centre, Irwin thinks back to when it all started. He was a teenager when he tried P for the first time – trying to impress a girl on New Year’s Eve, in his home town of Porirua, Wellington. The girlfriend didn’t last, but the drug was love at first puff, he says, and would become one of the defining relationships of his life. “I remember it was the next day, the sun had risen, I was still awake with the people at the table I’d been smoking with. And I was instantly trying to find ways: how can we make money to get more?” Within a few years, he would be smoking every day. © 2023 Guardian News & Media Limited

Keyword: Drug Abuse; Brain imaging
Link ID: 28772 - Posted: 05.06.2023

By Laura Sanders Like Dumbledore’s wand, a scan can pull long strings of stories straight out of a person’s brain — but only if that person cooperates. This “mind-reading” feat, described May 1 in Nature Neuroscience, has a long way to go before it can be used outside of sophisticated laboratories. But the result could ultimately lead to seamless devices that help people who can’t talk or otherwise communicate easily. The research also raises privacy concerns about unwelcome neural eavesdropping (SN: 2/11/21). “I thought it was fascinating,” says Gopala Anumanchipalli, a neural engineer at the University of California, Berkeley who wasn’t involved in the study. “It’s like, ‘Wow, now we are here already,’” he says. “I was delighted to see this.” As opposed to implanted devices that have shown recent promise, the new system requires no surgery (SN: 11/15/22). And unlike other external approaches, it produces continuous streams of words instead of having a more constrained vocabulary. For the new study, three people lay inside a bulky MRI machine for at least 16 hours each. They listened to stories, mostly from The Moth podcast, while functional MRI scans detected changes in blood flow in the brain. These changes are proxies for brain activity, albeit slow and imperfect measures. With this neural data in hand, computational neuroscientists Alexander Huth and Jerry Tang of the University of Texas at Austin and colleagues were able to match patterns of brain activity to certain words and ideas. The approach relied on a language model that was built with GPT, one of the forerunners that enabled today’s AI chatbots (SN: 4/12/23). © Society for Science & the Public 2000–2023.

Keyword: Brain imaging; Consciousness
Link ID: 28769 - Posted: 05.03.2023

By Oliver Whang Think of the words whirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unvoiced impression of your best friend’s new partner. Now imagine that someone could listen in. On Monday, scientists from the University of Texas, Austin, made another step in that direction. In a study published in the journal Nature Neuroscience, the researchers described an A.I. that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure the flow of blood to different regions in the brain. Already, researchers have developed language-decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write while just thinking of writing. But the new language decoder is one of the first to not rely on implants. In the study, it was able to turn a person’s imagined speech into actual speech and, when subjects were shown silent films, it could generate relatively accurate descriptions of what was happening onscreen. “This isn’t just a language stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re getting at meaning, something about the idea of what’s happening. And the fact that that’s possible is very exciting.” The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded the blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns in the brain activity to the words and phrases that the participants had heard. © 2023 The New York Times Company

Keyword: Brain imaging; Consciousness
Link ID: 28768 - Posted: 05.03.2023

Sara Reardon The little voice inside your head can now be decoded by a brain scanner — at least some of the time. Researchers have developed the first non-invasive method of determining the gist of imagined speech, presenting a possible communication outlet for people who cannot talk. But how close is the technology — which is currently only moderately accurate — to achieving true mind-reading? And how can policymakers ensure that such developments are not misused? Most existing thought-to-speech technologies use brain implants that monitor activity in a person’s motor cortex and predict the words that the lips are trying to form. To understand the actual meaning behind the thought, computer scientists Alexander Huth and Jerry Tang at the University of Texas at Austin and their colleagues combined functional magnetic resonance imaging (fMRI), a non-invasive means of measuring brain activity, with artificial intelligence (AI) algorithms called large language models (LLMs), which underlie tools such as ChatGPT and are trained to predict the next word in a piece of text. In a study published in Nature Neuroscience on 1 May, the researchers had 3 volunteers lie in an fMRI scanner and recorded the individuals’ brain activity while they listened to 16 hours of podcasts each. By measuring the blood flow through the volunteers’ brains and integrating this information with details of the stories they were listening to and the LLM’s ability to understand how words relate to one another, the researchers developed an encoded map of how each individual’s brain responds to different words and phrases. Next, the researchers recorded the participants’ fMRI activity while they listened to a story, imagined telling a story or watched a film that contained no dialogue.
Using a combination of the patterns they had previously encoded for each individual and algorithms that determine how a sentence is likely to be constructed based on other words in it, the researchers attempted to decode this new brain activity. The video below shows the sentences produced from brain recordings taken while a study participant watched a clip from the animated film Sintel about a girl caring for a baby dragon. © 2023 Springer Nature Limited
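
Conceptually, the decoder pairs a per-person encoding model (predicting brain responses from text features) with a search over candidate word sequences, keeping whichever candidate’s predicted response best matches the recorded activity. The toy below mimics that loop with random “semantic features” in place of a language model and simulated voxels in place of fMRI; every name and number is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate word sequences; in the real study a language model proposes these.
candidates = ["she is learning to drive",
              "the dragon follows the girl",
              "he walked up the ramp"]

# Made-up "semantic features" for each candidate (8 numbers per sentence).
features = {s: rng.normal(size=8) for s in candidates}

# Toy encoding model: a linear map from features to 50 simulated voxels,
# standing in for the per-subject map fitted from hours of fMRI data.
W = rng.normal(size=(8, 50))

def predict_response(sentence):
    return features[sentence] @ W

# Pretend the scanner recorded a noisy response to the first sentence.
observed = (predict_response("she is learning to drive")
            + rng.normal(scale=0.1, size=50))

# Decode by picking the candidate whose predicted response best
# correlates with the observed activity.
def score(sentence):
    return np.corrcoef(predict_response(sentence), observed)[0, 1]

decoded = max(candidates, key=score)
print(decoded)
```

The real system scores vastly more candidates, which is why its output captures the gist of a sentence rather than its exact words.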

Keyword: Brain imaging; Consciousness
Link ID: 28767 - Posted: 05.03.2023

By McKenzie Prillaman Cracking the code to brain cancer treatment might start with cracking the brain’s protective shield. Nearly impenetrable walls of jam-packed cells line most of the brain’s blood vessels. Although this blood-brain barrier protects the organ from harmful invaders, it also prevents many medications from reaching the brain. Now, scientists can get a powerful chemotherapy drug into the human brain by temporarily opening its protective shield with ultrasound and tiny bubbles. The early-stage clinical trial, described May 2 in the Lancet Oncology, could lead to new treatments for those with brain cancer. Better treatments are especially needed for glioblastoma, a common and aggressive type of brain tumor. Even after surgical removal, another mass tends to grow in its place. “There’s really no established treatment for when the tumors come back,” says neurosurgeon Adam Sonabend of the Northwestern University Feinberg School of Medicine in Chicago. Patients with recurrent glioblastomas “don’t have any meaningful therapeutic options, so we were exploring new ways of treating them.” After the initial tumor has been removed, patients typically receive a relatively weak chemotherapy drug that can bypass the brain’s barricade. More potent drugs could help destroy any lingering disease — if the medicines could break through the barrier. © Society for Science & the Public 2000–2023.

Keyword: Brain imaging
Link ID: 28763 - Posted: 05.03.2023

By Nora Bradford The classical view of how the human brain controls voluntary movement might not tell the whole story. That map of the primary motor cortex — the motor homunculus — shows how this brain region is divided into sections assigned to each body part that can be controlled voluntarily (SN: 6/16/15). It puts your toes next to your ankle, and your neck next to your thumb. The space each part takes up on the cortex is also proportional to how much control one has over that part. Each finger, for example, takes up more space than a whole thigh. A new map reveals that in addition to having regions devoted to specific body parts, three newfound areas control integrative, whole-body actions. And representations of where specific body parts fall on this map are organized differently than previously thought, researchers report April 19 in Nature. Research in monkeys had hinted at this. “There is a whole cohort of people who have known for 50 years that the homunculus isn’t quite right,” says Evan Gordon, a neuroscientist at Washington University School of Medicine in St. Louis. But ever since pioneering brain-mapping work by neurosurgeon Wilder Penfield starting in the 1930s, the homunculus has reigned supreme in neuroscience. Gordon and his colleagues study synchronized activity and communication between different brain regions. They noticed some spots in the primary motor cortex were linked to unexpected areas involved in action control and pain perception. Because that didn’t fit with the homunculus map, they wrote it off as a result of imperfect data. “But we kept seeing it, and it kept bugging us,” Gordon says. So the team gathered functional MRI data on volunteers as they performed various tasks. Two participants completed simple movements like moving just their eyebrows or toes, as well as complex tasks like simultaneously rotating their wrist and moving their foot from side to side. 
The fMRI data revealed which parts of the brain activated at the same time as each task was done, allowing the researchers to trace which regions were functionally connected to one another. Seven more participants were recorded while not doing any particular task in order to look at how brain areas communicate during rest. © Society for Science & the Public 2000–2023.
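
Functional connectivity of the kind Gordon’s team measured is commonly estimated by correlating regional activity time courses: regions whose signals rise and fall together are treated as connected. Here is a minimal sketch with synthetic time series; the regions, coupling strengths and threshold are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic fMRI time courses for four regions: two share a common
# driving signal (so they should appear connected), two are independent.
t = 300  # time points
shared = rng.normal(size=t)
region_ts = np.stack([
    shared + 0.3 * rng.normal(size=t),   # region A
    shared + 0.3 * rng.normal(size=t),   # region B (coupled to A)
    rng.normal(size=t),                  # region C
    rng.normal(size=t),                  # region D
])

# Functional connectivity = correlation matrix of the time courses.
fc = np.corrcoef(region_ts)

# Threshold to call region pairs "connected" (the cutoff is arbitrary).
connected = fc > 0.5
print(np.round(fc[0, 1], 2), bool(connected[0, 1]), bool(connected[0, 2]))
```

Resting-state scans like those from the study’s seven additional participants are analyzed this way: with no task at all, the correlation structure alone reveals which regions form a functional network.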

Keyword: Brain imaging
Link ID: 28748 - Posted: 04.22.2023

Max Kozlov The bizarre-looking ‘homunculus’ is one of neuroscience’s most fundamental diagrams. Found in countless textbooks, it depicts a deformed constellation of body parts mapped onto a narrow strip of the brain, showing the corresponding brain regions that control each part. But a study published in Nature on 19 April reveals that this brain strip, called the primary motor cortex, is much more complex than the famous diagram suggests. It might coordinate complex movements involving multiple muscles through connections to brain regions responsible for critical thinking, maintaining the body’s physiology and planning actions. The new results could help scientists better understand and treat brain injuries. “This study is very interesting and very important,” says Michael Graziano, a neuroscientist at Princeton University in New Jersey. It’s becoming clear that the primary motor cortex isn’t “just a simple roster of muscles down the brain that control the toes to the tongue”, he says. The idea of the homunculus dates to the late nineteenth century, when researchers noticed that electrically stimulating the primary motor cortex corresponded to specific body parts twitching. Later work found that some body parts, such as the hands, feet and mouth, took up a disproportionate amount of space in the primary motor cortex compared with the rest of the body. In 1937, these findings culminated in the first publication of the motor homunculus, Latin for ‘little man’. Neurosurgeon Wilder Penfield’s 1948 diagram of the motor homunculus (left) shows the areas of the primary motor cortex that control each body part. A new study redraws the diagram (right), adding regions connected to brain areas responsible for coordinating complex movements. Credit: E. Gordon et al./Nature © 2023 Springer Nature Limited

Keyword: Brain imaging
Link ID: 28747 - Posted: 04.22.2023

By Sara Reardon Anyone who’s ever owned a telescope has probably tried looking through the wrong end to see whether it works in reverse—that is, like a microscope. Spoiler alert: It doesn’t. Now, a team of researchers inspired by the strange eyes of a sea creature has figured out a way to do it. By flipping the mirrors and lenses used in certain types of telescopes, they have created a new kind of microscope that can be used to image samples floating in any type of liquid—even the insides of transparent organs—while retaining enough light to allow for high magnification. The design could help scientists achieve high enough magnification to study tiny structures such as the long, skinny axons that connect neurons in the brain or individual proteins or RNA molecules inside cells. “It’s nice to see even something as basic as a lens could still bring interest and there’s still room there to do some work that would help a lot of people,” says Kimani Touissant, an electrical engineer at Brown University. He says the design could be useful in his work, in which he uses lasers to etch patterns into gels that mimic collagen and act as scaffolds for cells. At very high magnification, light trained on a sample can scatter around it, blurring and dimming the image. To get around that problem, scientists using traditional, lens-based microscopes cover their sample with a thin layer of oil or water, then dip their device’s lens into the liquid, minimizing the degree of light scattering. But this technique requires instruments to have different lenses for different types of liquid, making it an expensive, finicky process and limiting the ways that samples can be prepared. Enter Fabian Voigt, a molecular biologist at Harvard University and inventor of the new design. He was reading a book about animal vision when he encountered the odd case of scallops’ eyes.
Unlike most animals, whose eyes feature retinas that send images to the brain, scallops have mantles covered with hundreds of tiny blue dots, each of which contains a curved mirror at its back. As light passes through each eye’s lens, its inner mirror reflects the light back onto the creature’s photoreceptors to create an image that then allows the scallop to respond to its environment.

Keyword: Brain imaging
Link ID: 28742 - Posted: 04.18.2023

Miryam Naddaf Virtual models representing the brains of people with epilepsy could help to enable more-effective treatments of the disease by showing neurosurgeons precisely which zones are responsible for seizures. The models, created using a computational system known as the Virtual Epileptic Patient (VEP), have been developed as part of the Human Brain Project (HBP), a ten-year European initiative focused on digital brain research. The approach is being tested in a clinical trial called EPINOV, to evaluate whether it improves the success rate of epilepsy surgery. “It’s an example of personalized medicine,” says Aswin Chari, a neurosurgeon at University College London. VEP uses “the patient’s own brain scans [and] the patient’s own brainwave-recording data to build a model and improve our understanding of where their seizures are coming from”. Epileptic seizures are brought on by abnormal brain activity, and around one-third of the 50 million people living with epilepsy worldwide do not respond to anti-seizure drugs. “For those people, surgery is a huge game changer,” says Chari. It aims to free patients from seizures by removing parts of the epileptogenic zone — the brain region that is thought to initiate seizures. To identify the epileptogenic zone, clinicians currently use scanning techniques such as magnetic resonance imaging (MRI) and electroencephalogram (EEG) to investigate brain activity. They also perform stereoelectroencephalography (SEEG), which involves placing up to 16 electrodes, each 7 centimetres long, through the skull to monitor the activity of specific areas for 1–2 weeks. © 2023 Springer Nature Limited

Keyword: Epilepsy; Brain imaging
Link ID: 28732 - Posted: 04.09.2023

By Simon Makin Waves of cerebrospinal fluid, which normally wash over brains during sleep, can be made to pulse in the brains of people who are wide awake, a new study finds. The clear fluid may flush out harmful waste, such as the sticky proteins that accumulate in Alzheimer’s disease (SN: 7/15/18). So being able to control the fluid’s flow in the brain could possibly one day have implications for treating certain brain disorders. “I think this [finding] will help with many neurological disorders,” says Jonathan Kipnis, a neuroscientist at Washington University in St. Louis who was not involved in the study. “Think of Formula One. You can have the best car and driver, but without a great maintenance crew, that driver will not win the race.” Spinal fluid flow in the brain is a major part of that maintenance crew, he says. But he and other researchers, including the study’s authors, caution that any potential therapeutic applications are still far off. In 2019, neuroscientist Laura Lewis of Boston University and colleagues reported that strong waves of cerebrospinal fluid wash through our brains while we slumber, suggesting that one unappreciated role of sleep may be to give the brain a deep clean (SN: 10/31/19). And the team showed that the slow neural oscillations that characterize deep, non-REM sleep occur in lockstep with the waves of spinal fluid through the brain. “If you drop your clothes in a bath of water, eventually dirt will come out. But if you swish them back and forth, things are moving much more effectively,” Lewis says. “That’s the analogy I think of.” These flows were far larger than the small, rhythmic influences that one’s breathing and heartbeat have on spinal fluid. © Society for Science & the Public 2000–2023.

Keyword: Sleep
Link ID: 28727 - Posted: 04.01.2023

Suzana Herculano-Houzel Neuroscientists have long assumed that neurons are greedy, hungry units that demand more energy when they become more active, and that the circulatory system complies by providing as much blood as they require to fuel their activity. Indeed, as neuronal activity increases in response to a task, blood flow to that part of the brain increases even more than its rate of energy use, leading to a surplus. This increase is the basis of common functional imaging technology that generates colored maps of brain activity. Scientists used to interpret this apparent mismatch in blood flow and energy demand as evidence that there is no shortage of blood supply to the brain. The idea of a nonlimited supply was based on the observation that only about 40% of the oxygen delivered to each part of the brain is used – and this percentage actually drops as parts of the brain become more active. It seemed to make evolutionary sense: The brain would have evolved this faster-than-needed increase in blood flow as a safety feature that guarantees sufficient oxygen delivery at all times. But does blood distribution in the brain actually support a demand-based system? As a neuroscientist myself, I had previously examined a number of other assumptions about the most basic facts about brains and found that they didn’t pan out. To name a few: Human brains don’t have 100 billion neurons, though they do have the most cortical neurons of any species; the degree of folding of the cerebral cortex does not indicate how many neurons are present; and it’s not larger animals that live longer, but those with more neurons in their cortex. I believe that figuring out what determines blood supply to the brain is essential to understanding how brains work in health and disease. It’s like how cities need to figure out whether the current electrical grid will be enough to support a future population increase. Brains, like cities, only work if they have enough energy supplied. © 2010–2023, The Conversation US, Inc.

Keyword: Stroke; Brain imaging
Link ID: 28726 - Posted: 04.01.2023

By Z Paige L’Erario
New Research Points to Causes for Brain Disorders with No Obvious Injury
A picture of a human brain taken by a positron emission tomography (PET) scanner is seen on a screen on January 9, 2019, at the Regional and University Hospital Center of Brest in France. Credit: Fred Tanneau/Getty Images
“Stop faking!” Imagine hearing those words moments after your doctor diagnosed you with, say, a stroke or a brain tumor. That sounds absurd, but for many people diagnosed with a condition called functional neurological disorder (FND), this is exactly what happens. Although the disorder is not well known to many people, FND is actually one of the most common conditions that neurologists like myself encounter. In it, abnormal brain functioning causes symptoms to appear. FND comes in many forms, with symptoms that can include seizures, feelings of weakness and movement disorders. People may lose consciousness or their ability to move or walk. Or they may experience abnormal tremors or tics. It can be highly disabling and just as costly as structural neurological conditions such as amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease, multiple sclerosis and Parkinson’s disease. Although men can develop FND, young to middle-aged women receive this diagnosis most frequently. And during the first two years of the COVID pandemic, FND briefly made international headlines when functional tic-like behaviors spread with social media usage, particularly among adolescent girls.

Keyword: Brain imaging; Stress
Link ID: 28725 - Posted: 04.01.2023

By Jennifer Couzin-Frankel A class of Alzheimer’s drugs that aims to slow cognitive decline, including the antibody lecanemab that was granted accelerated approval in the United States in January, can cause brain shrinkage, researchers report in a new analysis. Although scientists and drug developers have documented this loss of brain volume in clinical trial participants for years, the scientific review, published yesterday in Neurology, is the first to look at data across numerous studies. It also links the brain shrinkage to a better-known side effect of the drugs, brain swelling, which often presents without symptoms. “We don’t fully know what these changes might imply,” says Jonathan Jackson, a cognitive neuroscientist at Massachusetts General Hospital. But, “These data are extremely concerning, and it’s likely these changes are detrimental.” The analysis, which found that trial participants taking these Alzheimer’s drugs often developed more brain shrinkage than those on a placebo, alarmed Scott Ayton, a neuroscientist at the Florey Institute of Neuroscience and Mental Health in Melbourne, Australia, who led the work. “We’re talking about the possibility of brain damage” from treatment, says Ayton, who was invited by Eisai to join an advisory board on lecanemab’s rollout in Australia if the drug is approved there. “I find it very peculiar that these data, which are very important, have been completely ignored by the field.” A spokesperson for Eisai suggested there are benign theories for the brain shrinkage, too. The company said that although participants in its pivotal trial did experience “greater cortical volume loss on lecanemab relative to placebo,” those reductions may be due to the antibody clearing the protein beta amyloid from the brain and reducing inflammation. © 2023 American Association for the Advancement of Science.

Keyword: Alzheimers; Brain imaging
Link ID: 28721 - Posted: 03.29.2023

By Allison Parshall Functional magnetic resonance imaging, or fMRI, is one of the most advanced tools for understanding how we think. As a person in an fMRI scanner completes various mental tasks, the machine produces mesmerizing and colorful images of their brain in action. Looking at someone’s brain activity this way can tell neuroscientists which brain areas a person is using but not what that individual is thinking, seeing or feeling. Researchers have been trying to crack that code for decades—and now, using artificial intelligence to crunch the numbers, they’ve been making serious progress. Two scientists in Japan recently combined fMRI data with advanced image-generating AI to translate study participants’ brain activity back into pictures that uncannily resembled the ones they viewed during the scans. The original and re-created images can be seen on the researchers’ website. “We can use these kinds of techniques to build potential brain-machine interfaces,” says Yu Takagi, a neuroscientist at Osaka University in Japan and one of the study’s authors. Such future interfaces could one day help people who currently cannot communicate, such as individuals who outwardly appear unresponsive but may still be conscious. The study was recently accepted for presentation at a conference in 2023. It has made waves online since it was posted as a preprint (meaning it has not yet been peer-reviewed or published) in December 2022. Online commentators have even compared the technology to “mind reading.” But that description overstates what this technology is capable of, experts say. “I don’t think we’re mind reading,” says Shailee Jain, a computational neuroscientist at the University of Texas at Austin, who was not involved in the new study. “I don’t think the technology is anywhere near to actually being useful for patients—or to being used for bad things—at the moment. But we are getting better, day by day.”

Keyword: Vision; Brain imaging
Link ID: 28708 - Posted: 03.18.2023