Links for Keyword: Brain imaging



Links 1 - 20 of 698

By Miryam Naddaf Moving a prosthetic arm. Controlling a speaking avatar. Typing at speed. These are all things that people with paralysis have learnt to do using brain–computer interfaces (BCIs) — implanted devices that are powered by thought alone. These devices capture neural activity using dozens to hundreds of electrodes embedded in the brain. A decoder system analyses the signals and translates them into commands. Although the main impetus behind the work is to help restore functions to people with paralysis, the technology also gives researchers a unique way to explore how the human brain is organized, and with greater resolution than most other methods. Scientists have used these opportunities to learn some basic lessons about the brain. Results are overturning assumptions about brain anatomy, for example, revealing that regions often have much fuzzier boundaries and job descriptions than was thought. Such studies are also helping researchers to work out how BCIs themselves affect the brain and, crucially, how to improve the devices. “BCIs in humans have given us a chance to record single-neuron activity for a lot of brain areas that nobody’s ever really been able to do in this way,” says Frank Willett, a neuroscientist at Stanford University in California who is working on a BCI for speech. The devices also allow measurements over much longer time spans than classical tools do, says Edward Chang, a neurosurgeon at the University of California, San Francisco. “BCIs are really pushing the limits, being able to record over not just days, weeks, but months, years at a time,” he says. “So you can study things like learning, you can study things like plasticity, you can learn tasks that require much, much more time to understand.” © 2024 Springer Nature Limited

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 5: The Sensorimotor System
Link ID: 29159 - Posted: 02.22.2024

Nicholas J. Kelley In the middle of 2023, a study conducted by the HuthLab at the University of Texas sent shockwaves through the realms of neuroscience and technology. For the first time, the thoughts and impressions of people unable to communicate with the outside world were translated into continuous natural language, using a combination of artificial intelligence (AI) and brain imaging technology. This is the closest science has yet come to reading someone’s mind. While advances in neuroimaging over the past two decades have enabled non-responsive and minimally conscious patients to control a computer cursor with their brain, HuthLab’s research is a significant step closer to accessing people’s actual thoughts. Alexander Huth, the neuroscientist who co-led the research, described the work to the New York Times. Combining AI and brain-scanning technology, the team created a non-invasive brain decoder capable of reconstructing continuous natural language among people otherwise unable to communicate with the outside world. The development of such technology – and the parallel development of brain-controlled motor prosthetics that enable paralysed patients to achieve some renewed mobility – holds tremendous prospects for people suffering from neurological diseases including locked-in syndrome and quadriplegia. In the longer term, this could lead to wider public applications such as Fitbit-style health monitors for the brain and brain-controlled smartphones. On January 29, Elon Musk announced that his Neuralink tech startup had implanted a chip in a human brain for the first time. He had previously told followers that Neuralink’s first product, Telepathy, would one day allow people to control their phones or computers “just by thinking”. © 2010–2024, The Conversation US, Inc.

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29136 - Posted: 02.08.2024

By Evelyn Lake Functional MRI (fMRI), though expensive, has many properties of an ideal clinical tool. It’s safe and noninvasive. It is widely available in some countries, and increasingly so on a global scale. Its “blood oxygen level dependent,” or BOLD, signal is altered in people with almost any neurological condition and is rich enough to contain information specific to each person, offering the potential for a personalized approach to medical care across a wide spectrum of neurological conditions. But despite enormous interest and investment in fMRI — and its wide use in basic neuroscience research — it still lacks broad clinical utility; it is mainly employed for surgical planning. For fMRI to inform a wider range of clinical decision-making, we need better ways of deciphering what underlying changes in the brain drive changes to the BOLD signal. If someone with Alzheimer’s disease has an increase in functional connectivity (a measure of synchrony between brain regions), for example, does this indicate that synapses are being lost? Or does it suggest that the brain is forming compensatory pathways to help the person avoid further cognitive decline? Or something else entirely? Depending on the answer, one can imagine different courses of treatment. Put simply, we cannot extract sufficient information from fMRI and patient outcomes alone to determine which scenarios are playing out and therefore what we should do when we observe changes in our fMRI readouts. To better understand what fMRI actually shows, we need to use complementary methodologies, such as the emerging optical imaging tool of wide-field fluorescence calcium imaging. Combining modalities presents significant technical challenges but offers the potential for deeper insights: observing the BOLD signal alongside other signals that report more directly on what is occurring in brain tissue. 
Using these more direct measurements instead of fMRI in clinical practice is not an option: they are invasive, requiring physical or optical access to the brain, and therefore cannot ethically be used in people. © 2023 Simons Foundation.
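The functional-connectivity measure mentioned in this entry is, at its simplest, a Pearson correlation between the BOLD time series of two brain regions. A minimal sketch with synthetic data (the region setup, noise levels, and time-series length are all illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic BOLD time series for three regions (200 time points).
# Regions A and B share a common driving signal, so they are "synchronous";
# region C is independent of both.
t = 200
common = rng.standard_normal(t)
region_a = common + 0.5 * rng.standard_normal(t)
region_b = common + 0.5 * rng.standard_normal(t)
region_c = rng.standard_normal(t)

def functional_connectivity(x, y):
    """Pearson correlation between two BOLD time series."""
    return float(np.corrcoef(x, y)[0, 1])

print(functional_connectivity(region_a, region_b))  # high: shared signal
print(functional_connectivity(region_a, region_c))  # near zero
```

The clinical-interpretation problem the author raises is exactly that a change in this correlation number, by itself, does not say *why* the synchrony changed.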

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29109 - Posted: 01.23.2024

Kamal Nahas Peter Hegemann, a biophysicist at Humboldt University, has spent his career exploring interactions between proteins and light. Specifically, he studies how photoreceptors detect and respond to light, focusing largely on rhodopsins, a family of membrane photoreceptors in animals, plants, fungi, protists, and prokaryotes.1 Early in his career, his curiosity led him to an unknown rhodopsin in green algae that later proved to have useful applications in neuroscience research. Hegemann became a pioneer in the field of optogenetics, which revolutionized the ways in which scientists draw causal links between neuronal activity and behavior. In the early 1980s during his graduate studies at the Max Planck Institute of Biochemistry, Hegemann spent his days exploring rhodopsins in bacteria and archaea. However, the field was crowded, and he was eager to study a rhodopsin that scientists knew nothing about. Around this time, Kenneth Foster, a biophysicist at Syracuse University, was investigating whether the green algae Chlamydomonas, a photosynthetic unicellular eukaryote related to plants, used a rhodopsin in its eyespot organelle to detect light and trigger the algae to swim. He struggled to pinpoint the protein itself, so he took a roundabout approach and started interfering with nearby molecules that interact with rhodopsins.2 Rhodopsins require the small molecule retinal to function as a photoreceptor. When Foster depleted Chlamydomonas of its own retinal, the algae were unable to use light to direct movement, a behavior that was restored when he introduced retinal analogues. In 1985, Hegemann joined Foster’s group as a postdoctoral researcher to continue this work. “I wanted to find something new,” Hegemann said. “Therefore, I worked on an exotic organism and an exotic topic.” A year later, Hegemann started his own research group at the Max Planck Institute of Biochemistry where he searched for the green algae’s rhodopsin that Foster proposed should exist. 
© 1986–2024 The Scientist.

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 29077 - Posted: 01.03.2024

By Fletcher Reveley One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen: “I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.” The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind. For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story: “Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.” The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.”
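The decoding loop described here — a language model proposes wordings, an encoding model predicts the brain response each wording should evoke, and the candidate that best matches the actual scan wins — can be caricatured in a few lines. Everything below (the candidate sentences, the embedding stand-in for an encoding model, the uniform language-model prior) is invented for illustration; this is not the HuthLab decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidate wordings a language model might propose.
CANDIDATES = [
    "I need a ride home",
    "I don't have my license yet",
    "the weather is nice today",
    "she gave me a ride",
]

def embed(text, dim=32):
    """Toy encoding model: a deterministic pseudo-random vector standing
    in for the fMRI response a sentence is predicted to evoke."""
    seed = sum(ord(c) for c in text)  # deterministic toy seed
    return np.random.default_rng(seed).standard_normal(dim)

def lm_log_prob(text):
    """Toy language-model prior (uniform here; a real decoder would use
    actual next-word probabilities)."""
    return 0.0

def decode(observed_response):
    """Pick the candidate whose predicted response best matches the
    observed scan, weighted by the language-model prior."""
    def score(text):
        fit = -np.linalg.norm(embed(text) - observed_response)
        return fit + lm_log_prob(text)
    return max(CANDIDATES, key=score)

# Simulate a scan evoked by one candidate, plus measurement noise.
truth = "I don't have my license yet"
scan = embed(truth) + 0.1 * rng.standard_normal(32)
print(decode(scan))  # recovers the candidate that generated the scan
```

The real system's gist-level (rather than word-for-word) output falls out of this structure: the decoder can only return the best-scoring paraphrase its language model proposes.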

Related chapters from BN: Chapter 19: Language and Lateralization; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 15: Language and Lateralization; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29073 - Posted: 01.03.2024

By Gary Stix This year was full of roiling debate and speculation about the prospect of machines with superhuman capabilities that might, sooner than expected, leave the human brain in the dust. The growing prominence of ChatGPT and other so-called large language models (LLMs) dramatically expanded public awareness of artificial intelligence. In tandem, it raised the question of whether the human brain can keep up with the relentless pace of AI advances. The most benevolent answer posits that humans and machines need not be cutthroat competitors. Researchers found one example of potential cooperation by getting AI to probe the infinite complexity of the ancient game of Go—which, like chess, has seen a computer topple the highest-level human players. A study published in March showed how people might learn from machines with such superhuman skills. And understanding ChatGPT’s prodigious abilities offers some inkling as to why an equivalence between the deep neural networks that underlie the famed chatbot and the trillions of connections in the human brain is constantly invoked. Importantly, the machine learning incorporated into AI has not totally distracted mainstream neuroscience from avidly pursuing better insights into what has been called “the most complicated object in the known universe”: the brain. One of the grand challenges in science—understanding the nature of consciousness—received its due in June with the prominent showcasing of experiments that tested the validity of two competing theories, both of which purport to explain the underpinnings of the conscious self. The past 12 months provided lots of examples of impressive advances for you to store in your working memory. Now here’s a closer look at some of the standout mind and brain stories we covered in Scientific American in 2023. © 2023 SCIENTIFIC AMERICAN

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Language and Lateralization
Link ID: 29069 - Posted: 12.31.2023

Emily Baumgaertner This is not a work of art. It’s an image of microscopic blood flow in a rat’s brain, taken with one of many new tools that are yielding higher levels of detail in brain imaging. Here are seven more glorious images from neuroscience research. © 2023 The New York Times Company

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 29059 - Posted: 12.22.2023

Liam Drew In a laboratory in San Francisco, California, a woman named Ann sits in front of a huge screen. On it is an avatar created to look like her. Thanks to a brain–computer interface (BCI), when Ann thinks of talking, the avatar speaks for her — and in her own voice, too. In 2005, a brainstem stroke left Ann almost completely paralysed and unable to speak. Last year, neurosurgeon Edward Chang, at the University of California, San Francisco, placed a grid of more than 250 electrodes on the surface of Ann’s brain, on top of the regions that once controlled her body, face and larynx. As Ann imagined speaking certain words, researchers recorded her neural activity. Then, using machine learning, they established the activity patterns corresponding to each word and to the facial movements Ann would, if she could, use to vocalize them. The system can convert speech to text at 78 words per minute: a huge improvement on previous BCI efforts and now approaching the 150 words per minute considered average for regular speech1. Compared with two years ago, Chang says, “it’s like night and day”. In an added feat, the team programmed the avatar to speak aloud in Ann’s voice, basing the output on a recording of a speech she made at her wedding. “It was extremely emotional for Ann because it was the first time that she really felt that she was speaking for almost 20 years,” says Chang. This work was one of several studies in 2023 that boosted excitement about implantable BCIs. Another study2 also translated neural activity into text at unprecedented speed. And in May, scientists reported that they had created a digital bridge between the brain and spinal cord of a man paralysed in a cycling accident3. A BCI decoded his intentions to move and directed a spinal implant to stimulate the nerves of his legs, allowing him to walk. © 2023 Springer Nature Limited
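The training recipe this entry describes — record neural activity while a word is attempted, learn the activity pattern for each word, then decode new activity against those patterns — can be sketched at its simplest as a template (nearest-centroid) classifier. All data below are synthetic and the vocabulary, feature dimension, and noise level are placeholders; the real system uses far richer machine-learning models:

```python
import numpy as np

rng = np.random.default_rng(2)

WORDS = ["hello", "water", "yes", "no"]
DIM = 64      # stand-in for features from the electrode grid
TRIALS = 20   # recorded attempts per word during training

# Hidden "true" neural pattern per word (unknown to the decoder).
true_pattern = {w: rng.standard_normal(DIM) for w in WORDS}

def record(word):
    """Simulated neural activity for one attempt at a word."""
    return true_pattern[word] + 0.5 * rng.standard_normal(DIM)

# Training: average the recorded trials into one template per word.
templates = {w: np.mean([record(w) for _ in range(TRIALS)], axis=0)
             for w in WORDS}

def decode(activity):
    """Decode new activity as the word with the nearest template."""
    return min(WORDS, key=lambda w: np.linalg.norm(activity - templates[w]))

print(decode(record("water")))  # decodes back to "water"
```

Throughput figures like 78 words per minute come from running a (much more sophisticated) version of this decode step continuously as the user attempts connected speech.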

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 11: Motor Control and Plasticity
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 5: The Sensorimotor System
Link ID: 28997 - Posted: 11.11.2023

by Giorgia Guglielmi The ability to see inside the human brain has improved diagnostics and revealed how brain regions communicate, among other things. Yet questions remain about the replicability of neuroimaging studies that aim to connect structural or functional differences to complex traits or conditions, such as autism. Some neuroscientists call these studies ‘brain-wide association studies’ — a nod to the ‘genome-wide association studies,’ or GWAS, that link specific variants to particular traits. But unlike GWAS, which typically analyze hundreds of thousands of genomes at once, most published brain-wide association studies involve, on average, only about two dozen participants — far too few to yield reliable results, a March analysis suggests. Spectrum talked to Damien Fair, co-lead investigator on the study and director of the Masonic Institute for the Developing Brain at the University of Minnesota in Minneapolis, about solutions to the problem and reproducibility issues in neuroimaging studies in general. Spectrum: How have neuroimaging studies changed over time, and what are the consequences? Damien Fair: The realization that we could noninvasively peer inside the brain and look at how it’s reacting to certain types of stimuli blew open the doors on studies correlating imaging measurements with behaviors or phenotypes. But even though there was a shift in the type of question that was being asked, the study design stayed identical. That has caused a lot of the reproducibility issues we’re seeing today, because we didn’t change sample sizes. The opportunity is huge right now because we finally, as a community, are understanding how to use magnetic resonance imaging for highly reliable, highly reproducible, highly generalizable findings. S: Where did the reproducibility issues in neuroimaging studies begin?
DF: The field got comfortable with a certain type of study that provided significant and exciting results, but without having the rigor to show how those findings reproduced. For brain-wide association studies, the importance of having large samples just wasn’t realized until more recently. It was the same problem in the early age of genome-wide association studies looking at common genetic variants and how they relate to complex traits. If you’re underpowered, highly significant results may not generalize to the population. © 2023 Simons Foundation
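Fair's point about underpowered samples can be demonstrated directly: correlate one behavioral score against many unrelated "brain measures" and noise alone will produce large, significant-looking correlations in a small sample that shrink toward zero in a large one. A small simulation (the sample sizes and number of measures are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def max_spurious_r(n, n_measures=100):
    """Correlate one behavioral score with many unrelated brain measures
    and return the largest |r| found -- all of it noise, by construction."""
    behavior = rng.standard_normal(n)
    brain = rng.standard_normal((n_measures, n))
    return max(abs(np.corrcoef(behavior, b)[0, 1]) for b in brain)

# A typical brain-wide association sample vs. a consortium-scale one.
print(f"n=25:   best |r| = {max_spurious_r(25):.2f}")    # often > 0.5
print(f"n=2000: best |r| = {max_spurious_r(2000):.2f}")  # near zero
```

This is the same multiple-comparisons trap that early candidate-gene studies fell into, which is why the GWAS analogy in the interview is apt.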

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28980 - Posted: 11.01.2023

By Carl Zimmer An international team of scientists has mapped the human brain in much finer resolution than ever before. The brain atlas, a $375 million effort started in 2017, has identified more than 3,300 types of brain cells, an order of magnitude more than was previously reported. The researchers have only a dim notion of what the newly discovered cells do. The results were described in 21 papers published on Thursday in Science and several other journals. Ed Lein, a neuroscientist at the Allen Institute for Brain Science in Seattle who led five of the studies, said that the findings were made possible by new technologies that allowed the researchers to probe millions of human brain cells collected from biopsied tissue or cadavers. “It really shows what can be done now,” Dr. Lein said. “It opens up a whole new era of human neuroscience.” Still, Dr. Lein said that the atlas was just a first draft. He and his colleagues have only sampled a tiny fraction of the 170 billion cells estimated to make up the human brain, and future surveys will certainly uncover more cell types, he said. Biologists first noticed in the 1800s that the brain was made up of different kinds of cells. In the 1830s, the Czech scientist Jan Purkinje discovered that some brain cells had remarkably dense explosions of branches. Purkinje cells, as they are now known, are essential for fine-tuning our muscle movements. Later generations developed techniques to make other cell types visible under a microscope. In the retina, for instance, researchers found cylindrical “cone cells” that capture light. By the early 2000s, researchers had found more than 60 types of neurons in the retina alone. They were left to wonder just how many kinds of cells were lurking in the deeper recesses of the brain, which are far harder to study. © 2023 The New York Times Company

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 28963 - Posted: 10.14.2023

by Maris Fessenden A new lightweight device with a wisplike tether can record neural activity while mice jump, run and explore their environment. The open-source recording system, which its creators call ONIX, overcomes several of the limitations of previous systems and enables the rodents to move more freely during recording. The behavior that ONIX allows brings to mind children running around in a playground, says Jakob Voigts, a researcher at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, who helped build and test the system. He and his colleagues describe their work in a preprint posted on bioRxiv earlier this month. To understand how the brain creates complex behaviors — such as those found in social interaction, sensory processing and cognition, which are commonly affected in autism — researchers observe brain signals as these behaviors unfold. Head-mounted devices enable researchers to eavesdrop on the electrical chatter between brain cells in mice, rats and primates. But as the smallest of these animal models, mice present some significant challenges. Current neural recording systems are bulky and heavy, making the animals carry up to a fifth of their body weight on their skulls. Predictably, this slows the mice down and tires them out. And most neural recording systems use a tether to relay signals from the mouse’s brain to a computer. But this tether twists and tangles as the mouse turns its head and body, exerting torque that the mouse can feel. Researchers must therefore periodically replace or untangle the tether. Longer tethers allow for more time to elapse between changeouts, but the interruptions still affect natural behavior. And battery-powered, wireless systems add too much weight. Altogether, these challenges inhibit natural behaviors and limit the amount of time that recording can take place, preventing scientists from studying, for example, the complete process of learning a new task. 
© 2023 Simons Foundation

Related chapters from BN: Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28930 - Posted: 09.27.2023

By Gina Kolata Tucker Marr’s life changed forever last October. He was on his way to a wedding reception when he fell down a steep flight of metal stairs, banging the right side of his head so hard he went into a coma. He’d fractured his skull, and a large blood clot formed on the left side of his head. Surgeons had to remove a large chunk of his skull to relieve pressure on his brain and to remove the clot. “Getting a piece of my skull taken out was crazy to me,” Mr. Marr said. “I almost felt like I’d lost a piece of me.” But what seemed even crazier to him was the way that piece was restored. Mr. Marr, a 27-year-old analyst at Deloitte, became part of a new development in neurosurgery. Instead of remaining without a piece of skull or getting the old bone put back, a procedure that is expensive and has a high rate of infection, he got a prosthetic piece of skull made with a 3-D printer. But it is not the typical prosthesis used in such cases. His prosthesis, which is covered by his skin, is embedded with an acrylic window that would let doctors peer into his brain with ultrasound. A few medical centers are offering such acrylic windows to patients who had to have a piece of skull removed to treat conditions like a brain injury, a tumor, a brain bleed or hydrocephalus. “It’s very cool,” Dr. Michael Lev, director of emergency radiology at Massachusetts General Hospital, said. But, “it is still early days,” he added. Advocates of the technique say that if a patient with such a window has a headache or a seizure or needs a scan to see if a tumor is growing, a doctor can slide an ultrasound probe on the patient’s head and look at the brain in the office. © 2023 The New York Times Company

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 19: Language and Lateralization
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 15: Language and Lateralization
Link ID: 28914 - Posted: 09.16.2023

Neurotransmitters are the words our brain cells use to communicate with one another. For years, researchers relied on tools that provided limited temporal and spatial resolution to track changes in the fast chemical chat between neurons. But that started to change about ten years ago for glutamate—the most abundant excitatory neurotransmitter in vertebrates that plays an essential role in learning, memory, and information processing—when scientists engineered the first glutamate fluorescent reporter, iGluSnFR, which provided a readout of neurons’ fast glutamate release. In 2013, researchers at the Howard Hughes Medical Institute collaborated with scientists from other institutions to develop the first generation of iGluSnFR.1 To create the biosensor, the team combined a bacteria-derived glutamate binding protein, Gltl, a wedged fluorescent GFP protein, and a membrane-targeting protein that anchors the reporter to the surface of the cell. Upon glutamate binding, the Gltl protein changes its conformation, increasing the fluorescence intensity of GFP. In their first study, the team showcased the utility of the biosensor for monitoring glutamate levels by demonstrating selective activation by glutamate in cell cultures. By conducting experiments with brain cells from the C. elegans worm, zebrafish, and mice, they confirmed that the reporter also tracked glutamate in vivo, a finding that set iGluSnFR apart from existing glutamate sensors. The first iGluSnFR generation allowed researchers to study glutamate dynamics in different biological systems, but the indicator could not detect small amounts of the neurotransmitter or keep up with brain cells’ fast glutamate release bouts. © 1986–2023 The Scientist.
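A sensor of this kind is typically characterized by a saturating binding curve: the fluorescence change ΔF/F rises with glutamate concentration and plateaus once the sensor is fully bound, and the sensitivity limit mentioned above corresponds to the low end of this curve. A sketch of the standard single-site binding model, with placeholder constants that are not iGluSnFR's published values:

```python
# Single-site binding model for a fluorescent sensor: the bound fraction,
# and hence the fluorescence change, follows
#     dF/F = (dF/F)_max * [Glu] / (Kd + [Glu])
# The Kd and dynamic range below are hypothetical placeholders.
KD_UM = 10.0    # assumed dissociation constant (micromolar)
DFF_MAX = 2.0   # assumed maximal fluorescence change

def dff(glutamate_um):
    """Fluorescence change dF/F at a given glutamate concentration (uM)."""
    return DFF_MAX * glutamate_um / (KD_UM + glutamate_um)

for c in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(f"[Glu] = {c:7.1f} uM -> dF/F = {dff(c):.2f}")
```

In this framing, engineering a more sensitive generation of the sensor amounts to lowering the effective Kd (shifting the curve left), while faster kinetics govern how quickly the curve is traversed during a release event.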

Related chapters from BN: Chapter 4: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology; Chapter 3: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Related chapters from MM: Chapter 4: Development of the Brain; Chapter 3: The Chemistry of Behavior: Neurotransmitters and Neuropharmacology
Link ID: 28901 - Posted: 09.10.2023

By Miryam Naddaf It took 10 years, around 500 scientists and some €600 million, and now the Human Brain Project — one of the biggest research endeavours ever funded by the European Union — is coming to an end. Its audacious goal was to understand the human brain by modelling it in a computer. During its run, scientists under the umbrella of the Human Brain Project (HBP) have published thousands of papers and made significant strides in neuroscience, such as creating detailed 3D maps of at least 200 brain regions, developing brain implants to treat blindness and using supercomputers to model functions such as memory and consciousness and to advance treatments for various brain conditions. “When the project started, hardly anyone believed in the potential of big data and the possibility of using it, or supercomputers, to simulate the complicated functioning of the brain,” says Thomas Skordas, deputy director-general of the European Commission in Brussels. Almost since it began, however, the HBP has drawn criticism. The project did not achieve its goal of simulating the whole human brain — an aim that many scientists regarded as far-fetched in the first place. It changed direction several times, and its scientific output became “fragmented and mosaic-like”, says HBP member Yves Frégnac, a cognitive scientist and director of research at the French national research agency CNRS in Paris. For him, the project has fallen short of providing a comprehensive or original understanding of the brain. “I don’t see the brain; I see bits of the brain,” says Frégnac. HBP directors hope to bring this understanding a step closer with a virtual platform — called EBRAINS — that was created as part of the project. EBRAINS is a suite of tools and imaging data that scientists around the world can use to run simulations and digital experiments.
“Today, we have all the tools in hand to build a real digital brain twin,” says Viktor Jirsa, a neuroscientist at Aix-Marseille University in France and an HBP board member. But the funding for this offshoot is still uncertain. And at a time when huge, expensive brain projects are in high gear elsewhere, scientists in Europe are frustrated that their version is winding down. “We were probably one of the first ones to initiate this wave of interest in the brain,” says Jorge Mejias, a computational neuroscientist at the University of Amsterdam, who joined the HBP in 2019. Now, he says, “everybody’s rushing, we don’t have time to just take a nap”.

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 18: Attention and Higher Cognition
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 14: Attention and Higher Cognition
Link ID: 28884 - Posted: 08.26.2023

Jon Hamilton Scientists have genetically engineered a squid that is almost as transparent as the water it's in. The squid will allow researchers to watch brain activity and biological processes in a living animal. ARI SHAPIRO, HOST: For most of us, it would take magic to become invisible, but for some lucky, tiny squid, all it took was a little genetic tweaking. As part of our Weekly Dose of Wonder series, NPR's Jon Hamilton explains how scientists created a see-through squid. JON HAMILTON, BYLINE: The squid come from the Marine Biological Laboratory in Woods Hole, Mass. Josh Rosenthal is a senior scientist there. He says even the animal's caretakers can't keep track of them. JOSH ROSENTHAL: They're really hard to spot. We know we put it in this aquarium, but they might look for a half-hour before they can actually see it. They're that transparent. HAMILTON: Almost invisible. Carrie Albertin, a fellow at the lab, says studying these creatures has been transformative. CARRIE ALBERTIN: They are so strikingly see-through. It changes the way you interpret what's going on in this animal, being able to see completely through the body. HAMILTON: Scientists can watch the squid's three hearts beating in synchrony or see its brain cells at work. And it's all thanks to a gene-editing technology called CRISPR. A few years ago, Rosenthal and Albertin decided they could use CRISPR to create a special octopus or squid for research. ROSENTHAL: Carrie and I are highly biased. We both love cephalopods - right? - and we have for our entire careers. HAMILTON: So they focused on the hummingbird bobtail squid. It's smaller than a thumb and shaped like a dumpling. Like other cephalopods, it has a relatively large and sophisticated brain. Rosenthal takes me to an aquarium to show me what the squid looks like before its genes are altered. ROSENTHAL: Here is our hummingbird bobtail squid.
You can see him right there in the bottom, just kind of sitting there hunkered down in the sand. At night, it'll come out and hunt and be much more mobile. © 2023 npr

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 6: Evolution of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28883 - Posted: 08.26.2023

Diana Kwon Santiago Ramón y Cajal revolutionized neurobiology in the late nineteenth century with his exquisitely detailed illustrations of neural tissues. Created through years of meticulous microscopy work, the Spanish physician-scientist’s drawings revealed the unique cellular morphology of the brain. “With Cajal’s work, we saw that the cells of the brain don’t look like the cells of every other part of the body — they have incredible morphologies that you just don’t see elsewhere,” says Evan Macosko, a neuroscientist at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. Ramón y Cajal’s drawings provided one of the first clues that the keys to understanding how the brain governs its many functions, from regulating blood pressure and sleep to controlling cognition and mood, might lie at the cellular level. Still, when it comes to the brain, crucial information remained — and indeed, remains — missing. “In order to have a fundamental understanding of the brain, we really need to know how many different types of cells there are, how are they organized, and how they interact with each other,” says Xiaowei Zhuang, a biophysicist at Harvard University in Cambridge. What neuroscientists require, Zhuang explains, is a way to systematically identify and map the many categories of brain cells. Now researchers are closing in on such a resource, at least in mice. By combining high-throughput single-cell RNA sequencing with spatial transcriptomics — methods for determining which genes are expressed in individual cells, and where those cells are located — they are creating some of the most comprehensive atlases of the mouse brain so far. The crucial next steps will be working out what these molecularly defined cell types do, and bringing the various brain maps together to create a unified resource that the broader neuroscience community can use. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 7: Life-Span Development of the Brain and Behavior; Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 13: Memory and Learning; Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28880 - Posted: 08.24.2023

By Lauren Leffer When a nematode wriggles around a petri dish, what’s going on inside a tiny roundworm’s even tinier brain? Neuroscientists now have a more detailed answer to that question than ever before. As with any experimental animal, from a mouse to a monkey, the answers may hold clues about the contents of more complex creatures’ noggin, including what resides in the neural circuitry of our own head. A new brain “atlas” and computer model, published in Cell on Monday, lays out the connections between the actions of the nematode species Caenorhabditis elegans and this model organism’s individual brain cells. With the findings, researchers can now observe a C. elegans worm feeding or moving in a particular way and infer activity patterns for many of the animal’s behaviors in its specific neurons. Through establishing those brain-behavior links in a humble roundworm, neuroscientists are one step closer to understanding how all sorts of animal brains, even potentially human ones, encode action. “I think this is really nice work,” says Andrew Leifer, a neuroscientist and physicist who studies nematode brains at Princeton University and was not involved in the new research. “One of the most exciting reasons to study how a worm brain works is because it holds the promise of being able to understand how any brain generates behavior,” he says. “What we find in the worm forms hypotheses to look for in other organisms.” Biologists have been drawn to the elegant simplicity of nematode biology for many decades. South African biologist Sydney Brenner received a Nobel Prize in Physiology or Medicine in 2002 for pioneering work that enabled C. elegans to become an experimental animal for the study of cell maturation and organ development. C. elegans was the first multicellular organism to have its entire genome and nervous system mapped. The first neural map, or “connectome,” of a C. elegans brain was published in 1986. 
In that research, scientists hand drew connections using colored pencils and charted each of the 302 neurons and approximately 5,000 synapses inside the one-millimeter-long animal’s transparent body. Since then a subdiscipline of neuroscience has emerged—one dedicated to plotting out the brains of increasingly complex organisms. Scientists have compiled many more nematode connectomes, as well as brain maps of a marine annelid worm, a tadpole, a maggot and an adult fruit fly. Yet these maps simply serve as a snapshot in time of a single animal. They can tell us a lot about brain structure but little about how behaviors relate to that structure. © 2023 Scientific American

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 28879 - Posted: 08.24.2023

Liam Drew Scientific advances are rapidly making science-fiction concepts such as mind-reading a reality — and raising thorny questions for ethicists, who are considering how to regulate brain-reading techniques to protect human rights such as privacy. On 13 July, neuroscientists, ethicists and government ministers discussed the topic at a Paris meeting organized by UNESCO, the United Nations scientific and cultural agency. Delegates plotted the next steps in governing such ‘neurotechnologies’ — techniques and devices that directly interact with the brain to monitor or change its activity. The technologies often use electrical or imaging techniques, and run the gamut from medically approved devices, such as brain implants for treating Parkinson’s disease, to commercial products such as wearables used in virtual reality (VR) to gather brain data or to allow users to control software. How to regulate neurotechnology “is not a technological discussion — it’s a societal one, it’s a legal one”, Gabriela Ramos, UNESCO’s assistant director-general for social and human sciences, told the meeting. Advances in neurotechnology include a neuroimaging technique that can decode the contents of people’s thoughts, and implanted brain–computer interfaces (BCIs) that can convert people’s thoughts of handwriting into text1. The field is growing fast — UNESCO’s latest report on neurotechnology, released at the meeting, showed that, worldwide, the number of neurotechnology-related patents filed annually doubled between 2015 and 2020. Investment rose 22-fold between 2010 and 2020, the report says, and neurotechnology is now a US$33-billion industry. One area in need of regulation is the potential for neurotechnologies to be used for profiling individuals and the Orwellian idea of manipulating people’s thoughts and behaviour. 
Mass-market brain-monitoring devices would be a powerful addition to a digital world in which corporate and political actors already use personal data for political or commercial gain, says Nita Farahany, an ethicist at Duke University in Durham, North Carolina, who attended the meeting. © 2023 Springer Nature Limited

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28859 - Posted: 07.27.2023

by Holly Barker By bloating brain samples and imaging them with a powerful microscope, researchers can reconstruct neurons across the entire mouse brain, according to a new preprint. The technique could help scientists uncover the neural circuits responsible for complex behaviors, as well as the pathways that are altered in neurological conditions. Tracking axons can help scientists understand how individual neurons and brain areas communicate over long distances. But tracing their path through the brain is tricky, says study investigator Adam Glaser, senior scientist at the Allen Institute for Neural Dynamics in Seattle, Washington. Axons, which are capable of spanning the entire brain, can be less than a micrometer in diameter, so mapping their route requires detailed imaging, he says. One existing approach involves a microscope that slices off an ultra-thin section of the brain and then scans it, repeating the process about 20,000 times to capture the entire mouse brain. Scientists then blend the images together to form a 3D reconstruction of neuronal pathways. But the process takes several days and is therefore more prone to complications — bubbles forming on the lens, say — than faster techniques, Glaser says. And slicing can distort the edges of the image, making it “challenging or impossible” to stitch them back together, says Paul Tillberg, principal scientist at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia, who was not involved in the study. “This is particularly an issue when reconstructing brain-wide axonal projections, where a single point of confusion can misalign an entire axonal arbor to the wrong neuron,” he says. © 2023 Simons Foundation

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
Link ID: 28850 - Posted: 07.19.2023

Davide Castelvecchi The wrinkles that give the human brain its familiar walnut-like appearance have a large effect on brain activity, in much the same way that the shape of a bell determines the quality of its sound, a study suggests1. The findings run counter to a commonly held theory about which aspect of brain anatomy drives function. The study’s authors compared the influence of two components of the brain’s physical structure: the outer folds of the cerebral cortex — the area where most higher-level brain activity occurs — and the connectome, the web of nerves that links distinct regions of the cerebral cortex. The team found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity. “We use concepts from physics and engineering to study how anatomy determines function,” says study co-author James Pang, a physicist at Monash University in Melbourne, Australia. The results were published in Nature on 31 May1. ‘Exciting’ a neuron makes it fire, which sends a message zipping to other neurons. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long filament called an axon that connects it to a faraway region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The authors wanted to understand how brain activity is affected by each of the ways in which neuronal excitation can spread: across the brain’s surface or through distant interconnections. To do so, the researchers — who have backgrounds in physics and neuroscience — tapped into the mathematical theory of waves.

Related chapters from BN: Chapter 2: Functional Neuroanatomy: The Cells and Structure of the Nervous System; Chapter 7: Life-Span Development of the Brain and Behavior
Related chapters from MM: Chapter 2: Neurophysiology: The Generation, Transmission, and Integration of Neural Signals; Chapter 13: Memory and Learning
Link ID: 28811 - Posted: 06.03.2023