Chapter 2. Cells and Structures: The Anatomy of the Nervous System
New research from the National Institutes of Health found that pairing the antidepressant amitriptyline with drugs designed to treat central nervous system diseases enhances drug delivery to the brain in rats by inhibiting a protective pump in the blood-brain barrier. The blood-brain barrier serves as a natural, protective boundary, preventing most drugs from entering the brain. The research appeared online April 27 in the Journal of Cerebral Blood Flow and Metabolism. Although researchers caution that more studies are needed to determine whether people will benefit from the discovery, the new finding has the potential to revolutionize treatment for a whole host of brain-centered conditions, including epilepsy, stroke, amyotrophic lateral sclerosis (ALS), depression, and others. The results are so promising that a provisional patent application has been filed for methods of co-administration of amitriptyline with central nervous system drugs. According to Ronald Cannon, Ph.D., staff scientist at NIH’s National Institute of Environmental Health Sciences (NIEHS), the biggest obstacle to efficiently delivering drugs to the brain is a protein pump called P-glycoprotein. Located along the inner lining of brain blood vessels, P-glycoprotein directs toxins and pharmaceuticals back into the body’s circulation before they pass into the brain. To get an idea of how P-glycoprotein works, Cannon said to think of the protein as a hotel doorman standing in front of a revolving door at a lobby entrance. A person who is not authorized to enter would be turned away, ushered back around the revolving door and out into the street.
Link ID: 23548 - Posted: 04.28.2017
By NICK WINGFIELD SEATTLE — Zoran Popović knows a thing or two about video games. A computer science professor at the University of Washington, Dr. Popović has worked on software algorithms that make computer-controlled characters move realistically in games like the science-fiction shooter “Destiny.” But while those games are entertainment designed to grab players by their adrenal glands, Dr. Popović’s latest creation asks players to trace lines over fuzzy images with a computer mouse. It has a slow pace with dreamy music that sounds like the ambient soundtrack inside a New Age bookstore. The point? To advance neuroscience. Since November, thousands of people have played the game, “Mozak,” which uses common tricks of the medium — points, leveling up and leader boards that publicly rank the performance of players — to crowdsource the creation of three-dimensional models of neurons. The Center for Game Science, a group at the University of Washington that Dr. Popović oversees, developed the game in collaboration with the Allen Institute for Brain Science, a nonprofit research organization founded by Paul Allen, the billionaire co-founder of Microsoft, that is seeking a better understanding of the brain. Dr. Popović had previously received wide attention in the scientific community for a puzzle game called “Foldit,” released nearly a decade ago, that harnesses the skills of players to solve riddles about the structure of proteins. The Allen Institute’s goal of cataloging the structure of neurons, the cells that transmit information throughout the nervous system, could one day help researchers understand the roots of neurodegenerative diseases like Alzheimer’s and Parkinson’s and their treatment. Neurons come in devilishly complex shapes and staggering quantities — about 100 million in a mouse brain and 87 billion in a human brain, both of which players can work on in Mozak. © 2017 The New York Times Company
Keyword: Brain imaging
Link ID: 23533 - Posted: 04.25.2017
Angelo Young Billionaire magnate Elon Musk is trying to fill the world with electric cars and solar panels while at the same time aiming to deploy reusable rockets to eventually colonize Mars. As if that weren’t enough for his plate, Musk recently announced the launch of Neuralink, a neuroscience startup seeking to create a way to interface human brains with computers. According to him, this would be part of guarding humanity against what Musk considers a threat from the rise of artificial intelligence. He envisions a lattice of electrodes implanted into the human skull that could allow people to download and upload thoughts as well as treat brain conditions such as epilepsy or bipolar disorders. Musk’s proposition seems as outlandish and unlikely as his vision for the Hyperloop rapid transport system, but like his other big ideas, there’s real science behind it. Figuring out what’s really involved in efforts to sync brains with computers was part of what inspired Adam Piore to write “The Body Builders: Inside the Science of the Engineered Human,” which was released last month by HarperCollins. Written in plain language that gives nonscientists a way to separate the science from the sensational, “The Body Builders” is a fascinating dive into what’s happening right now in bioengineering research — from brain-computer interfaces to bionic limbs — that will redefine human-machine interactions in the years to come. Piore, an award-winning journalist who has written extensively about scientific advances, spoke to Salon recently about just how close we are to being able to read one another’s thoughts through electrodes and the processing power of modern computers. © 2017 Salon Media Group, Inc.
By Ryan Cross Microscopes reveal minuscule wonders by making things seem bigger. Just imagine what scientists could see if they could also make things actually bigger. A new strategy to blow brains up does just that. Researchers previously invented a method for injecting a polyacrylate mesh into brain tissue, the same water-absorbing and expanding molecule that makes dirty diapers swell up. Just add water, and the tissue enlarges to 4.5 times its original size. But it wasn’t good enough to see everything. The brain is full of diminutive protrusions called dendritic spines lining the signal-receiving end of a neuron. Hundreds to thousands of these nubs help strengthen or weaken an individual dendrite’s connection to other neurons in the brain. However, the nanoscale size of these spines makes studying them with light microscopes impossible, or blurry at best. Now, the same group has overcome this barrier with an improved method called iterative expansion microscopy, described today in Nature Methods. Here, the tissue is expanded once, the crosslinked mesh is cleaved, and then the tissue is expanded again, resulting in roughly 20-fold enlargement. Neurons are then visualized by light-emitting molecules linked to antibodies that latch onto specified proteins. The technique has yielded detailed images showing the formation of proteins along synapses in mice, as well as detailed renderings of dendritic spines in the mouse hippocampus—a center of learning and memory in the brain. The advance could enable neuroscientists to map the many individual connections between neurons across the brain and the unique arrangement of receptors that turn brain circuits on and off. © 2017 American Association for the Advancement of Science
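The “roughly 20-fold” figure follows directly from compounding the single-round factor: two rounds of 4.5× linear expansion give about 4.5² ≈ 20×. A back-of-the-envelope sketch of why that matters for imaging (the ~250 nm diffraction limit used here is an assumed ballpark for conventional light microscopy, not a number from the article):

```python
# Sketch: how iterated expansion multiplies the effective resolution
# of a diffraction-limited light microscope. Assumed ballpark figures.

DIFFRACTION_LIMIT_NM = 250.0   # rough lateral limit of light microscopy (assumption)
SINGLE_EXPANSION = 4.5         # linear expansion factor reported for one round

def effective_resolution(rounds: int) -> float:
    """Smallest feature (nm) resolvable after `rounds` of expansion."""
    return DIFFRACTION_LIMIT_NM / (SINGLE_EXPANSION ** rounds)

for rounds in (0, 1, 2):
    total = SINGLE_EXPANSION ** rounds
    print(f"{rounds} round(s): ~{total:.1f}x expansion, "
          f"~{effective_resolution(rounds):.0f} nm effective resolution")
```

Two rounds bring the effective scale down to roughly 12 nm, which is why dendritic spines that were “blurry at best” become resolvable.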
Keyword: Brain imaging
Link ID: 23500 - Posted: 04.18.2017
Richard A. Friedman I was doing KenKen, a math puzzle, on a plane recently when a fellow passenger asked why I bothered. I said I did it for the beauty. O.K., I’ll admit it’s a silly game: You have to make the numbers within the grid obey certain mathematical constraints, and when they do, all the pieces fit nicely together and you get this rush of harmony and order. Still, it makes me wonder what it is about mathematical thinking that is so elegant and aesthetically appealing. Is it the internal logic? The unique mix of simplicity and explanatory power? Or perhaps just its pure intellectual beauty? I’ve loved math since I was a kid because it felt like a big game and because it seemed like the laziest thing you could do mentally. After all, how many facts do you need to remember to do math? Later in college, I got excited by physics, which I guess you could say is just a grand exercise in applying math to understand the universe. My roommate, a brainy math major, used to bait me, saying that I never really understood the math I was using. I would counter that he never understood what on Earth the math he studied was good for. We were both right, but he’d be happy to know that I’ve come around to his side: Math is beautiful on a purely abstract level, quite apart from its ability to explain the world. We all know that art, music and nature are beautiful. They command the senses and incite emotion. Their impact is swift and visceral. How can a mathematical idea inspire the same feelings? Well, for one thing, there is something very appealing about the notion of universal truth — especially at a time when people entertain the absurd idea of alternative facts. The Pythagorean theorem still holds, and pi is a transcendental number that will describe all perfect circles for all time. © 2017 The New York Times Company
Keyword: Brain imaging
Link ID: 23498 - Posted: 04.17.2017
By Niall Firth The firing of every neuron in an animal’s body has been recorded, live. The breakthrough in imaging the nervous system of a hydra – a tiny, transparent creature related to jellyfish – as it twitches and moves has provided insights into how such simple animals control their behaviour. Similar techniques might one day help us get a deeper understanding of how our own brains work. “This could be important not just for the human brain but for neuroscience in general,” says Rafael Yuste at Columbia University in New York City. Instead of a brain, hydra have the most basic nervous system in nature: a nerve net in which neurons are spread throughout the body. Even so, researchers still know almost nothing about how the hydra’s few thousand neurons interact to create behaviour. To find out, Yuste and colleague Christophe Dupre genetically modified hydra so that their neurons glowed in the presence of calcium. Since calcium ions rise in concentration when neurons are active and fire a signal, Yuste and Dupre were able to relate behaviour to activity in glowing circuits of neurons. For example, a circuit that seems to be involved in digestion in the hydra’s stomach-like cavity became active whenever the animal opened its mouth to feed. This circuit may be an ancestor of our gut nervous system, the pair suggest. © Copyright Reed Business Information Ltd.
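The genetic trick described above turns activity into brightness: calcium influx during firing makes the modified neurons glow, so behaviour can be related to fluorescence traces. A toy sketch of that readout logic follows; the trace values, baseline window and threshold are all hypothetical, not the researchers’ actual analysis pipeline:

```python
# Toy calcium-imaging readout: a neuron's fluorescence rises when it
# fires, so activity can be detected by thresholding the relative
# change in brightness (dF/F) against a quiet baseline.

def df_over_f(trace, baseline_frames=3):
    """Relative fluorescence change of each frame against a baseline window."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

def active_frames(trace, threshold=0.5):
    """Indices of frames where dF/F exceeds the threshold: putative firing."""
    return [i for i, v in enumerate(df_over_f(trace)) if v > threshold]

# Simulated trace: quiet baseline, then a calcium transient while the
# animal performs a behaviour (e.g. opens its mouth to feed).
trace = [1.0, 1.1, 0.9, 2.4, 3.0, 1.6, 1.0]
print(active_frames(trace))  # the transient frames stand out from baseline
```

Relating lists of active frames like this to simultaneously recorded behaviour is, at heart, how a glowing circuit gets tied to mouth-opening or digestion.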
By Michael Price Do the anatomical differences between men and women—sex organs, facial hair, and the like—extend to our brains? The question has been as difficult to answer as it has been controversial. Now, the largest brain-imaging study of its kind indeed finds some sex-specific patterns, but overall more similarities than differences. The work raises new questions about how brain differences between the sexes may influence intelligence and behavior. For decades, brain scientists have noticed that on average, male brains tend to have slightly higher total brain volume than female ones, even when corrected for males’ larger average body size. But it has proved notoriously tricky to pin down exactly which substructures within the brain are more or less voluminous. Most studies have looked at relatively small sample sizes—typically fewer than 100 brains—making large-scale conclusions impossible. In the new study, a team of researchers led by psychologist Stuart Ritchie, a postdoctoral fellow at the University of Edinburgh, turned to data from UK Biobank, an ongoing, long-term biomedical study of people living in the United Kingdom with 500,000 enrollees. A subset of those enrolled in the study underwent brain scans using MRI. In 2750 women and 2466 men aged 44–77, Ritchie and his colleagues examined the volumes of 68 regions within the brain, as well as the thickness of the cerebral cortex, the brain’s wrinkly outer layer thought to be important in consciousness, language, memory, perception, and other functions. © 2017 American Association for the Advancement of Science
By Knvul Sheikh For the past five decades pharmaceutical drugs like levodopa have been the gold standard for treating Parkinson’s disease. These medications alleviate motor symptoms of the disease, but none of them can cure it. Patients with Parkinson’s continue to lose dopamine neurons critical to the motor control centers of the brain. Eventually the drugs become ineffective and patients’ tremors get worse. They experience a loss of balance and a debilitating stiffness takes over their legs. To replace the lost dopamine neurons, scientists have begun investigating stem cell therapy as a potential treatment or even a cure. But embryonic cells and adult stem cells have proved difficult to harness and transplant into the brain. Now a study from the Karolinska Institute in Stockholm shows it is possible to coax the brain’s own astrocytes—cells that typically support and nurture neurons—into producing a new generation of dopamine neurons. The reprogrammed cells display several of the properties and functions of native dopamine neurons and could alter the course of Parkinson’s, according to the researchers. “You can directly reprogram a cell that is already inside the brain and change the function in such a way that you can improve neurological symptoms,” says senior author Ernest Arenas, a professor of medical biochemistry at Karolinska. Previously, scientists had to nudge specialized cells like neurons into becoming pluripotent cells before they could develop a different kind of specialized cell, he says. It was like having to erase all the written instructions for how a cell should develop and what job it should do and then rewriting them all over again. But Arenas and his team found a way to convert the instructions into a different set of commands without erasing them. © 2017 Scientific American
By Veronique Greenwood A number of studies have used functional MRI to see what our brain looks like as we recall pleasant memories, watch scary movies or listen to sad music. Scientists have even had some success telling which of these stimuli a subject is experiencing by looking at his or her scans. But does this mean it is possible to tell what emotions we are experiencing in the absence of prompts, as we let our mind wander naturally? That is a difficult question to answer, in part because psychologists disagree about how emotions should be defined. Nevertheless, some scientists are trying to tackle it. In a study reported in the June 2016 issue of Cerebral Cortex, Heini Saarimäki of Aalto University in Finland and her colleagues observed volunteers in a brain scanner who were being prompted to recall memories they associated with words drawn from six emotional categories or to reflect on a movie clip selected to provoke certain emotions. The participants also completed a questionnaire about how closely linked different emotions were—rating, for instance, whether “anxiety” is closer to “fear” than to “happiness.” The researchers found that pattern-recognition software could detect which category of emotion a person had been prompted with. In addition, the more closely he or she linked words in the questionnaire, the more his or her brain scans for those emotions resembled one another. Another study, published in September 2016 in PLOS Biology by Kevin LaBar of Duke University and his colleagues, attempted to match brain scans of people lying idle in a scanner to seven predefined patterns associated with specific emotions provoked in an earlier study. The researchers found they could predict the subjects' self-reported emotions from the scans about 75 percent of the time. © 2017 Scientific American,
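The “pattern-recognition software” in such studies is typically a classifier trained on labeled scans: learn an average activity pattern per emotion category, then label a new scan by whichever category’s pattern it most resembles. A deliberately tiny stand-in for that idea (nearest-centroid matching on fabricated three-voxel “scans”; none of these numbers or category patterns come from the studies):

```python
# Minimal nearest-centroid classifier, illustrating the logic of
# decoding emotion categories from brain-activity patterns.

def centroid(patterns):
    """Element-wise mean of a list of equal-length activity patterns."""
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

def distance(a, b):
    """Euclidean distance between two patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(scan, centroids):
    """Label a scan with the category whose centroid it is closest to."""
    return min(centroids, key=lambda label: distance(scan, centroids[label]))

# Fabricated 3-voxel "scans" for two emotion categories.
training = {
    "fear":      [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1]],
    "happiness": [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
}
centroids = {label: centroid(scans) for label, scans in training.items()}
print(classify([0.85, 0.15, 0.2], centroids))
```

Real decoding studies use far richer patterns and validated statistical models, but the “75 percent of the time” figures reported above are produced by exactly this kind of comparison between a new scan and previously learned patterns.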
Ed Yong It’s a good time to be interested in the brain. Neuroscientists can now turn neurons on or off with just a flash of light, allowing them to manipulate the behavior of animals with exceptional precision. They can turn brains transparent and seed them with glowing molecules to divine their structure. They can record the activity of huge numbers of neurons at once. And those are just the tools that currently exist. In 2013, Barack Obama launched the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative—a $115 million plan to develop even better technologies for understanding the enigmatic gray blobs that sit inside our skulls. John Krakauer, a neuroscientist at Johns Hopkins Hospital, has been invited to BRAIN Initiative meetings before, and describes the experience as like “Maleficent being invited to Sleeping Beauty’s birthday.” That’s because he and four like-minded friends have become increasingly disenchanted by their colleagues’ obsession with their toys. And in a new paper that’s part philosophical treatise and part shot across the bow, they argue that this technological fetish is leading the field astray. “People think technology + big data + machine learning = science,” says Krakauer. “And it’s not.” He and his fellow curmudgeons argue that brains are special because of the behavior they create—everything from a predator’s pounce to a baby’s cry. But the study of such behavior is being de-prioritized, or studied “almost as an afterthought.” Instead, neuroscientists have been focusing on using their new tools to study individual neurons, or networks of neurons. According to Krakauer, the unspoken assumption is that if we collect enough data about the parts, the workings of the whole will become clear. If we fully understand the molecules that dance across a synapse, or the electrical pulses that zoom along a neuron, or the web of connections formed by many neurons, we will eventually solve the mysteries of learning, memory, emotion, and more.
“The fallacy is that more of the same kind of work in the infinitely postponed future will transform into knowing why that mother’s crying or why I’m feeling this way,” says Krakauer. And, as he and his colleagues argue, it will not. © 2017 by The Atlantic Monthly Group
Keyword: Brain imaging
Link ID: 23292 - Posted: 02.28.2017
Sara Reardon Like ivy plants that send runners out searching for something to cling to, the brain’s neurons send out shoots that connect with other neurons throughout the organ. A new digital reconstruction method shows three neurons that branch extensively throughout the brain, including one that wraps around its entire outer layer. The finding may help to explain how the brain creates consciousness. Christof Koch, president of the Allen Institute for Brain Science in Seattle, Washington, explained his group’s new technique at a 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland. He showed how the team traced three neurons from a small, thin sheet of cells called the claustrum — an area that Koch believes acts as the seat of consciousness in mice and humans. Tracing all the branches of a neuron using conventional methods is a massive task. Researchers inject individual cells with a dye, slice the brain into thin sections and then trace the dyed neuron’s path by hand. Very few have been able to trace a neuron through the entire organ. This new method is less invasive and scalable, saving time and effort. Koch and his colleagues engineered a line of mice so that a certain drug activated specific genes in claustrum neurons. When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes. That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain and used a computer program to create a 3D reconstruction of just three glowing cells. © 2017 Macmillan Publishers Limited
By Kerry Grens Brain scans of 3,242 volunteers aged four to 63 years old revealed that those diagnosed with attention deficit hyperactivity disorder (ADHD)—roughly half of the group—had smaller tissue volumes in five brain regions. Because the differences were largest between children, the researchers concluded that ADHD likely involves a delay in brain maturation. The study, published in The Lancet Psychiatry on February 15, is the largest of its kind to date, and the authors hope it will change public perception of the disorder. “I think most scientists in the field already know that the brains of people with ADHD show differences, but I now hope to have shown convincing evidence … that will reach the general public and show that it has [a basis in the brain] just like other psychiatric disorders,” geneticist and coauthor Martine Hoogman of Radboud University in the Netherlands told The Washington Post. “We know that ADHD deals with stigma, but we also know that increasing knowledge will reduce stigma.” Most pronounced among the brain differences between those with and without ADHD was the amygdala, important for emotional processing. “The amygdala is heavily connected to other brain regions. It is a kind of hub for numerous kinds of signaling around salience and significance of events,” Joel Nigg, a psychiatry professor at Oregon Health & Science University School of Medicine who was not part of the study, told CNN. “The bigger story here is that alterations in amygdala have not been widely accepted as part of ADHD, so seeing that effect emerge here is quite interesting.” © 1986-2017 The Scientist
Daqing Li and Ying Li In 1969 Geoffrey Raisman, who has died aged 77, introduced the term “plasticity” to describe the ability of damaged nerve tissue to form new synaptic connections. He discovered that damaged nerves in the central nervous system (CNS) could be repaired and developed the theory that white matter (nerve fibres and supporting cells) is like a pathway – when it is disrupted by injury, such as spinal cord injury, growth of the regenerating fibres is blocked. In 1985 he described how olfactory ensheathing cells (OECs) “open doors” for newly formed nerve fibres in the nose to enter the CNS. Believing that reconstruction of the damaged pathway is essential to repair of the injured CNS and using the unique door-opening capability of OECs, in 1997, together with colleagues, Geoffrey showed that transplantation of OECs into the damaged spinal cord in experimental models repairs the damaged pathway and results in the regeneration of severed nerve fibres and the restoration of lost functions. The study led to a joint clinical trial with Pawel Tabakow and his team at Wroclaw Medical University, Poland. In 2014 the first patient with a complete severance of the thoracic spinal cord received transplantation of his own OECs. The operation enabled the patient, Darek Fidyka, to gain significant neurological recovery of sensation and voluntary movement. He can now get out of his wheelchair and ride a tricycle. The wider application of OECs has also been investigated. In 2012, with his team at University College London, collaborating with the UCL Institute of Ophthalmology and Southwest hospital, at the Third Military Medical University in Chongqing, China, Geoffrey described the protective effect of OECs in an experimental glaucoma model. The discovery has led to a plan to translate this research to clinical application which, it is hoped, will help many sufferers regain sight.
By Jennifer Couzin-Frankel At least two dozen junior and senior researchers are stuck in scientific limbo after being barred from publishing data collected over a 25-year period at a National Institutes of Health (NIH) lab. The unusual ban follows the firing last summer of veteran neurologist Allen Braun by the National Institute on Deafness and Other Communication Disorders (NIDCD) for what many scientists have told Science are relatively minor, if widespread, violations of his lab’s experimental protocol. Most of the violations, which were unearthed after Braun himself reported a problem, involve the prescreening or vetting of volunteers for brain imaging scans and other experiments on language processing. The fallout from the case was recently chronicled on a blog by one of Braun’s former postdocs, and it highlights a not-uncommon problem across science: the career harm to innocent junior investigators following lab misconduct or accidental violations on the part of senior scientists. But this case, say those familiar with it, is extreme. “We’re truly collateral damage,” says Nan Bernstein Ratner of the University of Maryland in College Park, who researches stuttering. She spent 5 years collaborating with Braun. Now, two of her graduate students have had to shift their master’s theses topics, and an undergraduate she mentored cannot publish a planned paper. “The process has been—you can use this term—surreal.” © 2017 American Association for the Advancement of Science
By Esther Landhuis For much of her life Anne Dalton battled depression. She seldom spoke with people. She stayed home a lot. The days dragged on with a sense of “why bother?” for the 61-year-old from New Jersey who used to work at a Wall Street investment firm. After trying more than a dozen combinations of antidepressant drugs to no avail, things got so bad two years ago that Dalton went in for electroconvulsive therapy—in which “basically they shock your brain,” as she puts it. Like Dalton, most of the estimated 16 million U.S. adults who have reported a major depressive episode in the past year find little relief even after several months on antidepressants—a problem that some researchers say may stem from the way mental illness is diagnosed. Objective lab tests can physically confirm heart disease or cancer, but psychiatric conditions are classified somewhat vaguely as clusters of reported symptoms. Doctors consider people clinically depressed if they say they have low mood and meet at least four additional criteria from an overall list of nine. Yet depression can manifest differently from person to person: One might be putting on pounds and sleeping much of the time whereas another might be losing weight, feeling anxious and finding it difficult to sit still, says Conor Liston, a neuroscientist and psychiatrist at Weill Cornell Medical College. “The fact that we lump people together like this has been a big obstacle in understanding the neurobiology of depression,” Liston explains. © 2017 Scientific American,
JoAnna Klein Some microscopes today are so powerful that they can create a picture of the gap between brain cells, which is thousands of times smaller than the width of a human hair. They can even reveal the tiny sacs carrying even tinier nuggets of information to cross over that gap to form memories. And in colorful snapshots made possible by a giant magnet, we can see the activity of 100 billion brain cells talking. Decades before these technologies existed, a man hunched over a microscope in Spain at the turn of the 20th century was making prescient hypotheses about how the brain works. At the time, William James was still developing psychology as a science and Sir Charles Scott Sherrington was defining our integrated nervous system. Meet Santiago Ramón y Cajal, an artist, photographer, doctor, bodybuilder, scientist, chess player and publisher. He was also the father of modern neuroscience. “He’s one of these guys who was really every bit as influential as Pasteur and Darwin in the 19th century,” said Larry Swanson, a neurobiologist at the University of Southern California who contributed a biographical section to the new book “The Beautiful Brain: The Drawings of Santiago Ramón y Cajal.” “He’s harder to explain to the general public, which is probably why he’s not as famous.” Last month, the Weisman Art Museum in Minneapolis opened a traveling exhibit that is the first dedicated solely to Ramón y Cajal’s work. It will make stops in Minneapolis; Vancouver, British Columbia; New York; Cambridge, Mass.; and Chapel Hill, N.C., through April 2019. Ramón y Cajal started out with an interest in the visual arts and photography — he even invented a method for making color photos. But his father pushed him into medical school. Without his artistic background, his work might not have had as much impact, Dr. Swanson said. © 2017 The New York Times Company
Keyword: Brain imaging
Link ID: 23251 - Posted: 02.18.2017
By Kelly Clancy More than two hundred years ago, a French weaver named Joseph Jacquard invented a mechanism that greatly simplified textile production. His design replaced the lowly draw boy—the young apprentice who meticulously chose which threads to feed into the loom to create a particular pattern—with a series of paper punch cards, which had holes dictating the lay of each stitch. The device was so successful that it was repurposed in the first interfaces between humans and computers; for much of the twentieth century, programmers laid out their code like weavers, using a lattice of punched holes. The cards themselves were fussy and fragile. Ethereal information was at the mercy of its paper substrate, coded in a language only experts could understand. But successive computer interfaces became more natural, more flexible. Immutable program instructions were softened to “If x, then y. When a, try b.” Now, long after Jacquard’s invention, we simply ask Amazon’s Echo to start a pot of coffee, or Apple’s Siri to find the closest car wash. In order to make our interactions with machines more natural, we’ve learned to model them after ourselves. Early in the history of artificial intelligence, researchers came up against what is referred to as Moravec’s paradox: tasks that seem laborious to us (arithmetic, for example) are easy for a computer, whereas those that seem easy to us (like picking out a friend’s voice in a noisy bar) have been the hardest for A.I. to master. It is not profoundly challenging to design a computer that can beat a human at a rule-based game like chess; a logical machine does logic well. But engineers have yet to build a robot that can hopscotch. The Austrian roboticist Hans Moravec theorized that this might have something to do with evolution. Since higher reasoning has only recently evolved—perhaps within the last hundred thousand years—it hasn’t had time to become optimized in humans the way that locomotion or vision has. 
The things we do best are largely unconscious, coded in circuits so ancient that their calculations don’t percolate up to our experience. But because logic was the first form of biological reasoning that we could perceive, our thinking machines were, by necessity, logic-based. © 2017 Condé Nast.
Hannah Devlin A transportable brain-scanning helmet that could be used for rapid brain injury assessments of stroke victims and those felled on the sports pitch or battlefield is being tested by US scientists. The wearable device, known as the PET helmet, is a miniaturised version of the hospital positron emission tomography (PET) scanner, a doughnut-shaped machine which occupies the volume of a small room. Julie Brefczynski-Lewis, the neuroscientist leading the project at West Virginia University, said that the new helmet could dramatically speed up diagnosis and make the difference between a positive outcome and devastating brain damage or death for some patients. “You could roll it right to their bedside and put it on their head,” she said ahead of a presentation at the American Association for the Advancement of Science’s (AAAS) annual meeting in Boston. “Time is brain for stroke.” Despite being only the size of a motorbike helmet, the new device produces remarkably detailed images that could be used to identify regions of trauma to the brain in the ambulance on the way to hospital or at a person’s bedside. The device is currently being tested on healthy volunteers, but could be used clinically within two years, the team predicted.
By Pallab Ghosh Scientists are appealing for more people to donate their brains for research after they die. They say they are lacking the brains of people with disorders such as depression and post-traumatic stress disorder. In part, this shortage results from a lack of awareness that such conditions are due to changes in brain wiring. The researchers' aim is to develop new treatments for mental and neurological disorders. The human brain is as beautiful as it is complex. Its wiring changes and grows as we do. The organ is a physical embodiment of our behaviour and who we are. In recent years, researchers have made links between the shape of the brain and mental and neurological disorders. Most of their specimens are from people with mental or neurological disorders. Samples are requested by scientists to find new treatments for Parkinson's, Alzheimer's and a whole host of psychiatric disorders. But there is a problem. Scientists at McLean Hospital and at brain banks across the world do not have enough specimens for the research community. According to Dr Kerry Ressler, who is the chief scientific officer at McLean Hospital, new treatments for many mental and neurological diseases are within the grasp of the research community. However, he says it is the lack of brain tissue that is holding back their development. © 2017 BBC.
Keyword: Brain imaging
Link ID: 23241 - Posted: 02.17.2017
By KENNETH CHANG Sir Peter Mansfield, who shared a Nobel Prize for discoveries that underpinned the invention of magnetic resonance imaging, the method of peering inside the human body that revolutionized medicine, died on Wednesday. He was 83. The University of Nottingham in England, where Dr. Mansfield had been a professor of physics, announced his death but did not say where he died. He lived in England. Magnetic resonance imaging, or M.R.I., has enabled doctors to diagnose and examine injuries to ligaments, bones and organs without cutting open the body or risking the radiation dangers of X-rays. “It’s hugely important,” said Charles P. Slichter, an emeritus physics professor at the University of Illinois at Urbana-Champaign. “It’s such an all-pervasive technique.” Dr. Mansfield was awarded the Nobel Prize in Physiology or Medicine in 2003, along with Paul C. Lauterbur, a professor at the University of Illinois at Urbana-Champaign. The two had worked independently of each other in studying magnetic resonance imaging. Their research proceeded from an understanding that the nuclei of most atoms act as tiny magnets that line up when placed in a magnetic field. If the field is set at a specific strength, the atoms can absorb and emit radio waves. Scientists initially used the technique, called nuclear magnetic resonance, or N.M.R., to study atoms and molecules, deducing properties from the emitted waves. In his early research, Dr. Mansfield developed N.M.R. techniques to study crystals. Later, in 1972, as he worked to refine and sharpen N.M.R. data, he had a conversation with two colleagues about what applications such advances might lead to. He soon realized that if an object were placed in a nonuniform magnetic field — one that is stronger at one end than the other — scientists might be able to piece together a three-dimensional image of its atomic structure.
© 2017 The New York Times Company
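The nonuniform-field insight can be made concrete with one relation: protons resonate at a frequency proportional to the local field strength (about 42.58 MHz per tesla for hydrogen), so a field gradient maps position onto frequency. A minimal sketch of this, with an illustrative scanner strength and gradient rather than figures from the obituary:

```python
# Why a nonuniform field enables imaging: in a gradient field each
# position "broadcasts" at its own Larmor frequency, letting a scanner
# work out where a signal came from.

GAMMA_H_MHZ_PER_T = 42.58  # gyromagnetic ratio of hydrogen nuclei (protons)

def larmor_mhz(field_tesla: float) -> float:
    """Resonance frequency (MHz) of protons in a given magnetic field."""
    return GAMMA_H_MHZ_PER_T * field_tesla

# An illustrative 1.5 T scanner with a 10 mT/m linear gradient along one axis:
# field = 1.5 T + 0.010 T/m * position.
for pos_m in (-0.1, 0.0, 0.1):
    field = 1.5 + 0.010 * pos_m
    print(f"position {pos_m:+.1f} m -> {larmor_mhz(field):.3f} MHz")
```

Measuring which frequencies come back, and how strongly, is what lets the scanner assemble a spatial image from what is otherwise a single mixed radio signal.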
Keyword: Brain imaging
Link ID: 23217 - Posted: 02.13.2017