Chapter 2. Cells and Structures: The Anatomy of the Nervous System



In 2007, I spent the summer before my junior year of college removing little bits of brain from rats, growing them in tiny plastic dishes, and poring over the neurons in each one. For three months, I spent three or four hours a day, five or six days a week, in a small room, peering through a microscope and snapping photos of the brain cells. The room was pitch black, save for the green glow emitted by the neurons. I was looking to see whether a certain growth factor could protect the neurons from degenerating the way they do in patients with Parkinson's disease. This kind of work, which is common in neuroscience research, requires time and a borderline pathological attention to detail. Which is precisely why my PI trained me, a lowly undergrad, to do it—just as, decades earlier, someone had trained him. Now, researchers think they can train machines to do that grunt work. In a study described in the latest issue of the journal Cell, scientists led by Gladstone Institutes and UC San Francisco neuroscientist Steven Finkbeiner collaborated with researchers at Google to train a machine learning algorithm to analyze neuronal cells in culture. The researchers used a method called deep learning, the machine learning technique driving advancements not just at Google, but Amazon, Facebook, Microsoft. You know, the usual suspects. It relies on pattern recognition: Feed the system enough training data—whether it's pictures of animals, moves from expert players of the board game Go, or photographs of cultured brain cells—and it can learn to identify cats, trounce the world's best board-game players, or suss out the morphological features of neurons.
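The recipe described here, feed a learner labeled examples until it can classify new ones, can be sketched with a toy stand-in for the study's deep network: a nearest-centroid classifier on synthetic "micrographs." Every image, label, and parameter below is invented for illustration; the actual work used far larger convolutional models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep network: synthetic 8x8 "micrographs" where
# class 1 images get a bright soma-like blob and class 0 images are
# diffuse noise. All parameters are invented for illustration.
def make_image(label):
    img = rng.normal(0.0, 0.2, (8, 8))
    if label == 1:
        img[3:5, 3:5] += 1.0  # bright central blob
    return img.ravel()

labels = [0, 1] * 50
X_train = np.array([make_image(l) for l in labels])
y_train = np.array(labels)

# "Training" here is just averaging labeled examples into class templates.
templates = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def classify(img):
    # Assign the class whose template is closest in pixel space.
    return min(templates, key=lambda c: float(np.linalg.norm(img - templates[c])))

y_test = [0, 1] * 20
X_test = [make_image(l) for l in y_test]
accuracy = float(np.mean([classify(x) == y for x, y in zip(X_test, y_test)]))
print(f"held-out accuracy: {accuracy:.2f}")
```

The point is the workflow, not the model: real systems replace the hand-built templates with millions of learned parameters, but the train-then-classify loop is the same.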

Keyword: Brain imaging; Learning & Memory
Link ID: 24862 - Posted: 04.13.2018

In a small room tucked away at the University of Toronto, Professor Dan Nemrodov is pulling thoughts right out of people's brains. He straps a hat with electrodes on someone's head and then shows them pictures of faces. By reading brain activity with an electroencephalography (EEG) machine, he's then able to reconstruct faces with almost perfect accuracy. Student participants wearing the cap look at a collection of faces for two hours. At the same time, the EEG software recognizes patterns relating to certain facial features found in the photos. Machine-learning algorithms are then used to recreate the images based on the EEG data, in some cases with 98-per-cent accuracy. Nemrodov and his colleague, Professor Adrian Nestor, say this is a big thing. "Ultimately we are involved in a form of mind reading," he says. The technology has huge ramifications for medicine, law, government and business. But the ethical questions are just as huge. Here are some key questions: What can be the benefits of this research? If developed, it can help patients with serious neurological damage. People who are incapacitated to the point that they cannot express themselves or ask a question. According to clinical ethicist Prof. Kerry Bowman and his students at the University of Toronto, this technology can get inside someone's mind and provide a link of communication. It may give that person a chance to exercise their autonomy, especially in regard to informed consent to either continue treatment or stop. ©2018 CBC/Radio-Canada.
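The decoding step, mapping recorded EEG patterns back to facial features, can be illustrated with a minimal linear sketch: assume EEG signals are a noisy linear mixture of latent face features, learn the inverse mapping with ridge regression, and reconstruct a held-out face. The real pipeline is far more sophisticated, and every dimension and constant here is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal linear sketch of the decoding idea (not the study's pipeline):
# EEG patterns are simulated as a noisy linear mixture of latent face
# features; ridge regression learns the inverse mapping.
n_trials, n_eeg, n_face = 200, 32, 10
faces = rng.normal(size=(n_trials, n_face))   # latent face features
mixing = rng.normal(size=(n_face, n_eeg))     # face -> EEG forward model
eeg = faces @ mixing + 0.1 * rng.normal(size=(n_trials, n_eeg))

# Ridge regression: W = (X^T X + lambda*I)^-1 X^T Y maps EEG back to faces.
lam = 1.0
W = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_eeg), eeg.T @ faces)

# Reconstruct a held-out face from its (simulated) EEG response.
true_face = rng.normal(size=n_face)
new_eeg = true_face @ mixing + 0.1 * rng.normal(size=n_eeg)
recon = new_eeg @ W
r = float(np.corrcoef(true_face, recon)[0, 1])
print(f"reconstruction correlation: {r:.2f}")
```

In this idealized linear setting the reconstruction correlates highly with the true features; real EEG decoding is noisier, which is why the study's accuracy figures are the headline result.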

Keyword: Brain imaging
Link ID: 24810 - Posted: 04.02.2018

By Liz Tormes When I first started working as a photo researcher for Scientific American MIND in 2013, a large part of my day was spent looking at brains. Lots of them. They appeared on my computer screen in various forms—from black-and-white CT scans featured in dense journals to sad-looking, grey brains sitting on the bottom of glass laboratory jars. At times they were boring, and often they could be downright disturbing. But every now and then I would come across a beautiful 3D image of strange, rainbow-colored pathways in various formations that looked like nothing I had ever seen before. I was sure it had been miscategorized somehow—no way was I looking at a brain! Through my work I have encountered countless images of multi-colored Brainbows, prismatic Diffusion Tensor Imaging (DTI), and even tiny and intricate neon mini-brains grown from actual stem cells in labs. Increasingly I have found myself dazzled, not just by the pictures themselves, but by the scientific and technological advances that have made this type of imaging possible in only the past few years. It was through my photo research that I happened upon the Netherlands Institute for Neuroscience’s (NIN) annual Art of Neuroscience contest. This exciting opportunity for neurologists, fine artists, videographers and illustrators, whose work is inspired by human and animal brains, was something I wanted to share with our readers. © 2018 Scientific American

Keyword: Brain imaging
Link ID: 24803 - Posted: 03.31.2018

By Simon Makin Neuroscientists today know a lot about how individual neurons operate but remarkably little about how large numbers of them work together to produce thoughts, feelings and behavior. What is needed is a wiring diagram for the brain—known as a connectome—to identify the circuits that underlie brain functions. The challenge is dizzying: There are around 100 billion neurons in the human brain, which can each make thousands of connections, or synapses, making potentially hundreds of trillions of connections. So far, researchers have typically used microscopes to visualize neural connections, but this is laborious and expensive work. Now in a paper published March 28 in Nature, an innovative brain-mapping technique developed at Cold Spring Harbor Laboratory (CSHL) has been used to trace the connections emanating from hundreds of neurons in the main visual area of the mouse cortex, the brain’s outer layer. The technique, which exploits the advancing speed and plummeting cost of genetic sequencing, is more efficient than current methods, allowing the team to produce a more detailed picture than previously possible at unprecedented speed. Once the technology matures it could be used to provide clues to the nature of neuro-developmental disorders such as autism that are thought to involve differences in brain wiring. The team, led by Anthony Zador at CSHL and neuroscientist Thomas Mrsic-Flogel of the University of Basel in Switzerland, verified their method by comparing it with a previous gold-standard means of identifying connections among nerve cells—a technique called fluorescent single neuron tracing. This involves introducing into cells genes that produce proteins that fluoresce with a greenish glow, so they and their axons (neurons’ output wires) can be visualized with light microscopy. © 2018 Scientific American
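The logic of sequencing-based tracing, heavily simplified, can be sketched in a few lines: tag each source neuron with a unique nucleotide barcode that travels down its axon, "sequence" each target region, and match recovered barcodes back to their sources, with no microscopy required. The neuron names and projection targets below are made up for illustration.

```python
import random

# Sketch of barcoded projection mapping (heavily simplified; all neuron
# names and target areas are hypothetical): each neuron carries a unique
# barcode, so sequencing a target region reveals which neurons project there.
random.seed(0)
neurons = {f"neuron_{i}": "".join(random.choices("ACGT", k=12))
           for i in range(5)}

# Hypothetical ground truth: the target areas each neuron innervates.
projections = {
    "neuron_0": ["V2", "thalamus"],
    "neuron_1": ["V2"],
    "neuron_2": ["striatum"],
    "neuron_3": ["V2", "striatum"],
    "neuron_4": ["thalamus"],
}

# "Sequencing" a region recovers the barcodes physically present in it.
region_reads = {}
for neuron, targets in projections.items():
    for region in targets:
        region_reads.setdefault(region, set()).add(neurons[neuron])

# Decoding: match each recovered barcode back to its source neuron.
barcode_to_neuron = {bc: n for n, bc in neurons.items()}
v2_sources = sorted(barcode_to_neuron[bc] for bc in region_reads["V2"])
print("projects to V2:", v2_sources)
```

The efficiency gain comes from this inversion: instead of imaging one fluorescent axon at a time, a single sequencing run reads out the barcodes of every neuron projecting to a region at once.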

Keyword: Brain imaging; Schizophrenia
Link ID: 24802 - Posted: 03.30.2018

Juliette Jowit The world’s first brain scanner that can be worn as people move around has been invented by a team who hope the contraption can help children with neurological and mental disorders and reveal how the brain handles social situations. The new scalp caps – made on 3D printers – fit closely to the head, so they can record the electromagnetic field produced by electrical currents between brain cells in much finer detail than previously. This design means the scanner can work in ways never possible before: subjects can move about, for example, and even play games with the equipment on, while medics can use it on groups such as babies, children and those with illnesses which cause them to move involuntarily. “This has the potential to revolutionise the brain imaging field, and transform the scientific and clinical questions that can be addressed with human brain imaging,” said Prof Gareth Barnes at University College London, one of three partners in the project. The other two are the University of Nottingham and the Wellcome Trust. The brain imaging technique known as magnetoencephalography, or MEG, has been helping scientists for decades, but in many cases has involved using huge contraptions that look like vintage hair salon driers. The scanners operated further from the head than the new devices, reducing the detail they recorded, and users had to remain incredibly still. © 2018 Guardian News and Media Limited

Keyword: Brain imaging
Link ID: 24780 - Posted: 03.22.2018

Researchers at the University of Calgary say they have developed a portable brain-imaging system that would literally shed light on concussions. Symptoms of a concussion can vary greatly between individuals and include headaches, nausea, loss of memory and lack of co-ordination, which make it difficult to find treatment options. U of C scientist Jeff Dunn says there has been no accepted way to get an image of a concussion, but he and his team have developed a device, based on near-infrared spectroscopy (NIRS), that measures communication in the brain by tracking oxygen levels in blood. Results show these patterns change after concussion. The device — a cap that contains small lights with sensors connected to a computer — is placed on the top of the head to monitor and measure brain activity while the patient looks at a picture or does a simple activity. "When the brain activates, blood flow goes up but oxygen levels also go up, so the blood actually becomes redder as the brain activates," Dunn said. "And we measure that so we shine a light in and we can see that change in oxygen level and measure the change in absorption." Dunn hopes the images will show a connection between symptoms and abnormalities in the brain that could help doctors identify treatment protocols and recovery timelines. ©2018 CBC/Radio-Canada
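The measurement Dunn describes, converting a change in detected light into a change in blood oxygenation, follows the modified Beer-Lambert law used in near-infrared spectroscopy. A minimal sketch, with illustrative constants rather than any device's actual calibration values:

```python
import math

# Modified Beer-Lambert law: a change in detected light implies a change
# in hemoglobin concentration.
#     delta_A = epsilon * delta_c * d * DPF
# epsilon: extinction coefficient, d: source-detector separation,
# DPF: differential pathlength factor. All numbers below are illustrative.

def hb_concentration_change(i_baseline, i_active, epsilon, d_cm, dpf):
    """Concentration change (mM) from detected light intensities."""
    delta_a = math.log10(i_baseline / i_active)  # change in absorbance
    return delta_a / (epsilon * d_cm * dpf)

# Activation absorbs slightly more light, so detected intensity drops.
delta_c = hb_concentration_change(
    i_baseline=1.00, i_active=0.97,
    epsilon=1.5,  # illustrative extinction coefficient, 1/(mM*cm)
    d_cm=3.0,     # typical source-detector separation on the scalp
    dpf=6.0,      # typical adult differential pathlength factor
)
print(f"hemoglobin change ~ {delta_c * 1000:.2f} micromolar")
```

A 3 percent drop in detected light, under these assumed constants, corresponds to a concentration change of well under a micromolar, which is why the sensors and electronics have to be quite sensitive.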

Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 24754 - Posted: 03.15.2018

By Ruth Williams When optogenetics burst onto the scene a little over a decade ago, it added a powerful tool to neuroscientists’ arsenal. Instead of merely correlating recorded brain activity with behaviors, researchers could control the cell types of their choosing to produce specific outcomes. Light-sensitive ion channels (opsins) inserted into the cells allow neuronal activity to be controlled by the flick of a switch. Nevertheless, MIT’s Edward Boyden says more precision is needed. Previous approaches achieved temporal resolution in the tens of milliseconds, making them a somewhat blunt instrument for controlling neurons’ millisecond-fast firings. In addition, most optogenetics experiments have involved “activation or silencing of a whole set of neurons,” he says. “But the problem is the brain doesn’t work that way.” When a cell is performing a given function—initiating a muscle movement, recalling a memory—“neighboring neurons can be doing completely different things,” Boyden explains. “So there is a quest now to do single-cell optogenetics.” Illumination techniques such as two-photon excitation with computer-generated holography (a way to precisely sculpt light in 3D) allow light to be focused tightly enough to hit one cell. But even so, Boyden says, if the targeted cell body lies close to the axons or dendrites of neighboring opsin-expressing cells, those will be activated too. © 1986-2018 The Scientist

Keyword: Brain imaging
Link ID: 24732 - Posted: 03.08.2018

By Diana Kwon When optogenetics debuted over a decade ago, it quickly became the method of choice for many neuroscientists. By using light to selectively control ion channels on neurons in living animal brains, researchers could see how manipulating specific neural circuits altered behavior in real time. Since then, scientists have used the technique to study brain circuitry and function across a variety of species, from fruit flies to monkeys—the method is even being tested in a clinical trial to restore vision in patients with a rare genetic disorder. Today (February 8) in Science, researchers report successfully conducting optogenetics experiments using injected nanoparticles in mice, inching the field closer to a noninvasive method of stimulating the brain with light that could one day have therapeutic uses. “Optogenetics revolutionized how we all do experimental neuroscience in terms of exploring circuits,” says Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute in Japan. However, this technique currently requires a permanently implanted fiber—so over the last few years, researchers have started to develop ways to stimulate the brain in less invasive ways. A number of groups devised such techniques using magnetic fields, electric currents, and sound. McHugh and his colleagues decided to try another approach: They chose near-infrared light, which can more easily penetrate tissue than the blue-green light typically used for optogenetics. “What we saw as an advantage was a kind of chemistry-based approach in which we can harness the power of near-infrared light to penetrate tissue, but still use this existing toolbox that's been developed over the last decade of optogenetic channels that respond to visible light,” McHugh says. © 1986-2018 The Scientist

Keyword: Brain imaging
Link ID: 24637 - Posted: 02.09.2018

By Jim Daley Researchers at the D’Or Institute for Research and Education in Brazil have created an algorithm that can use functional magnetic resonance imaging (fMRI) data to identify which musical pieces participants are listening to. The study, published last Friday (February 2) in Scientific Reports, involved six participants listening to 40 pieces of music from various genres, including classical, rock, pop, and jazz. “Our approach was capable of identifying musical pieces with improving accuracy across time and spatial coverage,” the researchers write in the paper. “It is worth noting that these results were obtained for a heterogeneous stimulus set . . . including distinct emotional categories of joy and tenderness.” The researchers first played different musical pieces for the participants and used fMRI to measure the neural signatures of each song. With that data, they taught a computer to identify brain activity that corresponded with the musical dimensions of each piece, including tonality, rhythm, and timbre, as well as a set of lower-level acoustic features. Then, the researchers played the pieces for the participants again while the computer tried to identify the music each person was listening to, based on fMRI responses. The computer was successful in decoding the fMRI information and identifying the musical pieces around 77 percent of the time when it had two options to choose from. When the researchers presented 10 possibilities, the computer was correct 74 percent of the time. © 1986-2018 The Scientist
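The identification test can be sketched as template matching: store an fMRI "signature" per piece, then match a new noisy response to the closest candidate signature by correlation. This toy simulation (synthetic data, invented parameters) shows the mechanics of 2-way versus 10-way identification, not the study's actual numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sketch of the identification test (synthetic data; simplified relative
# to the study): each piece has a stored fMRI "signature", and a new noisy
# response is matched to the closest candidate signature by correlation.
n_pieces, n_voxels = 40, 100
signatures = rng.normal(size=(n_pieces, n_voxels))

def identify(true_piece, n_candidates, noise=1.0):
    """Return True if the noisy response is matched to the right piece."""
    response = signatures[true_piece] + noise * rng.normal(size=n_voxels)
    others = [p for p in range(n_pieces) if p != true_piece]
    candidates = [true_piece] + list(
        rng.choice(others, size=n_candidates - 1, replace=False))
    corrs = [np.corrcoef(response, signatures[c])[0, 1] for c in candidates]
    return candidates[int(np.argmax(corrs))] == true_piece

results = {}
for k in (2, 10):
    acc = float(np.mean([identify(int(rng.integers(n_pieces)), k)
                         for _ in range(200)]))
    results[k] = acc
    print(f"{k}-way identification accuracy: {acc:.2f}")
```

With clean synthetic signatures the toy decoder is near-perfect at both set sizes; the interesting finding in the study is that real fMRI responses still supported roughly 74 percent accuracy even with ten candidates.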

Keyword: Hearing; Brain imaging
Link ID: 24617 - Posted: 02.06.2018

By Eli Meixler Friday’s Google Doodle celebrates the birthday of Wilder Penfield, a scientist and physician whose groundbreaking contributions to neuroscience earned him the designation “the greatest living Canadian.” Penfield would have turned 127 today. Later celebrated as a pioneering researcher and a humane clinical practitioner, Penfield pursued medicine at Princeton University, believing it to be “the best way to make the world a better place in which to live.” He was drawn to the field of brain surgery, studying neuropathology as a Rhodes scholar at Oxford University. In 1928, Penfield was recruited by McGill University in Montreal, where he also practiced at Royal Victoria Hospital as the city’s first neurosurgeon. Penfield founded the Montreal Neurological Institute with support from the Rockefeller Foundation in 1934, the same year he became a Canadian citizen. Penfield pioneered a treatment for epilepsy that allowed patients to remain fully conscious while a surgeon used electric probes to pinpoint areas of the brain responsible for setting off seizures. The experimental method became known as the Montreal Procedure, and was widely adopted. But Wilder Penfield’s research led him to another discovery: that physical areas of the brain were associated with different duties, such as speech or movement, and stimulating them could generate specific reactions — including, famously, conjuring a memory of the smell of burnt toast. Friday’s animated Google Doodle features an illustrated brain and burning toast. © 2017 Time Inc.

Keyword: Miscellaneous
Link ID: 24576 - Posted: 01.27.2018

By Giorgia Guglielmi ENIGMA, the world’s largest brain mapping project, was “born out of frustration,” says neuroscientist Paul Thompson of the University of Southern California in Los Angeles. In 2009, he and geneticist Nicholas Martin of the Queensland Institute of Medical Research in Brisbane, Australia, were chafing at the limits of brain imaging studies. The cost of MRI scans limited most efforts to a few dozen subjects—too few to draw robust connections about how brain structure is linked to genetic variations and disease. The answer, they realized over a meal at a Los Angeles shopping mall, was to pool images and genetic data from multiple studies across the world. After a slow start, the consortium has brought together nearly 900 researchers across 39 countries to analyze brain scans and genetic data on more than 30,000 people. In an accelerating series of publications, ENIGMA’s crowdsourcing approach is opening windows on how genes and structure relate in the normal brain—and in disease. This week, for example, an ENIGMA study published in the journal Brain compared scans from nearly 4000 people across Europe, the Americas, Asia, and Australia to pinpoint unexpected brain abnormalities associated with common epilepsies. ENIGMA is “an outstanding effort. We should all be doing more of this,” says Mohammed Milad, a neuroscientist at the University of Illinois in Chicago who is not a member of the consortium. ENIGMA’s founders crafted the consortium’s name—Enhancing NeuroImaging Genetics through Meta-Analysis—so that its acronym would honor U.K. mathematician Alan Turing’s code-breaking effort targeting Germany’s Enigma cipher machines during World War II. Like Turing’s project, ENIGMA aims to crack a mystery. Small brain-scanning studies of twins or close relatives done in the 2000s showed that differences in some cognitive and structural brain measures have a genetic basis. © 2018 American Association for the Advancement of Science.
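The statistical core of pooling many small studies is meta-analysis. A minimal sketch of a generic inverse-variance, fixed-effect combination (ENIGMA's actual pipelines are far more elaborate, and the per-site numbers below are hypothetical):

```python
# Generic inverse-variance fixed-effect meta-analysis: weight each study's
# estimate by its precision, so the pooled estimate is sharper than any
# single study's. Site-level numbers below are hypothetical.

def pooled_effect(estimates, variances):
    """Combine per-study estimates, weighting each by its precision."""
    weights = [1.0 / v for v in variances]
    combined = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    combined_variance = 1.0 / sum(weights)
    return combined, combined_variance

# Hypothetical site-level estimates of one genetic effect on brain
# structure, as (effect, variance) pairs.
sites = [(0.30, 0.04), (0.18, 0.02), (0.25, 0.05), (0.21, 0.01)]
effect, var = pooled_effect([e for e, _ in sites], [v for _, v in sites])
print(f"pooled effect = {effect:.3f}, SE = {var ** 0.5:.3f}")
```

The pooled variance is smaller than any single site's, which is exactly why a consortium of dozens of modest MRI studies can detect gene-structure associations that no individual study could.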

Keyword: Brain imaging; Genes & Behavior
Link ID: 24560 - Posted: 01.24.2018

Harriet Dempsey-Jones Nobody really believes that the shape of our heads is a window into our personalities anymore. This idea, known as “phrenology”, was developed by the German physician Franz Joseph Gall in 1796 and was hugely popular in the 19th century. Today it is often remembered for its dark history – being misused in its later days to back racist and sexist stereotypes, and its links with Nazi “eugenics”. But despite the fact that it has fallen into disrepute, phrenology as a science has never really been subjected to rigorous, neuroscientific testing. That is, until now. Researchers at the University of Oxford have hacked their own brain scanning software to explore – for the first time – whether there truly is any correspondence between the bumps and contours of your head and aspects of your personality. The results have recently been published in an open science archive, but have also been submitted to the journal Cortex. But why did phrenologists think that bumps on your head might be so informative? Their enigmatic claims were based around a few general principles. Phrenologists believed the brain was composed of separate “organs” responsible for different aspects of the mind, such as for self-esteem, cautiousness and benevolence. They also thought of the brain like a muscle – the more you used a particular organ the more it would grow in size (hypertrophy), and less used faculties would shrink. The skull would then mould to accommodate these peaks and troughs in the brain’s surface – providing an indirect reflection of the brain, and thus, the dominant features of a person’s character. © 2010–2018, The Conversation US, Inc.

Keyword: Brain imaging
Link ID: 24554 - Posted: 01.23.2018

Laura Sanders Nerve cells in the brain make elaborate connections and exchange lightning-quick messages that captivate scientists. But these cells also sport simpler, hairlike protrusions called cilia. Long overlooked, the little stubs may actually have big jobs in the brain. Researchers are turning up roles for nerve cell cilia in a variety of brain functions. In a region of the brain linked to appetite, for example, cilia appear to play a role in preventing obesity, researchers report January 8 in three studies in Nature Genetics. Cilia perched on nerve cells may also contribute to brain development, nerve cell communication and possibly even learning and memory, other research suggests. “Perhaps every neuron in the brain possesses cilia, and most neuroscientists don’t know they’re there,” says Kirk Mykytyn, a cell biologist at Ohio State University College of Medicine in Columbus. “There’s a big disconnect there.” Most cells in the body — including those in the brain — possess what’s called a primary cilium, made up of lipid molecules and proteins. The functions these appendages perform in parts of the body are starting to come into focus (SN: 11/3/12, p. 16). Cilia in the nose, for example, detect smell molecules, and cilia on rod and cone cells in the eye help with vision. But cilia in the brain are more mysterious. © Society for Science & the Public 2000 - 2017.

Keyword: Obesity
Link ID: 24546 - Posted: 01.20.2018

Ian Sample Science editor Donatella Versace finds it in the conflict of ideas, Jack White under pressure of deadlines. For William S Burroughs, an old Dadaist trick helped: cutting pages into pieces and rearranging the words. Every artist has their own way of generating original ideas, but what is happening inside the brain might not be so individual. In new research, scientists report signature patterns of neural activity that mark out those who are most creative. “We have identified a pattern of brain connectivity that varies across people, but is associated with the ability to come up with creative ideas,” said Roger Beaty, a psychologist at Harvard University. “It’s not like we can predict with perfect accuracy who’s going to be the next Einstein, but we can get a pretty good sense of how flexible a given person’s thinking is.” Creative thinking is one of the primary drivers of cultural and technological change, but the brain activity that underpins original thought has been hard to pin down. In an effort to shed light on the creative process, Beaty teamed up with colleagues in Austria and China to scan people’s brains as they came up with original ideas. The scientists asked the volunteers to perform a creative thinking task as they lay inside a brain scanner. While the machine recorded their brains at work, the participants had 12 seconds to come up with the most imaginative use for an object that flashed up on a screen. Three independent scorers then rated their answers. © 2018 Guardian News and Media Limited

Keyword: Attention; Brain imaging
Link ID: 24531 - Posted: 01.16.2018

By Matthew Hutson Imagine searching through your digital photos by mentally picturing the person or image you want. Or sketching a new kitchen design without lifting a pen. Or texting a loved one a sunset photo that was never captured on camera. A computer that can read your mind would find many uses in daily life, not to mention for those paralyzed and with no other way to communicate. Now, scientists have created the first algorithm of its kind to interpret—and accurately reproduce—images seen or imagined by another person. It might be decades before the technology is ready for practical use, but researchers are one step closer to building systems that could help us project our inner mind’s eye outward. “I was impressed that it works so well,” says Zhongming Liu, a computer scientist at Purdue University in West Lafayette, Indiana, who helped develop an algorithm that can somewhat reproduce what moviegoers see when they’re watching a film. “This is really cool.” Using algorithms to decode mental images isn’t new. Since 2011, researchers have recreated movie clips, photos, and even dream imagery by matching brain activity to activity recorded earlier when viewing images. But these methods all have their limits: Some deal only with narrow domains like face shape, and others can’t build an image from scratch—instead, they must select from preprogrammed images or categories like “person” or “bird.” This new work can generate recognizable images on the fly and even reproduce shapes that are not seen, but imagined. © 2018 American Association for the Advancement of Science.

Keyword: Vision; Brain imaging
Link ID: 24518 - Posted: 01.11.2018

By DENISE GRADY One blue surgical drape at a time, the patient disappeared, until all that showed was a triangle of her shaved scalp. “Ten seconds of quiet in the room, please,” said Dr. David J. Langer, the chairman of neurosurgery at Lenox Hill Hospital in Manhattan, part of Northwell Health. Silence fell, until he said, “O.K., I’ll take the scissors.” His patient, Anita Roy, 66, had impaired blood flow to the left side of her brain, and Dr. Langer was about to perform bypass surgery on slender, delicate arteries to restore the circulation and prevent a stroke. The operating room was dark, and everyone was wearing 3-D glasses. Lenox Hill is the first hospital in the United States to buy a device known as a videomicroscope, which turns neurosurgery into an immersive and sometimes dizzying expedition into the human brain. Enlarged on a 55-inch monitor, the stubble on Ms. Roy’s shaved scalp spiked up like rebar. The scissors and scalpel seemed big as hockey sticks, and popped out of the screen so vividly that observers felt an urge to duck. “This is like landing on the moon,” said a neurosurgeon who was visiting to watch and learn. The equipment produces magnified, high-resolution, three-dimensional digital images of surgical sites, and lets everyone in the room see exactly what the surgeon is seeing. The videomicroscope has a unique ability to capture “the brilliance and the beauty of the neurosurgical anatomy,” Dr. Langer said. He and other surgeons who have tested it predict it will change the way many brain and spine operations are performed and taught. “The first time I used it, I told students that this gives them an understanding of why I went into neurosurgery in the first place,” Dr. Langer said. © 2018 The New York Times Company

Keyword: Brain imaging
Link ID: 24504 - Posted: 01.09.2018

Tina Hesman Saey In movies, exploring the body up close often involves shrinking to microscopic sizes and taking harrowing rides through the blood. Thanks to a new virtual model, you can journey through a three-dimensional brain. No shrink ray required. The Society for Neuroscience and other organizations have long sponsored the website, which has basic information about how the human brain functions. Recently, the site launched an interactive 3-D brain. A translucent, light pink brain initially rotates in the middle of the screen. With a click of a mouse or a tap of a finger on a mobile device, you can highlight and isolate different parts of the organ. A brief text box then pops up to provide a structure’s name and details about the structure’s function. For instance, the globus pallidus — dual almond-shaped structures deep in the brain — puts a brake on muscle contractions to keep movements smooth. Some blurbs tell how a structure got its name or how researchers figured out what it does. Scientists, for example, have learned a lot about brain function by studying people who have localized brain damage. But the precuneus, a region in the cerebral cortex along the brain’s midline, isn’t usually damaged by strokes or head injuries, so scientists weren’t sure what the region did. Modern brain-imaging techniques that track blood flow and cell activity indicate the precuneus is involved in imagination, self-consciousness and reflecting on memories. |© Society for Science & the Public 2000 - 2018

Keyword: Brain imaging
Link ID: 24502 - Posted: 01.09.2018

by Emilie Reas Functional MRI (fMRI) is one of the most celebrated tools in neuroscience. Because of its unique ability to peer directly into the living brain while an organism thinks, feels and behaves, fMRI studies often receive disproportionate media attention, replete with flashy headlines and often grandiose claims. However, the technique has come under a fair amount of criticism from researchers questioning the validity of the statistical methods used to analyze fMRI data, and hence the reliability of fMRI findings. Can we trust those flashy headlines claiming that “scientists have discovered the area of the brain,” or are the masses of fMRI studies plagued by statistical shortcomings? To explore why these studies can be vulnerable to experimental failure, in their new PLOS One study coauthors Henk Cremers, Tor Wager and Tal Yarkoni investigated common statistical issues encountered in typical fMRI studies, and proposed how to avert them moving forward. The reliability of any experiment depends on adequate power to detect real effects and reject spurious ones, which can be influenced by various factors including the sample size (or number of “subjects” in fMRI), how strong the real effect is (“effect size”), whether comparisons are within or between subjects, and the statistical threshold used. To characterize common statistical culprits of fMRI studies, Cremers and colleagues first simulated typical fMRI scenarios before validating these simulations on a real dataset. One scenario simulated weak but diffusely distributed brain activity, and the other simulated strong but localized brain activity (Figure 1). The simulation revealed that effect sizes are generally inflated for weak diffuse activations, compared to strong localized ones, especially when the sample size is small. In contrast, effect sizes can actually be underestimated for strong localized scenarios when the sample size is large. Thus, more isn't always better when it comes to fMRI; the optimal sample size likely depends on the specific brain-behavior relationship under investigation.

Keyword: Brain imaging
Link ID: 24501 - Posted: 01.09.2018

By Meredith Wadman For the first time, scientists have produced evidence in living humans that the protein tau, which mars the brain in Alzheimer’s disease, spreads from neuron to neuron. Although such movement wasn’t directly observed, the finding may illuminate how neurodegeneration occurs in the devastating illness, and it could provide new ideas for stemming the brain damage that robs so many of memory and cognition. Tau is one of two proteins—along with β-amyloid—that form unusual clumps in the brains of people with Alzheimer’s disease. Scientists have long debated which is most important to the condition and, thus, the best target for intervention. Tau deposits are found inside neurons, where they are thought to inhibit or kill them, whereas β-amyloid forms plaques outside brain cells. Researchers at the University of Cambridge in the United Kingdom combined two brain imaging techniques, functional magnetic resonance imaging and positron emission tomography (PET) scanning, in 17 Alzheimer’s patients to map both the buildup of tau and their brains’ functional connectivity—that is, how spatially separated brain regions communicate with each other. Strikingly, they found the largest concentrations of the damaging tau protein in brain regions heavily wired to others, suggesting that tau may spread in a way analogous to influenza during an epidemic, when people with the most social contacts will be at greatest risk of catching the disease. © 2018 American Association for the Advancement of Science.
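The epidemic analogy, pathology accumulating in the most-connected regions, can be sketched as diffusion on a toy connectivity graph. The five-region network below is entirely made up and is not the study's model; it just shows why a hub region ends up carrying the largest burden.

```python
import numpy as np

# Toy diffusion on a made-up 5-region connectivity graph (not the study's
# model): pathology seeded in one region spreads along connections, and
# the hub (region 2, wired to all others) accumulates the most.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

tau = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # seed all pathology in region 0
rate = 0.1                                  # fraction transferred per step
row_norm = A / A.sum(axis=1, keepdims=True)
for _ in range(100):
    outflow = rate * tau[:, None] * row_norm   # tau flowing along each edge
    tau = tau - outflow.sum(axis=1) + outflow.sum(axis=0)

print("tau burden per region:", np.round(tau, 3))  # hub (region 2) is highest
```

At equilibrium this kind of conserved random-walk process distributes load in proportion to each region's number of connections, which is the graph-theoretic version of "people with the most social contacts are at greatest risk."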

Keyword: Alzheimers; Brain imaging
Link ID: 24496 - Posted: 01.06.2018

By Mark R. Hutchinson When someone is asked to think about pain, he or she will typically envision a graphic wound or a limb bent at an unnatural angle. However, chronic pain, more technically known as persistent pain, is a different beast altogether. In fact, some would say that the only thing that acute and persistent pain have in common is the word “pain.” The biological mechanisms that create and sustain the two conditions are very different. Pain is typically thought of as the behavioral and emotional results of the transmission of a neuronal signal, and indeed, acute pain, or nociception, results from the activation of peripheral neurons and the transmission of this signal along a connected series of so-called somatosensory neurons up the spinal cord and into the brain. But persistent pain, which is characterized by the overactivation of such pain pathways to cause chronic burning, deep aching, and skin-crawling and electric shock–like sensations, commonly involves another cell type altogether: glia.1 Long considered to be little more than cellular glue holding the brain together, glia, which outnumber neurons 10 to 1, are now appreciated as critical contributors to the health of the central nervous system, with recognized roles in the formation of synapses, neuronal plasticity, and protection against neurodegeneration. And over the past 15 to 20 years, pain researchers have also begun to appreciate the importance of these cells. Research has demonstrated that glia seem to respond and adapt to the cumulative danger signals that can result from disparate kinds of injury and illness, and that they appear to prime neural pathways for the overactivation that causes persistent pain. In fact, glial biology may hold important clues to some of the mysteries that have perplexed the pain research field, such as why the prevalence of persistent pain differs between the sexes and why some analgesic medications fail to work. © 1986-2018 The Scientist

Keyword: Pain & Touch; Glia
Link ID: 24482 - Posted: 01.03.2018