Chapter 2. Cells and Structures: The Anatomy of the Nervous System



By Jessica Wright
A laboratory mouse has a modest home: a small, smelly cage lined with soft bedding, which it shares with up to four other animals. But it is home nonetheless—a place of comfort. That is, until the massive hand of a researcher reaches in to pluck it out for an experiment. The experiment might gauge whether a mouse feels anxious or social, or tap the activity in its brain. But does the intrusion of the researcher’s hand influence the very behavior under study? Yes, says Timothy Murphy, professor of cellular and physiological sciences at the University of British Columbia in Vancouver, Canada. Murphy’s team has developed a high-tech cage that allows a mouse to go about its business uninterrupted^1. The cage records the mouse’s every move. Whenever the animal is thirsty, it enters a corridor, attaches its head to an apparatus, and takes a drink while a microscope takes a picture of its brain activity. Murphy and his colleagues have used the cages to measure synchrony between mouse brain regions. In one experiment, the researchers captured more than 7,000 snapshots of brain activity in less than two months—all of them after the mice voluntarily ‘posed’ for the camera. We asked Murphy how he trains mice to participate, and how this approach could help autism research. © 2016 Scientific American

Keyword: Brain imaging
Link ID: 22889 - Posted: 11.19.2016

By Jef Akst
The deeper scientists probe into the complexity of the human brain, the more questions seem to arise. One of the most fundamental questions is how many different types of brain cells there are, and how to categorize individual cell types. That dilemma was discussed during a session yesterday (November 11) at the ongoing Society for Neuroscience (SfN) conference in San Diego, California. As Evan Macosko of the Broad Institute said, the human brain comprises billions of brain cells—about 170 billion, according to one recent estimate—and there is a “tremendous amount of diversity in their function.” Now, new tools are supporting the study of single-cell transcriptomes, and the number of brain cell subtypes is skyrocketing. “We saw even greater degrees of heterogeneity in these cell populations than had been appreciated before,” Macosko said of his own single-cell interrogations of the mouse brain. He and others continue to characterize more brain regions, clustering cell types based on differences in gene expression, and then creating subclusters to look for diversity within each cell population. Following Macosko’s talk, Bosiljka Tasic of the Allen Institute for Brain Science emphasized that categorizing cell types into subgroups based on gene expression is not enough. Researchers will need to combine such data with traditional metrics, such as morphology and electrophysiology, to “ultimately come up with an integrative taxonomy of cell types,” Tasic said. “Multimodal data acquisition—it’s a big deal and I think it’s going to be a big focus of our future endeavors.” © 1986-2016 The Scientist

Keyword: Brain imaging
Link ID: 22886 - Posted: 11.19.2016

Amber Dance
In a study published in Science in September, Rosa Cossart, a neurobiologist at the Institute of Neurobiology of the Mediterranean in Marseilles, France, opened up mouse brains to visualize their neural activity as the animals raced on treadmills and rested. As the mice ran, some 50 neurons in their hippocampi fired in sequence, possibly to help the animals measure the distance travelled. Later, when the mice were resting, certain subsets of those neurons turned on again. This reactivation, Cossart suspects, has to do with encoding and retrieving memory — as if the mouse is recalling its earlier exercise. “The power of imaging is really to be able to see the cells, to see not only the active ones but also the silent ones and to map them on the anatomical structure of the brain,” she says. It has not yet provided proof for Cossart's hypothesis, but the microscope and neural-activity markers behind the techniques represent the very latest in methods to study brain connectivity. In the past, researchers studied just a few neurons at a time using electrodes implanted into the brain. But that gives a fairly crude picture of what is going on, like looking at a monitor with just a couple of functioning pixels, says Rafael Yuste, director of the NeuroTechnology Center at Columbia University in New York City. But new techniques are fleshing out the picture. Scientists can now watch neurons live and in colour, helping them to work out which cells work together. Methods such as Cossart's zoom in at the microscopic scale to catch individual neurons in the act; others provide a whole-brain, or mesoscopic, view. And although it is possible to perform these experiments with an off-the-shelf microscope, scientists have been customizing them to suit their specific purposes; these devices are in various stages of commercialization. © 2016 Macmillan Publishers Limited

Keyword: Brain imaging
Link ID: 22861 - Posted: 11.12.2016

By Alison F. Takemura
In the 1980s, neuroscientists were facing an imaging problem. They had developed a new way to detect neuronal activity with calcium dyes, but visualizing the markers proved challenging. The dyes fluoresced in the presence of calcium ions when illuminated with ultraviolet (UV) light, but it was difficult to build UV lenses for confocal microscopes—instruments that allowed scientists to peer hundreds of micrometers deep into the brain. To make matters worse, because biological tissue scatters light so effectively, confocal scopes required excessive light intensities, which caused irreparable damage to samples. “You basically burned your tissue,” says Winfried Denk, director of the Max Planck Institute of Neurobiology in Martinsried, Germany. The time was ripe for a gentler option, and Denk developed two-photon excitation microscopy in 1990. Instead of using a single photon to excite a calcium dye, scientists could use two photons and half the illumination energy—red or infrared lasers, instead of ultraviolet. The scatter of such low-energy rays caused far less damage to surrounding tissue. The technology had another advantage. To excite a molecule, both photons had to reach it simultaneously. This meant the laser could only excite a tiny patch of tissue where its photons were most concentrated, giving scientists a new level of precision. © 1986-2016 The Scientist
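The arithmetic behind Denk's trick is simple photon-energy bookkeeping: a photon's energy is inversely proportional to its wavelength, so two photons at twice the wavelength jointly deliver the same energy as one short-wavelength photon. A minimal sketch (the 450 nm and 900 nm wavelengths are illustrative round numbers, not a specific dye's actual excitation peaks):

```python
# Why two low-energy photons can substitute for one high-energy photon
# in two-photon excitation. Photon energy E = h * c / wavelength.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon at the given wavelength."""
    return H * C / (wavelength_nm * 1e-9)

uv = photon_energy_joules(450)   # one short-wavelength photon
ir = photon_energy_joules(900)   # one near-infrared photon

# Two 900 nm photons absorbed together deliver the energy of one 450 nm photon.
print(2 * ir / uv)  # → 1.0
```

Because each infrared photon carries half the energy, scattered stray light lacks the energy to excite the dye, which is why the surrounding tissue is spared.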

Keyword: Brain imaging
Link ID: 22852 - Posted: 11.10.2016

Sara Reardon
Major brain-mapping projects have multiplied in recent years, as neuroscientists develop new technologies to decipher how the brain works. These initiatives focus on understanding the brain, but the World Health Organization (WHO) wants to ensure that they work to translate their early discoveries and technological advances into tests and treatments for brain disorders. “We think there are side branches from projects that could be pursued with a very small investment to benefit public health,” says Shekhar Saxena, director of the WHO’s mental-health and substance-abuse department. Saxena will make that case on 12 November at the annual meeting of the Society for Neuroscience in San Diego, California — continuing a discussion that began in July at the WHO’s headquarters in Geneva, Switzerland. Among the roughly 70 people who attended that first meeting were leaders of the major brain initiatives, including the US BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, launched in 2013; the European Human Brain Project, started in 2013; and the Japanese Brain/MINDS project, launched in 2014. All of these projects focus on basic research on the brain or the development of sophisticated tools to study it. Clinical applications are an ultimate, rather than an immediate, goal. But at the Geneva meeting, project leaders agreed, in principle, that they should do more to adapt brain-imaging technologies for use in clinical diagnoses. “The WHO is concerned that the emphasis on building these very expensive devices could worsen the health disparities that we have now between the developed and underdeveloped world,” says Walter Koroshetz, director of the US National Institute of Neurological Disorders and Stroke, which is part of the BRAIN Initiative. © 2016 Macmillan Publishers Limited

Keyword: Brain imaging
Link ID: 22846 - Posted: 11.09.2016

By LISA SANDERS, M.D.
Yesterday we challenged Well readers to take on the case of a 63-year-old artist who, over the course of several months, developed excruciating headaches, along with changes in his personality, his thinking, even in the way he painted. We provided you with some of the doctor’s notes and medical imaging results that pointed the doctor who finally made the diagnosis in the right direction. After an extensive evaluation, that doctor asked a single question that led him to make the diagnosis. We asked Well readers to figure out the question the doctor asked and the diagnosis it suggested. It must have been a tough case — or else you were all too worried about the coming election to rise to the challenge — because we got just over 200 responses, fewer than usual. Of those, only six of you figured out the right diagnosis, and only three of you got the question right as well. Despite that, I was very impressed by the thinking of even those who didn’t come up with the right diagnosis. Many of you thought about environmental factors like his recent retirement and his exposure to possible toxins from his painting, and that kind of thinking was, in my opinion, the very essence of thinking like a doctor. Strong work, all of you. The question the doctor asked that led him to the correct diagnosis was: Can you hear your heartbeat in your ears? The patient could. And that suggested the diagnosis: A dural-arteriovenous fistula, or DAVF. © 2016 The New York Times Company

Keyword: Pain & Touch
Link ID: 22839 - Posted: 11.07.2016

By Alison F. Takemura
In the mid-1980s, György Buzsáki was trying to get inside rats’ heads. Working at the University of California, San Diego, he would anesthetize each animal with ether and hypothermia, cut through its scalp, and drill holes in its skull. Carefully, he’d screw 16 gold-plated stainless steel electrodes into the rat’s brain. When he was done with the surgery, these tiny pieces of metal—just 0.5 mm in diameter—allowed him to measure voltage changes from individual neurons deep in the brain’s folds, all while the rodent was awake and moving around. He could listen to the cells fire action potentials as the animal explored its environment, learning and remembering what it encountered (J Neurosci, 8:4007-26, 1988). In those days, recording from two cells simultaneously was the norm. The 16-site recording in Buzsáki’s 1988 study “was the largest ever in a rat,” he says. Nowadays, scientists can measure voltage changes from 1,000 neurons at the same time with silicon multielectrode arrays. But the basic techniques of using a probe to measure electrical activity within the brain (electrophysiology) or from outside it (electroencephalography, or EEG) are still workhorses of neural imaging labs. “The new tools don’t replace the old ones,” says Jessica Cardin, a neuroscientist at the Yale School of Medicine. “They add new layers of information.” Another decades-old neuroscientific technique that remains popular today is patch clamping. Developed in the late 1970s and early 1980s, it can detect changes in the electric potential of individual cells, or even single ion channels. With a tiny glass pipette suctioned against the cell’s membrane, researchers can make a small tear, sealed by the pipette tip, and detect voltage changes inside the cell. With some improvements, the patch clamp, like electrophysiology and EEG, has remained a regular part of the neuroscientist’s tool kit.
Recently, researchers had a robot carry out the process (Nat Methods, 9:585-87, 2012). © 1986-2016 The Scientist

Keyword: Brain imaging
Link ID: 22783 - Posted: 10.25.2016

Robin McKie
New visions of the brain and body’s detailed operations will be unveiled by a suite of medical scanners being opened this week. The newly refurbished Wolfson Brain Imaging Centre at the University of Cambridge has been equipped with some of the world’s most powerful magnetic resonance imaging (MRI) and positron emission tomography (PET) scanners and will give its researchers unprecedented power to make images of cancers, study the precise makeup of the cortex and analyse how chemicals in the brain – known as neurotransmitters – underpin the development of schizophrenia and depression. “It is a remarkable set of machines,” says Professor Ed Bullmore, head of neuroscience at Cambridge University. “We will be able to address clinical issues such as the detailed progression of Parkinson’s disease. At the same time, we will be able to address basic issues about the mind. How does the brain develop? How does the adult brain perform its functions?” At the heart of the refurbished centre – funded by the Medical Research Council, Wellcome Trust and Cancer Research UK – are three groundbreaking devices. Only a handful of these exist at institutions outside Cambridge and no institution – other than Cambridge – has all three. “The devices we have assembled are primarily for studying humans and will have a strong research focus,” Bullmore says. A key example is provided by the 7T MRI scanner. Current devices have magnetic fields with strengths of around 3T (tesla) and can see structures 2-3 mm in size. By contrast, the new Cambridge scanner with its 7T field will have a resolution of around 0.5 mm. © 2016 Guardian News and Media Limited

Keyword: Brain imaging
Link ID: 22782 - Posted: 10.24.2016

Sara Reardon
Two heads are better than one: an idea that a new global brain initiative hopes to take advantage of. In recent years, brain-mapping initiatives have been popping up around the world. They have different goals and areas of expertise, but now researchers will attempt to apply their collective knowledge in a global push to more fully understand the brain. Thomas Shannon, US Under Secretary of State, announced the launch of the International Brain Initiative on 19 September at a meeting that accompanied the United Nations’ General Assembly in New York City. Details — including which US agency will spearhead the programme and who will pay for it — are still up in the air. However, researchers held a separate, but concurrent, meeting hosted by the US National Science Foundation at Rockefeller University to discuss which aspects of the programmes already in existence could be aligned under the global initiative. The reaction mixed concern that attempting to align projects could siphon money and attention from existing initiatives in other countries with anticipation of the possibilities for advancing our knowledge about the brain. “I thought the most exciting moment in my scientific career was when the president announced the BRAIN Initiative in 2013,” says Cori Bargmann, a neuroscientist at the Rockefeller University in New York City and one of the main architects of the US Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. “But this was better.” © 2016 Macmillan Publishers Limited

Keyword: Brain imaging
Link ID: 22680 - Posted: 09.22.2016

By Meredith Wadman
While the United Nations General Assembly prepared for its sometimes divisive annual general debate on Monday, a less official United Nations of Brain Projects met nearby in a display of international amity and unbounded enthusiasm for the idea that transnational cooperation can, must, and will, at last, explain the brain. The tribe of some 400 neuroscientists, computational biologists, physicists, physicians, ethicists, government science counselors, and private funders convened at The Rockefeller University on Manhattan’s Upper East Side in New York City. The Coordinating Global Brain Projects gathering was mandated by the U.S. Congress in a 2015 law funding the U.S. Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The meeting aimed to synchronize the explosion of big, ambitious neuroscience efforts being launched from Europe to China. Nearly 50 speakers from more than a dozen countries explained how their nations are plumbing brain science; all seemed eager to be part of the as-yet unmapped coordination that they hope will lead to a mellifluous symphony rather than a cacophony of competing chords. “We are really seeing international cooperation at a level that we have not seen before,” said Rockefeller’s Cori Bargmann, a neurobiologist who with Rafael Yuste of Columbia University convened the meeting with the backing of the universities, the National Science Foundation (NSF), and the Kavli Foundation, a private funder of neuroscience and nanoscience. Bargmann and Yuste have been integral to planning the BRAIN Initiative launched by President Barack Obama in the spring of 2013, which, along with the European Human Brain Project, started the new push for large-scale neuroscience initiatives. “This could be historic,” Yuste said.
“I could imagine out of this meeting that groups of people could get together and start international collaborations the way the astronomers and the physicists have been doing for decades.” © 2016 American Association for the Advancement of Science

Keyword: Brain imaging
Link ID: 22678 - Posted: 09.21.2016

By Rajeev Raizada
[Figure caption: These brain maps show how accurately it was possible to predict neural activation patterns for new, previously unseen sentences, in different regions of the brain. The brighter the area, the higher the accuracy. The most accurate area, which can be seen as the bright yellow strip, is a region in the left side of the brain known as the Superior Temporal Sulcus. This region achieved statistically significant sentence predictions in 11 out of the 14 people whose brains were scanned. Although that was the most accurate region, several other regions, broadly distributed across the brain, also produced significantly accurate sentence predictions. Credit: University of Rochester graphic / Andrew Anderson and Xixi Wang. Used with permission.]
Words, like people, can achieve a lot more when they work together than when they stand on their own. Words working together make sentences, and sentences can express meanings that are unboundedly rich. How the human brain represents the meanings of sentences has been an unsolved problem in neuroscience, but my colleagues and I recently published work in the journal Cerebral Cortex that casts some light on the question. Here, my aim is to give a bigger-picture overview of what that work was about, and what it told us that we did not know before. To measure people's brain activation, we used fMRI (functional Magnetic Resonance Imaging). When fMRI studies were first carried out, in the early 1990s, they mostly just asked which parts of the brain "light up,” i.e. which brain areas are active when people perform a given task. © 2016 Scientific American
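The general approach alluded to here is often called an encoding model: learn a linear map from sentence features to voxel activation patterns on training sentences, then predict the pattern for a held-out sentence and score the prediction. The sketch below is a hypothetical illustration of that idea only; the paper's actual feature set, regression method, and scoring procedure differ, and all numbers and arrays here are random stand-ins.

```python
# Toy encoding model: linear map from sentence features to voxel patterns,
# evaluated on a previously unseen sentence. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 40, 10, 50

X_train = rng.normal(size=(n_train, n_features))   # sentence feature vectors
W_true = rng.normal(size=(n_features, n_voxels))   # unknown feature-to-voxel map
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit the linear map with ordinary least squares.
W_hat, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# Predict the activation pattern for a new, held-out sentence.
x_new = rng.normal(size=n_features)
y_pred = x_new @ W_hat
y_true = x_new @ W_true

# Score: correlation between predicted and actual voxel patterns.
print(np.corrcoef(y_pred, y_true)[0, 1])  # close to 1 when noise is low
```

With real fMRI data the noise is far larger and the fit is regularized, but the logic of "predict an unseen sentence's pattern, then check the match" is the same.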

Keyword: Language; Brain imaging
Link ID: 22676 - Posted: 09.21.2016

By Catherine Caruso
Most of us think little of hopping on Google Maps to look at everything from a bird’s-eye view of an entire continent to an on-the-ground view of a specific street, all carefully labeled. Thanks to a digital atlas published this week, the same is now possible with the human brain. Ed Lein and colleagues at the Allen Institute for Brain Science in Seattle have created a comprehensive, open-access digital atlas of the human brain, which was published this week in The Journal of Comparative Neurology. “Essentially what we were trying to do is to create a new reference standard for a very fine anatomical structural map of the complete human brain,” says Lein, the principal investigator on the project. “It may seem a little bit odd, but actually we are a bit lacking in types of basic reference materials for mapping the human brain that we have in other organisms like mouse or like monkey, and that is in large part because of the enormous size and complexity of the human brain.” The project, which spanned five years, focused on a single healthy postmortem brain from a 34-year-old woman. The researchers started with the big picture: They did a complete scan of the brain using two imaging techniques (magnetic resonance imaging and diffusion weighted imaging), which allowed them to capture both overall brain structure and the connectivity of brain fibers. Next the researchers took the brain and sliced it into 2,716 very thin sections for fine-scale, cellular analysis. They stained a portion of the sections with a traditional Nissl stain to gather information about general cell architecture. They then used two other stains to selectively label certain aspects of the brain, including structural elements of cells, fibers in the white matter, and specific types of neurons. © 2016 Scientific American

Keyword: Brain imaging; Development of the Brain
Link ID: 22663 - Posted: 09.17.2016

Laurel Hamers
The brains of human ancestors didn’t just grow bigger over evolutionary time. They also amped up their metabolism, demanding more energy for a given volume, a new study suggests. Those increased energy demands might reflect changes in brain structure and organization as cognitive abilities increased, says physiologist Roger Seymour of the University of Adelaide in Australia, a coauthor of the report, published online August 31 in Royal Society Open Science. Blood vessels passing through bones leave behind holes in skulls; bigger holes correspond to bigger blood vessels. And since larger vessels carry more blood, scientists can use hole size to estimate blood flow in extinct hominids’ brains. Blood flow in turn indicates how much energy the brain consumed. (In modern humans, the brain eats up 20 to 25 percent of the energy the body generates when at rest.) Seymour and colleagues focused on the carotid arteries, the vessels that deliver the bulk of the brain’s blood. The team looked at nearly three dozen skulls from 12 hominid species from the last 3 million years, including Australopithecus africanus, Homo neanderthalensis and Homo erectus. In each, the researchers compared the brain’s overall volume with the diameter of the carotid artery’s tiny entrance hole at the base of the skull. “We expected to find that the rate of blood flow was proportional to the brain size,” Seymour says. “But we found that wasn’t the case.” Instead, bigger brains required more blood flow per unit volume than smaller brains. © Society for Science & the Public 2000 - 2016.
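The chain of inference runs from hole diameter to vessel radius to flow rate to energy use. A hypothetical sketch of that chain, assuming the common "cube law" that flow scales with vessel radius cubed (an assumption for illustration; the study's actual calibration of flow against radius differs):

```python
# Relative brain blood flow estimated from carotid-canal radius.
# SHEAR_EXPONENT = 3 is the constant-shear-stress ("cube law") assumption,
# used here purely for illustration.
SHEAR_EXPONENT = 3.0

def relative_flow(radius_mm, exponent=SHEAR_EXPONENT):
    """Blood flow in arbitrary units, proportional to radius**exponent."""
    return radius_mm ** exponent

# Two hypothetical skulls: one foramen 20% wider than the other.
small, large = relative_flow(1.0), relative_flow(1.2)
print(large / small)  # a 20% wider hole implies roughly 73% more flow

# The quantity Seymour's team compared across species: flow per unit
# brain volume, which turned out to rise with brain size.
def flow_per_volume(radius_mm, brain_volume_cm3):
    return relative_flow(radius_mm) / brain_volume_cm3
```

The nonlinearity is the point: because flow grows much faster than radius, even small differences in fossil hole size imply large differences in the brain's energy budget.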

Keyword: Evolution
Link ID: 22616 - Posted: 08.31.2016

By KATE MURPHY
We’ve all seen them, those colorful images that show how our brains “light up” when we’re in love, playing a video game, craving chocolate, etc. Created using functional magnetic resonance imaging, or fM.R.I., these pictures are the basis of tens of thousands of scientific papers, the backdrop to TED talks and supporting evidence in best-selling books that tell us how to maintain healthy relationships, make decisions, market products and lose weight. But a study published last month in the Proceedings of the National Academy of Sciences uncovered flaws in the software researchers rely on to analyze fM.R.I. data. The glitch can cause false positives — suggesting brain activity where there is none — up to 70 percent of the time. This cued a chorus of “I told you so!” from critics who have long said fM.R.I. is nothing more than high-tech phrenology. Brain-imaging researchers protested that the software problems were not as bad or as widespread as the study suggested. The dust-up has caused considerable angst in the fM.R.I. community, about not only the reliability of their pretty pictures but also how limited funding and the pressure to publish splashy results might have allowed such a mistake to go unnoticed for so long. The remedial measures some in the field are now proposing could be a model for the wider scientific community, which, despite breathtaking technological advances, often produces findings that don’t hold up over time. “We have entered an era where the kinds of data and the analyses that people run have gotten incredibly complicated,” said Martin Sereno, the chairman of the cognitive neuroimaging department at the University of California, San Diego. “So you have researchers using sophisticated software programs that they probably don’t understand but are generally accepted and everyone uses.” © 2016 The New York Times Company
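One root of the false-positive problem is mass-univariate testing: an fMRI analysis runs a separate statistical test at every voxel, tens of thousands of them, so even pure noise produces "activations" at an uncorrected threshold. The toy simulation below illustrates only that basic multiple-comparisons effect, not the subtler flaw in cluster-wise inference that the PNAS study actually identified; the voxel and scan counts are arbitrary.

```python
# Pure-noise "voxels" tested one at a time: how many pass p < 0.05?
import random
import statistics

random.seed(0)
N_VOXELS = 2000
N_SCANS = 20
T_CRIT = 2.093   # two-sided t critical value, alpha = 0.05, df = 19

false_positives = 0
for _ in range(N_VOXELS):
    noise = [random.gauss(0.0, 1.0) for _ in range(N_SCANS)]
    mean = statistics.mean(noise)
    sem = statistics.stdev(noise) / (N_SCANS ** 0.5)
    if abs(mean / sem) > T_CRIT:   # one-sample t-test against zero
        false_positives += 1

# Roughly 5% of 2000 noise-only voxels pass: about 100 spurious
# "activations" with no signal anywhere in the data.
print(false_positives)
```

Standard corrections (Bonferroni, false discovery rate, or the cluster methods the study audited) exist precisely to tame this, which is why a bug in how a correction is implemented matters so much.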

Keyword: Brain imaging
Link ID: 22607 - Posted: 08.29.2016

By Sara Chodosh
When a single neuron fires, it is an isolated chemical blip. When many fire together, they form a thought. How the brain bridges the gap between these two tiers of neural activity remains a great mystery, but a new kind of technology is edging us closer to solving it. The glowing splash of cyan in the photo above comes from a type of biosensor that can detect the release of very small amounts of neurotransmitters, the signaling molecules that brain cells use to communicate. These sensors, called CNiFERs (pronounced “sniffers”), for cell-based neurotransmitter fluorescent engineered reporters, are enabling scientists to examine the brain in action and up close. This newfound ability, developed as part of the White House BRAIN Initiative, could further our understanding of how brain function arises from the complex interplay of individual neurons, including how complex behaviors like addiction develop. Neuroscientist Paul Slesinger at Icahn School of Medicine at Mount Sinai, one of the senior researchers who spearheaded this research, presented the sensors Monday at the American Chemical Society’s 252nd National Meeting & Exposition. Current technologies have proved either too broad or too specific to track how tiny amounts of neurotransmitters in and around many cells might contribute to the transmission of a thought. Scientists have used functional magnetic resonance imaging to look at blood flow as a surrogate for brain activity over fairly long periods of time or have employed tracers to follow the release of a particular neurotransmitter from a small set of neurons for a few seconds. But CNiFERs make for a happy medium; they allow researchers to monitor multiple neurotransmitters in many cells over significant periods of time. © 2016 Scientific American

Keyword: Brain imaging
Link ID: 22600 - Posted: 08.25.2016

Laura Sanders
Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment. Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers. Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works. “The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley. A barrel-hurling gorilla is the enemy in Donkey Kong, a video game powered by the MOS 6502 microprocessor. Along with Space Invaders and Pitfall, this game served as the “behavior” in a recent experiment. Using a simulation of MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods. By the end of their experiments, Jonas and Kording had discovered almost nothing. © Society for Science & the Public 2000 - 2016

Keyword: Brain imaging
Link ID: 22597 - Posted: 08.24.2016

By NICHOLAS ST. FLEUR
Neuroscientists have developed a way to turn an entire mouse, including its muscles and internal organs, transparent while illuminating the nerve paths that run throughout its body. The process, called uDisco, provides an alternate way for researchers to study an organism’s nervous system without having to slice into sections of its organs or tissues. It allows researchers to use a microscope to trace neurons from the rodent’s brain and spinal cord all the way to its fingers and toes. “When I saw images on the microscope that my students were obtaining, I was like ‘Wow, this is mind blowing,’” said Ali Ertürk, a neuroscientist from the Ludwig Maximilians University of Munich in Germany and an author of the paper. “We can map the neural connectivity in the whole mouse in 3D.” They published their technique Monday in the journal Nature Methods. So far, the technique has been conducted only in mice and rats, but the scientists think it could one day be used to map the human brain. They also said it could be particularly useful for studying the effects of mental disorders like Alzheimer’s disease or schizophrenia. Dr. Ertürk and his colleagues study neurodegenerative disorders, and are particularly interested in diseases that occur from traumatic brain injuries. Researchers often study these diseases by examining thin slices of brain tissue under a microscope. “That is not a good way to study neurons because if you slice the brain, you slice the network,” Dr. Ertürk said. “The best way to look at it is to look at the entire organism, not only the brain lesion but beyond that. We need to see the whole picture.” To do this, Dr. Ertürk and his team developed a two-step process that renders a rodent transparent while keeping its internal organs structurally sound. The mice they used were dead and had been tagged with a special fluorescent protein to make specific parts of their anatomy glow. © 2016 The New York Times Company

Keyword: Brain imaging
Link ID: 22590 - Posted: 08.23.2016

Sara Reardon
Neuroscientists have invented a way to watch the ebb and flow of the brain's chemical messengers in real time. They were able to see the surge of neurotransmitters as mice were conditioned — similarly to Pavlov's famous dogs — to salivate in response to a sound. The study, presented at the American Chemical Society’s meeting in Philadelphia, Pennsylvania, on 22 August, uses a technique that could help to disentangle the complex language of neurotransmitters. Ultimately, it could lead to a better understanding of brain circuitry. The brain’s electrical surges are easy to track. But detecting the chemicals that drive this activity — the neurotransmitters that travel between brain cells and lead them to fire — is much harder. “There’s a hidden signalling network in the brain, and we need tools to uncover it,” says Michael Strano, a chemical engineer at the Massachusetts Institute of Technology in Cambridge. In many parts of the brain, neurotransmitters can exist at undetectably low levels. Typically, researchers monitor them by sucking fluid out from between neurons and analysing the contents in the lab. But that technique cannot measure activity in real time. Another option is to insert a metal probe into the space between neurons to measure how neurotransmitters react chemically when they touch metal. But the probe is unable to distinguish between structurally similar molecules, such as dopamine, which is involved in pleasure and reward, and noradrenaline, which is involved in alertness. © 2016 Macmillan Publishers Limited

Keyword: Brain imaging; Drug Abuse
Link ID: 22584 - Posted: 08.22.2016

Since nobody really knows how brains work, those researching them must often resort to analogies. A common one is that a brain is a sort of squishy, imprecise, biological version of a digital computer. But analogies work both ways, and computer scientists have a long history of trying to improve their creations by taking ideas from biology. The trendy and rapidly developing branch of artificial intelligence known as “deep learning”, for instance, takes much of its inspiration from the way biological brains are put together. The general idea of building computers to resemble brains is called neuromorphic computing, a term coined by Carver Mead, a pioneering computer scientist, in the late 1980s. There are many attractions. Brains may be slow and error-prone, but they are also robust, adaptable and frugal. They excel at processing the sort of noisy, uncertain data that are common in the real world but which tend to give conventional electronic computers, with their prescriptive arithmetical approach, indigestion. The latest development in this area came on August 3rd, when a group of researchers led by Evangelos Eleftheriou at IBM’s research laboratory in Zurich announced, in a paper published in Nature Nanotechnology, that they had built a working, artificial version of a neuron. Neurons are the spindly, highly interconnected cells that do most of the heavy lifting in real brains. The idea of making artificial versions of them is not new. Dr Mead himself has experimented with using specially tuned transistors, the tiny electronic switches that form the basis of computers, to mimic some of their behaviour. © The Economist Newspaper Limited 2016.
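The behaviour such artificial neurons mimic is usually abstracted as "leaky integrate-and-fire": a membrane potential that decays toward rest, accumulates incoming current, and emits a spike (then resets) when it crosses a threshold. A minimal software sketch of that abstraction follows; IBM's device realizes the integration in phase-change material rather than code, and the leak and threshold values here are arbitrary.

```python
# A minimal leaky integrate-and-fire neuron.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current   # integrate input, with leak
        if v >= threshold:       # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Constant weak drive makes the neuron fire periodically.
print(simulate_lif([0.3] * 12))  # → [3, 7, 11]
```

The appeal for computing is that such units are event-driven: they consume energy mainly when they spike, which is one source of the frugality the article mentions.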

Keyword: Robotics; Intelligence
Link ID: 22573 - Posted: 08.18.2016

Neuroscientists peered into the brains of patients with Parkinson’s disease and two similar conditions to see how their neural responses changed over time. The study, funded by the NIH’s Parkinson’s Disease Biomarkers Program and published in Neurology, may provide a new tool for testing experimental medications aimed at alleviating symptoms and slowing the rate at which the diseases damage the brain. “If you know that in Parkinson’s disease the activity in a specific brain region is decreasing over the course of a year, it opens the door to evaluating a therapeutic to see if it can slow that reduction,” said senior author David Vaillancourt, Ph.D., a professor in the University of Florida’s Department of Applied Physiology and Kinesiology. “It provides a marker for evaluating how treatments alter the chronic changes in brain physiology caused by Parkinson’s.” Parkinson’s disease is a neurodegenerative disorder that destroys neurons in the brain that are essential for controlling movement. While many medications exist that lessen the consequences of this neuronal loss, none can prevent the destruction of those cells. Clinical trials for Parkinson’s disease have long relied on observing whether a therapy improves patients’ symptoms, but such studies reveal little about how the treatment affects the underlying progressive neurodegeneration. As a result, while there are treatments that improve symptoms, they become less effective as the neurodegeneration advances. The new study could remedy this issue by providing researchers with measurable targets, called biomarkers, to assess whether a drug slows or even stops the progression of the disease in the brain. “For decades, the field has been searching for an effective biomarker for Parkinson’s disease,” said Debra Babcock, M.D., Ph.D., program director at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS).

Keyword: Parkinsons; Brain imaging
Link ID: 22562 - Posted: 08.16.2016