Chapter 2. Functional Neuroanatomy: The Nervous System and Behavior



Sara Reardon Two heads are better than one: an idea that a new global brain initiative hopes to take advantage of. In recent years, brain-mapping initiatives have been popping up around the world. They have different goals and areas of expertise, but now researchers will attempt to apply their collective knowledge in a global push to more fully understand the brain. Thomas Shannon, US Under Secretary of State, announced the launch of the International Brain Initiative on 19 September at a meeting that accompanied the United Nations’ General Assembly in New York City. Details — including which US agency will spearhead the programme and who will pay for it — are still up in the air. However, researchers held a separate, but concurrent, meeting hosted by the US National Science Foundation at Rockefeller University to discuss which aspects of the programmes already in existence could be aligned under the global initiative. The reaction mixed concern that attempting to align projects could siphon money and attention from existing initiatives in other countries with anticipation of the possibilities for advancing our knowledge about the brain. “I thought the most exciting moment in my scientific career was when the president announced the BRAIN Initiative in 2013,” says Cori Bargmann, a neuroscientist at the Rockefeller University in New York City and one of the main architects of the US Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. “But this was better.” © 2016 Macmillan Publishers Limited

Keyword: Brain imaging
Link ID: 22680 - Posted: 09.22.2016

By Meredith Wadman While the United Nations General Assembly prepared for its sometimes divisive annual general debate on Monday, a less official United Nations of Brain Projects met nearby in a display of international amity and unbounded enthusiasm for the idea that transnational cooperation can, must, and will, at last, explain the brain. The tribe of some 400 neuroscientists, computational biologists, physicists, physicians, ethicists, government science counselors, and private funders convened at The Rockefeller University on Manhattan’s Upper East Side in New York City. The Coordinating Global Brain Projects gathering was mandated by the U.S. Congress in a 2015 law funding the U.S. Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The meeting aimed to synchronize the explosion of big, ambitious neuroscience efforts being launched from Europe to China. Nearly 50 speakers from more than a dozen countries explained how their nations are plumbing brain science; all seemed eager to be part of the as-yet unmapped coordination that they hope will lead to a mellifluous symphony rather than a cacophony of competing chords. “We are really seeing international cooperation at a level that we have not seen before,” said Rockefeller’s Cori Bargmann, a neurobiologist who with Rafael Yuste of Columbia University convened the meeting with the backing of the universities, the National Science Foundation (NSF), and the Kavli Foundation, a private funder of neuroscience and nanoscience. Bargmann and Yuste have been integral to planning the BRAIN Initiative launched by President Barack Obama in the spring of 2013, which, along with the European Human Brain Project, started the new push for large-scale neuroscience initiatives. “This could be historic,” Yuste said. “I could imagine out of this meeting that groups of people could get together and start international collaborations the way the astronomers and the physicists have been doing for decades.” © 2016 American Association for the Advancement of Science

Keyword: Brain imaging
Link ID: 22678 - Posted: 09.21.2016

By Rajeev Raizada [Figure caption: These brain maps show how accurately it was possible to predict neural activation patterns for new, previously unseen sentences, in different regions of the brain. The brighter the area, the higher the accuracy. The most accurate area, which can be seen as the bright yellow strip, is a region in the left side of the brain known as the Superior Temporal Sulcus. This region achieved statistically significant sentence predictions in 11 out of the 14 people whose brains were scanned. Although that was the most accurate region, several other regions, broadly distributed across the brain, also produced significantly accurate sentence predictions. Credit: University of Rochester graphic / Andrew Anderson and Xixi Wang. Used with permission.] Words, like people, can achieve a lot more when they work together than when they stand on their own. Words working together make sentences, and sentences can express meanings that are unboundedly rich. How the human brain represents the meanings of sentences has been an unsolved problem in neuroscience, but my colleagues and I recently published work in the journal Cerebral Cortex that casts some light on the question. Here, my aim is to give a bigger-picture overview of what that work was about, and what it told us that we did not know before. To measure people's brain activation, we used fMRI (functional Magnetic Resonance Imaging). When fMRI studies were first carried out, in the early 1990s, they mostly just asked which parts of the brain “light up,” i.e. which brain areas are active when people perform a given task. © 2016 Scientific American
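The general strategy described here — learning a mapping from sentence features to voxel activation patterns, then testing it on held-out sentences — can be illustrated with a minimal regression sketch. Everything below is simulated and the feature construction, model, and matching test are simplifying assumptions for illustration, not the authors' published analysis pipeline.

```python
# Minimal sketch: predict voxel activation patterns for held-out sentences
# from sentence features, then score predictions with a matching test.
# All data are simulated; this is an illustrative assumption-laden toy,
# not the Cerebral Cortex study's actual model.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_sentences, n_features, n_voxels = 60, 20, 100

# Simulated sentence features (e.g., averaged word vectors) and a hidden
# linear mapping to voxel responses, plus measurement noise.
X = rng.standard_normal((n_sentences, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + 0.5 * rng.standard_normal((n_sentences, n_voxels))

correct = 0
for i in range(n_sentences):
    train = np.delete(np.arange(n_sentences), i)
    W, *_ = lstsq(X[train], Y[train], rcond=None)   # fit on all other sentences
    pred = X[i] @ W                                  # predict the held-out pattern
    # The prediction counts as correct if it correlates better with the true
    # held-out pattern than with a randomly chosen other sentence's pattern.
    other = rng.choice(train)
    r_true = np.corrcoef(pred, Y[i])[0, 1]
    r_other = np.corrcoef(pred, Y[other])[0, 1]
    correct += r_true > r_other

print(f"held-out matching accuracy: {correct / n_sentences:.2f} (chance = 0.50)")
```

If the mapping has captured something real, matching accuracy rises well above chance — the same logic behind reporting which brain regions support significant sentence predictions.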

Keyword: Language; Brain imaging
Link ID: 22676 - Posted: 09.21.2016

By Catherine Caruso Most of us think little of hopping on Google Maps to look at everything from a bird’s-eye view of an entire continent to an on-the-ground view of a specific street, all carefully labeled. Thanks to a digital atlas published this week, the same is now possible with the human brain. Ed Lein and colleagues at the Allen Institute for Brain Science in Seattle have created a comprehensive, open-access digital atlas of the human brain, which was published this week in The Journal of Comparative Neurology. “Essentially what we were trying to do is to create a new reference standard for a very fine anatomical structural map of the complete human brain,” says Lein, the principal investigator on the project. “It may seem a little bit odd, but actually we are a bit lacking in types of basic reference materials for mapping the human brain that we have in other organisms like mouse or like monkey, and that is in large part because of the enormous size and complexity of the human brain.” The project, which spanned five years, focused on a single healthy postmortem brain from a 34-year-old woman. The researchers started with the big picture: They did a complete scan of the brain using two imaging techniques (magnetic resonance imaging and diffusion weighted imaging), which allowed them to capture both overall brain structure and the connectivity of brain fibers. Next the researchers took the brain and sliced it into 2,716 very thin sections for fine-scale, cellular analysis. They stained a portion of the sections with a traditional Nissl stain to gather information about general cell architecture. They then used two other stains to selectively label certain aspects of the brain, including structural elements of cells, fibers in the white matter, and specific types of neurons. © 2016 Scientific American

Keyword: Brain imaging; Development of the Brain
Link ID: 22663 - Posted: 09.17.2016

Laurel Hamers The brains of human ancestors didn’t just grow bigger over evolutionary time. They also amped up their metabolism, demanding more energy for a given volume, a new study suggests. Those increased energy demands might reflect changes in brain structure and organization as cognitive abilities increased, says physiologist Roger Seymour of the University of Adelaide in Australia, a coauthor of the report, published online August 31 in Royal Society Open Science. Blood vessels passing through bones leave behind holes in skulls; bigger holes correspond to bigger blood vessels. And since larger vessels carry more blood, scientists can use hole size to estimate blood flow in extinct hominids’ brains. Blood flow in turn indicates how much energy the brain consumed. (In modern humans, the brain eats up 20 to 25 percent of the energy the body generates when at rest.) Seymour and colleagues focused on the carotid arteries, the vessels that deliver the bulk of the brain’s blood. The team looked at nearly three dozen skulls from 12 hominid species from the last 3 million years, including Australopithecus africanus, Homo neanderthalensis and Homo erectus. In each, the researchers compared the brain’s overall volume with the diameter of the carotid artery’s tiny entrance hole at the base of the skull. “We expected to find that the rate of blood flow was proportional to the brain size,” Seymour says. “But we found that wasn’t the case.” Instead, bigger brains required more blood flow per unit volume than smaller brains. © Society for Science & the Public 2000 - 2016.
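The inference chain here — canal radius to vessel size to blood flow to flow per unit brain volume — lends itself to a short worked example. The scaling exponent and the specimen values below are placeholder assumptions for illustration only; the calibrated allometric relationships come from the published paper.

```python
# Minimal sketch of the inference chain: carotid-canal radius -> relative
# blood flow -> flow per unit brain volume. The exponent and the example
# specimens are illustrative assumptions, not the paper's calibrated values.

def relative_blood_flow(radius_mm, exponent=3.0):
    """Blood flow assumed to scale as radius**exponent (placeholder exponent)."""
    return radius_mm ** exponent

def flow_per_volume(radius_mm, brain_volume_cc):
    """Flow demand normalized by brain volume, in arbitrary units."""
    return relative_blood_flow(radius_mm) / brain_volume_cc

# Hypothetical specimens: a small-brained australopith vs. a larger-brained Homo.
specimens = {
    "Australopithecus-like": {"radius_mm": 1.2, "brain_volume_cc": 450},
    "Homo-like":             {"radius_mm": 2.0, "brain_volume_cc": 1300},
}

for name, s in specimens.items():
    q = relative_blood_flow(s["radius_mm"])
    q_per_cc = flow_per_volume(s["radius_mm"], s["brain_volume_cc"])
    print(f"{name}: relative flow = {q:.2f}, flow per cc = {q_per_cc:.4f}")

# If the larger brain shows higher flow per cc, energy demand grew faster than
# brain volume alone would predict -- the qualitative pattern the study reports.
```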

Keyword: Evolution
Link ID: 22616 - Posted: 08.31.2016

By KATE MURPHY We’ve all seen them, those colorful images that show how our brains “light up” when we’re in love, playing a video game, craving chocolate, etc. Created using functional magnetic resonance imaging, or fM.R.I., these pictures are the basis of tens of thousands of scientific papers, the backdrop to TED talks and supporting evidence in best-selling books that tell us how to maintain healthy relationships, make decisions, market products and lose weight. But a study published last month in the Proceedings of the National Academy of Sciences uncovered flaws in the software researchers rely on to analyze fM.R.I. data. The glitch can cause false positives — suggesting brain activity where there is none — up to 70 percent of the time. This cued a chorus of “I told you so!” from critics who have long said fM.R.I. is nothing more than high-tech phrenology. Brain-imaging researchers protested that the software problems were not as bad nor as widespread as the study suggested. The dust-up has caused considerable angst in the fM.R.I. community, about not only the reliability of their pretty pictures but also how limited funding and the pressure to publish splashy results might have allowed such a mistake to go unnoticed for so long. The remedial measures some in the field are now proposing could be a model for the wider scientific community, which, despite breathtaking technological advances, often produces findings that don’t hold up over time. “We have entered an era where the kinds of data and the analyses that people run have gotten incredibly complicated,” said Martin Sereno, the chairman of the cognitive neuroimaging department at the University of California, San Diego. “So you have researchers using sophisticated software programs that they probably don’t understand but are generally accepted and everyone uses.” © 2016 The New York Times Company
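The underlying statistical worry — that testing many voxels at once produces apparent activation by chance unless the correction is done properly — is easy to demonstrate with simulated noise. The sketch below is a generic multiple-comparisons demo with made-up numbers, not a reproduction of the specific cluster-inference bug the PNAS paper describes.

```python
# Minimal sketch: why voxelwise thresholds need correction. We test thousands
# of voxels of pure noise; without correction, "activation" appears by chance.
# This is a generic multiple-comparisons illustration, not the cluster-level
# software flaw analyzed in the PNAS paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 10_000

# Pure noise: there is no real signal in any voxel.
data = rng.standard_normal((n_subjects, n_voxels))

# One-sample t-test per voxel against zero.
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)

uncorrected = np.sum(p_vals < 0.05)
bonferroni = np.sum(p_vals < 0.05 / n_voxels)

print(f"voxels 'active' with no correction: {uncorrected} (~5% of {n_voxels})")
print(f"voxels 'active' after Bonferroni:   {bonferroni}")
```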

Keyword: Brain imaging
Link ID: 22607 - Posted: 08.29.2016

By Sara Chodosh When a single neuron fires, it is an isolated chemical blip. When many fire together, they form a thought. How the brain bridges the gap between these two tiers of neural activity remains a great mystery, but a new kind of technology is edging us closer to solving it. The glowing splash of cyan in the photo above comes from a type of biosensor that can detect the release of very small amounts of neurotransmitters, the signaling molecules that brain cells use to communicate. These sensors, called CNiFERs (pronounced “sniffers”), for cell-based neurotransmitter fluorescent engineered reporters, are enabling scientists to examine the brain in action and up close. This newfound ability, developed as part of the White House BRAIN Initiative, could further our understanding of how brain function arises from the complex interplay of individual neurons, including how complex behaviors like addiction develop. Neuroscientist Paul Slesinger at Icahn School of Medicine at Mount Sinai, one of the senior researchers who spearheaded this research, presented the sensors Monday at the American Chemical Society’s 252nd National Meeting & Exposition. Current technologies have proved either too broad or too specific to track how tiny amounts of neurotransmitters in and around many cells might contribute to the transmission of a thought. Scientists have used functional magnetic resonance imaging to look at blood flow as a surrogate for brain activity over fairly long periods of time or have employed tracers to follow the release of a particular neurotransmitter from a small set of neurons for a few seconds. But CNiFERs make for a happy medium; they allow researchers to monitor multiple neurotransmitters in many cells over significant periods of time. © 2016 Scientific American

Keyword: Brain imaging
Link ID: 22600 - Posted: 08.25.2016

Laura Sanders Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment. Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers. Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works. “The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley. [Figure caption: A barrel-hurling gorilla is the enemy in Donkey Kong, a video game powered by the MOS 6502 microprocessor. Along with Space Invaders and Pitfall, this game served as the “behavior” in a recent experiment.] Using a simulation of the MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods. By the end of their experiments, Jonas and Kording had discovered almost nothing. © Society for Science & the Public 2000 - 2016
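One of the methods the pair turned on the chip was the classic "lesion" study: remove one element at a time and see which behaviours break. A toy version of that logic is sketched below, applied to a one-bit full adder rather than a simulated 6502; the circuit and the conclusions drawn from it are illustrative stand-ins, not the authors' analysis code.

```python
# Toy "lesion" experiment in the spirit of the chip study: silence one logic
# gate at a time in a 1-bit full adder and record which outputs break. The
# adder is an illustrative stand-in for the simulated 6502.
from itertools import product

GATES = ["xor1", "xor2", "and1", "and2", "or1"]

def full_adder(a, b, cin, lesion=None):
    """1-bit full adder; the lesioned gate's output is forced to 0."""
    def gate(name, value):
        return 0 if name == lesion else value
    x1 = gate("xor1", a ^ b)
    s = gate("xor2", x1 ^ cin)       # sum bit
    c1 = gate("and1", a & b)
    c2 = gate("and2", x1 & cin)
    cout = gate("or1", c1 | c2)      # carry-out bit
    return s, cout

for lesioned in GATES:
    broken = set()
    for a, b, cin in product([0, 1], repeat=3):
        s0, c0 = full_adder(a, b, cin)
        s1, c1 = full_adder(a, b, cin, lesion=lesioned)
        if s0 != s1:
            broken.add("sum")
        if c0 != c1:
            broken.add("carry")
    print(f"lesion {lesioned}: breaks {sorted(broken) if broken else 'nothing'}")
```

As in the chip experiments, knowing that removing an element breaks a behaviour says surprisingly little about what that element is "for" — the interpretive trap the authors set out to expose.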

Keyword: Brain imaging
Link ID: 22597 - Posted: 08.24.2016

By NICHOLAS ST. FLEUR Neuroscientists have developed a way to turn an entire mouse, including its muscles and internal organs, transparent while illuminating the nerve paths that run throughout its body. The process, called uDisco, provides an alternate way for researchers to study an organism’s nervous system without having to slice into sections of its organs or tissues. It allows researchers to use a microscope to trace neurons from the rodent’s brain and spinal cord all the way to its fingers and toes. “When I saw images on the microscope that my students were obtaining, I was like ‘Wow, this is mind blowing,’” said Ali Ertürk, a neuroscientist from the Ludwig Maximilians University of Munich in Germany and an author of the paper. “We can map the neural connectivity in the whole mouse in 3D.” They published their technique Monday in the journal Nature Methods. So far, the technique has been conducted only in mice and rats, but the scientists think it could one day be used to map the human brain. They also said it could be particularly useful for studying the effects of mental disorders like Alzheimer’s disease or schizophrenia. Dr. Ertürk and his colleagues study neurodegenerative disorders, and are particularly interested in diseases that occur from traumatic brain injuries. Researchers often study these diseases by examining thin slices of brain tissue under a microscope. “That is not a good way to study neurons because if you slice the brain, you slice the network,” Dr. Ertürk said. “The best way to look at it is to look at the entire organism, not only the brain lesion but beyond that. We need to see the whole picture.” To do this, Dr. Ertürk and his team developed a two-step process that renders a rodent transparent while keeping its internal organs structurally sound. The mice they used were dead and had been tagged with a special fluorescent protein to make specific parts of their anatomy glow. © 2016 The New York Times Company

Keyword: Brain imaging
Link ID: 22590 - Posted: 08.23.2016

Sara Reardon Neuroscientists have invented a way to watch the ebb and flow of the brain's chemical messengers in real time. They were able to see the surge of neurotransmitters as mice were conditioned — similarly to Pavlov's famous dogs — to salivate in response to a sound. The study, presented at the American Chemical Society’s meeting in Philadelphia, Pennsylvania, on 22 August, uses a technique that could help to disentangle the complex language of neurotransmitters. Ultimately, it could lead to a better understanding of brain circuitry. The brain’s electrical surges are easy to track. But detecting the chemicals that drive this activity — the neurotransmitters that travel between brain cells and lead them to fire — is much harder. “There’s a hidden signalling network in the brain, and we need tools to uncover it,” says Michael Strano, a chemical engineer at the Massachusetts Institute of Technology in Cambridge. In many parts of the brain, neurotransmitters can exist at undetectably low levels. Typically, researchers monitor them by sucking fluid out from between neurons and analysing the contents in the lab. But that technique cannot measure activity in real time. Another option is to insert a metal probe into the space between neurons to measure how neurotransmitters react chemically when they touch metal. But the probe is unable to distinguish between structurally similar molecules, such as dopamine, which is involved in pleasure and reward, and noradrenaline, which is involved in alertness. © 2016 Macmillan Publishers Limited

Keyword: Brain imaging; Drug Abuse
Link ID: 22584 - Posted: 08.22.2016

Since nobody really knows how brains work, those researching them must often resort to analogies. A common one is that a brain is a sort of squishy, imprecise, biological version of a digital computer. But analogies work both ways, and computer scientists have a long history of trying to improve their creations by taking ideas from biology. The trendy and rapidly developing branch of artificial intelligence known as “deep learning”, for instance, takes much of its inspiration from the way biological brains are put together. The general idea of building computers to resemble brains is called neuromorphic computing, a term coined by Carver Mead, a pioneering computer scientist, in the late 1980s. There are many attractions. Brains may be slow and error-prone, but they are also robust, adaptable and frugal. They excel at processing the sort of noisy, uncertain data that are common in the real world but which tend to give conventional electronic computers, with their prescriptive arithmetical approach, indigestion. The latest development in this area came on August 3rd, when a group of researchers led by Evangelos Eleftheriou at IBM’s research laboratory in Zurich announced, in a paper published in Nature Nanotechnology, that they had built a working, artificial version of a neuron. Neurons are the spindly, highly interconnected cells that do most of the heavy lifting in real brains. The idea of making artificial versions of them is not new. Dr Mead himself has experimented with using specially tuned transistors, the tiny electronic switches that form the basis of computers, to mimic some of their behaviour. © The Economist Newspaper Limited 2016.
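The behaviour such hardware mimics is the integrate-and-fire dynamic of biological neurons: inputs accumulate on a "membrane", leak away over time, and trigger a spike once a threshold is crossed. Below is a minimal software version of that dynamic — the textbook leaky integrate-and-fire model with made-up parameter values, not a model of IBM's phase-change device.

```python
# Minimal leaky integrate-and-fire neuron: the textbook spiking dynamic that
# neuromorphic hardware emulates. Parameter values are illustrative only.
import numpy as np

dt = 1e-3            # time step (s)
tau = 20e-3          # membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
T = 0.5              # simulate half a second

rng = np.random.default_rng(1)
steps = int(T / dt)
v = v_rest
spike_times = []

for step in range(steps):
    i_in = 1.2 + 0.5 * rng.standard_normal()       # noisy input drive (a.u.)
    # Leaky integration: decay toward rest, plus the input drive.
    v += dt / tau * (-(v - v_rest) + i_in)
    if v >= v_thresh:                               # threshold crossing -> spike
        spike_times.append(step * dt)
        v = v_reset                                 # reset after the spike

print(f"{len(spike_times)} spikes in {T:.1f} s "
      f"(mean rate ~{len(spike_times) / T:.0f} Hz)")
```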

Keyword: Robotics; Intelligence
Link ID: 22573 - Posted: 08.18.2016

Neuroscientists peered into the brains of patients with Parkinson’s disease and two similar conditions to see how their neural responses changed over time. The study, funded by the NIH’s Parkinson’s Disease Biomarkers Program and published in Neurology, may provide a new tool for testing experimental medications aimed at alleviating symptoms and slowing the rate at which the diseases damage the brain. “If you know that in Parkinson’s disease the activity in a specific brain region is decreasing over the course of a year, it opens the door to evaluating a therapeutic to see if it can slow that reduction,” said senior author David Vaillancourt, Ph.D., a professor in the University of Florida’s Department of Applied Physiology and Kinesiology. “It provides a marker for evaluating how treatments alter the chronic changes in brain physiology caused by Parkinson’s.” Parkinson’s disease is a neurodegenerative disorder that destroys neurons in the brain that are essential for controlling movement. While many medications exist that lessen the consequences of this neuronal loss, none can prevent the destruction of those cells. Clinical trials for Parkinson’s disease have long relied on observing whether a therapy improves patients’ symptoms, but such studies reveal little about how the treatment affects the underlying progressive neurodegeneration. As a result, while there are treatments that improve symptoms, they become less effective as the neurodegeneration advances. The new study could remedy this issue by providing researchers with measurable targets, called biomarkers, to assess whether a drug slows or even stops the progression of the disease in the brain. “For decades, the field has been searching for an effective biomarker for Parkinson’s disease,” said Debra Babcock, M.D., Ph.D., program director at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS).

Keyword: Parkinsons; Brain imaging
Link ID: 22562 - Posted: 08.16.2016

Are you a giver or a taker? Brain scans have identified a region of the cerebral cortex responsible for generosity – and some of us are kinder than others. The area was identified using a computer game that linked different symbols to cash prizes that either went to the player, or one of the study’s other participants. The volunteers readily learned to score prizes that helped other people, but they tended to learn how to benefit themselves more quickly. MRI scanning revealed that one particular brain area – the subgenual anterior cingulate cortex – seemed to be active when participants chose to be generous, prioritising benefits for someone else over getting rewards for themselves. But Patricia Lockwood, at the University of Oxford, and her team found that this brain area was not equally active in every volunteer. People who rated themselves as having higher levels of empathy learned to benefit others faster, and these people had more activity in this particular brain area, says Lockwood. This finding may lead to new ways to identify and understand anti-social and psychopathic behavior. Journal reference: PNAS, DOI: 10.1073/pnas.1603198113 © Copyright Reed Business Information Ltd.
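Studies of this kind typically describe the choices with a simple reinforcement-learning model in which separate learning rates govern outcomes that benefit oneself and outcomes that benefit another person; faster learning for self reproduces the behavioural pattern described above. The sketch below is a generic Rescorla–Wagner-style update with assumed parameter values, not the authors' fitted model.

```python
# Minimal sketch of prosocial reinforcement learning: a Rescorla-Wagner
# update with separate learning rates for "self" and "other" outcomes.
# Parameter values are illustrative assumptions, not fitted estimates.
import numpy as np

rng = np.random.default_rng(7)
n_trials = 60
alpha = {"self": 0.30, "other": 0.15}   # assumed: slower learning for others
reward_prob = 0.8                        # the "good" symbol pays off 80% of the time

for beneficiary in ("self", "other"):
    value = 0.0                          # learned value of the good symbol
    trajectory = []
    for _ in range(n_trials):
        reward = float(rng.random() < reward_prob)
        value += alpha[beneficiary] * (reward - value)   # prediction-error update
        trajectory.append(value)
    half = n_trials // 2
    print(f"{beneficiary}: value after {half} trials = {trajectory[half - 1]:.2f}, "
          f"after {n_trials} trials = {trajectory[-1]:.2f}")
```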

Keyword: Brain imaging; Emotions
Link ID: 22560 - Posted: 08.16.2016

By Helen Thomson Take a walk while I look inside your brain. Scientists have developed the first wearable PET scanner – allowing them to capture the inner workings of the brain while a person is on the move. The team plans to use it to investigate the exceptional talents of savants, such as perfect memory or exceptional mathematical skill. All available techniques for scanning the deeper regions of our brains require a person to be perfectly still. This limits the kinds of activities we can observe the brain doing, but the new scanner will enable researchers to study brain behaviour in normal life, as well as providing a better understanding of the tremors of Parkinson’s disease, and the effectiveness of treatments for stroke. Positron emission tomography scanners track radioactive tracers, injected into the blood, that typically bind to glucose, the molecule that our cells use for energy. In this way, the scanners build 3D images of our bodies, enabling us to see which brain areas are particularly active, or where tumours are guzzling glucose in the body. To adapt this technique for people who are moving around, Stan Majewski at West Virginia University in Morgantown and his colleagues have constructed a ring of 12 radiation detectors that can be placed around a person’s head. This scanner is attached to the ceiling by a bungee-cord contraption, so that the wearer doesn’t feel the extra weight of the scanner. © Copyright Reed Business Information Ltd

Keyword: Brain imaging
Link ID: 22557 - Posted: 08.13.2016

Mo Costandi The human brain is often said to be the most complex object in the known universe, and there’s good reason to believe that it is. That lump of jelly inside your head contains at least 80 billion nerve cells, or neurons, and even more of the non-neuronal cells called glia. Between them, they form hundreds of trillions of precise synaptic connections; but they all have moveable parts, and these connections can change. Neurons can extend and retract their delicate fibres; some types of glial cells can crawl through the brain; and neurons and glia routinely work together to create new connections and eliminate old ones. These processes begin before we are born, and occur until we die, making the brain a highly dynamic organ that undergoes continuous change throughout life. At any given moment, many millions of them are being modified in one way or another, to reshape the brain’s circuitry in response to our daily experiences. Researchers at Yale University have now developed an imaging technique that enables them to visualise the density of synapses in the living human brain, and offers a promising new way of studying how the organ develops and functions, and also how it deteriorates in various neurological and psychiatric conditions. The new method, developed in Richard Carson’s lab at Yale’s School of Engineering and Applied Sciences, is based on positron emission tomography (PET), which detects the radiation emitted by radioactive ‘tracers’ that bind to specific proteins or other molecules after being injected into the body. Until now, the density of synapses in the human brain could only be determined by autopsy, using antibodies that bind to and stain specific synaptic proteins, or electron microscopy to examine the fine structure of the tissue. © 2016 Guardian News and Media Limited

Keyword: Brain imaging
Link ID: 22545 - Posted: 08.11.2016

By Andy Coghlan The switching-off of genes in the human brain has been watched live for the first time. By comparing this activity in different people’s brains, researchers are now on the hunt for abnormalities underlying disorders such as Alzheimer’s disease and schizophrenia. To see where genes are most and least active in the brain, Jacob Hooker at Harvard Medical School and his team developed a radioactive tracer chemical that binds to a type of enzyme called an HDAC. This enzyme deactivates genes inside our cells, stopping them from making the proteins they code for. Once the tracer is injected into people, brain scans can detect where it has bound to an enzyme, and thus where the enzyme is switching off genes. The switching-off of genes by HDACs is a form of epigenetics – physical changes to the structure of DNA that modify how active genes are without altering their code. Until now, the only way to examine such activity in the brain has been by looking at post-mortem brain tissue. In the image above from the study, genes are least active in the red regions, such as the bulb-shaped cerebellum area towards the bottom right. The black and blue areas show the highest levels of gene activity – where barely any HDACs are present – and the yellow and green areas fall in between. © Copyright Reed Business Information Ltd.

Keyword: Epigenetics; Brain imaging
Link ID: 22544 - Posted: 08.11.2016

Nicola Davis Scientists say that they have discovered a possible explanation for how Alzheimer’s disease spreads in the brain. Alzheimer’s is linked to a buildup of protein plaques and tangles that spread across particular tissues in the brain as the disease progresses. But while the pattern of this spread is well-known, the reason behind the pattern is not. Now scientists say they have uncovered a potential explanation as to why certain tissues of the brain are more vulnerable to Alzheimer’s disease. The vulnerability appears to be linked to variations in the levels of proteins in the brain that protect against the clumping of other proteins - variations that are present decades before the onset of the disease. “Our results indicate that within healthy brains a tell-tale pattern of protein levels predicts the progression of Alzheimer’s disease through the brain [in those that are affected by the disease],” said Rosie Freer, a PhD student at the University of Cambridge and first author of the study. The results could open up the possibility of identifying individuals who are at risk of developing Alzheimer’s long before symptoms appear, as well as offering new insights to those attempting to tackle the disease. Charbel Moussa, director of the Laboratory for Dementia and Parkinsonism at Georgetown University Medical Center, said that he agreed with the conclusions of the study. “It is probably true that in cases of diseases like Alzheimer’s and Parkinson’s we may have deficiencies in quality control mechanisms like cleaning out bad proteins that collect in the brain cells,” he said, although he warned that using such findings to predict those more at risk of such disease is likely to be difficult. © 2016 Guardian News and Media Limited

Keyword: Alzheimers; Brain imaging
Link ID: 22543 - Posted: 08.11.2016

By Simon Makin A technology with the potential to blur the boundaries between biology and electronics has just leaped a major hurdle in the race to demonstrate its feasibility. A team at the University of California, Berkeley, led by neuroscientist Jose Carmena and electrical and computer engineer Michel Maharbiz, has provided the first demonstration of what the researchers call “ultrasonic neural dust” to monitor neural activity in a live animal. They recorded activity in the sciatic nerve and a leg muscle of an anesthetized rat in response to electrical stimulation applied to its foot. “My lab has always worked on the boundary between biology and man-made things,” Maharbiz says. “We build tiny gadgets to interface synthetic stuff with biological stuff.” The work was published last week in the journal Neuron. The system uses ultrasound for both wireless communication and the device’s power source, eliminating both wires and batteries. It consists of an external transceiver and what the team calls a “dust mote” about 0.8 x 1 x 3 mm in size, which is implanted inside the body. The transceiver sends ultrasonic pulses to a piezoelectric crystal in the implant, which converts them into electricity to provide power. The implant records electrical signals in the rat via electrodes, and uses this signal to alter the vibration of the crystal. These vibrations are reflected back to the transceiver, allowing the signal to be recorded—a technique known as backscatter. “This is the first time someone has used ultrasound as a method of powering and communicating with extremely small implantable systems,” says one of the paper’s authors, Dongjin Seo. “This opens up a host of applications in terms of embodied telemetry: being able to put something super-tiny, super-deep in the body, which you can park next to a nerve, organ, muscle or gastrointestinal tract, and read data out wirelessly.” © 2016 Scientific American
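The communication scheme is essentially amplitude-modulated backscatter: the recorded biopotential changes how strongly the crystal reflects the incoming ultrasound, and the external transceiver recovers the signal from the reflection's envelope. The sketch below is a highly simplified numerical illustration of that idea, with made-up carrier frequency, modulation depth and signal shape; it is not a model of the actual device.

```python
# Highly simplified sketch of ultrasonic backscatter telemetry: a slow
# "neural" waveform modulates the reflection amplitude of an ultrasound
# carrier, and the receiver recovers it from the envelope. All numbers
# here are illustrative assumptions.
import numpy as np

fs = 10_000_000          # sample rate (Hz), assumed
f_carrier = 1_000_000    # ultrasound carrier (Hz), assumed
duration = 0.002         # 2 ms window
t = np.arange(0, duration, 1 / fs)

# A made-up 1 kHz "neural" waveform standing in for the recorded potential.
neural = 0.5 * np.sin(2 * np.pi * 1_000 * t)

carrier = np.sin(2 * np.pi * f_carrier * t)
reflection = (1.0 + 0.2 * neural) * carrier     # amplitude-modulated backscatter

# Crude envelope detection: rectify, then smooth over a few carrier cycles.
window = int(fs / f_carrier) * 3
kernel = np.ones(window) / window
envelope = np.convolve(np.abs(reflection), kernel, mode="same")

recovered = envelope - envelope.mean()
similarity = np.corrcoef(recovered, neural)[0, 1]
print(f"correlation between recovered envelope and original signal: {similarity:.2f}")
```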

Keyword: Brain imaging
Link ID: 22533 - Posted: 08.09.2016

By Anna Vlasits A small corner of the neuroscience world was in a frenzy. It was mid-June and a scientific paper had just been published claiming that years’ worth of results were riddled with errors. The study had dug into the software used to analyze one kind of brain scan, called functional MRI. The software’s approach was wrong, the researchers wrote, calling into doubt “the validity of some 40,000 fMRI studies”—in other words, all of them. The reaction was swift. Twitter lit up with panicked neuroscientists. Bloggers and reporters rained down headlines citing “seriously flawed” results, “glitches” and “bugs.” Other scientists thundered out essays defending their studies. Finally, one of the authors of the paper, published in Proceedings of the National Academy of Sciences, stepped into the fray. In a blog post, Thomas Nichols wrote, “There is one number I regret: 40,000.” Their finding, Nichols went on to write, only affects a portion of all fMRI papers—or, some scientists think, possibly none at all. It wasn’t nearly as bad as the hype suggested. The brief kerfuffle could just be dismissed as a tempest in a teapot, science’s self-correcting mechanisms in action. But the study, and its response, heralds a new level of self-scrutiny for fMRI studies, which have been plagued for decades by accusations of shoddy science and pop-culture pandering. fMRI, in other words, is growing up, but not without some pains along the way. © 2016 Scientific American

Keyword: Brain imaging
Link ID: 22513 - Posted: 08.04.2016

By Andy Coghlan Mysterious shrunken cells have been spotted in the human brain for the first time, and appear to be associated with Alzheimer’s disease. “We don’t know yet if they’re a cause or consequence,” says Marie-Ève Tremblay of Laval University in Québec, Canada, who presented her discovery at the Translational Neuroimmunology conference in Big Sky, Montana, last week. The cells appear to be withered forms of microglia – the cells that keep the brain tidy and free of infection, normally by pruning unwanted brain connections or destroying abnormal and infected brain cells. But the cells discovered by Tremblay appear much darker when viewed using an electron microscope, and they seem to be more destructive. “It took a long time for us to identify them,” says Tremblay, who adds that these shrunken microglia do not show up with the same staining chemicals that normally make microglia visible under the microscope. Compared with normal microglia, the dark cells appear to wrap much more tightly around neurons and the connections between them, called synapses. “It seems they’re hyperactive at synapses,” says Tremblay. Where these microglia are present, synapses often seem shrunken and in the process of being degraded. Tremblay first discovered these dark microglia in mice, finding that they increase in number as mice age, and appear to be linked to a number of things, including stress, the neurodegenerative condition Huntington’s disease and a mouse model of Alzheimer’s disease. “There were 10 times as many dark microglia in Alzheimer’s mice as in control mice,” says Tremblay. © Copyright Reed Business Information Ltd.

Keyword: Alzheimers; Glia
Link ID: 22503 - Posted: 08.02.2016