Chapter 3. Neurophysiology: The Generation, Transmission, and Integration of Neural Signals
By Simon Makin People can control prosthetic limbs, computer programs and even remote-controlled helicopters with their mind, all by using brain-computer interfaces. What if we could harness this technology to control things happening inside our own body? A team of bioengineers in Switzerland has taken the first step toward this cyborglike setup by combining a brain-computer interface with a synthetic biological implant, allowing a genetic switch to be operated by brain activity. It is the world's first brain-gene interface. The group started with a typical brain-computer interface, an electrode cap that can register subjects' brain activity and transmit signals to another electronic device. In this case, the device is an electromagnetic field generator; different types of brain activity cause the field to vary in strength. The next step, however, is totally new—the experimenters used the electromagnetic field to trigger protein production within human cells in an implant in mice. The implant uses a cutting-edge technology known as optogenetics. The researchers inserted bacterial genes into human kidney cells, causing them to produce light-sensitive proteins. Then they bioengineered the cells so that stimulating them with light triggers a string of molecular reactions that ultimately produces a protein called secreted alkaline phosphatase (SEAP), which is easily detectable. They then placed the human cells plus an LED light into small plastic pouches and inserted them under the skin of several mice. © 2015 Scientific American
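The signal chain the excerpt describes (EEG cap, then a field generator whose strength tracks brain activity) can be sketched in code. The sketch below is hypothetical: the alpha-band mapping, sampling rate, and band edges are illustrative assumptions, not the Swiss team's actual classifier.

```python
import numpy as np

def band_power(eeg, fs, lo, hi):
    # Power in a frequency band via a periodogram: a standard way a
    # brain-computer interface summarises one EEG channel.
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def field_drive(eeg, fs=250):
    # Map relative alpha power (8-13 Hz) to a coil drive level in [0, 1].
    # Illustrative only: the real system classified mental states and
    # varied the electromagnetic field strength accordingly.
    alpha = band_power(eeg, fs, 8, 13)
    total = band_power(eeg, fs, 1, 40)
    return alpha / total if total > 0 else 0.0

t = np.arange(0, 2, 1 / 250)
relaxed = np.sin(2 * np.pi * 10 * t)  # strong 10 Hz alpha rhythm
drive = field_drive(relaxed)          # near 1.0 for a pure alpha signal
```

A signal dominated by alpha rhythm drives the field near maximum; broadband activity would lower it.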
Hannah Devlin, science correspondent Scientists have raised the alert about an antibiotic routinely prescribed for chest infections, after linking it to an increased risk of epilepsy and cerebral palsy in children whose mothers took the drug during pregnancy. Children of mothers who had taken macrolide antibiotics were found to be almost twice as likely to be affected by the conditions, prompting scientists to call for a review of their use during pregnancy. The study authors urged pregnant women not to stop taking prescribed antibiotics, however. The potential adverse effects are rare and, as yet, unproven, while infections during pregnancy are a well-established cause of health problems in babies. Professor Ruth Gilbert, a clinical epidemiologist who led the research at University College London, said: “The main message is for medicines regulators and whether they need to issue a warning about these drugs. For women, if you’ve got a bacterial infection, it’s more important to get on and treat it.” The study tracked the children of nearly 65,000 women who had been prescribed a variety of antibiotics for illnesses during pregnancy, including chest and throat infections and cystitis. There was no evidence that most antibiotics (including penicillin, which made up 67% of prescriptions) led to an increased risk of the baby developing cerebral palsy or epilepsy. However, when the antibiotics were compared head-to-head, the potential adverse effect of macrolide drugs emerged. Around 10 in 1,000 children whose mothers were given the drug had developed the conditions by the age of seven, compared with 6 in 1,000 children whose mothers had other types of antibiotic. © 2015 Guardian News and Media Limited
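The "almost twice as likely" claim can be checked directly from the two rates quoted in the excerpt (10 and 6 affected children per 1,000):

```python
# Risk figures taken from the excerpt above; nothing else assumed.
risk_exposed = 10 / 1000     # macrolide-exposed children
risk_unexposed = 6 / 1000    # children exposed to other antibiotics

relative_risk = risk_exposed / risk_unexposed
excess_per_1000 = (risk_exposed - risk_unexposed) * 1000

print(f"relative risk: {relative_risk:.2f}")        # ~1.67, i.e. "almost twice"
print(f"excess cases per 1,000: {excess_per_1000:.0f}")  # 4
```

A relative risk of about 1.7 with an absolute excess of 4 cases per 1,000 is consistent with the article's framing: a near-doubling of a rare outcome.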
Mo Costandi Two teams of scientists have developed new ways of stimulating neurons with nanoparticles, allowing them to activate brain cells remotely using light or magnetic fields. The new methods are quicker and far less invasive than other hi-tech methods available, so could be more suitable for potential new treatments for human diseases. Researchers have various methods for manipulating brain cell activity, arguably the most powerful being optogenetics, which enables them to switch specific brain cells on or off with unprecedented precision, and simultaneously record their behaviour, using pulses of light. This is very useful for probing neural circuits and behaviour, but involves first creating genetically engineered mice with light-sensitive neurons, and then inserting the optical fibres that deliver light into the brain, so there are major technical and ethical barriers to its use in humans. Nanomedicine could get around this. Francisco Bezanilla of the University of Chicago and his colleagues knew that gold nanoparticles can absorb light and convert it into heat, and several years ago they discovered that infrared light can make neurons fire nervous impulses by heating up their cell membranes. They therefore attached gold nanorods to three different molecules that recognise and bind to proteins in the cell membranes – the scorpion toxin Ts1, which binds to a sodium channel involved in producing nervous impulses, and antibodies that bind the P2X3 and the TRPV1 channels, both found in dorsal root ganglion (DRG) neurons, which transmit touch and pain information up the spinal cord and into the brain. © 2015 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 20717 - Posted: 03.25.2015
By Emily Underwood Deep brain stimulation, which now involves surgically inserting electrodes several inches into a person's brain and connecting them to a power source outside the skull, can be an extremely effective treatment for disorders such as Parkinson's disease, obsessive compulsive disorder, and depression. The expensive, invasive procedure doesn't always work, however, and can be risky. Now, a study in mice points to a less invasive way to massage neuronal activity, by injecting metal nanoparticles into the brain and controlling them with magnetic fields. Major technical challenges must be overcome before the approach can be tested in humans, but the technique could eventually provide a wireless, nonsurgical alternative to traditional deep brain stimulation surgery, researchers say. "The approach is very innovative and clever," says Antonio Sastre, a program director in the Division of Applied Science & Technology at the National Institute of Biomedical Imaging and Bioengineering in Bethesda, Maryland. The new work provides "a proof of principle." The inspiration to use magnets to control brain activity in mice first struck materials scientist Polina Anikeeva while working in the lab of neuroscientist-engineer Karl Deisseroth at Stanford University in Palo Alto, California. At the time, Deisseroth and colleagues were refining optogenetics, a tool that can switch specific ensembles of neurons on and off in animals with beams of light. © 2015 American Association for the Advancement of Science.
Keyword: Brain imaging
Link ID: 20690 - Posted: 03.14.2015
By Lizzie Wade SAN JOSE, CALIFORNIA—Humans have been using cannabis for more than 5000 years. So why don’t scientists know more about it? Three experts gathered here at the annual meeting of AAAS (which publishes Science) to discuss what scientists and doctors know about the drug and what they still need to learn. “By the end of this session, you’ll know more about cannabis than your physician does,” said Mark Ware, a family physician at the McGill University Health Center in Montreal, Canada, who organized the talk. How does marijuana work? Our brains are primed to respond to marijuana, because “there are chemicals in our own bodies that act like THC [the psychoactive ingredient in pot]” and other compounds in cannabis called cannabinoids, explained Roger Pertwee, a neuropharmacologist at the University of Aberdeen in the United Kingdom who has studied cannabinoids since the 1960s. Cannabinoids produced by our bodies or ingested through marijuana use react with a series of receptors in our brains called the endocannabinoid system, which is involved in appetite, mood, memory, and pain sensation. Scientists have discovered 104 cannabinoids so far, but “the pharmacology of most of them has yet to be investigated,” Pertwee said. What are the known medical uses of marijuana? Marijuana has been used for decades to stimulate appetite and treat nausea and vomiting, especially in patients undergoing chemotherapy. Its success in easing the symptoms of multiple sclerosis patients led to the development of Sativex, a drug manufactured by GW Pharmaceuticals that includes THC and cannabidiol (CBD), a cannabinoid that isn’t psychoactive. © 2015 American Association for the Advancement of Science
By Ben Thomas The past several years have brought two parallel revolutions in neuroscience. Researchers have begun using genetically encoded sensors to monitor the behavior of individual neurons, and they’ve been using brief pulses of light to trigger certain types of neurons to activate. These two techniques are known collectively as optogenetics—the science of using light to read and activate genetically specified neurons—but until recently, most researchers have used them separately. Though many had tried, no one had succeeded in combining optogenetic readout and stimulation into one unified system that worked in the brains of living animals. But now, a team led by Michael Hausser, a neuroscientist at University College London’s Wolfson Institute for Biomedical Research, has succeeded in creating just such a unified optogenetic input/output system. In a paper published this January in the journal Nature Methods [Scientific American is part of the Nature Publishing Group], the team explain how they’ve used the system to record complex signaling codes used by specific sets of neurons and to “play” those codes back by reactivating the same neural firing patterns they recorded, paving the way to get neural networks in the brains of living animals to recognize and respond to the codes they send. “This is going to be a game-changer,” Hausser says. Conventional optogenetics starts with genes. Certain genes encode instructions for producing light-sensitive proteins. By introducing these genes into brain cells, researchers are able to trick specific populations of those cells—all the neurons in a given brain region that respond to dopamine, for example—to fire their signals in response to tiny pulses of light. © 2015 Scientific American
Keyword: Brain imaging
Link ID: 20514 - Posted: 01.23.2015
John Markoff MENLO PARK, CALIF. — Ann Lam delicately places a laboratory slide holding a slice of brain from a living human onto a small platform in a room the size of a walk-in refrigerator. She closes a heavy door and turns to a row of computers to monitor her experiments. She is using one of the world’s most sophisticated and powerful microscopes, the Stanford Synchrotron Radiation Lightsource, to learn about the distribution of metals in the brains of epilepsy patients. But she has another reason for being here as well. Traditional techniques for staining brain tissue produce byproducts and waste that are hazardous to the environment. And often, this sort of research is performed on animals, something Dr. Lam insists on avoiding. The radiation that illuminates the Stanford microscope was once a waste product produced by the particle accelerators. Now that it has been harnessed — recycled, in a sense — she is able to use it to examine tissue removed from living human patients, not animals. For Dr. Lam, those are important considerations. Indeed, scientists like her worry that neuroscience has become a dirty business. Too often, they say, labs are stocked with toxic chemicals, dangerous instruments and hapless animal subjects. Funding often comes from the military, and some neuroscientists fear their findings may soon be applied in ways that they never intended, raising moral questions that are seldom addressed. In 2012, Dr. Lam and Dr. Elan Ohayon, her husband, founded the Green Neuroscience Laboratory in a former industrial building in the Convoy District, an up-and-coming San Diego neighborhood. Solar panels rest on the roof, and a garden is lovingly tended on the second floor. © 2015 The New York Times Company
Three-year outcomes from an ongoing clinical trial suggest that high-dose immunosuppressive therapy followed by transplantation of a person's own blood-forming stem cells may induce sustained remission in some people with relapsing-remitting multiple sclerosis (RRMS). RRMS is the most common form of MS, a progressive autoimmune disease in which the immune system attacks the brain and spinal cord. The trial is funded by the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health, and conducted by the NIAID-funded Immune Tolerance Network (ITN). Three years after the treatment, called high-dose immunosuppressive therapy and autologous hematopoietic cell transplant or HDIT/HCT, nearly 80 percent of trial participants had survived without experiencing an increase in disability, a relapse of MS symptoms or new brain lesions. Investigators observed few serious early complications or unexpected side effects, although many participants experienced expected side effects of high-dose immunosuppression, including infections and gastrointestinal problems. The three-year findings are published in the Dec. 29, 2014, online issue of JAMA Neurology. “These promising results support the need for future studies to further evaluate the benefits and risks of HDIT/HCT and directly compare this treatment strategy to current MS therapies,” said NIAID Director Anthony S. Fauci, M.D. “If the findings from this study are confirmed, HDIT/HCT may become a potential therapeutic option for people with this often-debilitating disease, particularly those who have not been helped by standard treatments.”
Mo Costandi A team of neuroscientists at University College London has developed a new way of simultaneously recording and manipulating the activity of multiple cells in the brains of live animals using pulses of light. The technique, described today in the journal Nature Methods, combines two existing state-of-the-art neurotechnologies. It may eventually allow researchers to do away with the cumbersome microelectrodes they traditionally used to probe neuronal activity, and to interrogate the brain’s workings at the cellular level in real time and with unprecedented detail. One of them is optogenetics. This involves creating genetically engineered mice expressing algal proteins called Channelrhodopsins in specified groups of neurons. This renders the cells sensitive to light, allowing researchers to switch the cells on or off, depending on which Channelrhodopsin protein they express, and which wavelength of light is used. This can be done on a millisecond-by-millisecond timescale, using pulses of laser light delivered into the animals’ brains via an optical fibre. The other is calcium imaging. Calcium signals are crucial for just about every aspect of neuronal function, and nerve cells exhibit a sudden increase in calcium ion concentration when they begin to fire off nervous impulses. Using dyes that give off green fluorescence in response to increases in calcium concentration, combined with two-photon microscopy, researchers can detect this signature to see which cells are activated. In this way, they can effectively ‘read’ the activity of entire cell populations in brain tissue slices or live brains. Calcium-sensitive dyes are injectable, so targeting them with precision is difficult, and more recently, researchers have developed genetically-encoded calcium sensors to overcome this limitation. 
Mice can be genetically engineered to express these calcium-sensitive proteins in specific groups of cells; like the dyes before them, they, too, fluoresce in response to increases in calcium ion concentrations in the cells expressing them.
Keyword: Brain imaging
Link ID: 20441 - Posted: 12.23.2014
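The "read" step in the calcium-imaging article above is usually quantified as a change in fluorescence relative to baseline (dF/F). Here is a minimal sketch on a synthetic trace; the baseline window and the 20% activity threshold are illustrative choices, and real pipelines add rolling baselines and neuropil correction:

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=50):
    # Convert a raw fluorescence trace to dF/F using the median of an
    # initial baseline window as F0. First-pass analysis only.
    f0 = np.median(trace[:baseline_frames])
    return (trace - f0) / f0

# Toy trace: flat baseline at 100 a.u. with one calcium transient.
trace = np.full(200, 100.0)
trace[100:110] += 50.0                 # fluorescence jumps 50% while firing
dff = delta_f_over_f(trace)
active = np.where(dff > 0.2)[0]        # frames flagged as "cell active"
```

Thresholding dF/F like this is how experimenters decide, frame by frame, which cells in the field of view are firing.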
By Lenny Bernstein There are 60 million epileptics on the planet, and while advances in medication and implantable devices have helped them, the ability to better detect and even predict when they will have debilitating seizures would be a significant improvement in their everyday lives. Imagine, for example, if an epileptic knew with reasonable certainty that his next seizure would not occur for an hour or a day or a week. That might allow him to run to the market or go out for the evening or plan a short vacation with less concern. Computers and even dogs have been tested in the effort to do this, but now a group of organizations battling epilepsy is employing "big data" to help. They sponsored an online competition that drew 504 entrants who tried to develop algorithms that would detect and predict epileptic seizures. Instead of the traditional approach of asking researchers in a handful of labs to tackle the problem, the groups put huge amounts of data online that was recorded from the brains of dogs and people as they had seizures over a number of months. They then challenged anyone interested to use the information to develop detection and prediction models. "Seizure detection and seizure prediction," said Walter J. Koroshetz, deputy director of the National Institute of Neurological Disorders and Stroke (NINDS), are "two fundamental problems in the field that are poised to take significant advantage of large data computation algorithms and benefit from the concept of sharing data and generating reproducible results."
Link ID: 20403 - Posted: 12.08.2014
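A toy version of the detection half of the contest task above can be built from a single classic EEG feature, line length, which rises sharply during seizure activity. The threshold, sampling rate, and synthetic signals below are illustrative assumptions, not anything from the actual contest entries:

```python
import numpy as np

def line_length(window):
    # Sum of absolute sample-to-sample differences: a cheap feature
    # that grows with both amplitude and frequency of the signal.
    return np.sum(np.abs(np.diff(window)))

def detect(eeg, fs=256, win_sec=1.0, threshold=50.0):
    # Flag each 1-second window whose line length exceeds a fixed
    # threshold. Competitive entries combined many features in
    # trained classifiers; this is the simplest possible baseline.
    n = int(fs * win_sec)
    return [line_length(eeg[s:s + n]) > threshold
            for s in range(0, len(eeg) - n + 1, n)]

rng = np.random.default_rng(0)
quiet = 0.05 * rng.standard_normal(256)              # low-amplitude background
spiky = 3 * np.sin(np.linspace(0, 40 * np.pi, 256))  # large rhythmic discharge
flags = detect(np.concatenate([quiet, spiky]))       # [False, True]
```

Prediction, the harder half of the task, amounts to running features like this on the pre-seizure period and learning which changes reliably precede an event.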
by Hal Hodson Yet another smartwatch launched this week. Called Embrace, it is rather different from the latest offerings from Apple, Samsung and Motorola: it can spot the warning signs of an epileptic seizure. Embrace was developed by Matteo Lai and his team at a firm called Empatica, with the help of Rosalind Picard at the Massachusetts Institute of Technology. It measures the skin's electrical activity as a proxy for changes deep in the brain, and uses a model built on years of clinical data to tell which changes portend a seizure. It also gathers the usual temperature and motion data that smartwatches collect, allowing the wearer to measure physical activity and sleep quality. Empatica launched a crowdfunding campaign on Indiegogo on Tuesday and has already raised more than $120,000. Backers who pledge $169 will receive an Embrace watch. The idea for the wristband came when Picard and her colleagues were running a study on the emotional states of children with autism, measuring skin conductance at the wrist as part of the study. Picard noticed that one of the children had registered a spike in electrical activity that turned out to have happened 20 minutes before they noticed the symptoms of a seizure. "It shocked me when I realised these things were showing up on the wrist," says Picard. The whole point of Embrace is to prevent sudden unexpected death in epilepsy (SUDEP). Its causes are not fully understood, but Picard says they understand enough to know how to reduce the chances of dying after an epileptic seizure. © Copyright Reed Business Information Ltd.
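The kind of skin-conductance spike Picard noticed can be illustrated with a simple rolling-baseline detector. Everything here (the window size, the 0.5 microsiemens jump, the synthetic signal) is a hypothetical sketch, not Empatica's clinically trained model:

```python
def eda_alerts(samples, window=5, jump=0.5):
    # Return indices where skin conductance (in microsiemens) rises more
    # than `jump` above the mean of the previous `window` samples.
    alerts = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if samples[i] - baseline > jump:
            alerts.append(i)
    return alerts

# A flat 1.0 uS signal with a sudden 1.0 uS spike at index 10.
signal = [1.0] * 10 + [2.0] + [1.0] * 5
print(eda_alerts(signal))  # [10]
```

The engineering problem Embrace solves is distinguishing spikes like this from ordinary arousal, which is why a model trained on clinical data is needed rather than a fixed threshold.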
By Esther Hsieh A little-known fact: the tongue is directly connected to the brain stem. This anatomical feature is now being harnessed by scientists to improve rehabilitation. A team at the University of Wisconsin–Madison recently found that electrically stimulating the tongue can help patients with multiple sclerosis (MS) improve their gait. MS is an incurable disease in which the insulation around the nerves becomes damaged, disrupting the communication between body and brain. One symptom is loss of muscle control. In a study published in the Journal of Neuro-Engineering and Rehabilitation, Wisconsin neuroscientist Yuri Danilov and his team applied painless electrical impulses to the tip of the tongue of MS patients during physical therapy. Over a 14-week trial, patients who got tongue stimulation improved twice as much on variables such as balance and fluidity as did a control group who did the same regimen without stimulation. The tongue has extensive motor and sensory integration with the brain, Danilov explains. The nerves on the tip of the tongue are directly connected to the brain stem, a crucial hub that directs basic bodily processes. Previous research showed that sending electrical pulses through the tongue activated the neural network for balance; such activation may shore up the circuitry weakened by MS. The team is also using tongue stimulation to treat patients with vision loss, stroke damage and Parkinson's. “We have probably discovered a new way for the neurorehabilitation of many neurological disorders,” Danilov says. © 2014 Scientific American
Keyword: Multiple Sclerosis
Link ID: 20332 - Posted: 11.20.2014
By DENISE GRADY An electrical device glued to the scalp can slow cancer growth and prolong survival in people with the deadliest type of brain tumor, researchers reported on Saturday. The device is not a cure and, on average, adds only a few months of life when used along with the standard regimen of surgery, radiation and chemotherapy. Some doctors have questioned its usefulness. But scientists conducting a new study said the device was the first therapy in a decade to extend life in people with glioblastomas, brain tumors in which median survival is 15 months even with the best treatment. The disease affects about 10,000 people a year in the United States and is what killed Senator Edward M. Kennedy in 2009. It is so aggressive and hard to treat that even seemingly small gains in survival are considered important. The new findings mean the device should become part of the standard care offered to all patients with newly diagnosed glioblastomas, the researchers conducting the study said. The equipment consists of four pads carrying transducer arrays that patients glue to their scalps and change every few days. Wires lead to a six-pound operating system and power supply. Except for some scalp irritation, the device has no side effects, the study found. But patients have to wear it more or less around the clock and must keep their heads shaved. It generates alternating, low-intensity electrical fields — so-called tumor-treating fields — that can halt tumor growth by stopping cells from dividing, which leads to their death. The researchers said the technology might also help treat other cancers, and would be tested in mesothelioma and cancers of the lung, ovary, breast and pancreas. © 2014 The New York Times Company
Link ID: 20319 - Posted: 11.17.2014
By JAMES GORMAN Research on the brain is surging. The United States and the European Union have launched new programs to better understand the brain. Scientists are mapping parts of mouse, fly and human brains at different levels of magnification. Technology for recording brain activity has been improving at a revolutionary pace. The National Institutes of Health, which already spends $4.5 billion a year on brain research, consulted the top neuroscientists in the country to frame its role in an initiative announced by President Obama last year to concentrate on developing a fundamental understanding of the brain. Scientists have puzzled out profoundly important insights about how the brain works, like the way the mammalian brain navigates and remembers places, work that won the 2014 Nobel Prize in Physiology or Medicine for a British-American and two Norwegians. So many large and small questions remain unanswered. How is information encoded and transferred from cell to cell or from network to network of cells? Science found a genetic code but there is no brain-wide neural code; no electrical or chemical alphabet exists that can be recombined to say “red” or “fear” or “wink” or “run.” And no one knows whether information is encoded differently in various parts of the brain. Brain scientists may speculate on a grand scale, but they work on a small scale. Sebastian Seung at Princeton, author of “Connectome: How the Brain’s Wiring Makes Us Who We Are,” speaks in sweeping terms of how identity, personality, memory — all the things that define a human being — grow out of the way brain cells and regions are connected to each other. But in the lab, his most recent work involves the connections and structure of motion-detecting neurons in the retinas of mice. 
Larry Abbott, 64, a former theoretical physicist who is now co-director, with Kenneth Miller, of the Center for Theoretical Neuroscience at Columbia University, is one of the field’s most prominent theorists, and the person whose name invariably comes up when discussions turn to brain theory. © 2014 The New York Times Company
Keyword: Brain imaging
Link ID: 20302 - Posted: 11.11.2014
Mo Costandi The father of modern neuroscience had a sharp eye and an even sharper mind, but he evidently overlooked something rather significant about the basic structure of brain cells. Santiago Ramón y Cajal spent his entire career examining and comparing nervous tissue from different species. He observed the intricate branches we now call dendrites, and the thicker axonal fibres. He also recognised them as distinct components of the neuron, and convinced others that neurons are fundamental components of the nervous system. For Cajal, these cells were “the mysterious butterflies of the soul… whose beating of wings may one day reveal to us the secrets of the mind.” He hunted for them in “the gardens of the grey matter” and, being an accomplished artist, meticulously catalogued the many “delicate and elaborate forms” that they take. As his beautiful drawings show, all neurons have a single axon emanating from one area of the cell body, and one or more dendrites arising from another. This basic structure has been enshrined in textbooks ever since. But there appear to be unusual varieties of soul butterflies that Cajal failed to spot – neuroscientists in Germany have identified neurons that have axons growing from their dendrites, a discovery that challenges our century-old assumption about the form and function of these cells. Cajal stated that information flows through neurons in only one direction – from the dendrites, which receive electrical impulses from other neurons, to the cell body, which processes the information and conveys it to the initial segment of the axon, which then produces its own impulses that travel down it to the nerve terminal. (He indicated this with small arrows in some of his diagrams, such as the one above.) © 2014 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 20301 - Posted: 11.11.2014
By Amy Robinson Whether you’re walking, talking or contemplating the universe, a minimum of tens of billions of synapses are firing at any given second within your brain. “The weak link in understanding ourselves is really about understanding how our brains generate our minds and how our minds generate our selves,” says MIT neuroscientist Ed Boyden. One cubic millimeter in the brain contains over 100,000 neurons connected through a billion synapses computing on a millisecond timescale. To understand how information flows within these circuits, we first need a “brain parts” list of neurons and glia. But such a list is not enough. We’ll also need to chart how cells are connected and to monitor their activity over time both electrically and chemically. Researchers can do this at small scale thanks to a technology developed in the 1970s called patch clamping. Bringing a tiny glass needle very near to a neuron living within a brain allows researchers to perform microsurgery on single neurons, piercing the cell membrane to do things like record the millivolt electrical impulses flowing through it. Patch clamping also facilitates measurement of proteins contained within the cell, revealing characteristic molecules and contributing to our understanding of why one neuron may behave differently than another. Neuroscientists can even inject glowing dyes in order to see the shape of cells. Patch clamping is a technique that has been used in neuroscience for 40 years. Why now does it make an appearance as a novel neuroscience technology? In a word: robots. © 2014 Scientific American
Keyword: Brain imaging
Link ID: 20279 - Posted: 11.05.2014
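The millivolt-scale voltages a patch electrode records are often summarised with simple models, and the leaky integrate-and-fire neuron is the textbook minimal sketch. All parameters below are generic teaching values (SI units), not measurements from the article:

```python
import numpy as np

def lif(i_inj, dt=1e-4, tau=0.02, r=1e7, v_rest=-0.070, v_th=-0.050):
    # Leaky integrate-and-fire: integrate dV/dt = (V_rest - V + R*I)/tau,
    # record a spike and reset to rest whenever V crosses threshold.
    v = v_rest
    vs, spikes = [], []
    for step, i in enumerate(i_inj):
        v += (dt / tau) * (v_rest - v + r * i)
        if v >= v_th:
            spikes.append(step)
            v = v_rest
        vs.append(v)
    return np.array(vs), spikes

current = np.full(1000, 3e-9)   # steady 3 nA injection, patch-clamp style
vs, spikes = lif(current)       # drive exceeds threshold: regular spiking
```

With 3 nA of drive the steady-state voltage would sit above threshold, so the model fires at a regular rate, the same qualitative behaviour a patch electrode shows for a current-injected neuron.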
by Clare Wilson Call them the neuron whisperers. Researchers are eavesdropping on conversations going on between brain cells in a dish. Rather than hearing the chatter, they watch neurons that have been genetically modified so that the electrical impulses moving along their branched tendrils cause sparkles of red light (see video). Filming these cells at up to 100,000 frames a second is allowing researchers to analyse their firing in unprecedented detail. Until recently, a neuron's electrical activity could only be measured with tiny electrodes. As well as being technically difficult, such "patch clamping" only reveals the voltage at those specific points. The new approach makes the neuron's entire surface fluoresce as the impulse passes by. "Now we see the whole thing sweep through," says Adam Cohen of Harvard University. "We get much more information - like how fast and where does it start and what happens at a branch." The idea is a reverse form of optogenetics – where neurons are given a gene from bacteria that make a light-sensitive protein, so the cells fire when illuminated. The new approach uses genes that make the neurons do the opposite - glow when they fire. "It's pretty cool," says Dimitri Kullmann of University College London. "It's amazing that you can dispense with electrodes." Cohen's team is using the technique to compare cells from typical brains with those from people with disorders such as motor neuron disease or amyotrophic lateral sclerosis. Rather than taking a brain sample, they remove some of the person's skin cells and grow them alongside chemicals that rewind the cells into an embryonic-like state. Another set of chemicals is used to turn these stem cells into neurons. "You can recreate something reminiscent of the person's brain in the dish," says Cohen. © Copyright Reed Business Information Ltd.
Keyword: Brain imaging
Link ID: 20241 - Posted: 10.25.2014
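One thing the whole-surface voltage imaging described above makes easy is measuring how fast an impulse sweeps along a neurite: compare the spike's arrival time at two points. A sketch with synthetic fluorescence traces (the frame rate is from the article; the distance and waveforms are invented):

```python
import numpy as np

fs = 100_000                      # frames per second, as in the article
t = np.arange(500) / fs           # 5 ms of recording

def transient(center, width=1e-4):
    # Synthetic fluorescence transient peaking at `center` seconds.
    return np.exp(-((t - center) ** 2) / (2 * width ** 2))

soma = transient(0.001)           # spike reaches the soma at 1 ms
tip = transient(0.003)            # ...and a distal branch at 3 ms
delay = (np.argmax(tip) - np.argmax(soma)) / fs   # 2 ms propagation delay
speed = 0.001 / delay             # points 1 mm apart -> 0.5 m/s
```

This is the "how fast and where does it start" measurement Cohen describes, impossible with two patch electrodes unless you happen to have placed them at exactly the right spots.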
By Bret Stetka Multiple sclerosis (MS) is an electrical disorder, or rather one of impaired myelin, a fatty, insulating substance that better allows electric current to bolt down our neurons and release the neurotransmitters that help run our bodies and brains. Researchers have speculated for some time that the myelin degradation seen in MS is due, at least in part, to autoimmune activity against the nervous system. Recent work presented at the MS Boston 2014 Meeting suggests that this aberrant immune response begins in the gut. Eighty percent of the human immune system resides in the gastrointestinal tract. Alongside it are the trillions of symbiotic bacteria, fungi and other single-celled organisms that make up our guts’ microbiomes. Normally everyone wins: The microorganisms benefit from a home and a steady food supply; we enjoy the essential assistance they provide in various metabolic and digestive functions. Our microbiomes also help calibrate our immune systems, so our bodies recognize which co-inhabitants should be there and which should not. Yet mounting evidence suggests that when our resident biota are out of balance, they contribute to numerous diseases, including diabetes, rheumatoid arthritis, autism and, it appears, MS by inciting rogue immune activity that can spread throughout the body and brain. One study presented at the conference, out of Brigham and Women’s Hospital (BWH), reported that methanobrevibacteriaceae, a single-celled organism that activates the immune system, is enriched in the gastrointestinal tracts of MS patients, whereas bacteria that suppress immune activity are depleted. Other work, which resulted from a collaboration among 10 academic research centers across the U.S. and Canada, reported significantly altered gut flora in pediatric MS patients, while a group of Japanese researchers found that yeast consumption reduced the chances of mice developing an MS-like disease by altering gut flora. © 2014 Scientific American
Keyword: Multiple Sclerosis
Link ID: 20186 - Posted: 10.09.2014
|By Nathan Collins Step aside, huge magnets and radioactive tracers—soon some brain activity will be revealed by simply training dozens of red lights on the scalp. A new study in Nature Photonics finds this optical technique can replicate functional MRI experiments, and it is more comfortable, more portable and less expensive. The method is an enhancement of diffuse optical tomography (DOT), in which a device shines tiny points of red light at a subject's scalp and analyzes the light that bounces back. The red light reflects off red hemoglobin in the blood but does not interact as much with tissues of other colors, which allows researchers to recover an fMRI-like image of changing blood flow in the brain at work. For years researchers attempting to use DOT have been limited by the difficulty of packing many heavy light sources and detectors into the small area around the head. They also needed better techniques for analyzing the flood of data that the detectors collected. Now researchers at Washington University in St. Louis and the University of Birmingham in England report they have solved those problems and made the first high-density DOT (HD-DOT) brain scans. The team first engineered a “double halo” structure to support the weight of 96 lights and 92 detectors, more than double the number in earlier arrays. The investigators also dealt with the computing challenges associated with that many lights—for example, they figured out how to filter out interference from blood flow in the scalp and other tissues. The team then used HD-DOT to successfully replicate fMRI studies of vision and language processing—a task impossible for other fMRI alternatives, such as functional near-infrared spectroscopy or electroencephalography, which do not cover a large enough swath of the brain or have sufficient resolution to pinpoint active brain areas. 
Finally, the team scanned the brains of people who have implanted electrodes for Parkinson's disease—something fMRI can never do because the machine generates electromagnetic waves that can destroy electronic devices such as pacemakers. © 2014 Scientific American
Keyword: Brain imaging
Link ID: 20151 - Posted: 10.02.2014
Michael Häusser Use light to read out and control neural activity! This idea, so easily expressed and understood, has fired the imagination of neuroscientists for decades. The advantages of using light as an effector are obvious [1]: it is noninvasive, can be targeted with exquisite spatial and temporal precision, can be used simultaneously at multiple wavelengths and locations, and can report the presence or activity of specific molecules. However, despite early progress [2] and encouragement [3], it is only recently that widely usable approaches for optical readout and manipulation of specific neurons have become available. These new approaches rely on genetically encoded proteins that can be targeted to specific neuronal subtypes, giving birth to the term 'optogenetics' to signal the combination of genetic targeting and optical interrogation [4]. On the readout side, highly sensitive probes have been developed for imaging synaptic release, intracellular calcium (a proxy for neural activity) and membrane voltage. On the manipulation side, a palette of proteins for both activation and inactivation of neurons with millisecond precision using different wavelengths of light has been identified and optimized. The extraordinary versatility and power of these new optogenetic tools are spurring a revolution in neuroscience research, and they have rapidly become part of the standard toolkit of thousands of research labs around the world. Although optogenetics may not yet be a household word (though try it on your mother; she may surprise you), there can be no better proof that optogenetics has become part of the scientific mainstream than the 2013 Brain Prize being awarded to the sextet that pioneered optogenetic manipulation (http://www.thebrainprize.org/flx/prize_winners/prize_winners_2013/) and the incorporation of optogenetics as a central plank in the US National Institutes of Health BRAIN Initiative [5]. 
Moreover, there is growing optimism about the prospect of using optogenetic probes not only to understand mechanisms of disease in animal models but also to treat disease in humans, particularly in more accessible parts of the brain such as the retina [6]. © 2014 Macmillan Publishers Limited
Keyword: Brain imaging
Link ID: 20142 - Posted: 10.01.2014