Chapter 2. Functional Neuroanatomy: The Nervous System and Behavior
NOBODY knows how the brain works. But researchers are trying to find out. One of the most eye-catching weapons in their arsenal is functional magnetic-resonance imaging (fMRI). In this technique, MRI scanners normally employed for diagnosis are used instead to study the brains of research volunteers. By watching people’s brains as they carry out certain tasks, neuroscientists hope to get some idea of which bits of the brain specialise in doing what. The results look impressive. Thousands of papers have been published, from workmanlike investigations of the role of certain brain regions in, say, recalling directions or reading the emotions of others, to spectacular treatises extolling the use of fMRI to detect lies, to work out what people are dreaming about or even to deduce whether someone truly believes in God. But the technology has its critics. Many worry that dramatic conclusions are being drawn from small samples (the faff involved in fMRI makes large studies hard). Others fret about over-interpreting the tiny changes the technique picks up. A deliberately provocative paper published in 2009, for example, found apparent activity in the brain of a dead salmon. Now, researchers in Sweden have added to the doubts. As they reported in the Proceedings of the National Academy of Sciences, a team led by Anders Eklund at Linköping University has found that the computer programs used by fMRI researchers to interpret what is going on in their volunteers’ brains appear to be seriously flawed. © The Economist Newspaper Limited 2016
Keyword: Brain imaging
Link ID: 22444 - Posted: 07.15.2016
The most sophisticated, widely adopted, and important tool for looking at living brain activity actually does no such thing. Called functional magnetic resonance imaging, what it really does is scan for the magnetic signatures of oxygen-rich blood. Blood indicates that the brain is doing something, but it’s not a direct measure of brain activity. Which is to say, there’s room for error. That’s why neuroscientists use special statistics to filter out noise in their fMRIs, verifying that the shaded blobs they see pulsing across their computer screens actually relate to blood flowing through the brain. If those filters don’t work, an fMRI scan is about as useful at detecting neuronal activity as your dad’s “brain sucking alien” hand trick. And a new paper suggests that might actually be the case for thousands of fMRI studies. The paper, published June 29 in the Proceedings of the National Academy of Sciences, threw 40,000 fMRI studies done over the past 15 years into question. But many neuroscientists—including the study’s whistleblowing authors—are now saying the negative attention is overblown. Neuroscience has long struggled over just how faithfully fMRI data reflect brain function. “In the early days these fMRI signals were very small, buried in a huge amount of noise,” says Elizabeth Hillman, a biomedical engineer at the Zuckerman Institute at Columbia University. A lot of this noise is literal: noise from the scanner, noise from the electrical components, noise from the person’s body as it breathes and pumps blood.
Keyword: Brain imaging
Link ID: 22413 - Posted: 07.09.2016
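The benchmark against which the suspect parametric filters were judged is worth seeing concretely. A minimal Python sketch of a nonparametric permutation test on a single simulated voxel (the data here are invented; this illustrates the general assumption-free approach, not the pipeline Eklund and colleagues actually ran):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated BOLD signal for one voxel: pure noise, so any "activation"
# a test confidently finds here is a false positive (the dead-salmon
# scenario in miniature).
n_vols = 200
signal = rng.normal(size=n_vols)
task_on = np.tile([True] * 10 + [False] * 10, 10)  # simple block design

def perm_p_value(signal, labels, n_perm=5000, rng=rng):
    """Nonparametric p-value for mean(task) - mean(rest).

    A permutation test makes no distributional assumptions about the
    noise, unlike the parametric cluster statistics that were found
    to be miscalibrated."""
    observed = signal[labels].mean() - signal[~labels].mean()
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(labels)
        null[i] = signal[shuffled].mean() - signal[~shuffled].mean()
    return float(np.mean(np.abs(null) >= abs(observed)))

p = perm_p_value(signal, task_on)
```

Because the voxel is pure noise, a well-calibrated test should return a small p-value only about as often as its nominal false-positive rate; the reported problem was parametric software returning small p-values far more often than that.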
Andrew Orlowski Special Report If the fMRI brain-scanning fad is well and truly over, then many fashionable intellectual ideas look like collateral damage, too. What might generously be called the “British intelligentsia” – our chattering classes – fell particularly hard for the promise that “new discoveries in brain science” had revealed a new understanding of human behaviour, which shed new light on emotions, personality and decision making. But all they were looking at was statistical quirks. There was no science to speak of, the results of the experiments were effectively meaningless, and couldn’t support the (often contradictory) conclusions being touted. The fMRI machine was a very expensive way of legitimising an anecdote. This is an academic scandal that’s been waiting to explode for years, and plenty of warning signs were there. In 2005, Ed Vul, now a psychology professor at UCSD, and Hal Pashler – then and now at UCSD – were puzzled by a claim being made in a talk by a neuroscience researcher. He was explaining a study that purported to report a high correlation between a test subject’s brain activity and the speed with which they left the room after the study. “It seemed unbelievable to us that activity in this specific brain area could account for so much of the variance in walking speed,” explained Vul. “Especially so, because the fMRI activity was measured some two hours before the walking happened. So either activity in this area directly controlled motor action with a delay of two hours — something we found hard to believe — or there was something fishy going on.” © 1998–2016
Keyword: Brain imaging
Link ID: 22410 - Posted: 07.08.2016
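The “something fishy” Vul suspected has a simple statistical form: if you screen thousands of noisy voxels for the one that best correlates with a behaviour, and then report that same correlation, chance alone produces impressive numbers. A purely illustrative simulation (all data random; the sample and voxel counts are invented, not taken from any study the article describes):

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_voxels = 16, 5000
brain = rng.normal(size=(n_subjects, n_voxels))   # pure-noise "activity"
walking_speed = rng.normal(size=n_subjects)       # unrelated behaviour

# Correlate every voxel with behaviour, then report the best one --
# the circular "non-independence" error in a nutshell.
r_all = np.array([np.corrcoef(brain[:, v], walking_speed)[0, 1]
                  for v in range(n_voxels)])
best = int(np.argmax(np.abs(r_all)))
inflated_r = abs(float(r_all[best]))
```

Even though nothing here is related to anything else, the selected voxel’s correlation comes out large; on fresh data from the same voxel it would hover near zero. That is why a strong correlation reported for a region chosen by that very correlation proves little.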
It's no secret that passwords aren't impenetrable. Even outside of major incidents like the celebrity nude photo hack, or when millions of passwords get released online, like what happened to Twitter recently, many of us may still be at risk of having our data compromised due to password-related security flaws. According to a June 2015 survey from mobile identity company TeleSign, two in five people were notified in the preceding year that their personal information was compromised or that they had been hacked or had their password stolen. But a new technology developed by the BioSense lab at the University of California, Berkeley could make all of that a thing of the past. Over the course of three years, the lab's co-director, John Chuang, and his graduate students have been working on a technology called passthoughts, which would use a person's brainwaves to identify them, according to CNET. The team has found that a passthought — something like a song that someone could sing in their mind — isn't easily forgotten and can achieve a 99-per-cent authentication accuracy rate. The device used to capture passthoughts resembles a telephone headset. It relies on EEG technology, detecting electrical activity in your brain via electrodes strapped to your head. And although Chuang's team say the technology has improved greatly in recent years, the awkwardness of the device might hinder it from being widely adopted. ©2016 CBC/Radio-Canada.
Keyword: Brain imaging
Link ID: 22401 - Posted: 07.06.2016
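A template-matching scheme gives the flavour of how a passthought system could verify identity. Everything below is an illustrative assumption — the feature dimensions, similarity measure and threshold are invented for the sketch, not details of the Berkeley lab’s actual method:

```python
import numpy as np

rng = np.random.default_rng(2)

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrollment: the user's "passthought" evokes a stable
# underlying EEG feature pattern; several noisy recordings of it are
# averaged into a template.
pattern = rng.normal(size=64)                        # hidden stable signature
trials = pattern + rng.normal(scale=0.2, size=(5, 64))
template = trials.mean(axis=0)

def authenticate(sample, template, threshold=0.8):
    """Accept the sample if it is similar enough to the enrolled template."""
    return cosine(sample, template) >= threshold

genuine = pattern + rng.normal(scale=0.2, size=64)   # the user, later
impostor = rng.normal(size=64)                       # someone else's EEG
```

The reported 99-per-cent accuracy would correspond, in this framing, to choosing a threshold that almost always accepts `genuine`-style samples while almost always rejecting `impostor`-style ones.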
Laura Sanders Busy nerve cells in the brain are hungry and beckon oxygen-rich blood to replenish themselves. But active nerve cells in newborn mouse brains can’t yet make this request, and their silence leaves them hungry, scientists report June 22 in the Journal of Neuroscience. Instead of being a dismal starvation diet, this lean time may actually spur the brain to develop properly. The new results, though, muddy the interpretation of the brain imaging technique called functional MRI when it is used on infants. Most people assume that all busy nerve cells, or neurons, signal nearby blood vessels to replenish themselves. But there were hints from fMRI studies of young children that their brains don’t always follow this rule. “The newborn brain is doing something weird,” says study coauthor Elizabeth Hillman of Columbia University. That weirdness, she suspected, might be explained by an immature communication system in young brains. To find out, she and her colleagues looked for neuron-blood connections in mice as they grew. “What we’re trying to do is create a road map for what we think you actually should see,” Hillman says. When 7-day-old mice were touched on their hind paws, a small group of neurons in the brain responded instantly, firing off messages in a flurry of activity. Despite this action, no fresh blood arrived, the team found. By 13 days, the nerve cell reaction got bigger, spreading across a wider stretch of the brain. Still the blood didn’t come. But by the time the mice reached adulthood, neural activity prompted an influx of blood. The results show that young mouse brains lack the ability to send blood to busy neurons, a skill that influences how the brain operates (SN: 11/14/15, p. 22). © Society for Science & the Public 2000 - 2016.
In a study of stroke patients, investigators confirmed through MRI brain scans that there was an association between the extent of disruption to the brain’s protective blood-brain barrier and the severity of bleeding following invasive stroke therapy. The results of the National Institutes of Health-funded study were published in Neurology. These findings are part of the Diffusion and Perfusion Imaging Evaluation for Understanding Stroke Evolution (DEFUSE)-2 Study, which was designed to see how MRIs can help determine which patients undergo endovascular therapy following ischemic stroke caused by a clot blocking blood flow to the brain. Endovascular treatment targets the ischemic clot itself, either removing it or breaking it up with a stent. The blood-brain barrier is a layer of cells that protects the brain from harmful molecules passing through the bloodstream. After stroke, the barrier is disrupted, becoming permeable and losing control over what gets into the brain. “The biggest impact of this research is that information from MRI scans routinely collected at a number of research hospitals and stroke centers can inform treating physicians on the risk of bleeding,” said Richard Leigh, M.D., a scientist at NIH’s National Institute of Neurological Disorders and Stroke (NINDS) and an author on the study. In this study, brain scans were collected from more than 100 patients before they underwent endovascular therapy, within 12 hours of stroke onset. Dr. Leigh and his team obtained the images from DEFUSE-2 investigators.
By Monique Brouillette The brain presents a unique challenge for medical treatment: it is locked away behind an impenetrable layer of tightly packed cells. Although the blood-brain barrier prevents harmful chemicals and bacteria from reaching our control center, it also blocks roughly 95 percent of medicine delivered orally or intravenously. As a result, doctors who treat patients with neurodegenerative diseases, such as Parkinson's, often have to inject drugs directly into the brain, an invasive approach that requires drilling into the skull. Some scientists have had minor successes getting intravenous drugs past the barrier with the help of ultrasound or in the form of nanoparticles, but those methods can target only small areas. Now neuroscientist Viviana Gradinaru and her colleagues at the California Institute of Technology show that a harmless virus can pass through the barricade and deliver treatment throughout the brain. Gradinaru's team turned to viruses because the infective agents are small and adept at entering cells and hijacking the DNA within. They also have protein shells that can hold beneficial deliveries, such as drugs or genetic therapies. To find a suitable virus to enter the brain, the researchers engineered a strain of an adeno-associated virus into millions of variants with slightly different shell structures. They then injected these variants into a mouse and, after a week, recovered the strains that made it into the brain. A virus named AAV-PHP.B most reliably crossed the barrier. © 2016 Scientific American,
Link ID: 22313 - Posted: 06.13.2016
By David Shultz We still may not know what causes consciousness in humans, but scientists are at least learning how to detect its presence. A new application of a common clinical test, the positron emission tomography (PET) scan, seems to be able to differentiate between minimally conscious brains and those in a vegetative state. The work could help doctors figure out which brain trauma patients are the most likely to recover—and even shed light on the nature of consciousness. “This is really cool what these guys did here,” says neuroscientist Nicholas Schiff at Cornell University, who was not involved in the study. “We’re going to make great use of it.” PET scans work by introducing a small amount of radionuclides into the body. These radioactive compounds act as a tracer and naturally emit subatomic particles called positrons over time, and the gamma rays indirectly produced by this process can be detected by imaging equipment. The most common PET scan uses fluorodeoxyglucose (FDG) as the tracer in order to show how glucose concentrations change in tissue over time—a proxy for metabolic activity. Compared with other imaging techniques, PET scans are relatively cheap and easy to perform, and are routinely used to survey for cancer, heart problems, and other diseases. In the new study, researchers used FDG-PET scans to analyze the resting cerebral metabolic rate—the amount of energy being used by the tissue—of 131 patients with a so-called disorder of consciousness and 28 healthy controls. Disorders of consciousness can refer to a wide range of problems, ranging from a full-blown coma to a minimally conscious state in which patients may experience brief periods where they can communicate and follow instructions. Between these two extremes, patients may be said to be in a vegetative state or exhibit unresponsive wakefulness, characterized by open eyes and basic reflexes, but no signs of awareness. 
Most disorders of consciousness result from head trauma, and where someone falls on the consciousness continuum is typically determined by the severity of the injury. © 2016 American Association for the Advancement of Science
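At its core, the approach reduces each FDG-PET scan to a single number — resting cerebral metabolic rate — and asks which side of a boundary a patient falls on. A toy version of such a decision rule (the 45% cutoff is a made-up illustration, not the boundary the study derived from its 131 patients):

```python
def classify_consciousness(patient_cmr, healthy_mean_cmr, cutoff=0.45):
    """Toy decision rule on FDG-PET resting metabolism.

    `cutoff` is an illustrative fraction of the healthy-control mean,
    not the study's fitted boundary. Returns a coarse label for the
    patient's likely state on the consciousness continuum."""
    ratio = patient_cmr / healthy_mean_cmr
    if ratio >= cutoff:
        return "minimally conscious or better"
    return "vegetative / unresponsive wakefulness"

label_low = classify_consciousness(3.0, 10.0)    # 30% of normal metabolism
label_high = classify_consciousness(6.0, 10.0)   # 60% of normal metabolism
```

The clinical appeal is exactly this simplicity: a routine, relatively cheap scan yields a number that can be compared against healthy controls.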
By Teal Burrell In neuroscience, neurons get all the glory. Or rather, they used to. Researchers are beginning to discover the importance of something outside the neurons—a structure called the perineuronal net. This net might reveal how memories are stored and how various diseases ravage the brain. The realization of important roles for structures outside neurons serves as a reminder that the brain is a lot more complicated than we thought. Or, it’s exactly as complicated as neuroscientists thought it was 130 years ago. In 1882, Italian physician and scientist Camillo Golgi described a structure that enveloped cells in the brain in a thin layer. He later named it the pericellular net. His word choice was deliberate; he carefully avoided the word “neuron” since he was engaged in a battle with another neuroscience luminary, Santiago Ramón y Cajal, over whether the nervous system was a continuous meshwork of cells that were fused together—Golgi’s take—or a collection of discrete cells, called neurons—Ramón y Cajal’s view. Ramón y Cajal wasn’t having it. He argued Golgi was wrong about the existence of such a net, blaming the findings on Golgi’s eponymous staining technique, which, incidentally, is still used today. Ramón y Cajal’s influence was enough to shut down the debate. While some Golgi supporters labored in vain to prove the nets existed, their findings never took hold. Instead, over the next century, neuroscientists focused exclusively on neurons, the discrete cells of the nervous system that relay information between one another, giving rise to movements, perceptions, and emotions. (The two adversaries would begrudgingly share a Nobel Prize in 1906 for their work describing the nervous system.) © 1996-2016 WGBH Educational Foundation
Link ID: 22252 - Posted: 05.26.2016
By Emily Underwood One of the telltale signs of Alzheimer’s disease (AD) is sticky plaques of β-amyloid protein, which form around neurons and are thought by many scientists to bog down information processing and kill cells. For more than a decade, however, other researchers have fingered a second protein called tau, found inside brain cells, as a possible culprit. Now, a new imaging study of 10 people with mild AD suggests that tau deposits—not amyloid—are closely linked to symptoms such as memory loss and dementia. Although this evidence won’t itself resolve the amyloid-tau debate, the finding could spur more research into new, tau-targeting treatments and lead to better diagnostic tools, researchers say. Scientists have long used an imaging technique called positron emission tomography (PET) to visualize β-amyloid deposits marked by radioactive chemical tags in the brains of people with AD. Combined with postmortem analyses of brain tissue, these studies have demonstrated that people with AD have far more β-amyloid plaques in their brains than healthy people, at least as a general rule. But they have also revealed a puzzle: Roughly 30% of people without any signs of dementia have brains “chock-full” of β-amyloid at autopsy, says neurologist Beau Ances at Washington University in St. Louis in Missouri. That mystery has inspired many in the AD field to ask whether a second misfolded protein, tau, is the real driver of the condition’s neurodegeneration and symptoms, or at least an important accomplice. Until recently, the only ways to test that hypothesis were to measure tau in brain tissue after a person died, or in a sample of cerebrospinal fluid (CSF) extracted from a living person by needle. But in the past several years, researchers have developed PET imaging agents that can harmlessly bind to tau in the living brain. 
The more tau deposits found in the temporal lobe, a brain region associated with memory, the more likely a person was to show deficits on a battery of memory and attention tests, the team reports today in Science Translational Medicine. © 2016 American Association for the Advancement of Science.
By Matthew Hutson Last week, Nature, the world’s most prestigious science journal, published a beautiful picture of a brain on its cover. The computer-generated image, taken from a paper in the issue, showed the organ’s outer layer almost completely covered with sprinkles of colorful words. The paper presents a “semantic map” revealing which parts of the brain’s cortex—meaning its outer layer, the one responsible for higher thought—respond to various spoken words. The study has generated widespread interest, receiving coverage from newspapers and websites around the world. The paper was also accompanied by an online interactive model that allowed users to explore exactly how words are mapped in our brains. The combination yielded a popular frenzy, one prompting the question: Why are millions of people suddenly so interested in the neuroanatomical distribution of linguistic representations? Have they run out of cat videos? The answer, I think, is largely the same as the answer to why “This Is Your Brain on X” (where X = food, politics, sex, podcasts, whatever) is a staple of news headlines, often residing above an fMRI image of a brain lit up in fascinating, mysterious patterns: People have a fundamental misunderstanding of the field of neuroscience and what it can tell us. But before explaining why people shouldn’t be excited about this research, let’s look at what the research tells us and why we should be excited. Different parts of the brain process different elements of thought, and some regions of the cortex are organized into “maps” such that the distance between different locations corresponds to the physical and/or conceptual distance between what it represents.
By BENEDICT CAREY Listening to music may make the daily commute tolerable, but streaming a story through the headphones can make it disappear. You were home; now you’re at your desk: What happened? Storytelling happened, and now scientists have mapped the experience of listening to podcasts, specifically “The Moth Radio Hour,” using a scanner to track brain activity. In a paper published Wednesday by the journal Nature, a research team from the University of California, Berkeley, laid out a detailed map of the brain as it absorbed and responded to a story. Widely dispersed sensory, emotional and memory networks were humming, across both hemispheres of the brain; no story was “contained” in any one part of the brain, as some textbooks have suggested. The team, led by Alexander Huth, a postdoctoral researcher in neuroscience, and Jack Gallant, a professor of psychology, had seven volunteers listen to episodes of “The Moth” — first-person stories of love, loss, betrayal, flight from an abusive husband, and more — while recording brain activity with an M.R.I. machine. Using novel computational methods, the group broke down the stories into units of meaning: social elements, for example, like friends and parties, as well as locations and emotions. They found that these concepts fell into 12 categories that tended to cause activation in the same parts of people’s brains at the same points throughout the stories. They then retested that model by seeing how it predicted M.R.I. activity while the volunteers listened to another Moth story. Would related words like mother and father, or times, dates and numbers trigger the same parts of people’s brains? The answer was yes. © 2016 The New York Times Company
Ian Sample Science editor Scientists have created an “atlas of the brain” that reveals how the meanings of words are arranged across different regions of the organ. Like a colourful quilt laid over the cortex, the atlas displays in rainbow hues how individual words and the concepts they convey can be grouped together in clumps of white matter. “Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. No single brain region holds one word or concept. A single brain spot is associated with a number of related words. And each single word lights up many different brain spots. Together they make up networks that represent the meanings of each word we use: life and love; death and taxes; clouds, Florida and bra. All light up their own networks. Described as a “tour de force” by one researcher who was not involved in the study, the atlas demonstrates how modern imaging can transform our knowledge of how the brain performs some of its most important tasks. With further advances, the technology could have a profound impact on medicine and other fields. “It is possible that this approach could be used to decode information about what words a person is hearing, reading, or possibly even thinking,” said Alexander Huth, the first author on the study. One potential use would be a language decoder that could allow people silenced by motor neurone disease or locked-in syndrome to speak through a computer. © 2016 Guardian News and Media Limited
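The “predict brain activity from word meanings” step that both accounts describe is, at heart, a regularised linear regression: each moment of a story is coded with semantic features, per-voxel weights are fit, and the model is tested on a held-out story. A toy numpy sketch under those assumptions — the data are random, and apart from the 12 features echoing the 12 categories mentioned above, nothing here comes from the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each time point of a "story" is coded by hypothetical semantic
# features (social, place, number, ...); voxels respond as weighted
# combinations of those features plus noise.
n_time, n_feat, n_vox = 300, 12, 50
X = rng.normal(size=(n_time, n_feat))            # semantic features
W_true = rng.normal(size=(n_feat, n_vox))        # hidden voxel tuning
Y = X @ W_true + 0.5 * rng.normal(size=(n_time, n_vox))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ Y)

W = ridge_fit(X[:200], Y[:200])                  # fit on one "story"
pred = X[200:] @ W                               # predict a held-out "story"
r = float(np.corrcoef(pred.ravel(), Y[200:].ravel())[0, 1])
```

The held-out prediction accuracy `r` is the analogue of the “retested that model” step: if the learned weights generalise to a new story, the semantic map is capturing something real.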
By Simon Makin Everyone's brain is different. Until recently neuroscience has tended to gloss this over by averaging results from many brain scans in trying to elicit general truths about how the organ works. But in a major development within the field researchers have begun documenting how brain activity differs between individuals. Such differences had been largely thought of as transient and uninteresting but studies are starting to show that they are innate properties of people's brains, and that knowing them better might ultimately help treat neurological disorders. The latest study, published April 8 in Science, found that the brain activity of individuals who were just biding their time in a brain scanner contained enough information to predict how their brains would function during a range of ordinary activities. The researchers used these at-rest signatures to predict which regions would light up—which groups of brain cells would switch on—during gambling, reading and other tasks they were asked to perform in the scanner. The technique might be used one day to assess whether certain areas of the brains of people who are paralyzed or in a comatose state are still functional, the authors say. The study capitalizes on a relatively new method of brain imaging that looks at what is going on when a person essentially does nothing. The technique stems from the mid-1990s work of biomedical engineer Bharat Biswal, now at New Jersey Institute of Technology. Biswal noticed that scans he had taken while participants were resting in a functional magnetic resonance imaging (fMRI) scanner displayed orderly, low-frequency oscillations. He had been looking for ways to remove background noise from fMRI signals but quickly realized these oscillations were not noise. His work paved the way for a new approach known as resting-state fMRI. © 2016 Scientific American
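Biswal’s observation reduces to something concrete: time series from two resting brain regions share slow fluctuations, so they correlate even though the subject is doing nothing. A minimal simulation of that seed-based correlation (all numbers are illustrative assumptions, not measured values):

```python
import numpy as np

rng = np.random.default_rng(4)

tr, n_vols = 2.0, 300                    # 2 s repetition time, 300 volumes
t = np.arange(n_vols) * tr

# Two regions sharing a slow (~0.05 Hz) fluctuation, buried in
# independent noise; a third region sits outside the network.
slow = np.sin(2 * np.pi * 0.05 * t)
region_a = slow + rng.normal(scale=1.0, size=n_vols)
region_b = slow + rng.normal(scale=1.0, size=n_vols)
region_c = rng.normal(scale=1.0, size=n_vols)

def functional_connectivity(a, b):
    """Seed-based connectivity: Pearson correlation of two BOLD series."""
    return float(np.corrcoef(a, b)[0, 1])

fc_in = functional_connectivity(region_a, region_b)    # within network
fc_out = functional_connectivity(region_a, region_c)   # outside network
```

Regions that share the slow oscillation correlate reliably while unrelated regions do not — the orderly structure Biswal realised was signal rather than noise.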
By Frank McGurty More than 40 percent of retired NFL players tested with advanced scanning technology showed signs of traumatic brain injury, a much higher rate than in the general population, according to a new study of the long-term risks of playing American football. The research, presented at an American Academy of Neurology meeting that began in Vancouver on Monday, is one of the first to provide "objective evidence" of traumatic brain injury in a large sample of National Football League veterans while they are living, said Dr. Francis X. Conidi, one of the study's authors. Conidi, a neurologist at the Florida Center for Headache and Sports Neurology and a faculty member at the Florida State University College of Medicine, said traumatic brain injury was often a "precursor" to CTE, a degenerative brain disease. "What we do know is that players with traumatic brain injury have a high incidence of going on to develop neurological degenerative disease later on in life," Conidi told Reuters. CTE, or chronic traumatic encephalopathy, has been found in dozens of the NFL's top players after they died. At present, a CTE diagnosis is only possible after death. The brain tissue of 59 of 62 deceased former NFL players examined by Boston University's CTE Center has tested positive for CTE, according to its website. The disease, which can lead to aggression and dementia, may have led to the suicides of several NFL athletes, including Hall of Famer Junior Seau. In the new study, the largest of its kind, 40 living former players were given sensitive brain scans, known as diffusion tensor imaging (DTI), as well as thinking and memory tests. © 2016 Scientific American,
Ian Dunt There is a remarkable lack of research into a drug that some scientists initially considered to be a key tool in understanding consciousness, and that has since been shown to help people deal with anxiety and depression. The new study on the impact of LSD on the brain is the first in the UK since the drug was banned in 1966. Incredibly, it’s also the first anywhere to use brain scans taken while a person is under the influence of the drug. Nowadays, we associate LSD with hippies murmuring about the nature of reality, but it wasn’t always this way. Between the invention of the drug in 1952 and its banning in the UK, around a thousand papers on it were published. Then LSD was made illegal. The UK Home Office promised to allow scientists to continue experiments with the drug, and it’s true that they remain legal. But they are also effectively impossible. The obstacles against research – regulatory, financial, professional and political – are just too high for any sensible person to cope with. Research using outlawed drugs with no accepted medical value requires a “schedule 1” licence from the Home Office. It takes about a year to get and involves a barrage of criminal record checks. All told, its price tag comes in at about £5000, with a costly annual top-up assessment to follow. © Copyright Reed Business Information Ltd.
Helen Shen Clamping an electrode to the brain cell of a living animal to record its electrical chatter is a task that demands finesse and patience. Known as ‘whole-cell patch-clamping’, it is reputedly the “finest art in neuroscience”, says neurobiologist Edward Boyden, and one that only a few dozen laboratories around the world specialize in. But researchers are trying to demystify this art by turning it into a streamlined, automated technique that any laboratory could attempt, using robotics and downloadable source code. “Patch-clamping provides a unique view into neural circuits, and it’s a very exciting technique but is really underused,” says neuroscientist Karel Svoboda at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia. “That’s why automation is a really, really exciting direction.” On 3 March, Boyden, at the Massachusetts Institute of Technology in Cambridge, and his colleagues published detailed instructions on how to assemble and operate an automated system for whole-cell patch-clamping, a concept that they first described in 2012. The guide represents the latest fruits of Boyden’s partnership with the laboratory of Craig Forest, a mechanical engineer at the Georgia Institute of Technology in Atlanta who specializes in robotic automation for research. © 2016 Nature Publishing Group
Keyword: Brain imaging
Link ID: 22066 - Posted: 04.04.2016
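The heart of such automation is a control loop: advance the pipette while monitoring its electrical resistance, treat a sustained resistance jump as contact with a neuron, then switch to sealing and break-in. The sketch below captures only that general logic — the thresholds and stage names are illustrative guesses, not the published parameters of the Boyden and Forest system:

```python
def autopatch_step(resistance_history, baseline, state):
    """One decision step of a simplified patch-clamping loop.

    `resistance_history` holds recent pipette resistance readings in
    megaohms; `baseline` is the resistance measured in free solution.
    All thresholds here are hypothetical, for illustration only."""
    r = resistance_history[-1]
    if state == "hunting":
        # A sustained rise above baseline suggests the tip has touched
        # a cell membrane; the robot would stop advancing and apply
        # suction to begin forming a seal.
        if len(resistance_history) >= 3 and all(
                x > 1.1 * baseline for x in resistance_history[-3:]):
            return "sealing"
        return "hunting"
    if state == "sealing":
        # A gigaseal: resistance climbs into the gigaohm range, after
        # which brief suction pulses break into the cell for recording.
        if r > 1000:                      # i.e. > 1 gigaohm
            return "break_in"
        return "sealing"
    return state

state = autopatch_step([5.0, 6.0, 6.0, 6.0], baseline=5.0, state="hunting")
```

Encoding this judgment in software is what lets a robot reproduce, step by step, what a skilled experimenter does by feel.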
Quirin Schiermeier & Alison Abbott The ability to study brain processes in real time is one of the goals of the Human Brain Project's newly released computing tools. Europe’s major brain-research project has unveiled a set of prototype computing tools and called on the global neuroscience community to start using them. The move marks the end of the 30-month ramp-up phase of the Human Brain Project (HBP), and the start of its operational phase. The release of the computing platforms — which include brain-simulation tools, visualization software and a pair of remotely accessible supercomputers to study brain processes in real time — could help to allay concerns about the €1-billion (US$1.1-billion) project’s benefits to the wider scientific community. “The new platforms open countless new possibilities to analyse the human brain,” said Katrin Amunts, a neuroscientist at the Jülich Research Centre in Germany and a member of the project’s board of directors, at a press conference on 30 March. “We are proud to offer the global brain community a chance to participate.” But it is not clear how the platforms — some freely accessible, others available only on the success of a peer-reviewed application — will resonate with brain researchers outside the project. “At this point, no one can say whether or not the research platforms will be a success,” says Andreas Herz, chair of computational neuroscience at the Ludwig Maximilian University of Munich in Germany. © 2016 Nature Publishing Group
Keyword: Brain imaging
Link ID: 22061 - Posted: 04.01.2016
By Matthew Hutson Earlier this month, a computer program called AlphaGo defeated a (human) world champion of the board game Go, years before most experts expected computers to rival the best flesh-and-bone players. But then last week, Microsoft was forced to silence its millennial-imitating chatbot Tay for blithely parroting Nazi propaganda and misogynistic attacks after just one day online, her failure a testimony to the often underestimated role of human sensibility in intelligent behavior. Why are we so compelled to pit human against machine, and why are we so bad at predicting the outcome? As the number of jobs susceptible to automation rises, and as Stephen Hawking, Elon Musk, and Bill Gates warn that artificial intelligence poses an existential threat to humanity, it’s natural to wonder how humans measure up to our future robot overlords. But even those tracking technology’s progress in taking on human skills have a hard time setting an accurate date for the uprising. That’s in part because one prediction strategy popular among both scientists and journalists—benchmarking the human brain with digital metrics such as bits, hertz, and million instructions per second, or MIPS—is severely misguided. And doing so could warp our expectations of what technology can do for us and to us. Since their development, digital computers have become a standard metaphor for the mind and brain. The comparison makes sense, in that brains and computers both transform input into output. Most human brains, like computers, can also manipulate abstract symbols. (Think arithmetic or language processing.) But like any metaphor, this one has limitations.
By Emily Underwood This tangle of wiry filaments is not a bird’s nest or a root system. Instead, it’s the largest map to date of the connections between brain cells—in this case, about 200 from a mouse’s visual cortex. To map the roughly 1300 connections, or synapses, between the cells, researchers used an electron microscope to take millions of nanoscopic pictures from a speck of tissue not much bigger than a dust mite, carved into nearly 3700 slices. Then, teams of “annotators” traced the spindly projections of the synapses, digitally stitching stacked slices together to form the 3D map. The completed map reveals some interesting clues about how the mouse brain is wired: Neurons that respond to similar visual stimuli, such as vertical or horizontal bars, are more likely to be connected to one another than to neurons that carry out different functions, the scientists report online today in Nature. (In the image above, some neurons are color-coded according to their sensitivity to various line orientations.) Ultimately, by speeding up and automating the process of mapping such networks in both mouse and human brain tissue, researchers hope to learn how the brain’s structure enables us to sense, remember, think, and feel. © 2016 American Association for the Advancement of Science
Keyword: Brain imaging
Link ID: 22041 - Posted: 03.29.2016
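The headline finding of the connectome map — similarly tuned neurons are preferentially wired together — amounts to a comparison of connection rates. The toy simulation below builds a random wiring diagram with such a like-to-like bias baked in and then measures it, purely to illustrate the analysis; none of the numbers come from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 200
pref = rng.uniform(0, 180, size=n)        # preferred orientation, degrees

def delta_ori(a, b):
    """Angular difference between two orientation preferences (0-90 deg)."""
    d = abs(a - b) % 180
    return min(d, 180 - d)

# Toy wiring diagram with a "like-to-like" bias: similarly tuned
# neurons connect more often than dissimilarly tuned ones.
adj = np.zeros((n, n), dtype=bool)
for i in range(n):
    for j in range(n):
        if i != j:
            p = 0.10 if delta_ori(pref[i], pref[j]) < 30 else 0.02
            adj[i, j] = rng.random() < p

# Measure the bias back out of the wiring diagram, as an analysis
# of a reconstructed connectome would.
similar = [adj[i, j] for i in range(n) for j in range(n)
           if i != j and delta_ori(pref[i], pref[j]) < 30]
different = [adj[i, j] for i in range(n) for j in range(n)
             if i != j and delta_ori(pref[i], pref[j]) >= 30]
rate_similar = float(np.mean(similar))
rate_different = float(np.mean(different))
```

In the real study the tuning came from functional imaging and the adjacency matrix from tracing some 1300 synapses through electron-microscope slices; the statistical comparison, though, has this same shape.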