Chapter 2. Cells and Structures: The Anatomy of the Nervous System
By Teal Burrell In neuroscience, neurons get all the glory. Or rather, they used to. Researchers are beginning to discover the importance of something outside the neurons—a structure called the perineuronal net. This net might reveal how memories are stored and how various diseases ravage the brain. The realization of important roles for structures outside neurons serves as a reminder that the brain is a lot more complicated than we thought. Or, it’s exactly as complicated as neuroscientists thought it was 130 years ago.

In 1882, Italian physician and scientist Camillo Golgi described a structure that enveloped cells in the brain in a thin layer. He later named it the pericellular net. His word choice was deliberate; he carefully avoided the word “neuron” since he was engaged in a battle with another neuroscience luminary, Santiago Ramón y Cajal, over whether the nervous system was a continuous meshwork of cells that were fused together—Golgi’s take—or a collection of discrete cells, called neurons—Ramón y Cajal’s view.

Ramón y Cajal wasn’t having it. He argued Golgi was wrong about the existence of such a net, blaming the findings on Golgi’s eponymous staining technique, which, incidentally, is still used today. Ramón y Cajal’s influence was enough to shut down the debate. While some Golgi supporters labored in vain to prove the nets existed, their findings never took hold. Instead, over the next century, neuroscientists focused exclusively on neurons, the discrete cells of the nervous system that relay information between one another, giving rise to movements, perceptions, and emotions. (The two adversaries would begrudgingly share a Nobel Prize in 1906 for their work describing the nervous system.) © 1996-2016 WGBH Educational Foundation
Link ID: 22252 - Posted: 05.26.2016
By Emily Underwood One of the telltale signs of Alzheimer’s disease (AD) is sticky plaques of β-amyloid protein, which form around neurons and are widely thought to bog down information processing and kill cells. For more than a decade, however, other researchers have fingered a second protein called tau, found inside brain cells, as a possible culprit. Now, a new imaging study of 10 people with mild AD suggests that tau deposits—not amyloid—are closely linked to symptoms such as memory loss and dementia. Although this evidence won’t itself resolve the amyloid-tau debate, the finding could spur more research into new, tau-targeting treatments and lead to better diagnostic tools, researchers say. Scientists have long used an imaging technique called positron emission tomography (PET) to visualize β-amyloid deposits marked by radioactive chemical tags in the brains of people with AD. Combined with postmortem analyses of brain tissue, these studies have demonstrated that, as a general rule, people with AD have far more β-amyloid plaques in their brains than healthy people. But they have also revealed a puzzle: Roughly 30% of people without any signs of dementia have brains “chock-full” of β-amyloid at autopsy, says neurologist Beau Ances at Washington University in St. Louis. That mystery has inspired many in the AD field to ask whether a second misfolded protein, tau, is the real driver of the condition’s neurodegeneration and symptoms, or at least an important accomplice. Until recently, the only ways to test that hypothesis were to measure tau in brain tissue after a person died, or in a sample of cerebrospinal fluid (CSF) extracted from a living person by needle. But in the past several years, researchers have developed PET imaging agents that can harmlessly bind to tau in the living brain.
The more tau deposits found in the temporal lobe, a brain region associated with memory, the more likely a person was to show deficits on a battery of memory and attention tests, the team reports today in Science Translational Medicine. © 2016 American Association for the Advancement of Science.
By Matthew Hutson Last week, Nature, the world’s most prestigious science journal, published a beautiful picture of a brain on its cover. The computer-generated image, taken from a paper in the issue, showed the organ’s outer layer almost completely covered with sprinkles of colorful words. The paper presents a “semantic map” revealing which parts of the brain’s cortex—meaning its outer layer, the one responsible for higher thought—respond to various spoken words. The study has generated widespread interest, receiving coverage from newspapers and websites around the world. The paper was also accompanied by an online interactive model that allowed users to explore exactly how words are mapped in our brains. The combination yielded a popular frenzy, one prompting the question: Why are millions of people suddenly so interested in the neuroanatomical distribution of linguistic representations? Have they run out of cat videos? The answer, I think, is largely the same as the answer to why “This Is Your Brain on X” (where X = food, politics, sex, podcasts, whatever) is a staple of news headlines, often residing above an fMRI image of a brain lit up in fascinating, mysterious patterns: People have a fundamental misunderstanding of the field of neuroscience and what it can tell us. But before explaining why people shouldn’t be excited about this research, let’s look at what the research tells us and why we should be excited. Different parts of the brain process different elements of thought, and some regions of the cortex are organized into “maps” such that the distance between different locations corresponds to the physical and/or conceptual distance between what they represent.
By BENEDICT CAREY Listening to music may make the daily commute tolerable, but streaming a story through the headphones can make it disappear. You were home; now you’re at your desk: What happened? Storytelling happened, and now scientists have mapped the experience of listening to podcasts, specifically “The Moth Radio Hour,” using a scanner to track brain activity. In a paper published Wednesday by the journal Nature, a research team from the University of California, Berkeley, laid out a detailed map of the brain as it absorbed and responded to a story. Widely dispersed sensory, emotional and memory networks were humming, across both hemispheres of the brain; no story was “contained” in any one part of the brain, as some textbooks have suggested. The team, led by Alexander Huth, a postdoctoral researcher in neuroscience, and Jack Gallant, a professor of psychology, had seven volunteers listen to episodes of “The Moth” — first-person stories of love, loss, betrayal, flight from an abusive husband, and more — while recording brain activity with an M.R.I. machine. Using novel computational methods, the group broke down the stories into units of meaning: social elements, for example, like friends and parties, as well as locations and emotions. They found that these concepts fell into 12 categories that tended to cause activation in the same parts of people’s brains at the same points throughout the stories. They then retested that model by seeing how it predicted M.R.I. activity while the volunteers listened to another Moth story. Would related words like mother and father, or times, dates and numbers trigger the same parts of people’s brains? The answer was yes. © 2016 The New York Times Company
Ian Sample Science editor Scientists have created an “atlas of the brain” that reveals how the meanings of words are arranged across different regions of the organ. Like a colourful quilt laid over the cortex, the atlas displays in rainbow hues how individual words and the concepts they convey can be grouped together in clumps of cortex. “Our goal was to build a giant atlas that shows how one specific aspect of language is represented in the brain, in this case semantics, or the meanings of words,” said Jack Gallant, a neuroscientist at the University of California, Berkeley. No single brain region holds one word or concept. A single brain spot is associated with a number of related words. And each single word lights up many different brain spots. Together they make up networks that represent the meanings of each word we use: life and love; death and taxes; clouds, Florida and bra. All light up their own networks. Described as a “tour de force” by one researcher who was not involved in the study, the atlas demonstrates how modern imaging can transform our knowledge of how the brain performs some of its most important tasks. With further advances, the technology could have a profound impact on medicine and other fields. “It is possible that this approach could be used to decode information about what words a person is hearing, reading, or possibly even thinking,” said Alexander Huth, the first author on the study. One potential use would be a language decoder that could allow people silenced by motor neurone disease or locked-in syndrome to speak through a computer. © 2016 Guardian News and Media Limited
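Under the hood, semantic atlases of this kind come from voxelwise "encoding models": each moment of the story is described by a vector of word features, a regularized regression learns how strongly every voxel responds to each feature, and the model is validated by predicting responses to a held-out story. The sketch below is a toy, self-contained version of that idea on simulated data; the feature count, voxel count, and noise level are invented for illustration, and the real analyses use far richer semantic feature spaces with cross-validated ridge penalties.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "story": at each time point, a vector of word features.
n_time, n_feat, n_vox = 200, 10, 50
X_train = rng.standard_normal((n_time, n_feat))      # feature time series
true_W = rng.standard_normal((n_feat, n_vox))        # hidden feature-to-voxel weights
Y_train = X_train @ true_W + 0.5 * rng.standard_normal((n_time, n_vox))

# Ridge regression, solved in closed form: W = (X'X + aI)^-1 X'Y.
# Each voxel gets its own column of weights.
alpha = 1.0
W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_feat),
                    X_train.T @ Y_train)

# Validate on a held-out "story": correlate predicted and measured responses.
X_test = rng.standard_normal((100, n_feat))
Y_test = X_test @ true_W + 0.5 * rng.standard_normal((100, n_vox))
Y_pred = X_test @ W

r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
              for v in range(n_vox)])
print(f"mean held-out correlation: {r.mean():.2f}")  # well above chance
```

In the published work, the learned weight vectors (not the predictions) are what get painted onto the cortical surface: projecting each voxel's weights onto a few dominant semantic axes yields the rainbow-colored map.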
By Simon Makin Everyone's brain is different. Until recently, neuroscience tended to gloss this over, averaging results from many brain scans in the quest for general truths about how the organ works. But in a major development within the field, researchers have begun documenting how brain activity differs between individuals. Such differences had been largely thought of as transient and uninteresting, but studies are starting to show that they are innate properties of people's brains, and that knowing them better might ultimately help treat neurological disorders. The latest study, published April 8 in Science, found that the brain activity of individuals who were just biding their time in a brain scanner contained enough information to predict how their brains would function during a range of ordinary activities. The researchers used these at-rest signatures to predict which regions would light up—which groups of brain cells would switch on—during gambling, reading and other tasks they were asked to perform in the scanner. The technique might be used one day to assess whether certain areas of the brains of people who are paralyzed or in a comatose state are still functional, the authors say. The study capitalizes on a relatively new method of brain imaging that looks at what is going on when a person essentially does nothing. The technique stems from the mid-1990s work of biomedical engineer Bharat Biswal, now at New Jersey Institute of Technology. Biswal noticed that scans he had taken while participants were resting in a functional magnetic resonance imaging (fMRI) scanner displayed orderly, low-frequency oscillations. He had been looking for ways to remove background noise from fMRI signals but quickly realized these oscillations were not noise. His work paved the way for a new approach known as resting-state fMRI. © 2016 Scientific American
By Frank McGurty More than 40 percent of retired NFL players tested with advanced scanning technology showed signs of traumatic brain injury, a much higher rate than in the general population, according to a new study of the long-term risks of playing American football. The research, presented at an American Academy of Neurology meeting that began in Vancouver on Monday, is one of the first to provide "objective evidence" of traumatic brain injury in a large sample of National Football League veterans while they are living, said Dr. Francis X. Conidi, one of the study's authors. Conidi, a neurologist at the Florida Center for Headache and Sports Neurology and a faculty member at the Florida State University College of Medicine, said traumatic brain injury was often a "precursor" to CTE, a degenerative brain disease. "What we do know is that players with traumatic brain injury have a high incidence of going on to develop neurological degenerative disease later on in life," Conidi told Reuters. CTE, or chronic traumatic encephalopathy, has been found in dozens of the NFL's top players after they died. At present, a CTE diagnosis is only possible after death. The brain tissue of 59 of 62 deceased former NFL players examined by Boston University's CTE Center has tested positive for CTE, according to its website. The disease, which can lead to aggression and dementia, may have led to the suicides of several NFL athletes, including Hall of Famer Junior Seau. In the new study, the largest of its kind, 40 living former players were given sensitive brain scans, known as diffusion tensor imaging (DTI), as well as thinking and memory tests. © 2016 Scientific American
Ian Dunt There is a remarkable lack of research into a drug that some scientists initially considered to be a key tool in understanding consciousness, and that has since been shown to help people deal with anxiety and depression. The new study on the impact of LSD on the brain is the first in the UK since the drug was banned in 1966. Incredibly, it’s also the first anywhere to use brain scans taken while a person is under the influence of the drug. Nowadays, we associate LSD with hippies murmuring about the nature of reality, but it wasn’t always this way. Between the invention of the drug in 1952 and its banning in the UK, around a thousand papers on it were published. Then LSD was made illegal. The UK Home Office promised to allow scientists to continue experiments with the drug, and it’s true that they remain legal. But they are also effectively impossible. The obstacles to research – regulatory, financial, professional and political – are just too high for any sensible person to cope with. Research using outlawed drugs with no accepted medical value requires a “schedule 1” licence from the Home Office. It takes about a year to get and involves a barrage of criminal record checks. All told, its price tag comes in at about £5,000, with a costly annual top-up assessment to follow. © Copyright Reed Business Information Ltd.
Helen Shen Clamping an electrode to the brain cell of a living animal to record its electrical chatter is a task that demands finesse and patience. Known as ‘whole-cell patch-clamping’, it is reputedly the “finest art in neuroscience”, says neurobiologist Edward Boyden, and one that only a few dozen laboratories around the world specialize in. But researchers are trying to demystify this art by turning it into a streamlined, automated technique that any laboratory could attempt, using robotics and downloadable source code. “Patch-clamping provides a unique view into neural circuits, and it’s a very exciting technique but is really underused,” says neuroscientist Karel Svoboda at the Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Virginia. “That’s why automation is a really, really exciting direction.” On 3 March, Boyden, at the Massachusetts Institute of Technology in Cambridge, and his colleagues published detailed instructions on how to assemble and operate an automated system for whole-cell patch-clamping, a concept that they first described in 2012. The guide represents the latest fruits of Boyden’s partnership with the laboratory of Craig Forest, a mechanical engineer at the Georgia Institute of Technology in Atlanta who specializes in robotic automation for research. © 2016 Nature Publishing Group
Keyword: Brain imaging
Link ID: 22066 - Posted: 04.04.2016
Quirin Schiermeier & Alison Abbott The ability to study brain processes in real time is one of the goals of the Human Brain Project's newly released computing tools. Europe’s major brain-research project has unveiled a set of prototype computing tools and called on the global neuroscience community to start using them. The move marks the end of the 30-month ramp-up phase of the Human Brain Project (HBP), and the start of its operational phase. The release of the computing platforms — which include brain-simulation tools, visualization software and a pair of remotely accessible supercomputers to study brain processes in real time — could help to allay concerns about the €1-billion (US$1.1-billion) project’s benefits to the wider scientific community. “The new platforms open countless new possibilities to analyse the human brain,” said Katrin Amunts, a neuroscientist at the Jülich Research Centre in Germany and a member of the project’s board of directors, at a press conference on 30 March. “We are proud to offer the global brain community a chance to participate.” But it is not clear how the platforms — some freely accessible, others available only on the success of a peer-reviewed application — will resonate with brain researchers outside the project. “At this point, no one can say whether or not the research platforms will be a success,” says Andreas Herz, chair of computational neuroscience at the Ludwig Maximilian University of Munich in Germany. © 2016 Nature Publishing Group
Keyword: Brain imaging
Link ID: 22061 - Posted: 04.01.2016
By Matthew Hutson Earlier this month, a computer program called AlphaGo defeated a (human) world champion of the board game Go, years before most experts expected computers to rival the best flesh-and-bone players. But then last week, Microsoft was forced to silence its millennial-imitating chatbot Tay for blithely parroting Nazi propaganda and misogynistic attacks after just one day online, her failure a testimony to the often underestimated role of human sensibility in intelligent behavior. Why are we so compelled to pit human against machine, and why are we so bad at predicting the outcome? As the number of jobs susceptible to automation rises, and as Stephen Hawking, Elon Musk, and Bill Gates warn that artificial intelligence poses an existential threat to humanity, it’s natural to wonder how humans measure up to our future robot overlords. But even those tracking technology’s progress in taking on human skills have a hard time setting an accurate date for the uprising. That’s in part because one prediction strategy popular among both scientists and journalists—benchmarking the human brain with digital metrics such as bits, hertz, and million instructions per second, or MIPS—is severely misguided. And doing so could warp our expectations of what technology can do for us and to us. Since their development, digital computers have become a standard metaphor for the mind and brain. The comparison makes sense, in that brains and computers both transform input into output. Most human brains, like computers, can also manipulate abstract symbols. (Think arithmetic or language processing.) But like any metaphor, this one has limitations.
By Emily Underwood This tangle of wiry filaments is not a bird’s nest or a root system. Instead, it’s the largest map to date of the connections between brain cells—in this case, about 200 from a mouse’s visual cortex. To map the roughly 1300 connections, or synapses, between the cells, researchers used an electron microscope to take millions of nanoscopic pictures from a speck of tissue not much bigger than a dust mite, carved into nearly 3700 slices. Then, teams of “annotators” traced the spindly projections of the synapses, digitally stitching stacked slices together to form the 3D map. The completed map reveals some interesting clues about how the mouse brain is wired: Neurons that respond to similar visual stimuli, such as vertical or horizontal bars, are more likely to be connected to one another than to neurons that carry out different functions, the scientists report online today in Nature. (In the image above, some neurons are color-coded according to their sensitivity to various line orientations.) Ultimately, by speeding up and automating the process of mapping such networks in both mouse and human brain tissue, researchers hope to learn how the brain’s structure enables us to sense, remember, think, and feel. © 2016 American Association for the Advancement of Science
Keyword: Brain imaging
Link ID: 22041 - Posted: 03.29.2016
By BENEDICT CAREY BEDFORD, Mass. — In a small room banked by refrigerators of preserved brains, a pathologist held a specimen up to the light in frank admiration. Then it was time to cut — once in half and then a thick slice from the back, the tissue dense and gray-pink, teeming with folds and swirls. It was the brain of a professional running back. “There,” said Dr. Ann McKee, the chief of neuropathology at the V.A. Boston Healthcare System and a professor of neurology and pathology at Boston University’s medical school, pointing to a key area that had an abnormal separation. “That’s one thing we look for right away.” Over the past several years, Dr. McKee’s lab, housed in a pair of two-story brick buildings in suburban Boston, has repeatedly made headlines by revealing that deceased athletes, including at least 90 former N.F.L. players, were found to have had a degenerative brain disease called chronic traumatic encephalopathy, or C.T.E., that is believed to cause debilitating memory and mood problems. This month, after years of denying or playing down a connection, a top N.F.L. official acknowledged at a hearing in Washington that playing football and having C.T.E. were “certainly” linked. His statement effectively ended a very public dispute over whether head blows sustained while playing football are associated with the disorder. But it will not resolve a quieter debate among scientists about how much risk each football player has of developing it, or answer questions about why some players seem far more vulnerable to it than others. Some researchers worry that the rising drumbeat of C.T.E. diagnoses is far outpacing scientific progress in pinpointing the symptoms, risks and prevalence of the disease. The American Academy of Clinical Neuropsychology, an organization of brain injury specialists, is preparing a public statement to point out that much of the science of C.T.E. is still unsettled and to contend that the evidence to date should not be interpreted to mean that parents must keep their children off sports teams, officials of the group say. © 2016 The New York Times Company
By Simon Makin Brain implants have been around for decades—stimulating motor areas to alleviate Parkinson's disease symptoms, for example—but until now they have all suffered from the same limitation: because brains move slightly during physical activity and as we breathe and our heart beats, rigid implants rub and damage tissue. This means that eventually, because of both movement and scar-tissue formation, they lose contact with the cells they were monitoring. Now a group of researchers, led by chemist Charles Lieber of Harvard University, has overcome these problems using a fine, flexible mesh. In 2012 the team showed that cells could be grown around such a mesh, but that left the problem of how to get one inside a living brain. The solution the scientists devised was to draw the mesh—measuring a few millimeters wide—into a syringe, so it would roll up like a scroll inside the 100-micron-wide needle, and inject it through a hole in the skull. In a study published in Nature Nanotechnology last year, the team injected meshes studded with 16 electrodes into two brain regions in mice. The mesh is composed of extremely thin, nanoscale polymer threads, sparsely distributed so that 95 percent of it is empty space. It has a level of flexibility similar to brain tissue. “You're starting to make this nonliving system look like the biological system you're trying to probe,” Lieber explains. “That's been the goal of my group's work, to blur the distinction between electronics as we know it and the computer inside our heads.” © 2016 Scientific American
Keyword: Brain imaging
Link ID: 21950 - Posted: 03.03.2016
Story by Amy Ellis Nutt She relaxed in the recliner, her eyes closed, her hands resting lightly in her lap. The psychiatrist’s assistant made small talk while pushing the woman’s hair this way and that, dabbing her head with spots of paste before attaching the 19 electrodes to her scalp. As the test started, her anxiety ticked up. And that’s when it began: the sensation of being locked in a vise. First, she couldn’t move. Then she was shrinking, collapsing in on herself like some human black hole. It was a classic panic attack — captured in vivid color on the computer screen that psychiatrist Hasan Asif was watching. “It’s going to be okay,” he said, his voice quiet and soothing. “Just stay with it.” The images playing out in front of him were entirely unexpected; this clearly wasn’t a resting state for his patient. With each surge of anxiety, a splotch of red bloomed on the computer screen. Excessive activity of high-energy brain waves near the top of her head indicated hyper-arousal and stress. Decreased activity in the front of her brain, where emotions are managed, showed she couldn’t summon the resources to keep calm.
Laura Sanders In a multivirus competition, a newcomer came out on top for its ability to transport genetic cargo to a mouse’s brain cells. The engineered virus AAV-PHP.B was best at delivering a gene that instructed Purkinje cells, the dots in the micrograph above, to take on a whitish glow. Unaffected surrounding cells in the mouse cerebellum look blue. Cargo carried by viruses like AAV-PHP.B could one day replace faulty genes in the brains of people. AAV-PHP.B beat out other viruses including a similar one called AAV9, which is already used to get genes into the brains of mice. Genes delivered by AAV-PHP.B also showed up in the spinal cord, retina and elsewhere in the body, Benjamin Deverman of Caltech and colleagues report in the February Nature Biotechnology. Similar competitions could uncover viruses with the ability to deliver genes to specific types of cells, the researchers write. Selective viruses that can also get into the brain would enable deeper studies of the brain and might improve gene therapy techniques in people. © Society for Science & the Public 2000 - 2016
By NATALIE ANGIER Whether to enliven a commute, relax in the evening or drown out the buzz of a neighbor’s recreational drone, Americans listen to music nearly four hours a day. In international surveys, people consistently rank music as one of life’s supreme sources of pleasure and emotional power. We marry to music, graduate to music, mourn to music. Every culture ever studied has been found to make music, and among the oldest artistic objects known are slender flutes carved from mammoth bone some 43,000 years ago — 24,000 years before the cave paintings of Lascaux. Given the antiquity, universality and deep popularity of music, many researchers had long assumed that the human brain must be equipped with some sort of music room, a distinctive piece of cortical architecture dedicated to detecting and interpreting the dulcet signals of song. Yet for years, scientists failed to find any clear evidence of a music-specific domain through conventional brain-scanning technology, and the quest to understand the neural basis of a quintessential human passion foundered. Now researchers at the Massachusetts Institute of Technology have devised a radical new approach to brain imaging that reveals what past studies had missed. By mathematically analyzing scans of the auditory cortex and grouping clusters of brain cells with similar activation patterns, the scientists have identified neural pathways that react almost exclusively to the sound of music — any music. It may be Bach, bluegrass, hip-hop, big band, sitar or Julie Andrews. A listener may relish the sampled genre or revile it. No matter. When a musical passage is played, a distinct set of neurons tucked inside a furrow of a listener’s auditory cortex will fire in response. Other sounds, by contrast — a dog barking, a car skidding, a toilet flushing — leave the musical circuits unmoved. Nancy Kanwisher and Josh H. McDermott, professors of neuroscience at M.I.T., and their postdoctoral colleague Sam Norman-Haignere reported their results in the journal Neuron. The findings offer researchers a new tool for exploring the contours of human musicality. © 2016 The New York Times Company
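The analysis behind this result decomposes a large voxels-by-sounds response matrix into a handful of underlying components, one of which turned out to be music-selective. The toy below illustrates the general idea on synthetic data, swapping the paper's custom voxel-decomposition algorithm for plain non-negative matrix factorization (Lee-Seung multiplicative updates); the matrix sizes and the two hidden "components" are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 60 "voxels" x 30 "sounds", mixed from 2 hidden components
# (imagine one music-selective profile and one speech-selective profile).
true_profiles = rng.random((2, 30))    # each component's response to each sound
true_weights = rng.random((60, 2))     # each voxel's loading on each component
V = true_weights @ true_profiles       # observed voxel-by-sound response matrix

# Non-negative matrix factorization: V ~ W @ H, Lee-Seung multiplicative updates.
k = 2
W = rng.random((60, k)) + 0.1
H = rng.random((k, 30)) + 0.1
for _ in range(1000):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Rows of H recover component response profiles; columns of W say how strongly
# each voxel expresses each component.
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

With real data one would inspect each recovered profile against the sound categories: a component that responds strongly to every musical clip and weakly to everything else is the kind of evidence the study reports.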
By Jonathan Webb Science reporter, BBC News Scientists have reproduced the wrinkled shape of a human brain using a simple gel model with two layers. They made a solid replica of a foetal brain, still smooth and unfolded, and coated it with a second layer which expanded when dunked into a solvent. That expansion produced a network of furrows that was remarkably similar to the pattern seen in a real human brain. This suggests that brain folds are caused by physics: the outer part grows faster than the rest, and crumples. Such straightforward, mechanical buckling is one of several proposed explanations for the distinctive twists and turns of the brain's outermost blanket of cells, called the "cortex". Alternatively, researchers have suggested that biochemical signals might trigger expansion and contraction in particular parts of the sheet, or that the folds arise because of stronger connections between specific areas. "There have been several hypotheses, but the challenge has been that they are difficult to test experimentally," said Tuomas Tallinen, a soft matter physicist at the University of Jyväskylä in Finland and a co-author of the study, which appears in Nature Physics. "I think it's very significant... that we can actually recreate the folding process using this quite simple, physical model." Humans are one of just a few animals - among them whales, pigs and some other primates - that possess these iconic undulations. In other creatures, and early in development, the cortex is smooth. The replica in the study was based on an MRI brain scan from a 22-week-old foetus - the stage just before folds usually appear. © 2016 BBC.
Keyword: Development of the Brain
Link ID: 21848 - Posted: 02.02.2016
By Neuroskeptic We’ve learned this week that computers can play Go. But at least there’s one human activity they will never master: neuroscience. A computer will never be a neuroscientist. Except… hang on. A new paper just out in Neuroimage describes something called The Automatic Neuroscientist. Oh. So what is this new neuro-robot? According to its inventors, Romy Lorenz and colleagues of Imperial College London, it’s a framework for using “real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state.” It works like this. You put someone in an MRI scanner and start an fMRI sequence to record their brain activity. The Automatic Neuroscientist (TAN) shows them a series of different stimuli (e.g. images or sounds) and measures the neural responses. It then learns which stimuli activate different parts of the brain, and works out the best stimuli in order to elicit a particular target pattern of brain activity (which is specified at the outset). This is not an entirely new idea, as Lorenz et al. acknowledge, but they say that theirs is the first general framework. Lorenz et al. conducted a proof-of-concept study in which they asked TAN to maximize the difference in brain activity between the lateral occipital cortex (LOC) and superior temporal cortex, by presenting visual and auditory stimuli of varying levels of complexity.
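The loop described above (present a stimulus, read out the evoked activity, choose the next stimulus to move closer to the target brain state) can be caricatured in a few lines. The real system drives real-time fMRI with Bayesian optimization; the toy below replaces the scanner with an invented noisy response function of a single stimulus parameter, and the optimizer with a simple shrinking random search. Every number here is made up for illustration.

```python
import math
import random

random.seed(42)

# Hypothetical stand-in for the scanner: the evoked activation difference as a
# noisy function of one stimulus parameter (say, visual complexity in [0, 1]),
# peaking at an unknown optimum the experimenter wants to find.
OPTIMUM = 0.7
def measure_response(stim):
    return math.exp(-((stim - OPTIMUM) ** 2) / 0.02) + random.gauss(0, 0.05)

# Closed loop: propose a stimulus near the best one seen so far, measure the
# response, keep the winner, and narrow the search as evidence accumulates.
best_stim, best_resp = 0.5, -float("inf")
width = 0.5
for trial in range(60):
    stim = min(1.0, max(0.0, best_stim + random.uniform(-width, width)))
    resp = measure_response(stim)
    if resp > best_resp:
        best_stim, best_resp = stim, resp
    width *= 0.93

print(f"estimated optimal stimulus: {best_stim:.2f}")  # converges near 0.7
```

Bayesian optimization does the same job more sample-efficiently by fitting a probabilistic model of the response surface and choosing each next stimulus to balance exploring uncertain regions against exploiting the current best guess, which matters when every "sample" costs a minute of scanner time.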
By Simon Makin [Image: multicolor rendering of a whole brain, created with SUMA, a computer image-processing program used to make sense of functional magnetic resonance imaging (fMRI) data. Credit: National Institute of Mental Health, National Institutes of Health] Understanding how brains work is one of the greatest scientific challenges of our times, but despite the impression sometimes given in the popular press, researchers are still a long way from some basic levels of understanding. A project recently funded by the Obama administration's BRAIN (Brain Research through Advancing Innovative Neurotechnologies) initiative is one of several approaches promising to deliver novel insights by developing new tools that involve a marriage of nanotechnology and optics. There are close to 100 billion neurons in the human brain. Researchers know a lot about how these individual cells behave, primarily through “electrophysiology,” which involves sticking fine electrodes into cells to record their electrical activity. We also know a fair amount about the gross organization of the brain into partially specialized anatomical regions, thanks to whole-brain imaging technologies like functional magnetic resonance imaging (fMRI), which measure how blood oxygen levels change as regions that work harder demand more oxygen to fuel metabolism. We know little, however, about how the brain is organized into distributed “circuits” that underlie faculties like memory or perception. And we know even less about how, or even if, cells are arranged into “local processors” that might act as components in such networks. © 2016 Scientific American
Keyword: Brain imaging
Link ID: 21840 - Posted: 02.01.2016