Chapter 2. Cells and Structures: The Anatomy of the Nervous System
Joseph Brean U.S. President Barack Obama’s much-hyped BRAIN initiative to crack the mysteries of consciousness via a finely detailed map of the brain in action took its first big step this week, with the release of a strategy report that foresees “revolutionary advances” in the $100-million effort to “crack the brain’s code,” perhaps in as little as “a few years.” “We stand on the verge of a great journey into the unknown,” the report says, explicitly comparing BRAIN to the Apollo moon shot, and predicting it will “change human society forever.” As a grand challenge, Apollo was an unambiguous success, despite the vast expense and human costs, but there is a growing sense among scientists, if not legacy-minded politicians, that the road ahead for modern neuroscience will be pocked with disappointment, with more impenetrable mysteries than solvable problems. As the world approaches what some are calling “peak neuro,” after three decades of over-hyped “brain porn,” the optimistic hope is that Mr. Obama’s BRAIN project will lead to a detailed and dynamic map of the brain, and thus reveal both how it works and how it fails in such diseases as Alzheimer’s or autism. The pessimistic fear, however, is that the “speed of thought,” as Mr. Obama described it, is just too quick for our current brain imaging technologies, primarily functional magnetic resonance imaging (fMRI). As the anonymous blogger Neuroskeptic, a British brain scientist who tracks the misinterpretation of brain scan studies by both scientists and media, put it in an email, “there’s just as much hype and misrepresentation as ever.” The more we learn about the brain, the less we seem to know. With its potential overstated and its aspirations presented as foregone conclusions, the relatively new field of neuroscience is in a period of self-reflection, said Jackie Sullivan, a philosopher of neuroscience at Western University in London, Ont.
“The vast majority of neuroscientists are well aware that the goals going forward need to be more modest,” she said. © 2013 National Post
by Andy Coghlan The two major brain abnormalities that underlie Alzheimer's disease can now be viewed simultaneously in brain scans while people are still alive, providing new insight into how the disease develops and whether drugs are working. The breakthrough comes from the development of a harmless tracer chemical that is injected into the bloodstream and accumulates exclusively in "tau tangles" – one type of abnormality that occurs in the brains of people with Alzheimer's and other kinds of dementia. Radiation emitted from the chemical is picked up using positron emission tomography (PET), showing exactly where the tangles are. The tracer remains in the brain for a few hours before being broken down and expelled from the body. Similar tracers already exist for beta amyloid plaques, the other major anatomical feature of Alzheimer's, so the one for tau tangles completes the picture. "This is a big step forward," says John Hardy, an Alzheimer's researcher at University College London. "This is of critical significance, as tau lesions are known to be more intimately associated with neuronal loss than plaques," says Makoto Higuchi of the National Institute of Radiological Sciences in Chiba, Japan, and head of the team that developed the new tracer. The tracer could help researchers unravel exactly how Alzheimer's develops, and enable earlier diagnosis and monitoring of treatments. © Copyright Reed Business Information Ltd.
Posted by Gary Marcus On Monday, the National Institutes of Health released a fifty-eight-page report on the future of neuroscience—the first substantive step in developing President Obama’s BRAIN Initiative, which seeks to “revolutionize our understanding of the human mind and uncover new ways to treat, prevent, and cure brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury.” Assembled by an advisory panel of fifteen scientists led by Cori Bargmann, of Rockefeller University, and William Newsome, of Stanford, the report assesses the state of neuroscience and offers a vision for the field’s future. The core challenge, as the report puts it, is simply that “brains—even small ones—are dauntingly complex”: Information flows in parallel through many different circuits at once; different components of a single functional circuit may be distributed across many brain structures and be spatially intermixed with the components of other circuits; feedback signals from higher levels constantly modulate the activity within any given circuit; and neuromodulatory chemicals can rapidly alter the effective wiring of any circuit. To tackle the brain’s immense complexity, the report outlines nine goals for the initiative. No effort to study the brain is likely to succeed without devoting serious attention to all nine, which range from creating structural maps of its static, physical connections to developing new ways of recording continuous, dynamic activity as it perceives the world and directs action. A less flashy, equally critical goal is to create a “census” of the brain’s basic cell types, which neuroscientists haven’t yet established. (The committee also devotes attention to ethical questions that could arise, such as what should happen if neural enhancement—the use of engineering to alter the brain—becomes a realistic possibility.) © 2013 Condé Nast.
Keyword: Brain imaging
Link ID: 18668 - Posted: 09.18.2013
By JAMES GORMAN In the first hint of how the Brain Initiative announced by President Obama in April could take shape, an advisory group on Monday recommended that the main target of research by the National Institutes of Health should be systems and circuits involving thousands to millions of brain cells — not the entire brain or individual cells and molecules. The National Institutes of Health working group was meant to focus specifically on how the federal agency should spend its $40 million brain initiative budget in 2014. However, Dr. Rafael Yuste, a neuroscientist at Columbia University who was not a member of the group, said that the recommendations, which he agreed with, were so ambitious that it “could be a charter for neuroscience for the next 10 to 15 years.” Dr. Francis S. Collins, director of the N.I.H., who accepted the report and its recommendations, said that he had asked the group, led by Cori Bargmann of Rockefeller University and Bill Newsome of Stanford, to think big, and that it would be the job of the N.I.H. to make actual spending decisions. Dr. Bargmann agreed that the overall goal of figuring out “how circuits in the brain generate complex thoughts and behavior” was not something to be tackled with the $40 million that the N.I.H. hopes to have for 2014. “You can’t do all of that in year one, you can’t do all of that with $40 million, and you can’t do all of that at N.I.H. either,” she said. The $40 million for the N.I.H. is part of a White House proposal for $100 million in spending on the initiative in the 2014 budget. The initiative also includes money for the National Science Foundation and the Defense Advanced Research Projects Agency. Several major private research foundations are also joining in the effort with their own research. © 2013 The New York Times Company
Keyword: Brain imaging
Link ID: 18653 - Posted: 09.17.2013
By Jay Van Bavel and Dominic Packer On the heels of the decade of the brain and the development of neuroimaging, it is nearly impossible to open a science magazine or walk through a bookstore without encountering images of the human brain. As the prominent neuroscientist Martha Farah remarked, “Brain images are the scientific icon of our age, replacing Bohr’s planetary atom as the symbol of science.” The rapid rise to prominence of cognitive neuroscience has been accompanied by an equally swift rise in practitioners and snake oil salesmen who make promises that neuroimaging cannot yet deliver. Critics inside and outside the discipline have been swift to condemn sloppy claims that MRI can tell us whom we plan to vote for, if we love our iPhones, and why we believe in God. Yet the constant parade of overtrumped results has led to the rise of “the new neuro-skeptics,” who argue that neuroscience is either unable to answer the interesting questions, or worse, that scientists have simply been seduced by the flickering lights of the brain. The notion that MRI images have attained an undue influence over scientists, granting agencies, and the public gained traction in 2008 when psychologists David McCabe and Alan Castel published a paper showing that brain images could be used to deceive. In a series of experiments, they found that Colorado State University undergraduates rated descriptions of scientific studies higher in scientific reasoning if they were accompanied by a 3-D image of the brain, rather than a mere bar graph or a topographic map of brain activity on the scalp (presumably from electroencephalography). © 2013 Scientific American
By Ben Thomas As Albert Einstein famously said, “No problem can be solved from the same level of consciousness that created it.” The history of science is littered with so-called “intractable” problems that researchers later cracked wide open using techniques their ancestors could hardly imagine. Biologists in the 1950s looked at the staggeringly complex (and beautiful) three-dimensional shapes into which proteins fold and declared that a reliably predictive mathematical model of these convolutions might be unachievable in our lifetimes. But over the past few years, folks with home computers have joined forces to crack many longstanding protein-folding problems using the online game FoldIt. Instead of relying on the number-crunching power of a single supercomputer or network, crowdsourced games like FoldIt translate vast and complex data sets into simple online interfaces that anyone can learn to operate. The crowdsourced astronomy game Galaxy Zoo also depends on an army of “citizen scientists” to classify distant galaxies, while Google built its image search technology on an image-labeling game. In fact, every time you “verify your humanity” on a web form by typing out nonsensical reCAPTCHA text, you’re actually helping Google transcribe books from the world’s libraries into a digital format. And now, a worldwide team of neuroscience researchers has begun using this crowdsourced approach to crack open one of the greatest problems in any scientific field: the construction of a complete wiring diagram for a mammalian brain. © 2013 Scientific American
Keyword: Brain imaging
Link ID: 18634 - Posted: 09.12.2013
By Michele Solis Like truth and beauty, pain is subjective and hard to pin down. What hurts one moment might not register the next, and our moods and thoughts color the experience of pain. According to a report in April in the New England Journal of Medicine, however, researchers may one day be able to measure the experience of pain by scanning the brain—a much-needed improvement over the subjective 1-to-10 ratings that patients are currently asked to give. Led by neuroscientist Tor Wager of the University of Colorado at Boulder, researchers used functional MRI on healthy participants who were given heated touches to their arm, some pleasantly warm, others painfully hot. During the painful touches, a scattered group of brain regions consistently turned on. Although these regions have been previously associated with pain, the new study detected a striking and consistent jump in their activity when people reported pain, with much greater accuracy than previous studies had attained. This neural signature appeared in 93 percent of subjects who reported feeling painful heat, ramping up as pain intensity increased and receding after participants took a painkiller. The researchers determined that the brain activity specifically marked physical pain rather than a generally unpleasant experience, because it did not emerge in people shown a picture of a lover who had recently dumped them. Although physical pain and emotional pain involve some of the same regions, the study showed that fine-grained differences in activation separate the two conditions. © 2013 Scientific American
By Nathan Seppa A tiny probe equipped with a laser might reveal what the human eye doesn’t always see: the difference between a tumor and healthy tissue. A new study suggests the device might provide brain surgeons with a roadmap as they go about the delicate business of removing tumors. Surgeons try to excise as much of brain tumors as possible, but they risk harming the patient if they remove healthy tissue. “This problem,” says surgeon Daniel Orringer of the University of Michigan in Ann Arbor, “has vexed brain surgeons for as long as they have taken out tumors,” since the first half of the 20th century. “Basically, we do it by feel — the texture, color and vascularity of the tissues. Tumors tend to bleed a little more than normal brain.” Although removing and testing tissue samples, or biopsies, can help to characterize the tissue at the tumor margins, it’s a cumbersome and time-consuming process. In the new study, Orringer and his colleagues instead exposed such borderline brain tissues to a weak laser. Then they used Raman spectroscopy, a technique that reveals vibrations of specific chemical bonds in tissues. The revved up form of Raman spectroscopy that the researchers used is sensitive enough to distinguish between proteins and lipids. Since tumors are higher in protein than healthy brain tissue, the authors designed the technique to present protein signatures as blue images on a screen, and lipids as green. © Society for Science & the Public 2000 - 2013
Keyword: Brain imaging
Link ID: 18622 - Posted: 09.09.2013
By ERIC R. KANDEL THESE days it is easy to get irritated with the exaggerated interpretations of brain imaging — for example, that a single fMRI scan can reveal our innermost feelings — and with inflated claims about our understanding of the biological basis of our higher mental processes. Such irritation has led a number of thoughtful people to declare that we can never achieve a truly sophisticated understanding of the biological foundation of complex mental activity. In fact, recent newspaper articles have argued that psychiatry is a “semi-science” whose practitioners cannot base their treatment of mental disorders on the same empirical evidence as physicians who treat disorders of the body can. The problem for many people is that we cannot point to the underlying biological bases of most psychiatric disorders. In fact, we are nowhere near understanding them as well as we understand disorders of the liver or the heart. But this is starting to change. Consider the biology of depression. We are beginning to discern the outlines of a complex neural circuit that becomes disordered in depressive illnesses. Helen Mayberg, at Emory University, and other scientists used brain-scanning techniques to identify several components of this circuit, two of which are particularly important. One is Area 25 (the subcallosal cingulate region), which mediates our unconscious and motor responses to emotional stress; the other is the right anterior insula, a region where self-awareness and interpersonal experience come together. These two regions connect to the hypothalamus, which plays a role in basic functions like sleep, appetite and libido, and to three other important regions of the brain: the amygdala, which evaluates emotional salience; the hippocampus, which is concerned with memory; and the prefrontal cortex, which is the seat of executive function and self-esteem. All of these regions can be disturbed in depressive illnesses. © 2013 The New York Times Company
R. Douglas Fields The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative announced by US President Barack Obama in April seeks to map and monitor the function of neural connections in the entire brains of experimental animals, and eventually in the human cerebral cortex. Several researchers have raised doubts about the project, cautioning that mapping the brain is a much more complex endeavour than mapping the human genome, and its usefulness more uncertain. I believe that exploring neural networks and developing techniques with which to do so are important goals that should be vigorously supported. But simply scaling up current efforts to chart neural connections is unlikely to deliver the promised benefits — which include understanding perception, consciousness, how the brain produces memories, and the development of treatments for diseases such as epilepsy, depression and schizophrenia [1]. A major stumbling block is the project's failure to consider that although the human brain contains roughly 100 billion neurons, it contains billions more non-electrical brain cells called glia [2]. These reside outside the neuronal 'connectome' and operate beyond the reach of tools designed to probe electrical signalling in neurons. Dismissed as connective tissue when they were first described in the mid-1800s, glia have long been neglected in the quest to understand neuronal signalling. Research is revealing that glia can sense neuronal activity and control it [3]. Various studies also indicate that glia operate in diverse mental processes, for instance, in the formation of memories. They have a central role in brain injury and disease, and they are even at the root of various disorders — such as schizophrenia and Alzheimer's — previously presumed to be exclusively neuronal.
That the word 'glia' was not uttered in any of the announcements of the BRAIN Initiative, nor written anywhere in the 'white papers' published in 2012 and 2013 in prominent journals outlining the ambitious plan [1, 4], speaks volumes about the need for the community of neuroscientists behind the initiative to expand its thinking. © 2013 Nature Publishing Group
A "window to the brain" implant which would allow doctors to see through the skull and possibly treat patients has been devised by US researchers. It uses a see-through version of the same material used for hip implants. The team at University of California, Riverside, say it could allow lasers to be fired into the brain to treat neurological disorders. The implant was reported in the journal Nanomedicine: Nanotechnology, Biology and Medicine. The researchers say emerging laser-treatments in stroke and cancer care and brain imaging require access to the brain. However, they are limited as a part of the skull needs to be removed and replaced each time a treatment is performed. Instead the team of scientists have devised a transparent implant that would replace a small section of the skull. They have converted a material - yttria-stabilized zirconia that is used in some ceramic hip implants and dental crowns - to make it transparent. They argue the material would be safe to implant, but would also provide a window onto the brain. Guillermo Aguilar, professor of mechanical engineering, said: "This is a case of a science-fiction-sounding idea becoming science fact, with strong potential for positive impact on patients." BBC © 2013
Keyword: Brain imaging
Link ID: 18604 - Posted: 09.04.2013
by Adam Gopnik Good myths turn on simple pairs— God and Lucifer, Sun and Moon, Jerry and George—and so an author who makes a vital duo is rewarded with a long-lived audience. No one in 1900 would have thought it possible that a century later more people would read Conan Doyle’s Holmes and Watson stories than anything of George Meredith’s, but we do. And so Gene Roddenberry’s “Star Trek,” despite the silly plots and the cardboard-seeming sets, persists in its many versions because it captures a deep and abiding divide. Mr. Spock speaks for the rational, analytic self who assumes that the mind is a mechanism and that everything it does is logical, Captain Kirk for the belief that what governs our life is not only irrational but inexplicable, and the better for being so. The division has had new energy in our time: we care most about a person who is like a thinking machine at a moment when we have begun to have machines that think. Captain Kirk, meanwhile, is not only a Romantic, like so many other heroes, but a Romantic on a starship in a vacuum in deep space. When your entire body is every day dissolved, reënergized, and sent down to a new planet, and you still believe in the ineffable human spirit, you have really earned the right to be a soul man. Writers on the brain and the mind tend to divide into Spocks and Kirks, either embracing the idea that consciousness can be located in a web of brain tissue or debunking it. For the past decade, at least, the Spocks have been running the Enterprise: there are books on your brain and music, books on your brain and storytelling, books that tell you why your brain makes you want to join the Army, and books that explain why you wish that Bar Refaeli were in the barracks with you. The neurological turn has become what the “cultural” turn was a few decades ago: the all-purpose non-explanation explanation of everything. 
Thirty years ago, you could feel loftily significant by attaching the word “culture” to anything you wanted to inspect: we didn’t live in a violent country, we lived in a “culture of violence”; we didn’t have sharp political differences, we lived in a “culture of complaint”; and so on. In those days, Time, taking up the American pursuit of pleasure, praised Christopher Lasch’s “The Culture of Narcissism”; now Time has a cover story on happiness and asks whether we are “hardwired” to pursue it. © 2013 Condé Nast.
By Maia Szalavitz That little zing you get when someone “likes” your picture or sings your praises on Facebook? That’s the reward center in your brain getting a boost. And that response can predict how much time and energy you put into the social media site, according to new research. In one of the first studies to explore the effects of social media on the brain, scientists led by Dar Meshi, a postdoctoral researcher at the Freie Universität in Berlin, imaged the brains of 31 Facebook users while they viewed pictures of either themselves or others that were accompanied by positive captions. The research was published in Frontiers in Human Neuroscience. “We found that we could predict the intensity of people’s Facebook use outside the scanner by looking at their brain’s response to positive social feedback inside the scanner,” says Meshi. Specifically, a region called the nucleus accumbens, which processes rewarding feelings about food, sex, money and social acceptance, became more active in response to praise for oneself compared to praise of others. And that activation was associated with more time on the social media site. Social affirmation tends to be one of life’s great joys, whether it occurs online or off, so it’s not surprising that it would light up this area. Few people are immune to the lures of flattery, after all. But do these results suggest that the “likes” on Facebook can become addictive? While all addictive experiences activate the region, such activation alone isn’t sufficient to establish an addiction. © 2013 Time Inc.
American researchers say they’ve performed what they believe is the first ever human-to-human brain interface, where one person was able to send a brain signal to trigger the hand motions of another person. “It was both exciting and eerie to watch an imagined action from my brain get translated into actual action by another brain,” said Rajesh Rao, a professor of computer science and engineering at the University of Washington, in a statement. Previous studies have done brain-to-brain transmissions between rats and one was done between a human and a rat. Rao was able to send a brain signal through the internet – utilizing electrical brain recordings and a form of magnetic stimulation – to the other side of the university campus to his colleague Andrea Stocco, an assistant professor of psychology, triggering Stocco’s finger to move on a keyboard. “The internet was a way to connect computers, and now it can be a way to connect brains,” said Stocco. “We want to take the knowledge of a brain and transmit it directly from brain to brain.” On Aug. 12, Rao sat in his lab with a cap on his head. The cap had electrodes hooked up to an electroencephalography machine, which reads the brain’s electrical activity. Meanwhile, Stocco was at his lab across campus, wearing a similar cap which had a transcranial magnetic stimulation coil placed over his left motor cortex – the part of the brain that controls hand movement. Rao looked at a computer and in his mind, he played a video game. When he was supposed to fire a cannon at a target, he imagined moving his right hand, which stayed motionless. Stocco, almost instantaneously, moved his right index finger to push the space bar on the keyboard in front of him.
Only simple brain signals, not thoughts
“This was basically a one-way flow of information from my brain to his,” said Rao. © CBC 2013
Erika Check Hayden US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today. The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study. “The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says. The study, published in Proceedings of the National Academy of Sciences [1], was conducted by John Ioannidis, a physician at Stanford University in California, and Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression. The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses. © 2013 Nature Publishing Group
By Laura Sanders Despite the adage, there actually is such a thing as bad publicity, a fact that brain scientists have lately discovered. A couple of high-profile opinion pieces in the New York Times have questioned the usefulness of neuroscience, claiming, as columnist David Brooks did in June, that studying brain activity will never reveal the mind. Or that neuroscience is a pesky distraction from solving real social problems, as scholar Benjamin Fong wrote on August 11. Let’s start with Brooks. Some of his complaints about brain scans, with their colorful blobs lighting up active parts of the brain, are quite legitimate. Functional MRI studies are notoriously difficult to make sense of. In fact, this powerful technology has been used to find brain activity in a dead salmon. Dubious fMRI studies do trickle into the hands of sensationalistic journalists, medical hucksters and marketers, who twist the results into self-serving sound bites. All true. But Brooks’ essay conflates the entire field of neuroscience with some bad seeds. Some studies should never have been done, others mislead people, waste resources and sensationalize their results. But for every one of those studies, countless others tell us something important about how the human brain works. Serious scientists use a huge variety of techniques — yes, even fMRI — responsibly, and interpret their results cautiously. Judging the whole enterprise of neuroscience by its weakest studies is disingenuous. There is bad science, just like there’s bad food, bad music and bad TV. Trashing all brain research because a tiny bit of it stinks is like throwing your new flat screen off a balcony because you accidentally turned on Jersey Shore. © Society for Science & the Public 2000 - 2013
By: George Will, Washington Post PRINCETON, N.J. — Fifty years from now, when Malia and Sasha are grandmothers, their father’s presidency might seem most consequential because of a small sum — $100 million — for studying something small. “As humans,” Barack Obama said when announcing the initiative to study the brain, “we can identify galaxies light-years away ... but we still haven’t unlocked the mystery of the three pounds of matter that sits between our ears.” Actually, understanding the brain will be a resounding success without unlocking the essential mystery, which is: How does matter become conscious of itself? Or should we say, how does it become — or acquire — consciousness? Just trying to describe this subject takes scientists onto intellectual terrain long occupied by philosophers. Those whose field is the philosophy of mind will learn from scientists such as Princeton’s David Tank, a leader of the BRAIN Initiative, which aims at understanding how brain regions and cells work together, moment to moment, throughout our lives. If, as is said, a physicist is an atom’s way of knowing about atoms, then a neuroscientist like Tank is a brain cell’s way of knowing about brain cells. Each of us has about 100 billion of those, each of which communicates with an average of 10,000 other nerve cells. The goal of neuroscientists is to discover how these neural conversations give rise to a thought, a memory or a decision. And to understand how the brain functions, from which we may understand disorders such as autism, schizophrenia and epilepsy. © 2013 Forum Communications Co.
Keyword: Brain imaging
Link ID: 18560 - Posted: 08.26.2013
By CARL ZIMMER Evolutionary biologists have come to recognize humans as a tremendous evolutionary force. In hospitals, we drive the evolution of resistant bacteria by giving patients antibiotics. In the oceans, we drive the evolution of small-bodied fish by catching the big ones. In a new study, a University of Minnesota biologist, Emilie C. Snell-Rood, offers evidence suggesting we may be driving evolution in a more surprising way. As we alter the places where animals live, we may be fueling the evolution of bigger brains. Dr. Snell-Rood bases her conclusion on a collection of mammal skulls kept at the Bell Museum of Natural History at the University of Minnesota. Dr. Snell-Rood picked out 10 species to study, including mice, shrews, bats and gophers. She selected dozens of individual skulls that were collected as far back as a century ago. An undergraduate student named Naomi Wick measured the dimensions of the skulls, making it possible to estimate the size of their brains. Two important results emerged from their research. In two species — the white-footed mouse and the meadow vole — the brains of animals from cities or suburbs were about 6 percent bigger than the brains of animals collected from farms or other rural areas. Dr. Snell-Rood concludes that when these species moved to cities and towns, their brains became significantly bigger. Dr. Snell-Rood and Ms. Wick also found that in rural parts of Minnesota, two species of shrews and two species of bats experienced an increase in brain size as well. Dr. Snell-Rood proposes that the brains of all six species have gotten bigger because humans have radically changed Minnesota. Where there were once pristine forests and prairies, there are now cities and farms. In this disrupted environment, animals that were better at learning new things were more likely to survive and have offspring. © 2013 The New York Times Company
By Michelle Roberts Health editor, BBC News online Brain scans may allow detection of dyslexia in pre-school children even before they start to read, say researchers. A US team found tell-tale signs on scans that have already been seen in adults with the condition. And these brain differences could be a cause rather than a consequence of dyslexia - something unknown until now - the Journal of Neuroscience reports. Scans could allow early diagnosis and intervention, experts hope. The part of the brain affected is called the arcuate fasciculus. Among the 40 school-entry children they studied, the researchers found that some had shrinkage of this brain region, which processes word sounds and language. They asked the same children to do several different types of pre-reading tests, such as trying out different sounds in words. Those children with a smaller arcuate fasciculus had lower scores. It is too early to say if the structural brain differences found in the study are a marker of dyslexia. The researchers plan to follow up groups of children as they progress through school to determine this. Lead researcher Prof John Gabrieli said: "We don't know yet how it plays out over time, and that's the big question. BBC © 2013
By Scicurious There are lots of challenges when it comes to studying the brain, but one of the biggest is that it’s very hard to see. Aside from being, you know, inside your skull, the many electrical and chemical signals which the brain uses are impossible to see with the naked eye. We have ways to look at neurons and how they convey information. For example, to record the electrical signals from a single neuron, you can pierce it with a tiny electrode to get access inside the membrane (electrophysiology). You can then stimulate the neuron to fire, or record as it fires spontaneously. For techniques like optogenetics, you can insert a gene into the neuron that makes it fire (or not) in response to light. When you shine the light, you can make the neuron fire. So you can make a neuron fire, or see a neuron fire. With things like voltammetry, we can see neurotransmitters, chemicals as they are released from a neuron and sent as signals on to other neurons. Techniques like these have made huge strides in what we understand about neurons and how they work. But…you can only do this for a few neurons at a time. This becomes a problem, because the brain does not work as one neuron at a time. Instead, neurons organize into networks. A neuron fires, which impinges upon many more neurons, all of which will react in different ways, depending on what input they receive and when. Often many neurons have to fire to get a result, often in a single specific pattern. An ideal technique would be one where we could see neurons fire spontaneously, in real time, and then see where those signals GO, to actually see a network in action. And where we could see it…without taking the brain out first. It looks like that technique might be here. © 2013 Scientific American
Keyword: Brain imaging
Link ID: 18496 - Posted: 08.13.2013