Chapter 2. Cells and Structures: The Anatomy of the Nervous System
By BENEDICT CAREY Jack Belliveau, a Harvard scientist whose quest to capture the quicksilver flare of thought inside a living brain led to the first magnetic resonance image of human brain function, died on Feb. 14 in San Mateo, Calif. He was 55. The cause was complications of a gastrointestinal disorder, said his wife, Brigitte Poncelet-Belliveau, a researcher who worked with him at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital. He lived in Boston. His wife said he died suddenly while visiting an uncle at his childhood home, which he owned. Dr. Belliveau was a 30-year-old graduate student at the Martinos Center when he hatched a scheme to “see” the neural trace of brain activity. Doctors had for decades been taking X-rays and other images of the brain to look for tumors and other lesions and to assess damage from brain injuries. Researchers had also mapped blood flow using positron emission tomography scans, but that required making and handling radioactive trace chemicals, whose signature vanished within minutes. Very few research centers had the technical knowledge or the machinery to pull it off. Dr. Belliveau tried a different approach. He had developed a technique to track blood flow, called dynamic susceptibility contrast, using an M.R.I. scanner that took split-second images, faster than was usual at the time. This would become a standard technique for assessing blood perfusion in stroke patients and others, but Dr. Belliveau thought he would try it to spy on a normal brain in the act of thinking or perceiving. “He went out to RadioShack and bought a strobe light, like you’d see in a disco,” said Dr. Bruce Rosen, director of the Martinos Center and one of Dr. Belliveau’s advisers at the time. “He thought the strobe would help image the visual areas of the brain, where there was a lot of interest.” © 2014 The New York Times Company
Keyword: Brain imaging
Link ID: 19337 - Posted: 03.10.2014
Sara Reardon A flipped mental switch is all it takes to make a fly fall in love — even if its object of desire is a ball of wax. A technique called thermogenetics allows researchers to control fly behaviour by activating specific neurons with heat. Combining the system with techniques that use light to trigger neurons could help to elucidate how different neural circuits work together to control complex behaviours such as courtship. Optogenetics — triggering neurons with light — has been successful in mice but has not been pursued much in flies, says Barry Dickson, a neuroscientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. A fibre-optic cable embedded in a mouse’s brain can deliver light to cells genetically engineered to make light-activated proteins, but flies are too small for these fibre optics. Neither will these cells be activated when the flies are put into an illuminated box, because most wavelengths of visible light cannot penetrate a fly’s exoskeleton. Heat can penetrate the exoskeleton, however. Researchers have already studied fly behaviour by adding a heat-activated protein called TRPA1 to neural circuits that control behaviours such as mating and decision-making. When these flies are placed in a hot box, the TRPA1 neurons begin to fire within minutes and drive the fly’s actions1. But it would be better to trigger the behaviours more quickly. So Dickson’s lab has developed a system called the Fly Mind-Altering Device (FlyMAD), which uses a video camera to track the fly as it moves around in a box. The device then shines an infrared laser at the fly to deliver heat directly to the head. Dickson’s group presented the system last October at the Neurobiology of Drosophila conference at Cold Spring Harbor Laboratory in New York, and he is now submitting the work to a peer-reviewed journal. © 2014 Nature Publishing Group
Brendan Borrell Scientists can now take snapshots of where and how thousands of genes are expressed in intact tissue samples, ranging from a slice of a human brain to the embryo of a fly. The technique, reported today in Science1, can turn a microscope slide into a tool for creating data-rich, three-dimensional maps of how cells interact with one another — a key to understanding the origins of diseases such as cancer. The methodology also has broader applications, enabling researchers to create, for instance, unique molecular ‘barcodes’ to trace connections between cells in the brain, a stated goal of the US National Institutes of Health's Human Connectome Project. Previously, molecular biologists had a limited spatial view of gene expression, the process by which a stretch of double-stranded DNA is turned into single-stranded RNAs, which can in turn be translated into protein products. Researchers could either grind up a hunk of tissue and catalogue all the RNAs they found there, or use fluorescent markers to track the expression of up to 30 RNAs inside each cell of a tissue sample. The latest technique maps up to thousands of RNAs. Mapping the matrix In a proof-of-principle study, molecular biologist George Church of Harvard Medical School in Boston, Massachusetts, and his colleagues scratched a layer of cultured connective-tissue cells and sequenced the RNA of cells that migrated to the wound during the healing process. Out of 6,880 genes sequenced, the researchers identified 12 that showed changes in gene expression, including eight that were known to be involved in cell migration but had not been studied in wound healing, the researchers say. “This verifies that the technique could be used to do rapidly what has taken scientists years of looking at gene products one by one,” says Robert Singer, a molecular cell biologist at Albert Einstein College of Medicine in New York, who was not involved in the study. © 2014 Nature Publishing Group,
By JAMES GORMAN SEATTLE — When Clay Reid decided to leave his job as a professor at Harvard Medical School to become a senior investigator at the Allen Institute for Brain Science in Seattle in 2012, some of his colleagues congratulated him warmly and understood right away why he was making the move. Others shook their heads. He was, after all, leaving one of the world’s great universities to go to the academic equivalent of an Internet start-up, albeit an extremely well-financed, very ambitious one, created in 2003 by Paul Allen, a founder of Microsoft. Still, “it wasn’t a remotely hard decision,” Dr. Reid said. He wanted to mount an all-out investigation of a part of the mouse brain. And although he was happy at Harvard, the Allen Institute offered not only great colleagues and deep pockets, but also an approach to science different from the classic university environment. The institute was already mapping the mouse brain in fantastic detail, and specialized in the large-scale accumulation of information in atlases and databases available to all of science. Now, it was expanding, and trying to merge its semi-industrial approach to data gathering with more traditional science driven by individual investigators, by hiring scientists like Christof Koch from the California Institute of Technology as chief scientific officer in 2011 and Dr. Reid. As a senior investigator, he would lead a group of about 100, and work with scientists, engineers and technicians in other groups. Without the need to apply regularly for federal grants, Dr. Reid could concentrate on one piece of the puzzle of how the brain works. He would try to decode the workings of one part of the mouse brain, the million neurons in the visual cortex, from, as he puts it, “molecules to behavior.” © 2014 The New York Times Company
Link ID: 19291 - Posted: 02.25.2014
By JoNel Aleccia The first of 18,000 University of California, Santa Barbara, students lined up for shots Monday as the school began offering an imported vaccine to halt an outbreak of dangerous meningitis that sickened four, including one young man who lost his feet. "My dad's a pediatrician and he's been sending me emails over and over to go get it," said Carly Chianese, 20, a junior from Bayville, N.Y., who showed up a half-hour before the UCSB clinic opened. It’s the second time in three months that government health officials have inoculated U.S. college students with an emergency vaccine, Bexsero, to protect against the B strain of meningitis. More than 5,400 students at Princeton University in New Jersey received the vaccine in December after an outbreak sickened eight there. Another 4,400 got booster shots last week. No new cases have been detected at UCSB since November, but health officials said the vaccine licensed in Europe, Australia and Canada but not in the U.S. would stop future spread of the infection. Current vaccines available in the U.S. protect against four strains of meningitis, but not the B strain. Bacterial meningitis is a serious infection that kills 1 in 10 affected and leaves 20 percent with severe disabilities. Shots will be offered at UCSB from Monday through March 7, with a second series planned for later this spring. “During the last couple of outbreaks on college campuses, there have been additional cases over a year or two years,” said Dr. Amanda Cohn, a medical epidemiologist with the Centers for Disease Control and Prevention. “There is certainly that possibility. We strongly recommend that students get vaccinated.”
Link ID: 19289 - Posted: 02.25.2014
By Meeri Kim How often, and how well, do you remember your dreams? Some people seem to be super-dreamers, able to recall their dreams effortlessly and in vivid detail almost every day. Others struggle to remember even a vague fragment or two. A new study has discovered that heightened blood flow activity within certain regions of the brain could help explain the great dreamer divide. In general, dream recall is thought to require some amount of wakefulness during the night for the vision to be encoded in longer-term memory. But it is not known what causes some people to wake up more than others. A team of French researchers looked at brain activation maps of sleeping subjects and homed in on areas that could be responsible for nighttime wakefulness. When comparing two groups of dreamers on the opposite ends of the recall spectrum, the maps revealed that the temporoparietal junction — an area responsible for collecting and processing information from the external world — was more highly activated in high-recallers. The researchers speculate that this allows these people to sense environmental noises in the night and wake up momentarily — and, in the process, store dream memories for later recall. In support of this hypothesis, previous medical cases have found that when these same portions of the brain are damaged by stroke, patients lose the ability to remember their dreams, even though they can still achieve the REM (rapid eye movement) stage of sleep in which dreaming usually occurs. © 1996-2014 The Washington Post
by Laura Sanders When the president of the United States makes a request, scientists usually listen. Physicists created the atomic bomb for President Roosevelt. NASA engineers put men on the moon for President Kennedy. Biologists presented their first draft of the human genetic catalog to an appreciative President Clinton. So when President Obama announced an ambitious plan to understand the brain in April 2013, people were quick to view it as the next Manhattan Project, or Human Genome Project, or moon shot. But these analogies may not be so apt. Compared with understanding the mysterious inner workings of the brain, those other endeavors started with an end in sight. In a human brain, 85 billion nerve cells communicate via trillions of connections using complex patterns of electrical jolts and more than 100 different chemicals. A pea-sized lump of brain tissue contains more information than the Library of Congress. But unlike those orderly shelved and cataloged books, the organization of the brain remains mostly indecipherable, concealing the mysteries underlying thought, learning, emotion and memory. Still, as with other challenging enterprises prompted by presidential initiatives, success would change the world. A deep understanding of how the brain works, and what goes wrong when it doesn’t, could lead to a dazzling array of treatments for brain disorders — from autism and Alzheimer’s disease to depression and drug addiction — that afflict millions of people around the world. © Society for Science & the Public 2000 - 2013.
Keyword: Brain imaging
Link ID: 19223 - Posted: 02.08.2014
By Geoffrey Giller Working memory—our ability to store pieces of information temporarily—is crucial both for everyday activities like dialing a phone number and for more taxing tasks like arithmetic and accurate note-taking. The strength of working memory is often measured with cognitive tests, such as repeating lists of numbers in reverse order or recalling sequences of dots on a screen. For children, performance on working memory assessments is considered a strong predictor for future academic performance. Yet cognitive tests can fail to identify children whose brain development is lagging in subtle ways that may lead to future deficits in working memory and, thus, in learning. Doctors give the tests periodically and plot the results along a development curve, much like a child’s height and weight. By the time these tests reveal that a child’s working memory is below average, however, it may be too late to do much about it. But in a new study, published January 29 in The Journal of Neuroscience, scientists demonstrated that they could predict the future working memory of children and adolescents by examining brain scans from two different types of magnetic resonance imaging (MRI), instead of looking only at cognitive tests. Henrik Ullman, a PhD student at the Karolinska Institute in Stockholm and the lead author on the paper, says that this was the first study attempting to use MRI scans to predict future working memory capacity. “We were pretty surprised when we found what we actually found,” Ullman says. © 2014 Scientific American,
By ABIGAIL ZUGER, M.D. In history’s long parade of pushy mothers and miserably obedient children, no episode beats Dr. Frank H. Netter’s for a happy ending. Both parties got the last laugh. Netter was born to immigrant parents in New York in 1906. He was an artist from the time he could grab a pencil, doodling through high school, winning a scholarship to art school, and enunciating intentions of making his living as an illustrator. Then his mother stepped in, and with an iron hand, deflected him to medicine. Frank’s siblings and cousins all had respectable careers, she informed him, and he would, too. To his credit, he lasted quite a while: through medical school, hospital training and almost an entire year as a qualified doctor. But he continued drawing the whole time, making sketches in his lecture notes to clarify abstruse medical concepts for himself, then doing the same for classmates and even professors. Then, fatefully, his work attracted the notice of advertising departments at pharmaceutical companies. In the midst of the Depression, he demanded and received $7,500 for a series of five drawings, many times what he might expect to earn from a full year of medical practice. He put down his scalpel for good. Thanks to a five-decade exclusive contract with Ciba (now Novartis), he ultimately became possibly the best-known medical illustrator in the world, creating thousands of watercolor plates depicting every aspect of 20th-century medicine. His illustrations were virtually never used to market specific products, but distributed free of charge to doctors as a public service, and collected into popular textbooks. © 2014 The New York Times Company
Keyword: Brain imaging
Link ID: 19197 - Posted: 02.04.2014
by Aviva Rutkin "He moistened his lips uneasily." It sounds like a cheap romance novel, but this line is actually lifted from quite a different type of prose: a neuroscience study. Along with other sentences, including "Have you got enough blankets?" and "And what eyes they were", it was used to build the first map of how the brain processes the building blocks of speech – distinct units of sound known as phonemes. The map reveals that the brain devotes distinct areas to processing different types of phonemes. It might one day help efforts to read off what someone is hearing from a brain scan. "If you could see the brain of someone who is listening to speech, there is a rapid activation of different areas, each responding specifically to a particular feature the speaker is producing," says Nima Mesgarani, an electrical engineer at Columbia University in New York City. Snakes on a brain To build the map, Mesgarani's team turned to a group of volunteers who already had electrodes implanted in their brains as part of an unrelated treatment for epilepsy. The invasive electrodes sit directly on the surface of the brain, providing a unique and detailed view of neural activity. The researchers got the volunteers to listen to hundreds of snippets of speech taken from a database designed to provide an efficient way to cycle through a wide variety of phonemes, while monitoring the signals from the electrodes. As well as those already mentioned, sentences ran the gamut from "It had gone like clockwork" to "Junior, what on Earth's the matter with you?" to "Nobody likes snakes". © Copyright Reed Business Information Ltd.
By Jennifer Ouellette It was a brisk October day in a Greenwich Village café when New York University neuroscientist David Poeppel crushed my dream of writing the definitive book on the science of the self. I had naively thought I could take a light-hearted romp through genotyping, brain scans, and a few personality tests and explain how a fully conscious unique individual emerges from the genetic primordial ooze. Instead, I found myself scrambling to navigate bumpy empirical ground that was constantly shifting beneath my feet. How could a humble science writer possibly make sense of something so elusively complex when the world’s most brilliant thinkers are still grappling with this marvelous integration that makes us us? “You can’t. Why should you?” Poeppel asked bluntly when I poured out my woes. “We work for years and years on seemingly simple problems, so why should a very complicated problem yield an intuition? It’s not going to happen that way. You’re not going to find the answer.” Well, he was right. Darn it. But while I might not have found the Ultimate Answer to the source of the self, it proved to be an exciting journey and I learned some fascinating things along the way. 1. Genes are deterministic but they are not destiny. Except for earwax consistency. My earwax is my destiny. We tend to think of our genome as following a “one gene for one trait” model, but the real story is far more complicated. True, there is one gene that codes for a protein that determines whether you will have wet or dry earwax, but most genes serve many more than one function and do not act alone. Height is a simple trait that is almost entirely hereditary, but there is no single gene helpfully labeled height. Rather, there are several genes interacting with one another that determine how tall we will be. Ditto for eye color. 
It’s even more complicated for personality traits, health risk factors, and behaviors, where traits are influenced, to varying degrees, by parenting, peer pressure, cultural influences, unique life experiences, and even the hormones churning around us as we develop in the womb.
Alison Abbott By slicing up and reconstructing the brain of Henry Gustav Molaison, researchers have confirmed predictions about a patient who has already contributed more than most to neuroscience. No big scientific surprises emerge from the anatomical analysis, which was carried out by Jacopo Annese of the Brain Observatory at the University of California, San Diego, and his colleagues, and published today in Nature Communications1. But it has confirmed scientists’ deductions about the parts of the brain involved in learning and memory. “The confirmation is surely important,” says Richard Morris, who studies learning and memory at the University of Edinburgh, UK. “The patient is a classic case, and so the paper will be extensively cited.” Molaison, known in the scientific literature as patient H.M., lost his ability to store new memories in 1953 after surgeon William Scoville removed part of his brain — including a large swathe of the hippocampus — to treat his epilepsy. That provided the first conclusive evidence that the hippocampus is fundamental for memory. H.M. was studied extensively by cognitive neuroscientists during his life. After H.M. died in 2008, Annese set out to discover exactly what Scoville had excised. The surgeon had made sketches during the operation, and brain-imaging studies in the 1990s confirmed that the lesion corresponded to the sketches, although it was slightly smaller. But whereas brain imaging is relatively low-resolution, Annese and his colleagues were able to carry out an analysis at the micrometre scale. © 2014 Nature Publishing Group
by Helen Thomson The brain that made the greatest contribution to neuroscience and to our understanding of memory has become a gift that keeps on giving. A 3D reconstruction of the brain of Henry Molaison, whose surgery to cure him of epilepsy left him with no short-term memory, will allow scientists to continue to garner insights into the brain for years to come. "Patient HM" became arguably the most famous person in neuroscience after he had several areas of his brain removed in 1953. His resulting amnesia and willingness to be tested have given us unprecedented insights into where memories are formed and stored in the brain. On his death in 2008, HM was revealed to the world as Henry Molaison. Now, a post-mortem examination of his brain, and a new kind of virtual 3D reconstruction, have been published. As a child, Molaison had major epileptic seizures. Anti-epileptic drugs failed, so he sought help from neurosurgeon William Scoville at Hartford Hospital in Connecticut. When Molaison was 27 years old, Scoville removed portions of his medial temporal lobes, which included an area called the hippocampus on both sides of his brain. As a result, Molaison's epilepsy became manageable, but he could not form any new memories, a condition known as anterograde amnesia. He also had difficulty recollecting his long-term past – partial retrograde amnesia.
Keyword: Learning & Memory
Link ID: 19172 - Posted: 01.27.2014
By Gary Stix The blood-brain barrier is the Berlin Wall of human anatomy and physiology. Its closely packed cells shield neurons and the like from toxins and pathogens, while letting pass glucose and other essential chemicals for brain metabolism (caffeine?). For years, pharmaceutical companies and academic researchers have engaged in halting efforts to traverse this imposing blockade in order to deliver some of the big molecules that might potentially help slow the progression of devastating neurological diseases. Like would-be refugees from the former East Germany, many medications get snagged by border guards during the crossing—a molecular security force that either impedes or digests any invader. There have been many attempts to secure safe passage—deploying chemicals that make brain-barrier “endothelial” cells shrivel up, or wielding tiny catheters or minute bubbles that slip through minuscule breaches. Success has been mixed at best—none of these molecular cargo carriers have made their way as far as human trials. Roche, the Swiss-based drugmaker, reported in the Jan. 8 Neuron a bit of progress toward overcoming the lingering technical impediments. The study described a new technique that tricks one of the BBB’s natural checkpoints to let through an elaborately engineered drug that attacks the amyloid-beta protein fragments that may be the primary culprit inflicting the damage wrought by Alzheimer’s. The subterfuge involves the transferrin receptor, a docking site used to transport iron into the brain. Roche took a fragment of an antibody that binds the transferrin receptor and latched it onto another antibody that, once on the other side of the BBB, attaches to and then removes amyloid. © 2014 Scientific American
Link ID: 19121 - Posted: 01.13.2014
By JAMES GORMAN ST. LOUIS — Deanna Barch talks fast, as if she doesn’t want to waste any time getting to the task at hand, which is substantial. She is one of the researchers here at Washington University working on the first interactive wiring diagram of the living, working human brain. To build this diagram she and her colleagues are doing brain scans and cognitive, psychological, physical and genetic assessments of 1,200 volunteers. They are more than a third of the way through collecting information. Then comes the processing of data, incorporating it into a three-dimensional, interactive map of the healthy human brain showing structure and function, with detail to one and a half cubic millimeters, or less than 0.0001 cubic inches. Dr. Barch is explaining the dimensions of the task, and the reasons for undertaking it, as she stands in a small room, where multiple monitors are set in front of a window that looks onto an adjoining room with an M.R.I. machine, in the psychology building. She asks a research assistant to bring up an image. “It’s all there,” she says, reassuring a reporter who has just emerged from the machine, and whose brain is on display. And so it is, as far as the parts are concerned: cortex, amygdala, hippocampus and all the other regions and subregions, where memories, fear, speech and calculation occur. But this is just a first go-round. It is a static image, in black and white. There are hours of scans and tests yet to do, though the reporter is doing only a demonstration and not completing the full routine. Each of the 1,200 subjects whose brain data will form the final database will spend a good 10 hours over two days being scanned and doing other tests. 
The scientists and technicians will then spend at least another 10 hours analyzing and storing each person’s data to build something that neuroscience does not yet have: a baseline database for structure and activity in a healthy brain that can be cross-referenced with personality traits, cognitive skills and genetics. And it will be online, in an interactive map available to all. © 2014 The New York Times Company
Keyword: Brain imaging
Link ID: 19106 - Posted: 01.07.2014
By JAMES GORMAN St. Louis — I knew I wouldn’t find my “self” in a brain scan. I also knew as I headed into the noisy torpedo tube of a souped-up M.R.I. machine at Washington University in St. Louis that unless there was something terribly wrong (“Igor, look! His head is filled with Bitcoins!”), I would receive no news of the particulars of how my brain was arranged. Even if I had been one of the 1,200 volunteers in the part of the Human Connectome Project being conducted there, I wouldn’t have gotten a report of my own personal connectome and what it meant. Once the 10 hours of scans and tests are finished, and 10 hours more of processing and analysis done, the data for each of the volunteers — all anonymous — becomes part of a database to help scientists develop tools so that one day such an individual report might be possible. Besides, I was just going through a portion of the process, to see what it was like. Even so, I do have this sense of myself as an individual, different from others in ways good, bad and inconsequential, and the pretty reasonable feeling that whatever a “self” is, it lies behind my eyes and between my ears. That’s where I feel that “I” live. So I couldn’t shake the sense that there would be something special in seeing my brain, even if I couldn’t actually spot where all the song lyrics I’ve memorized are stored, or locate my fondness for cooking and singing and my deep disappointment that I can’t carry a tune (though I can follow a recipe). So I climbed into the M.R.I. machine. I tried to hold my head perfectly still as I stared at a spot marked by a cross, tried to corral my fading memory to perform well on tests, curled my toes and moved my fingers so that muscle motion could be mapped, and wondered at the extraordinary noises M.R.I. machines make. © 2014 The New York Times Company
After nearly a year of meetings and public debate, the National Institutes of Health (NIH) today announced how it intends to spend its share of funding for the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a $110 million U.S. effort to jump-start the development of new technologies that can map the brain’s vast and intricate neural circuits in action. In short, it’s looking for big ideas, such as taking a census of all the cells in the brain, even if there’s little data so far on how to accomplish them. The agency is calling for grant applications in six “high-priority” research areas drawn from a September report by its 15-member scientific advisory committee for the project. The agency is committing to spend roughly $40 million per year for 3 years on these areas, says Story Landis, director of the National Institute of Neurological Disorders and Stroke. “We hope that there will be additional funds that will become available, but obviously that depends upon what our budget is,” she says. The six funding streams center almost exclusively on proof-of-concept testing and development of new technologies and novel approaches for tasks considered fundamental to understanding how neurons work together to produce behavior in the brain; for example, classifying different types of brain cells, and determining how they contribute to specific neural circuits. NIH’s focus on innovation means that most grant applicants will not have to supply preliminary data for their proposals—a departure from “business as usual” that will likely startle many scientists and reviewers but is necessary to give truly innovative ideas a fair shot, Landis says. Only one call for funding, aimed at optimizing existing technologies for recording and manipulating large numbers of neurons that “aren’t ready for prime time,” will require such background, she says. © 2013 American Association for the Advancement of Science.
Keyword: Brain imaging
Link ID: 19047 - Posted: 12.18.2013
A study in mice shows how a breakdown of the brain’s blood vessels may amplify or cause problems associated with Alzheimer’s disease. The results published in Nature Communications suggest that blood vessel cells called pericytes may provide novel targets for treatments and diagnoses. “This study helps show how the brain’s vascular system may contribute to the development of Alzheimer’s disease,” said study leader Berislav V. Zlokovic, M.D. Ph.D., director of the Zilkha Neurogenetic Institute at the Keck School of Medicine of the University of Southern California, Los Angeles. The study was co-funded by the National Institute of Neurological Disorders and Stroke (NINDS) and the National Institute on Aging (NIA), parts of the National Institutes of Health. Alzheimer’s disease is the leading cause of dementia. It is an age-related disease that gradually erodes a person’s memory, thinking, and ability to perform everyday tasks. Brains from Alzheimer’s patients typically have abnormally high levels of plaques made up of accumulations of beta-amyloid protein next to brain cells, tau protein that clumps together to form neurofibrillary tangles inside neurons, and extensive neuron loss. Vascular dementias, the second leading cause of dementia, are a diverse group of brain disorders caused by a range of blood vessel problems. Brains from Alzheimer’s patients often show evidence of vascular disease, including ischemic stroke, small hemorrhages, and diffuse white matter disease, plus a buildup of beta-amyloid protein in vessel walls. Furthermore, previous studies suggest that APOE4, a genetic risk factor for Alzheimer’s disease, is linked to brain blood vessel health and integrity.
Link ID: 19033 - Posted: 12.14.2013
By Ingfei Chen The way doctors diagnose Alzheimer's disease may be starting to change. Traditionally, clinicians have relied on tests of memory and reasoning skills and reports of social withdrawal to identify patients with Alzheimer's. Such assessments can, in expert hands, be fairly conclusive—but they are not infallible. Around one in five people who are told they have the neurodegenerative disorder actually have other forms of dementia or, sometimes, another problem altogether, such as depression.

To know for certain that someone has Alzheimer's, doctors must remove small pieces of the brain, examine the cells under a microscope and count the number of protein clumps called amyloid plaques. An unusually high number of plaques is a key indicator of Alzheimer's. Because such a procedure risks further impairing a patient's mental abilities, it is almost always performed posthumously.

In the past 10 years, however, scientists have developed sophisticated brain scans that can estimate the amount of plaque in the brain while people are still alive. In the laboratory, these scans have been very useful in studying the earliest stages of Alzheimer's, before overt symptoms appear. The results are reliable enough that last year the Food and Drug Administration approved one such test, called Amyvid, to help evaluate patients with memory deficits or other cognitive difficulties.

Despite the FDA's approval, lingering doubts about the exact role of amyloid in Alzheimer's and ambivalence about the practical value of information provided by the scan have fueled debate about when to order an Amyvid test. Not everyone who has an excessive amount of amyloid plaque develops Alzheimer's, and at the moment there is generally no way to predict who the unlucky ones will be. Recent studies have shown that roughly one third of older citizens in good mental health have moderate to high levels of plaque, with no noticeable ill effects. And raising the specter of the disorder in the absence of symptoms may upset more people than it helps, because no effective treatments exist—at least not yet. © 2013 Scientific American
By Helen Shen Dyslexia may be caused by impaired connections between auditory and speech centres of the brain, according to a study published today in Science. The research could help to resolve conflicting theories about the root causes of the disorder, and lead to targeted interventions.

When people learn to read, their brains make connections between written symbols and components of spoken words. But people with dyslexia seem to have difficulty identifying and manipulating the speech sounds that must be linked to written symbols. Researchers have long debated whether the underlying representations of these sounds are disrupted in the dyslexic brain, or whether they are intact but language-processing centres are simply unable to access them properly.

A team led by Bart Boets, a clinical psychologist at the Catholic University of Leuven in Belgium, analysed brain scans and found that phonetic representations of language remain intact in adults with dyslexia, but may be less accessible than in controls because of deficits in brain connectivity. "The authors took a really inventive and thoughtful approach," says John Gabrieli, a neuroscientist at the Massachusetts Institute of Technology in Cambridge, Massachusetts. "They got a pretty clear answer."

Communication channels
Boets and his team used a technique called multivoxel pattern analysis to study fine-scale brain signals as people listened to a battery of linguistic fragments such as 'ba' and 'da'. To the researchers' surprise, neural activity in the primary and secondary auditory cortices of participants with dyslexia showed consistently distinct signals for different sounds. © 2013 Nature Publishing Group