Chapter 2. Functional Neuroanatomy: The Nervous System and Behavior
A high-resolution map of the human brain in utero is providing hints about the origins of brain disorders including schizophrenia and autism. The map shows where genes are turned on and off throughout the entire brain at about the midpoint of pregnancy, a time when critical structures are taking shape, researchers reported Wednesday in the journal Nature. "It's a pretty big leap," says Ed Lein, an investigator at the Allen Institute for Brain Science in Seattle who played a central role in creating the map. "Basically, there was no information of this sort prior to this project." Having a map like this is important because many psychiatric and behavioral problems appear to begin before birth, "even though they may not manifest until teenage years or even the early 20s," says Thomas Insel, director of the National Institute of Mental Health. The human brain is often called the most complex object in the universe. Yet its basic architecture is created in just nine months, when it grows from a single cell to more than 80 billion cells organized in a way that will eventually let us think and feel and remember. "We're talking about a remarkable process," a process controlled by our genes, Lein says. So he and a large team of researchers decided to use genetic techniques to create a map that would help reveal this process. Funding came from the 2009 federal stimulus package. The massive effort required tens of thousands of brain tissue samples so small that they had to be cut out with a laser. Researchers used brain tissue from aborted fetuses, which the Obama administration has authorized over the objections of abortion opponents. ©2014 NPR
A new study has raised new questions about how MRI scanners work in the quest to understand the brain. The research, led by Professor Brian Trecox and a team of international researchers, used a brand new technique to assess fluctuations in the performance of brain scanners as they were being used during a series of basic experiments. The results are due to appear in the Journal of Knowledge in Neuroscience: General later today. “Most people think that we know a lot about how MRI scanners actually work. The truth is, we don’t,” says Trecox. “We’ve even been misleading the public about the name – we made up functional Magnetic Resonance Imaging in 1983 because it sounded scientific and technical. fMRI really stands for flashy, Magically Rendered Images. So we thought: why not put an MRI scanner in an MRI scanner, and figure out what’s going on inside?” To do this, Trecox and his team built a giant imaging machine – thought to be the world’s largest – using funds from a Kickstarter campaign and a local bake sale. They then took a series of scans of standard-sized MRI scanners while they were repeatedly switched on and off, in one of the largest and most robust neuroscience studies of its type. “We tested six different MRI scanners,” says Eric Salmon, a PhD student involved in the project. “We found activation in an area called insular cortex in four of the six machines when they were switched on,” he added. In humans, the insular cortex has previously been implicated in a wide range of functions, including consciousness and self-awareness. According to Trecox and his team, activation in this area has never been found in imaging machines before. While Salmon acknowledged that the results should be treated with caution – research assistants were found asleep in at least two of the machines – the results nevertheless provide a potentially huge step in our understanding of the tools we use to research the brain. © 2014 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 19435 - Posted: 04.01.2014
Matt Wall
Given the media coverage brain imaging studies get, you might think that they are constantly revealing important secrets about this mysterious organ. Catherine Loveday thinks otherwise. She makes the point that using brain-scanning technology to understand what a diseased brain is doing is only of academic interest. It is the study of the mind through behaviour and other cognitive functions, she argues, that leads to useful insights about disorders and treatments. There is some truth here, but as a scientist who uses brain scans every day, I would argue that they contribute a lot more than Loveday gives them credit for. The main problem is that, when it comes to the brain, all analogies are hopelessly crude. The distinction between hardware and software – or the brain and the mind – only has limited practical usefulness. Since all mental processes arise as a result of brain processes, it follows that all mental problems are also a result of dysfunctions in the physical brain. This will be seen by many as an extreme and reductionist position, but a specific example should help to show that it has some value. Parkinson’s disease is a degenerative disorder that causes a variety of symptoms including motor problems, sleep disturbance, various cognitive issues, and often depression. This variety of symptoms might suggest that the underlying problem in Parkinson’s is quite broad and complex, affecting several brain systems. However, it turns out the cause of all these symptoms is quite specific: a loss of neurons in a region of the brain called the substantia nigra. © 2014 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 19414 - Posted: 03.27.2014
Sara Reardon
The US brain-research programme aims to create tools to image and control brain activity, while its European counterpart hopes to create a working computational model of the organ. It seems a natural pairing, almost like the hemispheres of a human brain: two controversial and ambitious projects that seek to decipher the body's control center are poised to join forces. The European Union’s €1-billion (US$1.3-billion) Human Brain Project (HBP) and the United States’ $1-billion Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative will launch a collaboration later this year, according to government officials involved in both projects. Representative Chaka Fattah (Democrat, Pennsylvania) hinted at the plan in a speech on 12 March. The brain “is something that has defied understanding. You can’t imagine a more important scientific cooperation,” says Fattah, the highest-ranking Democratic member of a House of Representatives panel that oversees funding for several US science agencies. Details about how closely the US and European programmes will coordinate are still nebulous, but US government officials say that the effort will include all of the BRAIN Initiative's government partners — the US National Institutes of Health (NIH), the National Science Foundation and Defense Advanced Research Projects Agency. Henry Markram, a neuroscientist at the Swiss Federal Institute of Technology in Lausanne (EPFL), who directs the HBP, says that Israel's brain initiative will also be involved. © 2014 Nature Publishing Group
Keyword: Brain imaging
Link ID: 19384 - Posted: 03.19.2014
By Klint Finley
Today’s neuroscientists need expertise in more than just the human brain. They must also be accomplished hardware engineers, capable of building new tools for analyzing the brain and collecting data from it. There are many off-the-shelf commercial instruments that help you do such things, but they’re usually expensive and hard to customize, says Josh Siegle, a doctoral student at the Wilson Lab at MIT. “Neuroscience tends to have a pretty hacker-oriented culture,” he says. “A lot of people have a very specific idea of how an experiment needs to be done, so they build their own tools.” The problem, Siegle says, is that few neuroscientists share the tools they build. And because they’re so focused on creating tools for their specific experiments, he says, researchers don’t often consider design principles like modularity, which would allow them to reuse tools in other experiments. That can mean too much redundant work as researchers spend time solving problems others already have solved, and building things from scratch instead of repurposing old tools. That’s why Siegle and Jakob Voigts of the Moore Lab at Brown University founded Open Ephys, a project for sharing open source neuroscience hardware designs. They started by posting designs for the tools they use to record electrical signals in the brain. They hope to kick-start an open source movement within neuroscience by making their designs public, and encouraging others to do the same. “We don’t necessarily want people to use our tools specifically,” Siegle says. “We just want to build awareness of how open source eliminates redundancy, reduces costs, and increases productivity.” © 2014 Condé Nast.
Link ID: 19353 - Posted: 03.12.2014
Penis envy. Repression. Libido. Ego. Few have left a legacy as enduring and pervasive as Sigmund Freud. Despite being dismissed long ago as pseudoscientific, Freudian concepts such as these not only permeate many aspects of popular culture, but also had an overarching influence on, and played an important role in the development of, modern psychology, leading Time magazine to name him as one of the most important thinkers of the 20th century. Before his rise to fame as the founding father of psychoanalysis, however, Freud trained and worked as a neurologist. He carried out pioneering neurobiological research, which was cited by Santiago Ramón y Cajal, the father of modern neuroscience, and helped to establish neuroscience as a discipline. The eldest of eight children, Freud was born on 6 May, 1856, in the Moravian town of Příbor, in what is now the Czech Republic. Four years later, Freud's father Jakob, a wool merchant, moved the family to Austria in search of new business opportunities. Freud subsequently entered the university there, aged just 17, to study medicine and, in the second year of his degree, became preoccupied with scientific research. His early work was a harbinger of things to come – it focused on the sexual organs of the eel. The work was, by all accounts, satisfactory, but Freud was disappointed with his results and, perhaps dismayed by the prospect of dissecting more eels, moved to Ernst Brücke's laboratory in 1877. There, he switched to studying the biology of nervous tissue, an endeavour that would last for 10 years. © 2014 Guardian News and Media Limited
Link ID: 19350 - Posted: 03.12.2014
By BENEDICT CAREY Jack Belliveau, a Harvard scientist whose quest to capture the quicksilver flare of thought inside a living brain led to the first magnetic resonance image of human brain function, died on Feb. 14 in San Mateo, Calif. He was 55. The cause was complications of a gastrointestinal disorder, said his wife, Brigitte Poncelet-Belliveau, a researcher who worked with him at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital. He lived in Boston. His wife said he died suddenly while visiting an uncle at his childhood home, which he owned. Dr. Belliveau was a 30-year-old graduate student at the Martinos Center when he hatched a scheme to “see” the neural trace of brain activity. Doctors had for decades been taking X-rays and other images of the brain to look for tumors and other lesions and to assess damage from brain injuries. Researchers had also mapped blood flow using positron emission tomography scans, but that required making and handling radioactive trace chemicals, whose signature vanished within minutes. Very few research centers had the technical knowledge or the machinery to pull it off. Dr. Belliveau tried a different approach. He had developed a technique to track blood flow, called dynamic susceptibility contrast, using an M.R.I. scanner that took split-second images, faster than was usual at the time. This would become a standard technique for assessing blood perfusion in stroke patients and others, but Dr. Belliveau thought he would try it to spy on a normal brain in the act of thinking or perceiving. “He went out to RadioShack and bought a strobe light, like you’d see in a disco,” said Dr. Bruce Rosen, director of the Martinos Center and one of Dr. Belliveau’s advisers at the time. “He thought the strobe would help image the visual areas of the brain, where there was a lot of interest.” © 2014 The New York Times Company
Keyword: Brain imaging
Link ID: 19337 - Posted: 03.10.2014
Sara Reardon
A flipped mental switch is all it takes to make a fly fall in love — even if its object of desire is a ball of wax. A technique called thermogenetics allows researchers to control fly behaviour by activating specific neurons with heat. Combining the system with techniques that use light to trigger neurons could help to elucidate how different neural circuits work together to control complex behaviours such as courtship. Optogenetics — triggering neurons with light — has been successful in mice but has not been pursued much in flies, says Barry Dickson, a neuroscientist at the Howard Hughes Medical Institute's Janelia Farm Research Campus in Ashburn, Virginia. A fibre-optic cable embedded in a mouse’s brain can deliver light to cells genetically engineered to make light-activated proteins, but flies are too small for these fibre optics. Neither will these cells be activated when the flies are put into an illuminated box, because most wavelengths of visible light cannot penetrate a fly’s exoskeleton. Heat can penetrate the exoskeleton, however. Researchers have already studied fly behaviour by adding a heat-activated protein called TRPA1 to neural circuits that control behaviours such as mating and decision-making. When these flies are placed in a hot box, the TRPA1 neurons begin to fire within minutes and drive the fly’s actions1. But it would be better to trigger the behaviours more quickly. So Dickson’s lab has developed a system called the Fly Mind-Altering Device (FlyMAD), which uses a video camera to track the fly as it moves around in a box. The device then shines an infrared laser at the fly to deliver heat directly to the head. Dickson’s group presented the system last October at the Neurobiology of Drosophila conference at Cold Spring Harbor Laboratory in New York, and he is now submitting the work to a peer-reviewed journal. © 2014 Nature Publishing Group
Brendan Borrell
Scientists can now take snapshots of where and how thousands of genes are expressed in intact tissue samples, ranging from a slice of a human brain to the embryo of a fly. The technique, reported today in Science1, can turn a microscope slide into a tool for creating data-rich, three-dimensional maps of how cells interact with one another — a key to understanding the origins of diseases such as cancer. The methodology also has broader applications, enabling researchers to create, for instance, unique molecular ‘barcodes’ to trace connections between cells in the brain, a stated goal of the US National Institutes of Health's Human Connectome Project. Previously, molecular biologists had a limited spatial view of gene expression, the process by which a stretch of double-stranded DNA is turned into single-stranded RNAs, which can in turn be translated into protein products. Researchers could either grind up a hunk of tissue and catalogue all the RNAs they found there, or use fluorescent markers to track the expression of up to 30 RNAs inside each cell of a tissue sample. The latest technique maps up to thousands of RNAs.
Mapping the matrix
In a proof-of-principle study, molecular biologist George Church of Harvard Medical School in Boston, Massachusetts, and his colleagues scratched a layer of cultured connective-tissue cells and sequenced the RNA of cells that migrated to the wound during the healing process. Out of 6,880 genes sequenced, the researchers identified 12 that showed changes in gene expression, including eight that were known to be involved in cell migration but had not been studied in wound healing, the researchers say. “This verifies that the technique could be used to do rapidly what has taken scientists years of looking at gene products one by one,” says Robert Singer, a molecular cell biologist at Albert Einstein College of Medicine in New York, who was not involved in the study. © 2014 Nature Publishing Group
By JAMES GORMAN
SEATTLE — When Clay Reid decided to leave his job as a professor at Harvard Medical School to become a senior investigator at the Allen Institute for Brain Science in Seattle in 2012, some of his colleagues congratulated him warmly and understood right away why he was making the move. Others shook their heads. He was, after all, leaving one of the world’s great universities to go to the academic equivalent of an Internet start-up, albeit an extremely well-financed, very ambitious one, created in 2003 by Paul Allen, a founder of Microsoft. Still, “it wasn’t a remotely hard decision,” Dr. Reid said. He wanted to mount an all-out investigation of a part of the mouse brain. And although he was happy at Harvard, the Allen Institute offered not only great colleagues and deep pockets, but also an approach to science different from the classic university environment. The institute was already mapping the mouse brain in fantastic detail, and specialized in the large-scale accumulation of information in atlases and databases available to all of science. Now, it was expanding, and trying to merge its semi-industrial approach to data gathering with more traditional science driven by individual investigators, by hiring scientists like Christof Koch from the California Institute of Technology as chief scientific officer in 2011 and Dr. Reid. As a senior investigator, he would lead a group of about 100, and work with scientists, engineers and technicians in other groups. Without the need to apply regularly for federal grants, Dr. Reid could concentrate on one piece of the puzzle of how the brain works. He would try to decode the workings of one part of the mouse brain, the million neurons in the visual cortex, from, as he puts it, “molecules to behavior.” © 2014 The New York Times Company
Link ID: 19291 - Posted: 02.25.2014
By JoNel Aleccia The first of 18,000 University of California, Santa Barbara, students lined up for shots Monday as the school began offering an imported vaccine to halt an outbreak of dangerous meningitis that sickened four, including one young man who lost his feet. "My dad's a pediatrician and he's been sending me emails over and over to go get it," said Carly Chianese, 20, a junior from Bayville, N.Y., who showed up a half-hour before the UCSB clinic opened. It’s the second time in three months that government health officials have inoculated U.S. college students with an emergency vaccine, Bexsero, to protect against the B strain of meningitis. More than 5,400 students at Princeton University in New Jersey received the vaccine in December after an outbreak sickened eight there. Another 4,400 got booster shots last week. No new cases have been detected at UCSB since November, but health officials said the vaccine licensed in Europe, Australia and Canada but not in the U.S. would stop future spread of the infection. Current vaccines available in the U.S. protect against four strains of meningitis, but not the B strain. Bacterial meningitis is a serious infection that kills 1 in 10 affected and leaves 20 percent with severe disabilities. Shots will be offered at UCSB from Monday through March 7, with a second series planned for later this spring. “During the last couple of outbreaks on college campuses, there have been additional cases over a year or two years,” said Dr. Amanda Cohn, a medical epidemiologist with the Centers for Disease Control and Prevention. “There is certainly that possibility. We strongly recommend that students get vaccinated.”
Link ID: 19289 - Posted: 02.25.2014
By Meeri Kim
How often, and how well, do you remember your dreams? Some people seem to be super-dreamers, able to recall their dreams effortlessly and in vivid detail almost every day. Others struggle to remember even a vague fragment or two. A new study has discovered that heightened blood flow activity within certain regions of the brain could help explain the great dreamer divide. In general, dream recall is thought to require some amount of wakefulness during the night for the vision to be encoded in longer-term memory. But it is not known what causes some people to wake up more than others. A team of French researchers looked at brain activation maps of sleeping subjects and homed in on areas that could be responsible for nighttime wakefulness. When comparing two groups of dreamers on the opposite ends of the recall spectrum, the maps revealed that the temporoparietal junction — an area responsible for collecting and processing information from the external world — was more highly activated in high-recallers. The researchers speculate that this allows these people to sense environmental noises in the night and wake up momentarily — and, in the process, store dream memories for later recall. In support of this hypothesis, previous medical cases have found that when these same portions of the brain are damaged by stroke, patients lose the ability to remember their dreams, even though they can still achieve the REM (rapid eye movement) stage of sleep in which dreaming usually occurs. © 1996-2014 The Washington Post
by Laura Sanders
When the president of the United States makes a request, scientists usually listen. Physicists created the atomic bomb for President Roosevelt. NASA engineers put men on the moon for President Kennedy. Biologists presented their first draft of the human genetic catalog to an appreciative President Clinton. So when President Obama announced an ambitious plan to understand the brain in April 2013, people were quick to view it as the next Manhattan Project, or Human Genome Project, or moon shot. But these analogies may not be so apt. Compared with understanding the mysterious inner workings of the brain, those other endeavors started with an end in sight. In a human brain, 85 billion nerve cells communicate via trillions of connections using complex patterns of electrical jolts and more than 100 different chemicals. A pea-sized lump of brain tissue contains more information than the Library of Congress. But unlike those orderly shelved and cataloged books, the organization of the brain remains mostly indecipherable, concealing the mysteries underlying thought, learning, emotion and memory. Still, as with other challenging enterprises prompted by presidential initiatives, success would change the world. A deep understanding of how the brain works, and what goes wrong when it doesn’t, could lead to a dazzling array of treatments for brain disorders — from autism and Alzheimer’s disease to depression and drug addiction — that afflict millions of people around the world. © Society for Science & the Public 2000 - 2013.
Keyword: Brain imaging
Link ID: 19223 - Posted: 02.08.2014
By Geoffrey Giller
Working memory—our ability to store pieces of information temporarily—is crucial both for everyday activities like dialing a phone number and for more taxing tasks like arithmetic and accurate note-taking. The strength of working memory is often measured with cognitive tests, such as repeating lists of numbers in reverse order or recalling sequences of dots on a screen. For children, performance on working memory assessments is considered a strong predictor for future academic performance. Yet cognitive tests can fail to identify children whose brain development is lagging in subtle ways that may lead to future deficits in working memory and, thus, in learning. Doctors give the tests periodically and plot the results along a development curve, much like a child’s height and weight. By the time these tests reveal that a child’s working memory is below average, however, it may be too late to do much about it. But in a new study, published January 29 in The Journal of Neuroscience, scientists demonstrated that they could predict the future working memory of children and adolescents by examining brain scans from two different types of magnetic resonance imaging (MRI), instead of looking only at cognitive tests. Henrik Ullman, a PhD student at the Karolinska Institute in Stockholm and the lead author on the paper, says that this was the first study attempting to use MRI scans to predict future working memory capacity. “We were pretty surprised when we found what we actually found,” Ullman says. © 2014 Scientific American
By ABIGAIL ZUGER, M.D. In history’s long parade of pushy mothers and miserably obedient children, no episode beats Dr. Frank H. Netter’s for a happy ending. Both parties got the last laugh. Netter was born to immigrant parents in New York in 1906. He was an artist from the time he could grab a pencil, doodling through high school, winning a scholarship to art school, and enunciating intentions of making his living as an illustrator. Then his mother stepped in, and with an iron hand, deflected him to medicine. Frank’s siblings and cousins all had respectable careers, she informed him, and he would, too. To his credit, he lasted quite a while: through medical school, hospital training and almost an entire year as a qualified doctor. But he continued drawing the whole time, making sketches in his lecture notes to clarify abstruse medical concepts for himself, then doing the same for classmates and even professors. Then, fatefully, his work attracted the notice of advertising departments at pharmaceutical companies. In the midst of the Depression, he demanded and received $7,500 for a series of five drawings, many times what he might expect to earn from a full year of medical practice. He put down his scalpel for good. Thanks to a five-decade exclusive contract with Ciba (now Novartis), he ultimately became possibly the best-known medical illustrator in the world, creating thousands of watercolor plates depicting every aspect of 20th-century medicine. His illustrations were virtually never used to market specific products, but distributed free of charge to doctors as a public service, and collected into popular textbooks. © 2014 The New York Times Company
Keyword: Brain imaging
Link ID: 19197 - Posted: 02.04.2014
by Aviva Rutkin
"He moistened his lips uneasily." It sounds like a cheap romance novel, but this line is actually lifted from quite a different type of prose: a neuroscience study. Along with other sentences, including "Have you got enough blankets?" and "And what eyes they were", it was used to build the first map of how the brain processes the building blocks of speech – distinct units of sound known as phonemes. The map reveals that the brain devotes distinct areas to processing different types of phonemes. It might one day help efforts to read off what someone is hearing from a brain scan. "If you could see the brain of someone who is listening to speech, there is a rapid activation of different areas, each responding specifically to a particular feature the speaker is producing," says Nima Mesgarani, an electrical engineer at Columbia University in New York City.
Snakes on a brain
To build the map, Mesgarani's team turned to a group of volunteers who already had electrodes implanted in their brains as part of an unrelated treatment for epilepsy. The invasive electrodes sit directly on the surface of the brain, providing a unique and detailed view of neural activity. The researchers got the volunteers to listen to hundreds of snippets of speech taken from a database designed to provide an efficient way to cycle through a wide variety of phonemes, while monitoring the signals from the electrodes. As well as those already mentioned, sentences ran the gamut from "It had gone like clockwork" to "Junior, what on Earth's the matter with you?" to "Nobody likes snakes". © Copyright Reed Business Information Ltd.
By Jennifer Ouellette
It was a brisk October day in a Greenwich Village café when New York University neuroscientist David Poeppel crushed my dream of writing the definitive book on the science of the self. I had naively thought I could take a light-hearted romp through genotyping, brain scans, and a few personality tests and explain how a fully conscious unique individual emerges from the genetic primordial ooze. Instead, I found myself scrambling to navigate bumpy empirical ground that was constantly shifting beneath my feet. How could a humble science writer possibly make sense of something so elusively complex when the world’s most brilliant thinkers are still grappling with this marvelous integration that makes us us? “You can’t. Why should you?” Poeppel asked bluntly when I poured out my woes. “We work for years and years on seemingly simple problems, so why should a very complicated problem yield an intuition? It’s not going to happen that way. You’re not going to find the answer.” Well, he was right. Darn it. But while I might not have found the Ultimate Answer to the source of the self, it proved to be an exciting journey and I learned some fascinating things along the way.
1. Genes are deterministic but they are not destiny.
Except for earwax consistency. My earwax is my destiny. We tend to think of our genome as following a “one gene for one trait” model, but the real story is far more complicated. True, there is one gene that codes for a protein that determines whether you will have wet or dry earwax, but most genes serve many more than one function and do not act alone. Height is a simple trait that is almost entirely hereditary, but there is no single gene helpfully labeled height. Rather, there are several genes interacting with one another that determine how tall we will be. Ditto for eye color.
It’s even more complicated for personality traits, health risk factors, and behaviors, where traits are influenced, to varying degrees, by parenting, peer pressure, cultural influences, unique life experiences, and even the hormones churning around us as we develop in the womb.
Alison Abbott
By slicing up and reconstructing the brain of Henry Gustav Molaison, researchers have confirmed predictions about a patient who has already contributed more than most to neuroscience. No big scientific surprises emerge from the anatomical analysis, which was carried out by Jacopo Annese of the Brain Observatory at the University of California, San Diego, and his colleagues, and published today in Nature Communications1. But it has confirmed scientists’ deductions about the parts of the brain involved in learning and memory. “The confirmation is surely important,” says Richard Morris, who studies learning and memory at the University of Edinburgh, UK. “The patient is a classic case, and so the paper will be extensively cited.” Molaison, known in the scientific literature as patient H.M., lost his ability to store new memories in 1953 after surgeon William Scoville removed part of his brain — including a large swathe of the hippocampus — to treat his epilepsy. That provided the first conclusive evidence that the hippocampus is fundamental for memory. H.M. was studied extensively by cognitive neuroscientists during his life. After H.M. died in 2008, Annese set out to discover exactly what Scoville had excised. The surgeon had made sketches during the operation, and brain-imaging studies in the 1990s confirmed that the lesion corresponded to the sketches, although it was slightly smaller. But whereas brain imaging is relatively low-resolution, Annese and his colleagues were able to carry out an analysis at the micrometre scale. © 2014 Nature Publishing Group
by Helen Thomson
The brain that made the greatest contribution to neuroscience and to our understanding of memory has become a gift that keeps on giving. A 3D reconstruction of the brain of Henry Molaison, whose surgery to cure him of epilepsy left him unable to form new memories, will allow scientists to continue to garner insights into the brain for years to come. "Patient HM" became arguably the most famous person in neuroscience after he had several areas of his brain removed in 1953. His resulting amnesia and willingness to be tested have given us unprecedented insights into where memories are formed and stored in the brain. On his death in 2008, HM was revealed to the world as Henry Molaison. Now, a post-mortem examination of his brain, and a new kind of virtual 3D reconstruction, have been published. As a child, Molaison had major epileptic seizures. Anti-epileptic drugs failed, so he sought help from neurosurgeon William Scoville at Hartford Hospital in Connecticut. When Molaison was 27 years old, Scoville removed portions of his medial temporal lobes, which included an area called the hippocampus on both sides of his brain. As a result, Molaison's epilepsy became manageable, but he could not form any new memories, a condition known as anterograde amnesia. He also had difficulty recollecting his long-term past – partial retrograde amnesia.
Keyword: Learning & Memory
Link ID: 19172 - Posted: 01.27.2014
By Gary Stix
The blood-brain barrier is the Berlin Wall of human anatomy and physiology. Its closely packed cells shield neurons and the like from toxins and pathogens, while letting pass glucose and other essential chemicals for brain metabolism (caffeine?). For years, pharmaceutical companies and academic researchers have engaged in halting efforts to traverse this imposing blockade in order to deliver some of the big molecules that might potentially help slow the progression of devastating neurological diseases. Like would-be refugees from the former East Germany, many medications get snagged by border guards during the crossing—a molecular security force that either impedes or digests any invader. There have been many attempts to secure safe passage—deploying chemicals that make brain-barrier “endothelial” cells shrivel up, or wielding tiny catheters or minute bubbles that slip through minuscule breaches. Success has been mixed at best—none of these molecular cargo carriers have made their way as far as human trials. Roche, the Swiss-based drugmaker, reported in the Jan. 8 Neuron a bit of progress toward overcoming the lingering technical impediments. The study described a new technique that tricks one of the BBB’s natural checkpoints to let through an elaborately engineered drug that attacks the amyloid-beta protein fragments that may be the primary culprit inflicting the damage wrought by Alzheimer’s. The subterfuge involves the transferrin receptor, a docking site used to transport iron into the brain. Roche took a fragment of an antibody that binds the transferrin receptor and latched it onto another antibody that, once on the other side of the BBB, attaches to and then removes amyloid. © 2014 Scientific American
Link ID: 19121 - Posted: 01.13.2014