Chapter 2. Cells and Structures: The Anatomy of the Nervous System




By Diana Kwon When optogenetics debuted over a decade ago, it quickly became the method of choice for many neuroscientists. By using light to selectively control ion channels on neurons in living animal brains, researchers could see how manipulating specific neural circuits altered behavior in real time. Since then, scientists have used the technique to study brain circuitry and function across a variety of species, from fruit flies to monkeys—the method is even being tested in a clinical trial to restore vision in patients with a rare genetic disorder. Today (February 8) in Science, researchers report successfully conducting optogenetics experiments using injected nanoparticles in mice, inching the field closer to a noninvasive method of stimulating the brain with light that could one day have therapeutic uses. “Optogenetics revolutionized how we all do experimental neuroscience in terms of exploring circuits,” says Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute in Japan. However, this technique currently requires a permanently implanted fiber—so over the last few years, researchers have started to develop ways to stimulate the brain in less invasive ways. A number of groups devised such techniques using magnetic fields, electric currents, and sound. McHugh and his colleagues decided to try another approach: They chose near-infrared light, which can more easily penetrate tissue than the blue-green light typically used for optogenetics. “What we saw as an advantage was a kind of chemistry-based approach in which we can harness the power of near-infrared light to penetrate tissue, but still use this existing toolbox that's been developed over the last decade of optogenetic channels that respond to visible light,” McHugh says. © 1986-2018 The Scientist

Keyword: Brain imaging
Link ID: 24637 - Posted: 02.09.2018

By Jim Daley Researchers at the D’Or Institute for Research and Education in Brazil have created an algorithm that can use functional magnetic resonance imaging (fMRI) data to identify which musical pieces participants are listening to. The study, published last Friday (February 2) in Scientific Reports, involved six participants listening to 40 pieces of music from various genres, including classical, rock, pop, and jazz. “Our approach was capable of identifying musical pieces with improving accuracy across time and spatial coverage,” the researchers write in the paper. “It is worth noting that these results were obtained for a heterogeneous stimulus set . . . including distinct emotional categories of joy and tenderness.” The researchers first played different musical pieces for the participants and used fMRI to measure the neural signatures of each song. With that data, they taught a computer to identify brain activity that corresponded with the musical dimensions of each piece, including tonality, rhythm, and timbre, as well as a set of lower-level acoustic features. Then, the researchers played the pieces for the participants again while the computer tried to identify the music each person was listening to, based on fMRI responses. The computer was successful in decoding the fMRI information and identifying the musical pieces around 77 percent of the time when it had two options to choose from. When the researchers presented 10 possibilities, the computer was correct 74 percent of the time. © 1986-2018 The Scientist

Keyword: Hearing; Brain imaging
Link ID: 24617 - Posted: 02.06.2018

By Eli Meixler Friday’s Google Doodle celebrates the birthday of Wilder Penfield, a scientist and physician whose groundbreaking contributions to neuroscience earned him the designation “the greatest living Canadian.” Penfield would have turned 127 today. Later celebrated as a pioneering researcher and a humane clinical practitioner, Penfield pursued medicine at Princeton University, believing it to be “the best way to make the world a better place in which to live.” He was drawn to the field of brain surgery, studying neuropathology as a Rhodes scholar at Oxford University. In 1928, Penfield was recruited by McGill University in Montreal, where he also practiced at Royal Victoria Hospital as the city’s first neurosurgeon. Penfield founded the Montreal Neurological Institute with support from the Rockefeller Foundation in 1934, the same year he became a Canadian citizen. Penfield pioneered a treatment for epilepsy that allowed patients to remain fully conscious while a surgeon used electric probes to pinpoint areas of the brain responsible for setting off seizures. The experimental method became known as the Montreal Procedure, and was widely adopted. But Wilder Penfield’s research led him to another discovery: that physical areas of the brain were associated with different duties, such as speech or movement, and stimulating them could generate specific reactions — including, famously, conjuring a memory of the smell of burnt toast. Friday’s animated Google Doodle features an illustrated brain and burning toast. © 2017 Time Inc.

Keyword: Miscellaneous
Link ID: 24576 - Posted: 01.27.2018

By Giorgia Guglielmi ENIGMA, the world’s largest brain mapping project, was “born out of frustration,” says neuroscientist Paul Thompson of the University of Southern California in Los Angeles. In 2009, he and geneticist Nicholas Martin of the Queensland Institute of Medical Research in Brisbane, Australia, were chafing at the limits of brain imaging studies. The cost of MRI scans limited most efforts to a few dozen subjects—too few to draw robust connections about how brain structure is linked to genetic variations and disease. The answer, they realized over a meal at a Los Angeles shopping mall, was to pool images and genetic data from multiple studies across the world. After a slow start, the consortium has brought together nearly 900 researchers across 39 countries to analyze brain scans and genetic data on more than 30,000 people. In an accelerating series of publications, ENIGMA’s crowdsourcing approach is opening windows on how genes and structure relate in the normal brain—and in disease. This week, for example, an ENIGMA study published in the journal Brain compared scans from nearly 4000 people across Europe, the Americas, Asia, and Australia to pinpoint unexpected brain abnormalities associated with common epilepsies. ENIGMA is “an outstanding effort. We should all be doing more of this,” says Mohammed Milad, a neuroscientist at the University of Illinois in Chicago who is not a member of the consortium. ENIGMA’s founders crafted the consortium’s name—Enhancing NeuroImaging Genetics through Meta-Analysis—so that its acronym would honor U.K. mathematician Alan Turing’s code-breaking effort targeting Germany’s Enigma cipher machines during World War II. Like Turing’s project, ENIGMA aims to crack a mystery. Small brain-scanning studies of twins or close relatives done in the 2000s showed that differences in some cognitive and structural brain measures have a genetic basis. © 2018 American Association for the Advancement of Science.

Keyword: Brain imaging; Genes & Behavior
Link ID: 24560 - Posted: 01.24.2018

Harriet Dempsey-Jones Nobody really believes that the shape of our heads is a window into our personalities anymore. This idea, known as “phrenology”, was developed by the German physician Franz Joseph Gall in 1796 and was hugely popular in the 19th century. Today it is often remembered for its dark history – being misused in its later days to back racist and sexist stereotypes, and its links with Nazi “eugenics”. But despite the fact that it has fallen into disrepute, phrenology as a science has never really been subjected to rigorous, neuroscientific testing. That is, until now. Researchers at the University of Oxford have hacked their own brain scanning software to explore – for the first time – whether there truly is any correspondence between the bumps and contours of your head and aspects of your personality. The results have recently been published in an open science archive, but have also been submitted to the journal Cortex. But why did phrenologists think that bumps on your head might be so informative? Their enigmatic claims were based around a few general principles. Phrenologists believed the brain was composed of separate “organs” responsible for different aspects of the mind, such as for self-esteem, cautiousness and benevolence. They also thought of the brain like a muscle – the more you used a particular organ the more it would grow in size (hypertrophy), and less used faculties would shrink. The skull would then mould to accommodate these peaks and troughs in the brain’s surface – providing an indirect reflection of the brain, and thus, the dominant features of a person’s character. © 2010–2018, The Conversation US, Inc.

Keyword: Brain imaging
Link ID: 24554 - Posted: 01.23.2018

Laura Sanders Nerve cells in the brain make elaborate connections and exchange lightning-quick messages that captivate scientists. But these cells also sport simpler, hairlike protrusions called cilia. Long overlooked, the little stubs may actually have big jobs in the brain. Researchers are turning up roles for nerve cell cilia in a variety of brain functions. In a region of the brain linked to appetite, for example, cilia appear to play a role in preventing obesity, researchers report January 8 in three studies in Nature Genetics. Cilia perched on nerve cells may also contribute to brain development, nerve cell communication and possibly even learning and memory, other research suggests. “Perhaps every neuron in the brain possesses cilia, and most neuroscientists don’t know they’re there,” says Kirk Mykytyn, a cell biologist at Ohio State University College of Medicine in Columbus. “There’s a big disconnect there.” Most cells in the body — including those in the brain — possess what’s called a primary cilium, made up of lipid molecules and proteins. The functions these appendages perform in parts of the body are starting to come into focus (SN: 11/3/12, p. 16). Cilia in the nose, for example, detect smell molecules, and cilia on rod and cone cells in the eye help with vision. But cilia in the brain are more mysterious. © Society for Science & the Public 2000 - 2017.

Keyword: Obesity
Link ID: 24546 - Posted: 01.20.2018

Ian Sample Science editor Donatella Versace finds it in the conflict of ideas, Jack White under pressure of deadlines. For William S Burroughs, an old Dadaist trick helped: cutting pages into pieces and rearranging the words. Every artist has their own way of generating original ideas, but what is happening inside the brain might not be so individual. In new research, scientists report signature patterns of neural activity that mark out those who are most creative. “We have identified a pattern of brain connectivity that varies across people, but is associated with the ability to come up with creative ideas,” said Roger Beaty, a psychologist at Harvard University. “It’s not like we can predict with perfect accuracy who’s going to be the next Einstein, but we can get a pretty good sense of how flexible a given person’s thinking is.” Creative thinking is one of the primary drivers of cultural and technological change, but the brain activity that underpins original thought has been hard to pin down. In an effort to shed light on the creative process, Beaty teamed up with colleagues in Austria and China to scan people’s brains as they came up with original ideas. The scientists asked the volunteers to perform a creative thinking task as they lay inside a brain scanner. While the machine recorded their white matter at work, the participants had 12 seconds to come up with the most imaginative use for an object that flashed up on a screen. Three independent scorers then rated their answers. © 2018 Guardian News and Media Limited

Keyword: Attention; Brain imaging
Link ID: 24531 - Posted: 01.16.2018

By Matthew Hutson Imagine searching through your digital photos by mentally picturing the person or image you want. Or sketching a new kitchen design without lifting a pen. Or texting a loved one a sunset photo that was never captured on camera. A computer that can read your mind would find many uses in daily life, not to mention for those paralyzed and with no other way to communicate. Now, scientists have created the first algorithm of its kind to interpret—and accurately reproduce—images seen or imagined by another person. It might be decades before the technology is ready for practical use, but researchers are one step closer to building systems that could help us project our inner mind’s eye outward. “I was impressed that it works so well,” says Zhongming Liu, a computer scientist at Purdue University in West Lafayette, Indiana, who helped develop an algorithm that can somewhat reproduce what moviegoers see when they’re watching a film. “This is really cool.” Using algorithms to decode mental images isn’t new. Since 2011, researchers have recreated movie clips, photos, and even dream imagery by matching brain activity to activity recorded earlier when viewing images. But these methods all have their limits: Some deal only with narrow domains like face shape, and others can’t build an image from scratch—instead, they must select from preprogrammed images or categories like “person” or “bird.” This new work can generate recognizable images on the fly and even reproduce shapes that are not seen, but imagined. © 2018 American Association for the Advancement of Science.

Keyword: Vision; Brain imaging
Link ID: 24518 - Posted: 01.11.2018

By DENISE GRADY One blue surgical drape at a time, the patient disappeared, until all that showed was a triangle of her shaved scalp. “Ten seconds of quiet in the room, please,” said Dr. David J. Langer, the chairman of neurosurgery at Lenox Hill Hospital in Manhattan, part of Northwell Health. Silence fell, until he said, “O.K., I’ll take the scissors.” His patient, Anita Roy, 66, had impaired blood flow to the left side of her brain, and Dr. Langer was about to perform bypass surgery on slender, delicate arteries to restore the circulation and prevent a stroke. The operating room was dark, and everyone was wearing 3-D glasses. Lenox Hill is the first hospital in the United States to buy a device known as a videomicroscope, which turns neurosurgery into an immersive and sometimes dizzying expedition into the human brain. Enlarged on a 55-inch monitor, the stubble on Ms. Roy’s shaved scalp spiked up like rebar. The scissors and scalpel seemed big as hockey sticks, and popped out of the screen so vividly that observers felt an urge to duck. “This is like landing on the moon,” said a neurosurgeon who was visiting to watch and learn. The equipment produces magnified, high-resolution, three-dimensional digital images of surgical sites, and lets everyone in the room see exactly what the surgeon is seeing. The videomicroscope has a unique ability to capture “the brilliance and the beauty of the neurosurgical anatomy,” Dr. Langer said. He and other surgeons who have tested it predict it will change the way many brain and spine operations are performed and taught. “The first time I used it, I told students that this gives them an understanding of why I went into neurosurgery in the first place,” Dr. Langer said. © 2018 The New York Times Company

Keyword: Brain imaging
Link ID: 24504 - Posted: 01.09.2018

Tina Hesman Saey In movies, exploring the body up close often involves shrinking to microscopic sizes and taking harrowing rides through the blood. Thanks to a new virtual model, you can journey through a three-dimensional brain. No shrink ray required. The Society for Neuroscience and other organizations have long sponsored the website BrainFacts.org, which has basic information about how the human brain functions. Recently, the site launched an interactive 3-D brain. A translucent, light pink brain initially rotates in the middle of the screen. With a click of a mouse or a tap of a finger on a mobile device, you can highlight and isolate different parts of the organ. A brief text box then pops up to provide a structure’s name and details about the structure’s function. For instance, the globus pallidus — dual almond-shaped structures deep in the brain — puts a brake on muscle contractions to keep movements smooth. Some blurbs tell how a structure got its name or how researchers figured out what it does. Scientists, for example, have learned a lot about brain function by studying people who have localized brain damage. But the precuneus, a region in the cerebral cortex along the brain’s midline, isn’t usually damaged by strokes or head injuries, so scientists weren’t sure what the region did. Modern brain-imaging techniques that track blood flow and cell activity indicate the precuneus is involved in imagination, self-consciousness and reflecting on memories. |© Society for Science & the Public 2000 - 2018

Keyword: Brain imaging
Link ID: 24502 - Posted: 01.09.2018

by Emilie Reas Functional MRI (fMRI) is one of the most celebrated tools in neuroscience. Because of its unique ability to peer directly into the living brain while an organism thinks, feels and behaves, fMRI studies are often afforded disproportionate media attention, replete with flashy headlines and often grandiose claims. However, the technique has come under a fair amount of criticism from researchers questioning the validity of the statistical methods used to analyze fMRI data, and hence the reliability of fMRI findings. Can we trust those flashy headlines claiming that “scientists have discovered the area of the brain,” or are the masses of fMRI studies plagued by statistical shortcomings? To explore why these studies can be vulnerable to experimental failure, in their new PLOS One study coauthors Henk Cremers, Tor Wager and Tal Yarkoni investigated common statistical issues encountered in typical fMRI studies, and proposed how to avert them moving forward. The reliability of any experiment depends on adequate power to detect real effects and reject spurious ones, which can be influenced by various factors including the sample size (or number of “subjects” in fMRI), how strong the real effect is (“effect size”), whether comparisons are within or between subjects, and the statistical threshold used. To characterize common statistical culprits of fMRI studies, Cremers and colleagues first simulated typical fMRI scenarios before validating these simulations on a real dataset. One scenario simulated weak but diffusely distributed brain activity, and the other scenario simulated strong but localized brain activity (Figure 1). The simulation revealed that effect sizes are generally inflated for weak diffuse, compared to strong localized, activations, especially when the sample size is small. In contrast, effect sizes can actually be underestimated for strong localized scenarios when the sample size is large.
Thus, more isn’t always better when it comes to fMRI; the optimal sample size likely depends on the specific brain-behavior relationship under investigation.
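The inflation described above is easy to demonstrate with a toy simulation (this is an illustrative sketch, not the authors' actual analysis: the effect size of 0.3, sample sizes, and significance threshold are all assumptions chosen for the example). When many small studies of a weak true effect are run and only the statistically significant ones are kept, the surviving estimates systematically overestimate the true effect; with large samples the surviving estimates stay close to the truth.

```python
import random
import math

def mean(xs):
    return sum(xs) / len(xs)

def mean_significant_effect(n, true_d=0.3, n_studies=2000, seed=1):
    """Run n_studies simulated one-sample studies of size n with a true
    standardized effect of true_d, keep only those crossing a rough
    one-sided p < .05 threshold, and return the average *estimated*
    effect among the survivors."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_studies):
        xs = [rng.gauss(true_d, 1.0) for _ in range(n)]
        m = mean(xs)
        sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        t = m / (sd / math.sqrt(n))
        if t > 1.7:  # approximate one-sided .05 critical value
            kept.append(m)
    return mean(kept)

small = mean_significant_effect(n=15)   # low-powered studies
large = mean_significant_effect(n=200)  # well-powered studies
print(small, large)  # small-n survivors overestimate the true d = 0.3
```

With n = 15, only studies that happen to draw an unusually large sample effect clear the threshold, so the published (significant) estimates cluster well above 0.3; with n = 200, nearly every study is significant and the surviving estimates sit near the true value.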

Keyword: Brain imaging
Link ID: 24501 - Posted: 01.09.2018

By Meredith Wadman For the first time, scientists have produced evidence in living humans that the protein tau, which mars the brain in Alzheimer’s disease, spreads from neuron to neuron. Although such movement wasn’t directly observed, the finding may illuminate how neurodegeneration occurs in the devastating illness, and it could provide new ideas for stemming the brain damage that robs so many of memory and cognition. Tau is one of two proteins—along with β-amyloid—that form unusual clumps in the brains of people with Alzheimer’s disease. Scientists have long debated which is most important to the condition and, thus, the best target for intervention. Tau deposits are found inside neurons, where they are thought to inhibit or kill them, whereas β-amyloid forms plaques outside brain cells. Researchers at the University of Cambridge in the United Kingdom combined two brain imaging techniques, functional magnetic resonance imaging and positron emission tomography (PET) scanning, in 17 Alzheimer’s patients to map both the buildup of tau and their brains’ functional connectivity—that is, how spatially separated brain regions communicate with each other. Strikingly, they found the largest concentrations of the damaging tau protein in brain regions heavily wired to others, suggesting that tau may spread in a way analogous to influenza during an epidemic, when people with the most social contacts will be at greatest risk of catching the disease. © 2018 American Association for the Advancement of Science.

Keyword: Alzheimers; Brain imaging
Link ID: 24496 - Posted: 01.06.2018

By Mark R. Hutchinson When someone is asked to think about pain, he or she will typically envision a graphic wound or a limb bent at an unnatural angle. However, chronic pain, more technically known as persistent pain, is a different beast altogether. In fact, some would say that the only thing that acute and persistent pain have in common is the word “pain.” The biological mechanisms that create and sustain the two conditions are very different. Pain is typically thought of as the behavioral and emotional results of the transmission of a neuronal signal, and indeed, acute pain, or nociception, results from the activation of peripheral neurons and the transmission of this signal along a connected series of so-called somatosensory neurons up the spinal cord and into the brain. But persistent pain, which is characterized by the overactivation of such pain pathways to cause chronic burning, deep aching, and skin-crawling and electric shock–like sensations, commonly involves another cell type altogether: glia.1 Long considered to be little more than cellular glue holding the brain together, glia, which outnumber neurons 10 to 1, are now appreciated as critical contributors to the health of the central nervous system, with recognized roles in the formation of synapses, neuronal plasticity, and protection against neurodegeneration. And over the past 15 to 20 years, pain researchers have also begun to appreciate the importance of these cells. Research has demonstrated that glia seem to respond and adapt to the cumulative danger signals that can result from disparate kinds of injury and illness, and that they appear to prime neural pathways for the overactivation that causes persistent pain. In fact, glial biology may hold important clues to some of the mysteries that have perplexed the pain research field, such as why the prevalence of persistent pain differs between the sexes and why some analgesic medications fail to work. © 1986-2018 The Scientist

Keyword: Pain & Touch; Glia
Link ID: 24482 - Posted: 01.03.2018

By NEIL GENZLINGER Ben Barres, a neuroscientist who did groundbreaking work on brain cells known as glia and their possible relation to diseases like Parkinson’s, and who was an outspoken advocate of equal opportunity for women in the sciences, died on Wednesday at his home in Palo Alto, Calif. He was 63. In announcing the death, Stanford University, where Dr. Barres was a professor, said he had had pancreatic cancer. Dr. Barres was transgender, having transitioned from female to male in 1997, when he was in his 40s and well into his career. That gave him a distinctive outlook on the difficulties that women and members of minorities face in academia, and especially in the sciences. An article he wrote for the journal Nature in 2006 titled “Does Gender Matter?” took on some prominent scholars who had argued that women were not advancing in the sciences because of innate differences in their aptitude. “I am suspicious when those who are at an advantage proclaim that a disadvantaged group of people is innately less able,” he wrote. “Historically, claims that disadvantaged groups are innately inferior have been based on junk science and intolerance.” The article cited studies documenting obstacles facing women, but it also drew on Dr. Barres’s personal experiences. He recounted dismissive treatment he had received when he was a woman and how that had changed when he became a man. “By far,” he wrote, “the main difference that I have noticed is that people who don’t know I am transgendered treat me with much more respect: I can even complete a whole sentence without being interrupted by a man.” Dr. Barres (pronounced BARE-ess) was born on Sept. 13, 1954, in West Orange, N.J., with the given name Barbara. “I knew from a very young age — 5 or 6 — that I wanted to be a scientist, that there was something fun about it and I would enjoy doing it,” he told The New York Times in 2006. “I decided I would go to M.I.T. when I was 12 or 13.” © 2017 The New York Times Company

Keyword: Glia; Sexual Behavior
Link ID: 24472 - Posted: 12.30.2017

By Sharon Begley Technologies to detect brain activity — fine, we’ll come right out and call it mind reading — as well as to change it are moving along so quickly that “a bit of a gold rush is happening, both on the academic side and the corporate side,” Michel Maharbiz of the University of California, Berkeley, told a recent conference at the Massachusetts Institute of Technology. Here are three fast-moving areas of neuroscience we’ll be watching in 2018: Neural dust/neurograins Whatever you call these electronics, they’re really, really tiny. We’re eagerly awaiting results from DARPA’s $65 million neural engineering program, which aims to develop a brain implant that can communicate digitally with the outside world. The first step is detecting neurons’ electrochemical signaling (DARPA, the Pentagon’s Defense Advanced Research Projects Agency, says 1 million neurons at a time would be nice). To do that, scientists at Brown University are developing salt-grain-sized “neurograins” containing an electrode to detect neural firing as well as to zap neurons to fire, all via a radio frequency antenna. Maharbiz’s “neural dust” is already able to do the first part. The tiny wireless devices can detect what neurons are doing, he and his colleagues reported in a 2016 rat study. (The study’s lead scientist recently moved to Elon Musk’s startup Neuralink, one of a growing number of brain-tech companies.) Now Maharbiz and team are also working on making neural dust receive outside signals and cause neurons to fire in certain ways. Such “stimdust” would be “the smallest [nerve] stimulator ever built,” Maharbiz said. Eventually, scientists hope, they’ll know the neural code for, say, walking, letting them transmit the precise code needed to let a paralyzed patient walk. 
They’re also deciphering the neural code for understanding spoken language, which raises the specter of outside signals making people hear voices — raising ethical issues that, experts said, neurotech will generate in abundance. © 2017 Scientific American

Keyword: Brain imaging; Robotics
Link ID: 24468 - Posted: 12.29.2017

Acclaimed Stanford neuroscientist Ben Barres, MD, PhD, died on Dec. 27, 20 months after being diagnosed with pancreatic cancer. He was 63. Barres’ path-breaking discoveries of the crucial roles played by glial cells — the unsung majority of brain cells, which aren’t nerve cells — revolutionized the field of neuroscience. Barres was incontestably visionary yet, ironically, face-blind — he suffered from prosopagnosia, an inability to distinguish faces, and relied on voices or visual cues such as hats and hairstyles to identify even people he knew well. And there were many of them. A professor of neurobiology, of developmental biology and of neurology, Barres was widely praised as a stellar and passionate scientist whose methodologic rigor was matched only by his energy and enthusiasm. He was devoted to his scholarly pursuits and to his trainees, advocating unrelentingly on their behalf. He especially championed the cause of women in academia, with whom he empathized; he was transgender. “Ben was a remarkable person. He will be remembered as a brilliant scientist who transformed our understanding of glial cells and as a tireless advocate who promoted equity and diversity at every turn,” said Marc Tessier-Lavigne, PhD, president of Stanford University. “He was also a beloved mentor to students and trainees, a dear friend to many in our community and a champion for the fundamental dignity of us all.”

Keyword: Glia
Link ID: 24461 - Posted: 12.28.2017

by Bethany Brookshire An astonishing number of things that scientists know about brains and behavior are based on small groups of highly educated, mostly white people between the ages of 18 and 21. In other words, those conclusions are based on college students. College students make a convenient study population when you’re a researcher at a university. It makes for a biased sample, but one that’s still useful for some types of studies. It would be easy to think that for studies of, say, how the typical brain develops, a brain is just a brain, no matter whose skull it’s resting in. A biased sample shouldn’t really matter, right? Wrong. Studies heavy in rich, well-educated brains may provide a picture of brain development that’s inaccurate for the American population at large, a recent study found. The results provide a strong argument for scientists to pay more attention to who, exactly, they’re studying in their brain imaging experiments. It’s “a solid piece of evidence showing that those of us in neuroimaging need to do a better job thinking about our sample, where it’s coming from and who we can generalize our findings to,” says Christopher Monk, who studies psychology and neuroscience at the University of Michigan in Ann Arbor. The new study is an example of what happens when epidemiology experiments — studies of patterns in health and disease — crash into studies of brain imaging. “In epidemiology we think about sample composition a lot,” notes Kaja LeWinn, an epidemiologist at the University of California in San Francisco. Who is in the study, where they live and what they do is crucial to finding out how disease patterns spread and what contributes to good health. But in conversations with her colleagues in psychiatry about brain imaging, LeWinn realized they weren’t thinking very much about whose brains they were looking at. 
Particularly when studying healthy populations, she says, there was an idea that “a brain is a brain is a brain.” © Society for Science & the Public 2000 - 2017. All rights reserved.

Keyword: Brain imaging; Development of the Brain
Link ID: 24432 - Posted: 12.16.2017

Laura Sanders If more nerve cells mean more smarts, then dogs beat cats, paws down, a new study on carnivores shows. That harsh reality may shock some friends of felines, but scientists say the real surprises are inside the brains of less popular carnivores. Raccoon brains are packed with nerve cells, for instance, while brown bear brains are sorely lacking. By comparing the numbers of nerve cells, or neurons, among eight species of carnivores (ferret, banded mongoose, raccoon, cat, dog, hyena, lion and brown bear), researchers now have a better understanding of how different-sized brains are built. This neural accounting, described in an upcoming Frontiers in Neuroanatomy paper, may ultimately help reveal how brain features relate to intelligence. For now, the multispecies tally raises more questions than it answers, says zoologist Sarah Benson-Amram of the University of Wyoming in Laramie. “It shows us that there’s a lot more out there that we need to study to really be able to understand the evolution of brain size and how it relates to cognition,” she says. Neuroscientist Suzana Herculano-Houzel of Vanderbilt University in Nashville and colleagues gathered brains from the different species of carnivores. For each animal, the researchers whipped up batches of “brain soup,” tissue dissolved in a detergent. Using a molecule that attaches selectively to neurons in this slurry, researchers could count the number of neurons in each bit of brain real estate. |© Society for Science & the Public 2000 - 2017.

Keyword: Evolution
Link ID: 24430 - Posted: 12.16.2017

Tina Hesman Saey PHILADELPHIA — Flat brains growing on microscope slides may have revealed a new wrinkle in the story of how the brain folds. Cells inside the brains contract, while cells on the outside grow and push outward, researchers at the Weizmann Institute of Science in Rehovot, Israel, discovered from working with the lab-grown brains, or organoids. This push and pull results in folds in the organoids similar to those found in full-size brains. Orly Reiner reported the results December 5 at the joint meeting of the American Society for Cell Biology and the European Molecular Biology Organization. Reiner and her colleagues sandwiched human brain stem cells between a glass microscope slide and a porous membrane. The apparatus allowed the cells access to nutrients and oxygen while giving the researchers a peek at how the organoids grew. The cells formed layered sheets that closed up at the edges, making the organoids resemble pita bread, Reiner said. Wrinkles began to form in the outer layers of the organoids about six days after the mini brains started growing. These brain organoids may help explain why people with lissencephaly — a rare brain malformation in which the ridges and folds are missing — have smooth brains. The researchers used the CRISPR/Cas9 gene-editing system to make a mutation in the LIS1 gene. People with lissencephaly often have mutations in that gene. Cells carrying the mutation didn’t contract or move normally, the team found. |© Society for Science & the Public 2000 - 2017.

Keyword: Development of the Brain
Link ID: 24421 - Posted: 12.14.2017

/ By Rae Ellen Bichell In mid-October, Dr. David Bennett, a neurologist who directs the Alzheimer’s Disease Center at Rush University Medical Center in Chicago, stood in a St. Louis auditorium packed with nuns. His goal: To convince them — particularly the ones without brain disease — to donate their brains to science. “We are beginning to understand how little we actually know about the human brain.” Politicians, Bennett is fond of saying, can walk into a room and separate people from their money. “I can walk into a room and separate people from their brains.” To Bennett, making such acquisitions is, in some ways, more crucial than ever. Demand for brains for scientific research is rising across the board — driven in varying degrees by increased funding for research on brain disorders, rising incidence of age-related brain disease, big technological leaps in scientific tools used to analyze the brain, and a growing sense that sometimes, studying animals just isn’t good enough to understand and fix human disease. But more than this, scientists like Bennett are realizing that the brains they have traditionally studied (Bennett maintains 4,000 square feet of cabinets and freezers full of brain slices in Chicago), are too often riddled with the signs of end stage Alzheimer’s and other maladies that contribute to dementia. Far more rare are comparatively healthy brains that can allow scientists to more accurately identify what causes dementia — and what protects us from it. That deficiency now has Bennett and other scientists working hard to stock their shelves with a particularly precious resource: the brains of people like Sister Carleen Reck, who heard Bennett speak and thought his request for brain donations was a good idea, so she signed an anatomical gift act. Copyright 2017 Undark

Keyword: Alzheimers; Brain imaging
Link ID: 24417 - Posted: 12.11.2017