Chapter 2. Functional Neuroanatomy: The Nervous System and Behavior

Maria Temming Getting robots to do what we want would be a lot easier if they could read our minds. That sci-fi dream might not be so far off. With a new robot control system, a human can stop a bot from making a mistake and get the machine back on track using brain waves and simple hand gestures. People who oversee robots in factories, homes or hospitals could use this setup, to be presented at the Robotics: Science and Systems conference on June 28, to ensure bots operate safely and efficiently. Electrodes worn on the head and forearm allow a person to control the robot. The head-worn electrodes detect electrical signals called error-related potentials — which people’s brains unconsciously generate when they see someone goof up — and send an alert to the robot. When the robot receives an error signal, it stops what it is doing. The person can then make hand gestures — detected by arm-worn electrodes that monitor electrical muscle signals — to show the bot what it should do instead. MIT roboticist Daniela Rus and colleagues tested the system with seven volunteers. Each user supervised a robot that moved a drill toward one of three possible targets, each marked by an LED bulb, on a mock airplane fuselage. Whenever the robot zeroed in on the wrong target, the user’s mental error-alert halted the bot. And when the user flicked his or her wrist left or right to redirect the robot, the machine moved toward the proper target. In more than 1,000 trials, the robot initially aimed for the correct target about 70 percent of the time, and with human intervention chose the right target more than 97 percent of the time. The team plans to build a version of the system that recognizes a wider variety of user movements. That way, “you can gesture how the robot should move, and your motion can be more fluidly interpreted,” says study coauthor Joseph DelPreto, also a roboticist at MIT. © Society for Science & the Public 2000 - 2018
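
For readers who want a concrete picture of the supervisory loop described above, here is a minimal Python sketch. The signal sources, thresholds, and function names are illustrative stand-ins, not the interfaces of the actual MIT system.

```python
import numpy as np

# Minimal, self-contained sketch of the supervisory loop described above.
# Signal sources and thresholds are illustrative stand-ins, not the MIT system.

TARGETS = ["left", "center", "right"]          # three LED-marked drill targets

def detect_error_potential(eeg_window, threshold=2.5):
    # An error-related potential shows up as a brief deflection after the
    # user sees the robot pick the wrong target; here we simply threshold
    # the peak z-scored amplitude of a short EEG window.
    z = (eeg_window - eeg_window.mean()) / eeg_window.std()
    return np.abs(z).max() > threshold

def classify_wrist_gesture(emg_window):
    # Arm-worn electrodes pick up muscle activity; a left vs. right wrist
    # flick is approximated by the sign of the mean rectified signal.
    return "left" if emg_window.mean() < 0 else "right"

def supervise(current_target, eeg_window, emg_window):
    """One supervision step: halt on a detected ErrP, then redirect."""
    if detect_error_potential(eeg_window):
        idx = TARGETS.index(current_target)
        if classify_wrist_gesture(emg_window) == "left":
            idx = max(0, idx - 1)
        else:
            idx = min(len(TARGETS) - 1, idx + 1)
        return "stopped", TARGETS[idx]
    return "moving", current_target

# Example: the robot heads for "center", the user's brain flags an error,
# and a rightward wrist flick redirects it to "right".
state, target = supervise("center",
                          eeg_window=np.r_[np.zeros(99), 5.0],   # simulated ErrP spike
                          emg_window=np.ones(100))               # rightward flick
print(state, target)   # -> stopped right
```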

Keyword: Brain imaging; Robotics
Link ID: 25111 - Posted: 06.20.2018

By Matt Warren Scientists regularly comb through 3D data, from medical images to maps of the moon, yet they are often stuck using flat computer screens that can’t fully represent 3D data sets. Now, researchers have developed a method of 3D printing that lets scientists produce stunning, high-definition 3D copies of their data. Conventional 3D printing converts data into a computer model made up of tiny, connected triangles. But this process can create awkward images: The fine lines of the brain’s white matter, for example, show up as bulky tubes. Conventional printing also has problems creating objects where solid parts (or data points) are separated by empty space. The new process is far more direct. Instead of being transformed into a computer model, the data set is sliced up into thousands of horizontal images, each consisting of hundreds of thousands of voxels, or 3D pixels. Each voxel is printed with droplets of colored resin hardened by ultraviolet light. Different colors can be combined to create new ones, and transparent resin is used to represent empty space. Each layer is printed, one on top of another, to gradually build up a 3D structure. So far, the researchers have used the voxel printing process to produce high-definition models of brain scans, topographical maps, and laser-scanned statues. And although it may take some time to get there, the team sees a day when anyone will be able to print off a copy of their data at the press of a button, from archaeologists reproducing important artifacts for conservation to doctors creating models of body parts to plan surgical procedures. © 2018 American Association for the Advancement of Science.
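
The slicing step described above can be illustrated with a short sketch: a 3D data volume is cut into horizontal layers, and each voxel in a layer is assigned a resin color, with transparent resin standing in for empty space. The color map and array layout below are assumptions chosen for illustration, not the authors' actual printing pipeline.

```python
import numpy as np

# Illustrative sketch of voxel slicing: a 3D scalar volume becomes a stack of
# RGBA layers, one per horizontal slice, with transparent voxels where there
# is no data. File formats and color handling of real printers are omitted.

def volume_to_layers(volume, cmap):
    """Convert a 3D array of scalar values into a list of RGBA voxel layers."""
    layers = []
    for z in range(volume.shape[2]):                 # one horizontal slice per layer
        slice_ = volume[:, :, z]
        rgba = np.zeros(slice_.shape + (4,), dtype=np.uint8)
        mask = slice_ > 0                            # zero means empty space
        rgba[mask] = cmap(slice_[mask])              # colored resin
        rgba[~mask] = (0, 0, 0, 0)                   # transparent resin
        layers.append(rgba)
    return layers

def grayscale_cmap(values):
    # Map data values in 0..1 to opaque gray levels.
    g = (values * 255).astype(np.uint8)
    return np.stack([g, g, g, np.full_like(g, 255)], axis=-1)

# Example: a tiny synthetic "scan" with a bright blob in the middle.
vol = np.zeros((16, 16, 8))
vol[6:10, 6:10, 3:5] = 0.8
layers = volume_to_layers(vol, grayscale_cmap)
print(len(layers), layers[3].shape)   # -> 8 (16, 16, 4)
```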

Keyword: Brain imaging
Link ID: 25050 - Posted: 06.02.2018

By Dennis Normile SHANGHAI, CHINA—The nascent China Brain Project took another step toward reality last week with the launch of the Shanghai Research Center for Brain Science and Brain-Inspired Intelligence. The new center and its Beijing counterpart, launched 2 months ago, are expected to become part of an ambitious national effort to bring China to the forefront of neuroscience. But details of that 15-year project—expected to rival similar U.S. and EU efforts in scale and ambition—are still being worked out, 2 years after the government made it a priority. Preparation for the national effort “was taking quite a long time,” says Zhang Xu, a neuroscientist and executive director of the new center here. So Beijing and Shanghai got started on their own plans, he says. China’s growing research prowess and an increasing societal interest in neuroscience—triggered in part by an aging population—as well as commercial opportunities and government support are all coming together to make this “a good time for China’s brain science efforts,” Zhang says. Government planners called for brain research to be a key science and technology project in the nation’s 13th Five-Year Plan, adopted in spring 2016. The effort would have three main pillars, according to a November 2016 Neuron paper from a group that included Poo Mu-ming, director of Shanghai’s Institute of Neuroscience (ION), part of the Chinese Academy of Sciences (CAS). It would focus on basic research on neural mechanisms underlying cognition, translational studies of neurological diseases with an emphasis on early diagnosis and intervention, and brain simulations to advance artificial intelligence and robotics. Support under the 5-year plan was just the start of a 15-year program, the group wrote. © 2018 American Association for the Advancement of Science.

Keyword: Brain imaging
Link ID: 25013 - Posted: 05.23.2018

Prion diseases are slow degenerative brain diseases that occur in people and various other mammals. No vaccines or treatments are available, and these diseases are almost always fatal. Scientists have found little evidence of a protective immune response to prion infections. Further, microglia — brain cells usually involved in the first level of host defense against infections of the brain — have been thought to worsen these diseases by secreting toxic molecules that can damage nerve cells. Now, scientists have used an experimental drug, PLX5622, to test the role of microglia against scrapie, a prion disease of sheep. PLX5622 rapidly kills most of the microglia in the brain. When researchers gave the drug to mice infected with scrapie, microglia were eliminated and the mice died one month faster than did untreated mice. The results, published in the Journal of Virology by researchers from the National Institute of Allergy and Infectious Diseases at the National Institutes of Health, suggest that microglia can defend against a prion infection and thus slow the course of disease. The scientists hypothesize that microglia trap and destroy the aggregated prion proteins that cause brain damage. The findings suggest that drugs that increase the helpful activity of microglia may have a role in slowing the progression of prion diseases. Researchers are now studying the details of how microglia may be able to destroy prions in the brain. The scientists note that microglia could have a similar beneficial effect on other neurodegenerative diseases associated with protein aggregation, such as Alzheimer’s disease and Parkinson’s disease.

Keyword: Prions; Glia
Link ID: 24991 - Posted: 05.18.2018

By Ashley Yeager At first glance, neurons and muscle cells are the stars of gross motor function. Muscle movement results from coordination between nerve and muscle cells: when an action potential arrives at the presynaptic neuron terminal, calcium ions flow, causing synaptic vesicles to fuse with the cell membrane and release some of the neuron’s contents, including acetylcholine, into the cleft between the neuron and muscle cell. Acetylcholine binds to receptors on the muscle cell, sending calcium ions into it and causing it to contract. But there’s also a third kind of cell at neuromuscular junctions, a terminal/perisynaptic Schwann cell (TPSC). These cells are known to aid in synapse formation and in the repair of injured peripheral motor axons, but their possible role in synaptic communications has been largely ignored. Problems with synaptic communication can underlie muscle fatigue, notes neuroscientist Thomas Gould of the University of Nevada, Reno, in an email to The Scientist. “Because these cells are activated by synaptic activity, we wondered what the role of this activation was.” To investigate, he and his colleagues stimulated motor neurons in neonatal mouse diaphragm tissue expressing a calcium indicator, and found that TPSCs released calcium ions from the endoplasmic reticulum into the cytosol and could take in potassium ions from the synaptic cleft between neurons and muscle cells. However, TPSCs lacking the protein purinergic 2Y1 receptor (P2Y1R) didn’t release calcium or appear to take in potassium ions. © 1986-2018 The Scientist

Keyword: Glia; Muscles
Link ID: 24964 - Posted: 05.12.2018

by Lindsey Bever For years, Kendra Jackson battled an incessantly runny nose — sniffling and sneezing, blowing and losing sleep each night. Jackson said she initially thought she was getting a cold, then, as her symptoms persisted, doctors suggested it was likely seasonal allergies, putting her among the more than 50 million Americans who struggle with them each year. But the symptoms never cleared up, and, as the years went by, Jackson started to worry that it might be something worse. She told ABC affiliate KETV this week her nose ran “like a waterfall, continuously, and then it would run to the back of my throat.” “Everywhere I went,” she added, “I always had a box of Puffs, always stuffed in my pocket.” She had frequent headaches. And she could rarely sleep. Doctors at Nebraska Medicine in Omaha recently diagnosed Jackson with a cerebrospinal fluid (CSF) leak, a condition in which the watery liquid surrounding the brain spills out through a hole or tear in the skull and then drains into the ears or the nose, according to Johns Hopkins Medicine. The doctors told Jackson that she was losing an estimated half-pint of the fluid per day through her nose, according to KETV. © 1996-2018 The Washington Post

Keyword: Pain & Touch; Brain Injury/Concussion
Link ID: 24953 - Posted: 05.09.2018

Nicola Davis Researchers in the US say they have managed to keep the brains of decapitated pigs alive outside of the body for up to 36 hours by circulating an oxygen-rich fluid through the organs. While the scientists, led by Yale University neuroscientist Nenad Sestan, say the brains are not conscious, they add the feat might help researchers to probe how the brain works, and aid studies into experimental treatments for diseases ranging from cancer to dementia. The revelation, disclosed in the MIT Technology Review and based on comments Sestan made at a meeting at the US National Institutes of Health in March, has received a mixed reaction in the scientific community. Anna Devor, a neuroscientist at the University of California, San Diego, told the MIT Technology Review the feat could help researchers probe the connections between brain cells, allowing them to build a “brain atlas”. However, others were quick to stress that the development did not mean humans could expect to cheat death any time soon, noting that it is not possible to transplant a brain into a new body. “That animal brain is not aware of anything, I am very confident of that,” Sestan is reported to have told the NIH meeting. But he noted that ethical considerations abound: “Hypothetically, somebody takes this technology, makes it better, and restores someone’s [brain] activity. That is restoring a human being. If that person has memory, I would be freaking out completely.” © 2018 Guardian News and Media Limited

Keyword: Miscellaneous
Link ID: 24919 - Posted: 04.28.2018

In 2007, I spent the summer before my junior year of college removing little bits of brain from rats, growing them in tiny plastic dishes, and poring over the neurons in each one. For three months, I spent three or four hours a day, five or six days a week, in a small room, peering through a microscope and snapping photos of the brain cells. The room was pitch black, save for the green glow emitted by the neurons. I was looking to see whether a certain growth factor could protect the neurons from degenerating the way they do in patients with Parkinson's disease. This kind of work, which is common in neuroscience research, requires time and a borderline pathological attention to detail. Which is precisely why my PI trained me, a lowly undergrad, to do it—just as, decades earlier, someone had trained him. Now, researchers think they can train machines to do that grunt work. In a study described in the latest issue of the journal Cell, scientists led by Gladstone Institutes and UC San Francisco neuroscientist Steven Finkbeiner collaborated with researchers at Google to train a machine learning algorithm to analyze neuronal cells in culture. The researchers used a method called deep learning, the machine learning technique driving advancements not just at Google, but Amazon, Facebook, Microsoft. You know, the usual suspects. It relies on pattern recognition: Feed the system enough training data—whether it's pictures of animals, moves from expert players of the board game Go, or photographs of cultured brain cells—and it can learn to identify cats, trounce the world's best board-game players, or suss out the morphological features of neurons.
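
As a rough illustration of the pattern-recognition idea described above, the sketch below trains a tiny convolutional network to label microscopy-style image patches. The architecture and the random stand-in data are hypothetical; this is not the Finkbeiner/Google model.

```python
import torch
import torch.nn as nn

# Toy deep-learning example: a small convolutional network learns to label
# fluorescence-microscopy image patches (e.g., "neuron" vs. "not a neuron").
# Random tensors stand in for real labeled images.

class TinyCellClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, n_classes)   # sized for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

# Stand-in training data: 32 random 64x64 grayscale patches with fake labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32,))

model = TinyCellClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                       # a few passes over the toy batch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```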

Keyword: Brain imaging; Learning & Memory
Link ID: 24862 - Posted: 04.13.2018

In a small room tucked away at the University of Toronto, Professor Dan Nemrodov is pulling thoughts right out of people's brains. He straps a hat with electrodes on someone's head and then shows them pictures of faces. By reading brain activity with an electroencephalography (EEG) machine, he's then able to reconstruct faces with almost perfect accuracy. Student participants wearing the cap look at a collection of faces for two hours. At the same time, the EEG software recognizes patterns relating to certain facial features found in the photos. Machine-learning algorithms are then used to recreate the images based on the EEG data, in some cases with up to 98 per cent accuracy. Nemrodov and his colleague, Professor Adrian Nestor, say this is a big thing. "Ultimately we are involved in a form of mind reading," he says. The technology has huge ramifications for medicine, law, government and business. But the ethical questions are just as huge. Here are some key questions: What can be the benefits of this research? If developed further, it could help patients with serious neurological damage: people who are incapacitated to the point that they cannot express themselves or ask a question. According to clinical ethicist Prof. Kerry Bowman and his students at the University of Toronto, this technology can get inside someone's mind and provide a link of communication. It may give that person a chance to exercise their autonomy, especially in regard to informed consent to either continue treatment or stop. ©2018 CBC/Radio-Canada.
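
The general decoding recipe hinted at above (learn a mapping from EEG features to a compact image space, then invert it to produce a picture) can be sketched as follows. The synthetic data, the ridge regression, and the PCA face space are assumptions chosen for illustration, not the Nemrodov and Nestor pipeline itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Sketch of a generic brain-to-image decoder: compress face images into a few
# principal components, regress those components onto EEG feature vectors,
# then reconstruct held-out faces from EEG alone. All data here are synthetic.

rng = np.random.default_rng(0)
n_trials, n_eeg_features, img_size = 200, 64, 32 * 32

faces = rng.random((n_trials, img_size))                     # stand-in face images
eeg = faces @ rng.normal(size=(img_size, n_eeg_features))    # fake EEG "responses"
eeg += 0.1 * rng.normal(size=eeg.shape)

# 1) Compress the images into a handful of principal components.
pca = PCA(n_components=20).fit(faces)
face_codes = pca.transform(faces)

# 2) Regress image components onto EEG features (train on most trials).
reg = Ridge(alpha=1.0).fit(eeg[:180], face_codes[:180])

# 3) Reconstruct held-out faces from EEG alone.
reconstructed = pca.inverse_transform(reg.predict(eeg[180:]))
error = np.mean((reconstructed - faces[180:]) ** 2)
print(f"mean reconstruction error on held-out trials: {error:.4f}")
```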

Keyword: Brain imaging
Link ID: 24810 - Posted: 04.02.2018

By Liz Tormes When I first started working as a photo researcher for Scientific American MIND in 2013, a large part of my day was spent looking at brains. Lots of them. They appeared on my computer screen in various forms—from black-and-white CT scans featured in dense journals to sad-looking, grey brains sitting on the bottom of glass laboratory jars. At times they were boring, and often they could be downright disturbing. But every now and then I would come across a beautiful 3D image of strange, rainbow-colored pathways in various formations that looked like nothing I had ever seen before. I was sure it had been miscategorized somehow—no way was I looking at a brain! Through my work I have encountered countless images of multi-colored Brainbows, prismatic Diffusion Tensor Imaging (DTI), and even tiny and intricate neon mini-brains grown from actual stem cells in labs. Increasingly I have found myself dazzled, not just by the pictures themselves, but by the scientific and technological advances that have made this type of imaging possible in only the past few years. It was through my photo research that I happened upon the Netherlands Institute for Neuroscience’s (NIN) annual Art of Neuroscience contest. This exciting opportunity for neurologists, fine artists, videographers and illustrators, whose work is inspired by human and animal brains, was something I wanted to share with our readers. © 2018 Scientific American

Keyword: Brain imaging
Link ID: 24803 - Posted: 03.31.2018

By Simon Makin Neuroscientists today know a lot about how individual neurons operate but remarkably little about how large numbers of them work together to produce thoughts, feelings and behavior. What is needed is a wiring diagram for the brain—known as a connectome—to identify the circuits that underlie brain functions. The challenge is dizzying: There are around 100 billion neurons in the human brain, which can each make thousands of connections, or synapses, making potentially hundreds of trillions of connections. So far, researchers have typically used microscopes to visualize neural connections, but this is laborious and expensive work. Now in a paper published March 28 in Nature, an innovative brain-mapping technique developed at Cold Spring Harbor Laboratory (CSHL) has been used to trace the connections emanating from hundreds of neurons in the main visual area of the mouse cortex, the brain’s outer layer. The technique, which exploits the advancing speed and plummeting cost of genetic sequencing, is more efficient than current methods, allowing the team to produce a more detailed picture than previously possible at unprecedented speed. Once the technology matures it could be used to provide clues to the nature of neuro-developmental disorders such as autism that are thought to involve differences in brain wiring. The team, led by Anthony Zador at CSHL and neuroscientist Thomas Mrsic-Flogel of the University of Basel in Switzerland, verified their method by comparing it with a previous gold-standard means of identifying connections among nerve cells—a technique called fluorescent single neuron tracing. This involves introducing into cells genes that produce proteins that fluoresce with a greenish glow, so they and their axons (neurons’ output wires) can be visualized with light microscopy. © 2018 Scientific American

Keyword: Brain imaging; Schizophrenia
Link ID: 24802 - Posted: 03.30.2018

Juliette Jowit The world’s first brain scanner that can be worn as people move around has been invented, by a team who hope the contraption can help children with neurological and mental disorders and reveal how the brain handles social situations. The new scalp caps – made on 3D printers – fit closely to the head, so can record the electromagnetic field produced by electrical currents between brain cells in much finer detail than previously. This design means the scanner can work in ways never possible before: subjects can move about, for example, and even play games with the equipment on, while medics can use it on groups such as babies, children and those with illnesses which cause them to move involuntarily. “This has the potential to revolutionise the brain imaging field, and transform the scientific and clinical questions that can be addressed with human brain imaging,” said Prof Gareth Barnes at University College London, one of three partners in the project. The other two are the University of Nottingham and the Wellcome Trust. The brain imaging technique known as magnetoencephalography, or MEG, has been helping scientists for decades, but in many cases has involved using huge contraptions that look like vintage hair salon driers. The scanners operated further from the head than the new devices, reducing the detail they recorded, and users had to remain incredibly still. © 2018 Guardian News and Media Limited

Keyword: Brain imaging
Link ID: 24780 - Posted: 03.22.2018

Researchers at the University of Calgary say they have developed a portable brain-imaging system that would literally shed light on concussions. Symptoms of a concussion can vary greatly between individuals and include headaches, nausea, loss of memory and lack of co-ordination, which make it difficult to find treatment options. U of C scientist Jeff Dunn says there has been no accepted way to get an image of a concussion, but he and his team have developed a device based on near-infrared spectroscopy (NIRS) that measures communication in the brain by tracking oxygen levels in the blood. Results show these patterns change after concussion. The device — a cap that contains small lights with sensors connected to a computer — is placed on the top of the head to monitor and measure brain activity while the patient looks at a picture or does a simple activity. "When the brain activates, blood flow goes up but oxygen levels also go up, so the blood actually becomes redder as the brain activates," Dunn said. "And we measure that so we shine a light in and we can see that change in oxygen level and measure the change in absorption." Dunn hopes the images will show a connection between symptoms and abnormalities in the brain that could help doctors identify treatment protocols and recovery timelines. ©2018 CBC/Radio-Canada
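
Dunn's description of measuring changes in light absorption corresponds to the modified Beer-Lambert law commonly used in near-infrared spectroscopy. The sketch below shows the arithmetic with illustrative extinction coefficients and path-length values; the numbers are placeholders, not calibrated constants from the Calgary device.

```python
import numpy as np

# Worked example of the physics behind the quote: the modified Beer-Lambert
# law relates a change in optical density (how much light is absorbed) at two
# wavelengths to changes in oxy- and deoxy-hemoglobin concentration.
# Extinction coefficients and path lengths below are illustrative only.

#                 HbO2     HbR      (extinction coefficients, arbitrary units)
E = np.array([[  1.5,     3.8 ],    # 760 nm: deoxy-hemoglobin absorbs more
              [  2.5,     1.8 ]])   # 850 nm: oxy-hemoglobin absorbs more

source_detector_distance = 3.0       # cm
dpf = 6.0                            # differential path-length factor (scattering)
path = source_detector_distance * dpf

# Measured changes in optical density at the two wavelengths during a task.
delta_od = np.array([0.010, 0.025])

# Invert  delta_OD = (E @ [dHbO2, dHbR]) * path  to recover concentration changes.
d_hbo2, d_hbr = np.linalg.solve(E * path, delta_od)
print(f"delta HbO2: {d_hbo2:+.5f}   delta HbR: {d_hbr:+.5f}  (arbitrary units)")
# A rise in oxy-hemoglobin with a smaller or negative change in
# deoxy-hemoglobin is the "blood gets redder" signature of activation.
```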

Keyword: Brain Injury/Concussion; Brain imaging
Link ID: 24754 - Posted: 03.15.2018

By Ruth Williams When optogenetics burst onto the scene a little over a decade ago, it added a powerful tool to neuroscientists’ arsenal. Instead of merely correlating recorded brain activity with behaviors, researchers could control the cell types of their choosing to produce specific outcomes. Light-sensitive ion channels (opsins) inserted into the cells allow neuronal activity to be controlled by the flick of a switch. Nevertheless, MIT’s Edward Boyden says more precision is needed. Previous approaches achieved temporal resolution in the tens of milliseconds, making them a somewhat blunt instrument for controlling neurons’ millisecond-fast firings. In addition, most optogenetics experiments have involved “activation or silencing of a whole set of neurons,” he says. “But the problem is the brain doesn’t work that way.” When a cell is performing a given function—initiating a muscle movement, recalling a memory—“neighboring neurons can be doing completely different things,” Boyden explains. “So there is a quest now to do single-cell optogenetics.” Illumination techniques such as two-photon excitation with computer-generated holography (a way to precisely sculpt light in 3D) allow light to be focused tightly enough to hit one cell. But even so, Boyden says, if the targeted cell body lies close to the axons or dendrites of neighboring opsin-expressing cells, those will be activated too. © 1986-2018 The Scientist

Keyword: Brain imaging
Link ID: 24732 - Posted: 03.08.2018

By Diana Kwon When optogenetics debuted over a decade ago, it quickly became the method of choice for many neuroscientists. By using light to selectively control ion channels on neurons in living animal brains, researchers could see how manipulating specific neural circuits altered behavior in real time. Since then, scientists have used the technique to study brain circuitry and function across a variety of species, from fruit flies to monkeys—the method is even being tested in a clinical trial to restore vision in patients with a rare genetic disorder. Today (February 8) in Science, researchers report successfully conducting optogenetics experiments using injected nanoparticles in mice, inching the field closer to a noninvasive method of stimulating the brain with light that could one day have therapeutic uses. “Optogenetics revolutionized how we all do experimental neuroscience in terms of exploring circuits,” says Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute in Japan. However, this technique currently requires a permanently implanted fiber—so over the last few years, researchers have started to develop less invasive ways to stimulate the brain. A number of groups devised such techniques using magnetic fields, electric currents, and sound. McHugh and his colleagues decided to try another approach: They chose near-infrared light, which can more easily penetrate tissue than the blue-green light typically used for optogenetics. “What we saw as an advantage was a kind of chemistry-based approach in which we can harness the power of near-infrared light to penetrate tissue, but still use this existing toolbox that's been developed over the last decade of optogenetic channels that respond to visible light,” McHugh says. © 1986-2018 The Scientist

Keyword: Brain imaging
Link ID: 24637 - Posted: 02.09.2018

By Jim Daley Researchers at the D’Or Institute for Research and Education in Brazil have created an algorithm that can use functional magnetic resonance imaging (fMRI) data to identify which musical pieces participants are listening to. The study, published last Friday (February 2) in Scientific Reports, involved six participants listening to 40 pieces of music from various genres, including classical, rock, pop, and jazz. “Our approach was capable of identifying musical pieces with improving accuracy across time and spatial coverage,” the researchers write in the paper. “It is worth noting that these results were obtained for a heterogeneous stimulus set . . . including distinct emotional categories of joy and tenderness.” The researchers first played different musical pieces for the participants and used fMRI to measure the neural signatures of each song. With that data, they taught a computer to identify brain activity that corresponded with the musical dimensions of each piece, including tonality, rhythm, and timbre, as well as a set of lower-level acoustic features. Then, the researchers played the pieces for the participants again while the computer tried to identify the music each person was listening to, based on fMRI responses. The computer was successful in decoding the fMRI information and identifying the musical pieces around 77 percent of the time when it had two options to choose from. When the researchers presented 10 possibilities, the computer was correct 74 percent of the time. © 1986-2018 The Scientist
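
The identification step reported above can be sketched as a simple pattern-matching procedure: correlate the measured fMRI response with the predicted response for each candidate piece and pick the best match. Everything below is synthetic and illustrative; it is not the D'Or group's code, but it shows why 74 to 77 percent accuracy is well above the 50 percent (two-way) and 10 percent (ten-way) chance levels.

```python
import numpy as np

# Sketch of correlation-based identification: given an fMRI response pattern
# for an unknown piece, correlate it against the predicted pattern for each
# candidate piece and report the best match. Synthetic data stand in for
# predicted and measured brain responses.

rng = np.random.default_rng(1)
n_pieces, n_voxels = 40, 500

true_patterns = rng.normal(size=(n_pieces, n_voxels))                   # predicted pattern per piece
measured = true_patterns + 1.5 * rng.normal(size=true_patterns.shape)   # noisy test scans

def identify(measured_pattern, candidate_indices):
    """Return the candidate whose predicted pattern correlates best."""
    corrs = [np.corrcoef(measured_pattern, true_patterns[i])[0, 1]
             for i in candidate_indices]
    return candidate_indices[int(np.argmax(corrs))]

def accuracy(n_candidates, n_trials=2000):
    hits = 0
    for _ in range(n_trials):
        candidates = rng.choice(n_pieces, size=n_candidates, replace=False)
        target = candidates[0]
        hits += identify(measured[target], list(candidates)) == target
    return hits / n_trials

print(f"2-way identification:  {accuracy(2):.2f}")
print(f"10-way identification: {accuracy(10):.2f}")
```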

Keyword: Hearing; Brain imaging
Link ID: 24617 - Posted: 02.06.2018

By Eli Meixler Friday’s Google Doodle celebrates the birthday of Wilder Penfield, a scientist and physician whose groundbreaking contributions to neuroscience earned him the designation “the greatest living Canadian.” Penfield would have turned 127 today. Later celebrated as a pioneering researcher and a humane clinical practitioner, Penfield pursued medicine at Princeton University, believing it to be “the best way to make the world a better place in which to live.” He was drawn to the field of brain surgery, studying neuropathology as a Rhodes scholar at Oxford University. In 1928, Penfield was recruited by McGill University in Montreal, where he also practiced at Royal Victoria Hospital as the city’s first neurosurgeon. Penfield founded the Montreal Neurological Institute with support from the Rockefeller Foundation in 1934, the same year he became a Canadian citizen. Penfield pioneered a treatment for epilepsy that allowed patients to remain fully conscious while a surgeon used electric probes to pinpoint areas of the brain responsible for setting off seizures. The experimental method became known as the Montreal Procedure, and was widely adopted. But Wilder Penfield’s research led him to another discovery: that physical areas of the brain were associated with different duties, such as speech or movement, and stimulating them could generate specific reactions — including, famously, conjuring a memory of the smell of burnt toast. Friday’s animated Google Doodle features an illustrated brain and burning toast. © 2017 Time Inc.

Keyword: Miscellaneous
Link ID: 24576 - Posted: 01.27.2018

By Giorgia Guglielmi ENIGMA, the world’s largest brain mapping project, was “born out of frustration,” says neuroscientist Paul Thompson of the University of Southern California in Los Angeles. In 2009, he and geneticist Nicholas Martin of the Queensland Institute of Medical Research in Brisbane, Australia, were chafing at the limits of brain imaging studies. The cost of MRI scans limited most efforts to a few dozen subjects—too few to draw robust connections about how brain structure is linked to genetic variations and disease. The answer, they realized over a meal at a Los Angeles shopping mall, was to pool images and genetic data from multiple studies across the world. After a slow start, the consortium has brought together nearly 900 researchers across 39 countries to analyze brain scans and genetic data on more than 30,000 people. In an accelerating series of publications, ENIGMA’s crowdsourcing approach is opening windows on how genes and structure relate in the normal brain—and in disease. This week, for example, an ENIGMA study published in the journal Brain compared scans from nearly 4000 people across Europe, the Americas, Asia, and Australia to pinpoint unexpected brain abnormalities associated with common epilepsies. ENIGMA is “an outstanding effort. We should all be doing more of this,” says Mohammed Milad, a neuroscientist at the University of Illinois in Chicago who is not a member of the consortium. ENIGMA’s founders crafted the consortium’s name—Enhancing NeuroImaging Genetics through Meta-Analysis—so that its acronym would honor U.K. mathematician Alan Turing’s code-breaking effort targeting Germany’s Enigma cipher machines during World War II. Like Turing’s project, ENIGMA aims to crack a mystery. Small brain-scanning studies of twins or close relatives done in the 2000s showed that differences in some cognitive and structural brain measures have a genetic basis. © 2018 American Association for the Advancement of Science.
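
The "Meta-Analysis" in ENIGMA's name refers to the statistical step of pooling per-site results rather than raw scans. A minimal sketch of inverse-variance-weighted (fixed-effect) pooling is shown below, with invented numbers chosen purely for illustration.

```python
import numpy as np

# Sketch of the consortium's basic statistical move: each site computes the
# same effect (for example, how much a genetic variant shifts hippocampal
# volume) and reports an estimate plus a standard error; the estimates are
# then pooled with inverse-variance weights. Numbers are invented.

# Per-site effect estimates (mm^3 per risk allele) and their standard errors.
effects = np.array([12.0, 8.5, 15.2, 9.8, 11.1])
ses     = np.array([ 6.0, 4.5,  7.0, 5.0,  4.0])

weights = 1.0 / ses**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
z = pooled_effect / pooled_se

print(f"pooled effect: {pooled_effect:.2f} mm^3  (SE {pooled_se:.2f}, z = {z:.2f})")
# Pooling thousands of subjects across sites shrinks the standard error far
# below what any single small imaging study could achieve.
```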

Keyword: Brain imaging; Genes & Behavior
Link ID: 24560 - Posted: 01.24.2018

Harriet Dempsey-Jones Nobody really believes that the shape of our heads is a window into our personalities anymore. This idea, known as “phrenology”, was developed by the German physician Franz Joseph Gall in 1796 and was hugely popular in the 19th century. Today it is often remembered for its dark history – being misused in its later days to back racist and sexist stereotypes, and its links with Nazi “eugenics”. But despite the fact that it has fallen into disrepute, phrenology as a science has never really been subjected to rigorous, neuroscientific testing. That is, until now. Researchers at the University of Oxford have hacked their own brain scanning software to explore – for the first time – whether there truly is any correspondence between the bumps and contours of your head and aspects of your personality. The results have recently been published in an open science archive, but have also been submitted to the journal Cortex. But why did phrenologists think that bumps on your head might be so informative? Their enigmatic claims were based around a few general principles. Phrenologists believed the brain was comprised of separate “organs” responsible for different aspects of the mind, such as for self-esteem, cautiousness and benevolence. They also thought of the brain like a muscle – the more you used a particular organ the more it would grow in size (hypertrophy), and less used faculties would shrink. The skull would then mould to accommodate these peaks and troughs in the brain’s surface – providing an indirect reflection of the brain, and thus, the dominant features of a person’s character. © 2010–2018, The Conversation US, Inc.

Keyword: Brain imaging
Link ID: 24554 - Posted: 01.23.2018

Laura Sanders Nerve cells in the brain make elaborate connections and exchange lightning-quick messages that captivate scientists. But these cells also sport simpler, hairlike protrusions called cilia. Long overlooked, the little stubs may actually have big jobs in the brain. Researchers are turning up roles for nerve cell cilia in a variety of brain functions. In a region of the brain linked to appetite, for example, cilia appear to play a role in preventing obesity, researchers report January 8 in three studies in Nature Genetics. Cilia perched on nerve cells may also contribute to brain development, nerve cell communication and possibly even learning and memory, other research suggests. “Perhaps every neuron in the brain possesses cilia, and most neuroscientists don’t know they’re there,” says Kirk Mykytyn, a cell biologist at Ohio State University College of Medicine in Columbus. “There’s a big disconnect there.” Most cells in the body — including those in the brain — possess what’s called a primary cilium, made up of lipid molecules and proteins. The functions these appendages perform in parts of the body are starting to come into focus (SN: 11/3/12, p. 16). Cilia in the nose, for example, detect smell molecules, and cilia on rod and cone cells in the eye help with vision. But cilia in the brain are more mysterious. © Society for Science & the Public 2000 - 2017.

Keyword: Obesity
Link ID: 24546 - Posted: 01.20.2018