Chapter 2. Functional Neuroanatomy: The Nervous System and Behavior
Are you a giver or a taker? Brain scans have identified a region of the cerebral cortex responsible for generosity – and some of us are kinder than others. The area was identified using a computer game that linked different symbols to cash prizes that went either to the player or to one of the study’s other participants. The volunteers readily learned to score prizes that helped other people, but they tended to learn how to benefit themselves more quickly. MRI scanning revealed that one particular brain area – the subgenual anterior cingulate cortex – seemed to be active when participants chose to be generous, prioritising benefits for someone else over getting rewards for themselves. But Patricia Lockwood, at the University of Oxford, and her team found that this brain area was not equally active in every volunteer. People who rated themselves as having higher levels of empathy learned to benefit others faster, and these people had more activity in this particular brain area, says Lockwood. This finding may lead to new ways to identify and understand anti-social and psychopathic behavior. Journal reference: PNAS, DOI: 10.1073/pnas.1603198113 © Copyright Reed Business Information Ltd.
By Helen Thomson Take a walk while I look inside your brain. Scientists have developed the first wearable PET scanner – allowing them to capture the inner workings of the brain while a person is on the move. The team plans to use it to investigate the exceptional talents of savants, such as perfect memory or exceptional mathematical skill. All available techniques for scanning the deeper regions of our brains require a person to be perfectly still. This limits the kinds of activities we can observe the brain doing, but the new scanner will enable researchers to study brain behaviour in normal life, as well as providing a better understanding of the tremors of Parkinson’s disease, and the effectiveness of treatments for stroke. Positron emission tomography scanners track radioactive tracers, injected into the blood, that are typically attached to glucose, the molecule that our cells use for energy. In this way, the scanners build 3D images of our bodies, enabling us to see which brain areas are particularly active, or where tumours are guzzling glucose in the body. To adapt this technique for people who are moving around, Stan Majewski at West Virginia University in Morgantown and his colleagues have constructed a ring of 12 radiation detectors that can be placed around a person’s head. This scanner is attached to the ceiling by a bungee-cord contraption, so that the wearer doesn’t feel the extra weight of the scanner. © Copyright Reed Business Information Ltd.
Keyword: Brain imaging
Link ID: 22557 - Posted: 08.13.2016
Mo Costandi The human brain is often said to be the most complex object in the known universe, and there’s good reason to believe that it is. That lump of jelly inside your head contains at least 80 billion nerve cells, or neurons, and even more of the non-neuronal cells called glia. Between them, they form hundreds of trillions of precise synaptic connections; but they all have moveable parts, and these connections can change. Neurons can extend and retract their delicate fibres; some types of glial cells can crawl through the brain; and neurons and glia routinely work together to create new connections and eliminate old ones. These processes begin before we are born, and occur until we die, making the brain a highly dynamic organ that undergoes continuous change throughout life. At any given moment, many millions of these connections are being modified in one way or another, to reshape the brain’s circuitry in response to our daily experiences. Researchers at Yale University have now developed an imaging technique that enables them to visualise the density of synapses in the living human brain, and offers a promising new way of studying how the organ develops and functions, and also how it deteriorates in various neurological and psychiatric conditions. The new method, developed in Richard Carson’s lab at Yale’s School of Engineering and Applied Sciences, is based on positron emission tomography (PET), which detects the radiation emitted by radioactive ‘tracers’ that bind to specific proteins or other molecules after being injected into the body. Until now, the density of synapses in the human brain could only be determined by autopsy, using antibodies that bind to and stain specific synaptic proteins, or electron microscopy to examine the fine structure of the tissue. © 2016 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 22545 - Posted: 08.11.2016
By Andy Coghlan The switching-off of genes in the human brain has been watched live for the first time. By comparing this activity in different people’s brains, researchers are now on the hunt for abnormalities underlying disorders such as Alzheimer’s disease and schizophrenia. To see where genes are most and least active in the brain, Jacob Hooker at Harvard Medical School and his team developed a radioactive tracer chemical that binds to a type of enzyme called an HDAC. This enzyme deactivates genes inside our cells, stopping them from making the proteins they code for. When the tracer is injected into people, brain scans can detect where it has bound to the enzyme, and thus where the enzyme is switching off genes. Live epigenetics The switching-off of genes by HDACs is a form of epigenetics – physical changes to the structure of DNA that modify how active genes are without altering their code. Until now, the only way to examine such activity in the brain has been by looking at post-mortem brain tissue. In the image above from the study, genes are least active in the red regions, such as the bulb-shaped cerebellum area towards the bottom right. The black and blue areas show the highest levels of gene activity – where barely any HDACs are present – and the yellow and green areas fall in between. © Copyright Reed Business Information Ltd.
Nicola Davis Scientists say that they have discovered a possible explanation for how Alzheimer’s disease spreads in the brain. Alzheimer’s is linked to a buildup of protein plaques and tangles that spread across particular tissues in the brain as the disease progresses. But while the pattern of this spread is well-known, the reason behind the pattern is not. Now scientists say they have uncovered a potential explanation as to why certain tissues of the brain are more vulnerable to Alzheimer’s disease. The vulnerability appears to be linked to variations in the levels of proteins in the brain that protect against the clumping of other proteins – variations that are present decades before the onset of the disease. “Our results indicate that within healthy brains a tell-tale pattern of protein levels predicts the progression of Alzheimer’s disease through the brain [in those that are affected by the disease],” said Rosie Freer, a PhD student at the University of Cambridge and first author of the study. The results could open up the possibility of identifying individuals who are at risk of developing Alzheimer’s long before symptoms appear, as well as offering new insights to those attempting to tackle the disease. Charbel Moussa, director of the Laboratory for Dementia and Parkinsonism at Georgetown University Medical Center, said that he agreed with the conclusions of the study. “It is probably true that in cases of diseases like Alzheimer’s and Parkinson’s we may have deficiencies in quality control mechanisms like cleaning out bad proteins that collect in the brain cells,” he said, although he warned that using such findings to predict those more at risk of such disease is likely to be difficult. © 2016 Guardian News and Media Limited
By Simon Makin A technology with the potential to blur the boundaries between biology and electronics has just leaped a major hurdle in the race to demonstrate its feasibility. A team at the University of California, Berkeley, led by neuroscientist Jose Carmena and electrical and computer engineer Michel Maharbiz, has provided the first demonstration of what the researchers call “ultrasonic neural dust” to monitor neural activity in a live animal. They recorded activity in the sciatic nerve and a leg muscle of an anesthetized rat in response to electrical stimulation applied to its foot. “My lab has always worked on the boundary between biology and man-made things,” Maharbiz says. “We build tiny gadgets to interface synthetic stuff with biological stuff.” The work was published last week in the journal Neuron. The system uses ultrasound for both wireless communication and the device’s power source, eliminating both wires and batteries. It consists of an external transceiver and what the team calls a “dust mote”, about 0.8 × 1 × 3 mm in size, which is implanted inside the body. The transceiver sends ultrasonic pulses to a piezoelectric crystal in the implant, which converts them into electricity to provide power. The implant records electrical signals in the rat via electrodes, and uses this signal to alter the vibration of the crystal. These vibrations are reflected back to the transceiver, allowing the signal to be recorded—a technique known as backscatter. “This is the first time someone has used ultrasound as a method of powering and communicating with extremely small implantable systems,” says one of the paper’s authors, Dongjin Seo. “This opens up a host of applications in terms of embodied telemetry: being able to put something super-tiny, super-deep in the body, which you can park next to a nerve, organ, muscle or gastrointestinal tract, and read data out wirelessly.” © 2016 Scientific American
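The backscatter idea described above can be illustrated with a toy numerical model. Every parameter below (frequencies, modulation depth, noise level) is invented for illustration and not taken from the paper: the recorded voltage modulates how strongly the mote reflects the incoming ultrasound carrier, and the transceiver demodulates the echo to recover the signal.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 100_000                          # toy sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)        # 10 ms window

# Hypothetical low-frequency electrode signal the mote records
neural = 0.5 * np.sin(2 * np.pi * 200 * t)

# Incoming ultrasound carrier; the recorded voltage modulates how much
# of it the piezoelectric crystal reflects back (backscatter)
carrier = np.sin(2 * np.pi * 20_000 * t)
reflected = (1 + 0.2 * neural) * carrier + 0.01 * rng.standard_normal(t.size)

# Transceiver side: mix the echo with the carrier, then low-pass filter
# (here a crude moving average) to strip the carrier and recover the signal
mixed = reflected * carrier
recovered = 2 * np.convolve(mixed, np.ones(50) / 50, mode="same") - 1

# Compare recovered signal with the original, ignoring filter edge effects
r = np.corrcoef(recovered[100:-100], neural[100:-100])[0, 1]
print(f"correlation between recovered and original signal: {r:.3f}")
```

The essential point the sketch captures is that the mote itself transmits nothing: all the energy comes from the external transceiver, and information rides back on the modulated reflection.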
Keyword: Brain imaging
Link ID: 22533 - Posted: 08.09.2016
By Anna Vlasits A small corner of the neuroscience world was in a frenzy. It was mid-June and a scientific paper had just been published claiming that years’ worth of results were riddled with errors. The study had dug into the software used to analyze one kind of brain scan, called functional MRI. The software’s approach was wrong, the researchers wrote, calling into doubt “the validity of some 40,000 fMRI studies”—in other words, all of them. The reaction was swift. Twitter lit up with panicked neuroscientists. Bloggers and reporters rained down headlines about “seriously flawed” software, “glitches” and “bugs.” Other scientists thundered out essays defending their studies. Finally, one of the authors of the paper, published in Proceedings of the National Academy of Sciences, stepped into the fray. In a blog post, Thomas Nichols wrote, “There is one number I regret: 40,000.” Their finding, Nichols went on to write, only affects a portion of all fMRI papers—or, some scientists think, possibly none at all. It wasn’t nearly as bad as the hype suggested. The brief kerfuffle could just be dismissed as a tempest in a teapot, science’s self-correcting mechanisms in action. But the study, and its response, heralds a new level of self-scrutiny for fMRI studies, which have been plagued for decades by accusations of shoddy science and pop-culture pandering. fMRI, in other words, is growing up, but not without some pains along the way. A bumpy start for brain scanning © 2016 Scientific American
Keyword: Brain imaging
Link ID: 22513 - Posted: 08.04.2016
By Andy Coghlan Mysterious shrunken cells have been spotted in the human brain for the first time, and appear to be associated with Alzheimer’s disease. “We don’t know yet if they’re a cause or consequence,” says Marie-Ève Tremblay of Laval University in Québec, Canada, who presented her discovery at the Translational Neuroimmunology conference in Big Sky, Montana, last week. The cells appear to be withered forms of microglia – the cells that keep the brain tidy and free of infection, normally by pruning unwanted brain connections or destroying abnormal and infected brain cells. But the cells discovered by Tremblay appear much darker when viewed using an electron microscope, and they seem to be more destructive. “It took a long time for us to identify them,” says Tremblay, who adds that these shrunken microglia do not show up with the same staining chemicals that normally make microglia visible under the microscope. Compared with normal microglia, the dark cells appear to wrap much more tightly around neurons and the connections between them, called synapses. “It seems they’re hyperactive at synapses,” says Tremblay. Where these microglia are present, synapses often seem shrunken and in the process of being degraded. Tremblay first discovered these dark microglia in mice, finding that they increase in number as mice age, and appear to be linked to a number of things, including stress, the neurodegenerative condition Huntington’s disease and a mouse model of Alzheimer’s disease. “There were 10 times as many dark microglia in Alzheimer’s mice as in control mice,” says Tremblay. © Copyright Reed Business Information Ltd.
Every year, hundreds of human brains are delivered to a network of special research centres. Why do these "brain banks" exist and what do they do? Rachael Buchanan was given rare access. A neuroscientist once told me with great insistence that brains are beautiful. His words came back to me as I watched a technician at the Bristol brain bank carefully dissect one of the facility's freshly donated specimens. The intricate folds and switchbacks of its surface and its delicate branching structures, revealed by her cuts, were entrancing. They seemed only faintly to echo the complexity and power that tissue had held in life. The brain being methodically portioned up for storage was one of around 40 donations the South West Dementia Brain Bank receives each year. This bank in Bristol is one of 10 centres that make up the Medical Research Council's Brain Bank Network. Between them annually they supply hundreds of samples of research tissue to scientists in the UK and abroad. One of the thousand brains already fixed and frozen in the store rooms at Bristol is that of Angela Carlson. Written into that 3lb (1.4kg) of dissected tissue are the experiences, memories and knowledge of a very adventurous woman, for her time. She spent her teens in the land army during World War Two, followed by stints as a cook and child minder in the USA, and in what was then Persia. Twice widowed and without children, she eventually settled in Dorset to be near her niece Susan Jonas. She died there from dementia, aged 89. © 2016 BBC
By BENEDICT CAREY Solving a hairy math problem might send a shudder of exultation along your spinal cord. But scientists have historically struggled to deconstruct the exact mental alchemy that occurs when the brain successfully leaps the gap from “Say what?” to “Aha!” Now, using an innovative combination of brain-imaging analyses, researchers have captured four fleeting stages of creative thinking in math. In a paper published in Psychological Science, a team led by John R. Anderson, a professor of psychology and computer science at Carnegie Mellon University, demonstrated a method for reconstructing how the brain moves from understanding a problem to solving it, including the time the brain spends in each stage. The imaging analysis found four stages in all: encoding (downloading), planning (strategizing), solving (performing the math), and responding (typing out an answer). “I’m very happy with the way the study worked out, and I think this precision is about the limit of what we can do” with the brain imaging tools available, said Dr. Anderson, who wrote the report with Aryn A. Pyke and Jon M. Fincham, both also at Carnegie Mellon. To capture these quicksilver mental operations, the team first taught 80 men and women how to interpret a set of math symbols and equations they had not seen before. The underlying math itself wasn’t difficult, mostly addition and subtraction, but manipulating the newly learned symbols required some thinking. The research team could vary the problems to burden specific stages of the thinking process — some were hard to encode, for instance, while others extended the length of the planning stage. The scientists used two techniques of M.R.I. data analysis to sort through what the participants’ brains were doing. One technique tracked the neural firing patterns during the solving of each problem; the other identified significant shifts from one kind of mental state to another. 
The subjects solved 88 problems each, and the research team analyzed the imaging data from those solved successfully. © 2016 The New York Times Company
By ERICA GOODE You are getting sleepy. Very sleepy. You will forget everything you read in this article. Hypnosis has become a common medical tool, used to reduce pain, help people stop smoking and cure them of phobias. But scientists have long argued about whether the hypnotic “trance” is a separate neurophysiological state or simply a product of a hypnotized person’s expectations. A study published on Thursday by Stanford researchers offers some evidence for the first explanation, finding that some parts of the brain function differently under hypnosis than during normal consciousness. The study was conducted with functional magnetic resonance imaging, a scanning method that measures blood flow in the brain. It found changes in activity in brain areas that are thought to be involved in focused attention, the monitoring and control of the body’s functioning, and the awareness and evaluation of a person’s internal and external environments. “I think we have pretty definitive evidence here that the brain is working differently when a person is in hypnosis,” said Dr. David Spiegel, a professor of psychiatry and behavioral sciences at Stanford who has studied the effectiveness of hypnosis. Functional imaging is a blunt instrument and the findings can be difficult to interpret, especially when a study is looking at activity levels in many brain areas. Still, Dr. Spiegel said, the findings might help explain the intense absorption, lack of self-consciousness and suggestibility that characterize the hypnotic state. © 2016 The New York Times Company
Laura Sanders Under duress, nerve cells get a little help from their friends. Brain cells called astrocytes send their own energy-producing mitochondria to struggling nerve cells. Those gifts may help the neurons rebound after injuries such as strokes, scientists propose in the July 28 Nature. It was known that astrocytes — star-shaped glial cells that, among other jobs, support neurons — take in and dispose of neurons’ discarded mitochondria. Now it turns out that mitochondria can move the other way, too. This astrocyte-to-neuron transfer is surprising, says neuroscientist Jarek Aronowski of the University of Texas Health Science Center at Houston. “Bottom line: It’s sort of shocking.” Study coauthor Eng Lo of Massachusetts General Hospital and Harvard Medical School cautions that the work is at a very early stage. But he hopes that a deeper understanding of this process might ultimately point out new ways to protect the brain from damage. Mitochondria produce the energy that powers cells in the body. Scientists have spotted the organelles moving into damaged cells in other parts of the body, including the lungs, heart and liver. The new study turns up signs of this mitochondrial generosity in the brain. Astrocytes produce mitochondria and shunt them out into the soup that surrounds cells, Lo and colleagues found. The researchers then put neurons into this mitochondria-rich broth. When starved of glucose and oxygen — a situation that approximates a stroke — the neurons took in the astrocyte-made organelles. © Society for Science & the Public 2000 - 2016
By Emily Underwood If your car’s battery dies, you might call on roadside assistance—or a benevolent bystander—for a jump. When damaged neurons lose their “batteries,” energy-generating mitochondria, they call on a different class of brain cells, astrocytes, for a boost, a new study suggests. These cells respond by donating extra mitochondria to the floundering neurons. The finding, still preliminary, might lead to novel ways to help people recover from stroke or other brain injuries, scientists say. “This is a very interesting and important study because it describes a new mechanism whereby astrocytes may protect neurons,” says Reuven Stein, a neurobiologist at The Rabin Institute of Neurobiology in Tel Aviv, Israel, who was not involved in the study. To keep up with the energy-intensive work of transmitting information throughout the brain, neurons need a lot of mitochondria, the power plants that produce the molecular fuel—ATP—that keeps cells alive and working. Mitochondria must be replaced often in neurons, in a process of self-replication called fission—the organelles were originally microbes captured inside a cell as part of a symbiosis. But if mitochondria are damaged or if they can’t keep up with a cell’s needs, energy supplies can run out, killing the cell. In 2014, researchers published the first evidence that cells can transfer mitochondria in the brain—but it seemed more a matter of throwing out the trash. When neurons expel damaged mitochondria, astrocytes swallow them and break them down. Eng Lo and Kazuhide Hayakawa, both neuroscientists at Massachusetts General Hospital in Charlestown, wondered whether the transfer could go the other way as well—perhaps astrocytes donated working mitochondria to neurons in distress. Research by other groups supported that idea: A 2012 study, for example, found that stem cells from bone marrow can donate mitochondria to lung cells after severe injury. © 2016 American Association for the Advancement of Science
By Jessica Boddy Ever wonder what it looks like when brain cells chat up a storm? Researchers have found a way to watch the conversation in action without ever cracking open a skull. This glimpse into the brain’s communication system could open new doors to diagnosing and treating disorders from epilepsy to Alzheimer’s disease. Being able to see where—and how—living brain cells are working is “the holy grail in neuroscience,” says Howard Federoff, a neurologist at Georgetown University in Washington, D.C., who was not involved with the work. “This is a possible new tool that could bring us closer to that.” Neurons, which are only slightly longer than the width of a human hair, are laid out in the brain like a series of tangled highways. Signals must travel down these highways, but there’s a catch: The cells don’t actually touch. They’re separated by tiny gaps called synapses, where messages, with the assistance of electricity, jump from neuron to neuron to reach their destinations. The number of functional synapses that fire in one area—a measure known as synaptic density—tends to be a good way to figure out how healthy the brain is. Higher synaptic density means more signals are being sent successfully. If there are significant interruptions in large sections of the neuron highway, many signals may never reach their destinations, leading to disorders like Huntington disease. The only way to look at synaptic density in the brain, however, has been to examine nonliving brain tissue after death. That means there’s no way for researchers to investigate how diseases like Alzheimer’s progress—something that could hold secrets to diagnosis and treatment. © 2016 American Association for the Advancement of Science
Keyword: Brain imaging
Link ID: 22472 - Posted: 07.23.2016
Carl Zimmer The brain looks like a featureless expanse of folds and bulges, but it’s actually carved up into invisible territories. Each is specialized: Some groups of neurons become active when we recognize faces, others when we read, others when we raise our hands. On Wednesday, in what many experts are calling a milestone in neuroscience, researchers published a spectacular new map of the brain, detailing nearly 100 previously unknown regions — an unprecedented glimpse into the machinery of the human mind. Scientists will rely on this guide as they attempt to understand virtually every aspect of the brain, from how it develops in children and ages over decades, to how it can be corrupted by diseases like Alzheimer’s and schizophrenia. “It’s a step towards understanding why we’re we,” said David Kleinfeld, a neuroscientist at the University of California, San Diego, who was not involved in the research. Scientists created the map with advanced scanners and computers running artificial intelligence programs that “learned” to identify the brain’s hidden regions from vast amounts of data collected from hundreds of test subjects, a far more sophisticated and broader effort than had been previously attempted. While an important advance, the new atlas is hardly the final word on the brain’s workings. It may take decades for scientists to figure out what each region is doing, and still more regions are likely to be discovered as the data improve. “This map you should think of as version 1.0,” said Matthew F. Glasser, a neuroscientist at Washington University School of Medicine and lead author of the new research. “There may be a version 2.0 as the data get better and more eyes look at the data. We hope the map can evolve as the science progresses.” © 2016 The New York Times Company
Keyword: Brain imaging
Link ID: 22466 - Posted: 07.21.2016
Ian Sample Science editor When the German neurologist Korbinian Brodmann first sliced and mapped the human brain more than a century ago he identified 50 distinct regions in the crinkly surface called the cerebral cortex that governs much of what makes us human. Now researchers have updated the 100-year-old map in a scientific tour de force which reveals that the human brain has at least 180 different regions that are important for language, perception, consciousness, thought, attention and sensation. The landmark achievement hands neuroscientists their most comprehensive map of the cortex so far, one that is expected to supersede Brodmann’s as the standard researchers use to talk about the various areas of the brain. Scientists at Washington University in St Louis created the map by combining highly detailed MRI scans from 210 healthy young adults who had agreed to take part in the Human Connectome Project, a massive effort that aims to understand how neurons in the brain are connected. Most previous maps of the human brain have been created by looking at only one aspect of the tissues, such as how the cells look under a microscope, or how active areas become when a person performs a certain task. But maps made in different ways do not always look the same, which casts doubt on where one part of the brain stops and another starts. Writing in the journal Nature, Matthew Glasser and others describe how they combined scans of brain structure, function and connectivity to produce the new map, which confirmed the existence of 83 known brain regions and added 97 new ones. Some scans were taken while patients simply rested in the machine, while others were recorded as they performed maths tasks, listened to stories, or categorised objects, for example by stating whether an image was of a tool or an animal. © 2016 Guardian News and Media Limited
Keyword: Brain imaging
Link ID: 22465 - Posted: 07.21.2016
NOBODY knows how the brain works. But researchers are trying to find out. One of the most eye-catching weapons in their arsenal is functional magnetic-resonance imaging (fMRI). In this, MRI scanners normally employed for diagnosis are used to study volunteers for the purposes of research. By watching people’s brains as they carry out certain tasks, neuroscientists hope to get some idea of which bits of the brain specialise in doing what. The results look impressive. Thousands of papers have been published, from workmanlike investigations of the role of certain brain regions in, say, recalling directions or reading the emotions of others, to spectacular treatises extolling the use of fMRI to detect lies, to work out what people are dreaming about or even to deduce whether someone truly believes in God. But the technology has its critics. Many worry that dramatic conclusions are being drawn from small samples (the faff involved in fMRI makes large studies hard). Others fret about over-interpreting the tiny changes the technique picks up. A deliberately provocative paper published in 2009, for example, found apparent activity in the brain of a dead salmon. Now, researchers in Sweden have added to the doubts. As they reported in the Proceedings of the National Academy of Sciences, a team led by Anders Eklund at Linköping University has found that the computer programs used by fMRI researchers to interpret what is going on in their volunteers’ brains appear to be seriously flawed. © The Economist Newspaper Limited 2016
Keyword: Brain imaging
Link ID: 22444 - Posted: 07.15.2016
The most sophisticated, widely adopted, and important tool for looking at living brain activity actually does no such thing. Called functional magnetic resonance imaging, what it really does is scan for the magnetic signatures of oxygen-rich blood. Blood indicates that the brain is doing something, but it’s not a direct measure of brain activity. Which is to say, there’s room for error. That’s why neuroscientists use special statistics to filter out noise in their fMRIs, verifying that the shaded blobs they see pulsing across their computer screens actually relate to blood flowing through the brain. If those filters don’t work, an fMRI scan is about as useful at detecting neuronal activity as your dad’s “brain-sucking alien” hand trick. And a new paper suggests that might actually be the case for thousands of fMRI studies over the past 15 years. The paper, published June 29 in the Proceedings of the National Academy of Sciences, threw 40,000 fMRI studies into question. But many neuroscientists—including the study’s whistleblowing authors—are now saying the negative attention is overblown. Neuroscience has long struggled over just how useful fMRI data is at showing brain function. “In the early days these fMRI signals were very small, buried in a huge amount of noise,” says Elizabeth Hillman, a biomedical engineer at the Zuckerman Institute at Columbia University. A lot of this noise is literal: noise from the scanner, noise from the electrical components, noise from the person’s body as it breathes and pumps blood.
Keyword: Brain imaging
Link ID: 22413 - Posted: 07.09.2016
Andrew Orlowski Special Report If the fMRI brain-scanning fad is well and truly over, then many fashionable intellectual ideas look like collateral damage, too. What might generously be called the “British intelligentsia” – our chattering classes – fell particularly hard for the promise that “new discoveries in brain science” had revealed a new understanding of human behaviour, which shed new light on emotions, personality and decision making. But all they were looking at was statistical quirks. There was no science to speak of, the results of the experiments were effectively meaningless, and couldn’t support the (often contradictory) conclusions being touted. The fMRI machine was a very expensive way of legitimising an anecdote. This is an academic scandal that’s been waiting to explode for years, for plenty of warning signs were there. In 2005, Ed Vul, now a psychology professor at UCSD, and Hal Pashler – then and now at UCSD – were puzzled by a claim being made in a talk by a neuroscience researcher. He was explaining a study that purported to report a high correlation between a test subject’s brain activity and the speed with which they left the room after the study. “It seemed unbelievable to us that activity in this specific brain area could account for so much of the variance in walking speed,” explained Vul. “Especially so, because the fMRI activity was measured some two hours before the walking happened. So either activity in this area directly controlled motor action with a delay of two hours — something we found hard to believe — or there was something fishy going on.” © 1998–2016
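The "fishy" pattern Vul and Pashler went on to document in their critique of such studies is non-independence, sometimes called double dipping: selecting voxels because they correlate with a behavioural measure, and then reporting the correlation in those same voxels. A hypothetical simulation (all numbers invented for illustration) shows how this manufactures an impressive-looking correlation from pure noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_subjects = 5_000, 16

behaviour = rng.standard_normal(n_subjects)           # e.g. walking speed
voxels = rng.standard_normal((n_voxels, n_subjects))  # pure-noise "activity"

# Correlate every voxel with behaviour, then report only the best one —
# the non-independence ("double dipping") error
r = np.array([np.corrcoef(v, behaviour)[0, 1] for v in voxels])
best = int(np.argmax(np.abs(r)))
print(f"best voxel correlation in pure noise: r = {r[best]:.2f}")
```

With thousands of voxels and only sixteen subjects, the best chance correlation is routinely large, even though no voxel here carries any signal at all. The honest procedure is to select voxels on one half of the data and measure the correlation on the other, independent half.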
Keyword: Brain imaging
Link ID: 22410 - Posted: 07.08.2016
It's no secret that passwords aren't impenetrable. Even outside of major incidents like the celebrity nude photo hack, or when millions of passwords get released online, like what happened to Twitter recently, many of us may still be at risk of having our data compromised due to password-related security flaws. According to a June 2015 survey from mobile identity company TeleSign, two in five people were notified in the preceding year that their personal information was compromised or that they had been hacked or had their password stolen. But a new technology developed by the BioSense lab at the University of California, Berkeley, could make all of that a thing of the past. Over the course of three years, the lab's co-director, John Chuang, and his graduate students have been working on a technology called passthoughts, which would use a person's brainwaves to identify them, according to CNET. The team has found that a passthought — something like a song that someone could sing in their mind — isn't easily forgotten and can achieve a 99-per-cent authentication accuracy rate. The device used to capture passthoughts resembles a telephone headset. It relies on EEG technology, detecting electrical activity in your brain via electrodes strapped to your head. And although Chuang's team say the technology has improved greatly in recent years, the awkwardness of the device might hinder it from being widely adopted. ©2016 CBC/Radio-Canada.
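The article doesn't describe Chuang's actual pipeline, but the general shape of template-based biometric matching can be sketched as follows. Everything here is invented for illustration: the 64-dimensional feature vectors, the cosine-similarity measure, and the 0.9 threshold stand in for whatever features and classifier a real EEG system would use.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def enroll(trials):
    """Average several EEG feature vectors recorded while the user
    thinks their passthought, forming a stored template."""
    return np.mean(trials, axis=0)

def authenticate(template, attempt, threshold=0.9):
    """Accept only if the new recording is close enough to the template."""
    return cosine_sim(template, attempt) >= threshold

rng = np.random.default_rng(7)
signature = rng.standard_normal(64)   # the user's "passthought" features

# Enrollment: several noisy recordings of the same thought
template = enroll([signature + 0.1 * rng.standard_normal(64) for _ in range(5)])

genuine = signature + 0.1 * rng.standard_normal(64)   # same user, new session
impostor = rng.standard_normal(64)                    # someone else's brainwaves

print(authenticate(template, genuine))   # True
print(authenticate(template, impostor))  # False
```

The practical challenges the article hints at live outside this sketch: real EEG features drift between sessions, and setting the threshold trades false rejections of the genuine user against false acceptances of impostors.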
Keyword: Brain imaging
Link ID: 22401 - Posted: 07.06.2016